
IRIS LOCALIZATION USING GRAYSCALE TEXTURE ANALYSIS AND RECOGNITION USING BIT PLANES

Abdul Basit

A thesis submitted to the College of Electrical and Mechanical Engineering National University of Sciences and Technology, Rawalpindi, Pakistan, in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Department of Computer Engineering College of Electrical and Mechanical Engineering National University of Sciences and Technology, Rawalpindi, Pakistan 2009

Abstract
Identification and verification of human beings is very important because of today's security conditions throughout the world. Since the beginning of the 19th century, the iris has been used for the recognition of humans. Recent efforts in computer vision have made it possible to develop automated systems that can recognize individuals efficiently and with high accuracy. The main functional components of existing iris recognition systems are image acquisition, iris localization, feature extraction and matching. While designing such a system, one must understand the physical nature of the iris, image processing and image analysis in order to build an accurate system. The most difficult and time consuming part of iris recognition is iris localization.

In this thesis, the performance of the iris localization and normalization stages of iris recognition systems has been enhanced through the development of effective and efficient strategies, and bit plane and wavelet based features have been analyzed for recognition. Iris localization is the most important step in iris recognition systems. The iris is localized by first finding the boundary between the pupil and the iris, using different methods for different databases because the image acquisition devices and environments differ. The non-circular boundary of the pupil is obtained by dividing the circular pupil boundary into specific points and then forcing these points to shift to the exact pupil boundary position, after which they are joined linearly. The boundary between the iris and the sclera is obtained by finding points of maximum gradient along different radially outward directions. Redundant points are discarded based on their distance from the center of the pupil, since the distance between the center of the pupil and the center of the iris is very small. The search domain for these directions is the left and right sectors of the iris when the pupil center is taken as the origin of the axes. Eyelids are detected by fitting parabolas to points satisfying specific criteria. Experimental results show that the efficiency of the proposed method is very high compared to other existing methods, and improved localization results are reported. The experiments are carried out on four different iris image datasets. Correct localization rates of 100% (circular pupil boundary), 99.8% (non-circular pupil boundary), 99.77% (iris outer boundary), 98.91% (upper eyelid detection) and 96.6% (lower eyelid detection) have been achieved on these datasets.

To compensate for the change in size of the iris due to pupil constriction / dilation and camera-to-eye distance, different normalization schemes based on different reference points have been designed and implemented. Two main feature extraction methodologies have been proposed: one is based on the bit planes of the normalized image and the other utilizes the properties of the wavelet transform. Recognition results based on bit plane features of the iris have been obtained, and a correct recognition rate of up to 99.64% has been achieved on CASIA version 3.0. Results on other databases are also encouraging, with accuracies of 94.11%, 97.55% and 99.6% on the MMU, CASIA version 1.0 and BATH iris databases respectively. Different wavelets have been applied to obtain the best iris recognition results. Different levels of wavelet transforms (Haar, Daubechies, Symlet, Coiflet, Biorthogonal and Mexican hat) along with different numbers of coefficients have been used. The Coiflet wavelet resulted in high accuracies of 99.83%, 96.59%, 98.44% and 100% on the CASIA version 1.0, CASIA version 3.0, MMU and BATH iris databases respectively.


Acknowledgement
First and foremost, I would like to express my deepest gratitude and innumerable thanks to the most merciful, the most beneficent, and the most gracious Almighty Allah, who gave me the courage and motivation to undertake this challenging task. I would like to express my sincere gratitude to my advisor, Prof. Dr. Muhammad Younus Javed, for his continuous support of my PhD study and research, and for his patience, motivation, enthusiasm, and immense knowledge. His guidance helped me throughout the research and writing of this thesis. Without his excellent guidance and tremendous support, this research work would have been impossible. Besides my advisor, I would like to thank the rest of my thesis committee: Prof. Dr. Azad Akhter Siddiqui, Prof. Dr. Shoab Ahmad Khan, and Dr. Muid Mufti, for their insightful comments and encouragement. I am grateful to Mr. Saqib Masood for his continuous motivation throughout the degree and to Mr. Muhammad Abdul Samad for his valuable suggestions, generous help and ideas in completing this thesis. In particular, I would like to thank Mr. Haroon-ur-Rasheed for giving me my first glimpse of the research area. I am deeply thankful to my parents, my wife, and my siblings for their tremendous moral support and countless prayers that sustained me throughout my life. I am indebted to many of my colleagues, Dr. Qamar-ul-Haq, Dr. Salman and Dr. Almas, for their support. I would like to thank the Higher Education Commission for the award of a scholarship and my office for granting me leave for higher studies.

Lastly, I offer my regards and blessings to all of those who supported me in any respect during the completion of the degree.


Dedication

This work is dedicated to my family.


Table of Contents
Chapter 1: Introduction
1.1 Biometrics
1.1.1 Properties for a Biometric
1.2 Some Biometrics
1.2.1 Face Recognition
1.2.2 Fingerprint
1.2.3 Hand Geometry
1.2.4 Retina
1.2.5 Signature Verification
1.2.6 Voice Authentication
1.2.7 Gait Recognition
1.2.8 Ear Recognition
1.2.9 Iris Recognition
1.3 Location of Iris in Human Eye
1.3.1 Color of the eye
1.3.2 Working of the Eye
1.3.3 Anatomy and Structure of Iris
1.4 Research on Iris Recognition
1.5 Iris Recognition System
Chapter 2: Existing Iris Recognition Techniques
2.1 Background
2.2 Iris Image Acquisition
2.3 Iris Localization
2.3.1 Edge Detectors
2.3.2 Existing Iris Localization Methods
2.4 Iris Normalization
2.4.1 Existing Methods
2.5 Feature Extraction
2.5.1 Gabor Filter
2.5.2 Log Gabor Filter
2.5.3 Zero Crossings of 1D Wavelets
2.5.4 Haar Wavelet
2.6 Matching Algorithms
2.6.1 Normalized Hamming Distance
2.6.2 Euclidean Distance
2.6.3 Normalized Correlation
Chapter 3: Proposed Methodologies
3.1 Proposed Iris Localization Method
3.1.1 Pupil Boundary Detection
3.1.2 Non-Circular Pupil Boundary Detection
3.1.3 Iris Boundary Detection
3.1.4 Eyelids Localization
3.2 Proposed Normalization Methods
3.2.1 Normalization via Pupil Center


3.2.2 Normalization via Iris Center
3.2.3 Normalization via Minimum Distance
3.2.4 Normalization via Mid-point between Iris and Pupil Centers
3.2.5 Normalization using Dynamic Size Method
3.3 Proposed Feature Extraction Methods
3.3.1 EigenIris Method or Principal Component Analysis
3.3.2 Bit Planes
3.3.3 Wavelets
3.4 Matching
3.4.1 Euclidean Distance
3.4.2 Normalized Hamming Distance
Chapter 4: Design & Implementation Details
4.1 Iris Localization
4.1.1 Circular Pupil Boundary Detection
4.1.2 Non-Circular Pupil Boundary Detection
4.1.3 Iris Boundary Detection
4.1.4 Eyelids Localization
4.2 Normalization Methods
4.2.1 Normalization From Pupil Module
4.2.2 Normalization From Iris Module
4.2.3 Normalization From Minimum Distance Module
4.2.4 Normalization From Mid-point Module
4.2.5 Normalization With Dynamic Size Module
4.3 Feature Extraction Methods
4.3.1 Principal Component Analysis
4.3.2 Bit planes
4.3.3 Wavelets
4.4 Matching
4.4.1 Euclidean Distance
4.4.2 Normalized Hamming Distance
Chapter 5: Results & Discussions
5.1 Databases Used for Evaluation
5.2 CASIA Version 1.0
5.2.1 Pupil Localization
5.2.2 Non-circular Pupil Localization
5.2.3 Iris Localization
5.2.4 Eyelids Localization
5.3 CASIA Version 3.0
5.3.1 Pupil Localization
5.3.2 Non-circular Pupil Localization
5.3.3 Iris Localization
5.3.4 Eyelids Localization
5.4 University of Bath Iris Database (free version)
5.4.1 Pupil Localization
5.4.2 Non-circular Pupil Localization
5.4.3 Iris Localization


5.4.4 Eyelids Localization
5.5 MMU Version 1.0
5.5.1 Pupil Localization
5.5.2 Non-circular Pupil Localization
5.5.3 Iris Localization
5.5.4 Eyelids Localization
5.6 Errors in Localization
5.6.1 Errors in Circular Pupil Localization
5.6.2 Errors in Non-circular Pupil Localization
5.6.3 Errors in Iris Localization
5.6.4 Errors in Eyelids Localization
5.7 Comparison with Other Methods
5.7.1 Accuracy
5.7.2 Computational Complexity
5.8 Normalization
5.9 Feature Extraction and Matching
5.9.1 Principal Component Analysis
a. Experiment Set 1 (Dimension Reduction)
b. Experiment Set 2 (Training Images)
c. Experiment Set 3 (Training Classes)
5.9.2 Bit planes
a. Results on BATH
b. Results on CASIA version 1.0
c. Results on CASIA version 3.0
d. Results on MMU
5.9.3 Wavelets
a. Results on CASIA version 1.0 using Daubechies 2
b. Results using other wavelets on CASIA version 1.0
c. Results on CASIA version 3.0
d. Results on MMU
e. Results on BATH
Chapter 6: Conclusions and Future Research Work
6.1 Design & Implementation Methodologies
6.2 Performance of the Developed System
6.3 Future Research Work
Appendix I
Appendix II
References


List of Figures
Figure 1.1: Location of Iris
Figure 1.2: Different colors of Iris
Figure 1.3: Structure of the eye
Figure 3.1: Schematic diagram of iris recognition system
Figure 3.2: Finding non-circular boundary of pupil
Figure 3.3: Normalization using pupil center as reference point
Figure 3.4: Normalization using iris center as reference point
Figure 3.5: Minimum distance between the points at same angle
Figure 3.6: Mid-point of centers of iris and pupil as reference point
Figure 3.7: Concentric circles at pupil center P and dynamic iris normalized image
Figure 3.8: Haar Wavelet
Figure 3.9: Daubechies Wavelets
Figure 3.10: Coiflets Wavelets
Figure 3.11: Symlets Wavelets
Figure 4.1: Flow chart for detection of pupil boundary module
Figure 4.2: Steps for Pupil Localization CASIA version 1.0
Figure 4.3: Used symmetric lines for finding points on circle
Figure 4.4: Steps involved in Pupil Localization CASIA Version 3.0
Figure 4.5: Steps involved in Pupil Localization for MMU Database
Figure 4.6: Non-circular pupil boundary
Figure 4.7: Steps for Iris Localization CASIA version 1.0
Figure 4.8: Steps for Iris Localization CASIA version 3.0
Figure 4.9: Steps for Iris Localization MMU Iris database
Figure 4.10: Steps for Iris Localization MMU iris database
Figure 4.11: Steps for Upper Eyelid localization CASIA Ver 1.0 Iris database
Figure 4.12: Normalized images with different methods
Figure 4.13: One step decomposition of an image
Figure 5.1: Images in different datasets
Figure 5.2: Some correctly localized images in CASIA version 1.0
Figure 5.3: Some correctly localized images in CASIA version 3.0
Figure 5.4: Some correctly localized images in BATH Database free version
Figure 5.5: Some correctly localized images in MMU Database version 1.0
Figure 5.6: Comparison of steps in iris localization in different databases
Figure 5.7: Inaccuracies in circular pupil localization
Figure 5.8: Inaccuracies in non-circular pupil localization
Figure 5.9: Inaccuracies in iris localization
Figure 5.10: Inaccuracies in eyelid localization
Figure 5.11: Time comparison of Normalization methods
Figure 5.12: Time comparison of normalization using iris center as reference point
Figure 5.13: Results of Normalized 4 using PCA for CASIA version 3.0 iris database
Figure 5.14: Results of Normalized 4 using PCA for BATH iris database
Figure 5.15: PCA using different training image on CASIA version 1.0
Figure 5.16: PCA using different training image on CASIA version 3.0
Figure 5.17: PCA using different training image on MMU


Figure 5.18: PCA using different training image on BATH
Figure 5.19: Accuracy of PCA on all databases using three training images
Figure 5.20: Training time of PCA on all databases using three training images
Figure 5.21: Recognition time of PCA on all databases using three training images
Figure 5.22: ROC curves for different features with six enrolled images
Figure 5.23: Results of iris recognition on CASIA version 3.0 using bit plane 5
Figure 5.24: Iris recognition rate using bit plane 5 on MMU iris database
Figure 5.25: Results of iris recognition using Daubechies 2 on CASIA version 1.0
Figure 5.26: Results of iris recognition including average training images
Figure 5.27: ROC using Coiflet 5 wavelets for CASIA version 1.0
Figure 5.28: Iris recognition results on CASIA version 3.0 using Coiflet 5 wavelet
Figure 5.29: Results of Coiflet 5 wavelet on MMU iris database
Figure 5.30: Results of Coiflet 5 wavelet on BATH iris database


List of Tables
Table 5.1: Some attributes of the datasets
Table 5.2: Results of Iris localization in CASIA version 1.0
Table 5.3: Results of Iris localization in CASIA version 3.0
Table 5.4: Results of Iris localization in BATH (free version)
Table 5.5: Results of Iris localization in MMU version 1.0
Table 5.6: Results of iris localization for CASIA version 1.0
Table 5.7: Results of Pupil localization for CASIA version 1.0
Table 5.8: Results of iris localization for CASIA version 3.0
Table 5.9: Results of iris localization for BATH iris database
Table 5.10: Results of iris localization for MMU Iris Dataset
Table 5.11: Radii of pupil and iris in the databases
Table 5.12: Iris recognition rate with Normalized 2 using PCA for CASIA version 1.0
Table 5.13: Accuracy with Normalized 2 using PCA for MMU iris database
Table 5.14: Results of recognition for BATH Iris dataset
Table 5.15: Effect of image resolution on accuracy on CASIA version 1.0
Table 5.16: Results with 50*256 image resolution on CASIA version 1.0
Table 5.17: Result of CASIA version 3.0 when normalized iris width is 49 pixels
Table 5.18: Results of iris recognition with image resolution 58*256 on MMU
Table 5.19: Results of iris recognition with different wavelets on CASIA version 1.0
Table 5.20: Iris recognition results on CASIA version 1.0 including average image
Table 5.21: Results with Coiflet 5 wavelet at image resolution 43*256


Chapter 1: Introduction

1.1 Biometrics

The history of human identification is as old as human beings themselves. With the development of science and technology in today's modern world, human activities and transactions have grown tremendously. Authentication of users has become an inseparable part of all transactions involving human-computer interaction. Most conventional modes of authentication are based on knowledge, i.e. what we know (e.g. passwords, PIN codes), and / or tokens, i.e. what we have (e.g. ID cards, passports, driving licenses) [1]. Biometrics bring in stronger authentication capabilities by adding a third factor, who we are, based on our inherent physiological or behavioral characteristics. The term "biometrics" is derived from the Greek words bio (life) and metric (to measure); in other words, bio refers to a living creature and metrics to the ability to measure an object quantitatively [2]. The use of biometrics has been traced back as far as the Egyptians, who measured people to identify them. Biometric technologies are hence becoming the foundation of an extensive array of highly secure identification and personal verification systems. Biometrics is the branch of science which deals with automated methods of recognizing a person based on a physiological or behavioral characteristic. The technology involves capturing and processing an image of a unique feature of an individual and comparing it with a previously captured and processed image from a database. Behavioral characteristics include voice, odor, signature and gait, whereas physiological characteristics include face, fingerprint, hand geometry, ear, retina, palm prints and iris. All biometric identification systems rely on forms of random variation among persons based on these characteristics. The more complex the randomness, the more unique the features available for identification, because more dimensions of independent variation produce codes with greater uniqueness. Every biometric system has the following layout. First, it captures a sample of the feature, such as recording a digital sound signal for voice recognition, taking a digital color image for face or iris recognition, or performing a retina scan for retina recognition.


The sample is then transformed using some sort of mathematical function into a biometric template. The biometric template provides a normalized, efficient and highly discriminating representation of the feature, which can then be compared with other templates in order to determine identity. Most biometric systems allow two modes of operation: an enrolment mode for adding templates to a database, and a matching mode, in which a template is created for an individual and then compared against the database of pre-enrolled templates in one of two ways. One is called verification, in which a one-to-one comparison is carried out, and the other is identification, in which one template is compared against the whole database. A physiological or behavioral trait can be considered a biometric if it has the following properties [3].

1.1.1 Properties for a Biometric


Universality: Each person should have the characteristic.
Distinctiveness: Any two persons should be sufficiently different in terms of the characteristic.
Permanence: The characteristic should be sufficiently invariant (with respect to the matching criterion) over a period of time.
Collect-ability: The characteristic can be measured quantitatively.
User-friendliness: People must be willing to accept the system, the scanning procedure should not be intrusive and the whole system should be easy to use.
Accuracy: The accuracy of the system must be high enough, and there must be a balance between FAR (False Accept Rate) and FRR (False Reject Rate) depending on the use of the system.


However, in a practical biometric system these properties must be achievable in implementation [4]. In addition, there are a number of other issues that should be considered, such as:

Performance: the achievable recognition accuracy and speed, the resources required to achieve them, and the operational and environmental factors that affect accuracy and speed.
Acceptability: the extent to which people are willing to accept the use of a particular biometric identifier (characteristic) in their daily lives.
Circumvention: how easily the system can be fooled using fraudulent methods.
Cost: always a concern; the life-cycle cost of system maintenance must also be taken into account.

1.2 Some Biometrics


Based on the basic definitions of biometrics illustrated above, this section gives a brief description of different biometric systems [5].

1.2.1 Face Recognition


Face recognition is one of the most active research areas in computer vision and pattern recognition [6-14]. Its wide range of applications includes forensic identification, access control, face-based video indexing and browsing engines, biometric identity authentication, human-computer interaction and multimedia monitoring / surveillance. The task of a face recognition system is to compare an input face image against a database containing a set of face samples with known identity [15-22]. Facial recognition has some shortcomings, especially when trying to identify individuals under different environmental settings (such as changes in lighting) or after changes in physical facial features (such as new scars or a beard).

1.2.2 Fingerprint
Fingerprint imaging technology has been in existence for centuries. The use of fingerprints as a unique human identifier dates back to the second century B.C. in China, where the identity of the sender of an important document could be verified by his fingerprint impression in the wax seal. Fingerprint imaging technology seeks to capture or read the unique pattern of lines on the tip of one's finger. These unique patterns of lines can be in a loop, whorl or arch pattern. The most common method involves recording and comparing the fingerprint's minutiae points, which can be considered the unique identifiers of an individual's fingerprint [23]. In a typical fingerprint [24] that has been scanned by a fingerprint identification system, there are generally between 30 and 40 minutiae. Research in fingerprint identification technology has improved the identification rate to greater than 98 percent and reduced the false positive (false accept) rate to less than one percent within the Automated Fingerprint Identification System (AFIS) criminal justice program.

1.2.3 Hand Geometry


Hand geometry is essentially based on the fact that virtually every individual's hand is shaped differently from every other individual's hand and that, with the passage of time, the shape of a person's hand does not change significantly [25]. The basic principle of operation is to measure or record the physical geometric characteristics of an individual's hand. From these measurements, a profile is constructed that can be compared against subsequent hand readings by the user [26]. There are many benefits to using hand geometry as a solution to general security issues, including speed of operation, reliability, accuracy, small template size, ease of integration into an existing system, and user-friendliness. There are now thousands of locations all over the world that use hand geometry devices for access control and security purposes.

1.2.4 Retina
Retinal biometrics involve analyzing the layer of blood vessels situated at the back of the eye. Retinal scans use a low-intensity infrared light that is projected into the eye and onto the retina. Infrared light is used because the blood vessels on the retina absorb it more readily than the surrounding eye tissue. The infrared light carrying the retinal pattern is reflected back to a video camera, which captures the pattern and converts it into data that is 35 elements in size [27]. This is not particularly convenient if the user is wearing glasses or is concerned about having close contact with the reading device. For these reasons, retinal scanning is not warmly accepted by all users, although the technology itself can work well; the current hurdle for retinal identification is user acceptance. Retinal identification has several further disadvantages, including susceptibility to disease damage (e.g. cataracts), being viewed as intrusive and not very user friendly, and the high degree of both user and operator skill required.

1.2.5 Signature Verification


Signature verification analyzes the way a user signs his or her name. Signing features such as speed, velocity and pressure on the writing surface are as important as the finished signature's static shape [28-31]. Signature verification enjoys a synergy with existing processes that other biometrics do not: people are used to signatures as a means of transaction-related identity verification, and most would see nothing unusual in extending this to encompass biometrics. Surprisingly, relatively few significant signature applications have emerged compared with other biometric methodologies.

1.2.6 Voice Authentication


Despite the inherent technological challenges, voice recognition technology's most popular applications will likely provide access to secure data over telephone lines. Voice biometrics has potential for growth because it requires no new hardware. However, poor quality and surrounding noise can affect the verification process. In addition, the enrollment procedure is more complicated than for other biometrics, making it less user-friendly. Speaker recognition systems [32] fall into two basic types: text-dependent and text-independent. In text-dependent recognition, the speaker says a predetermined phrase; this technique inherently enhances recognition performance but requires a cooperative user. In text-independent recognition, the speaker does not say a predetermined phrase and may not cooperate with, or even be aware of, the recognition system. Speaker recognition suffers from several limitations. Different people can have similar voices [33-35], and anybody's voice can vary over time because of changes in health, emotional state and age. Furthermore, variation in handsets or in the quality of a telephone connection complicates the recognition process.

1.2.7 Gait Recognition


Gait recognition is a relatively new field in biometrics. A unique advantage of gait as a biometric is that it offers potential for recognition at a distance or at low resolution, when other biometrics might not be perceivable [36-41]. Recognition can be based on the (static) human shape as well as on walking, suggesting a richer recognition cue. Further, gait can be used when other biometrics are obscured. It is difficult to conceal and / or disguise motion, as this generally impedes movement.

1.2.8 Ear Recognition


Ear recognition is carried out by three different methods: (i) taking a photo of the ear, (ii) taking earmarks by pressing the ear against a flat glass and (iii) taking thermogram pictures of the ear [42-45]. The most interesting parts of the ear are the outer ear and the ear lobe, but the whole ear structure and shape is used [46]. Taking a photo of the ear is the method most commonly used in research: the photo is taken and compared with previously taken photos to identify a person. An ear database is publicly available via the internet [47].

1.2.9 Iris Recognition


Iris recognition is a method of biometric authentication that uses pattern recognition techniques based on images of the irises of an individual's eyes [1, 48-64]. Iris recognition uses camera technology and subtle infrared illumination, which reduces specular reflection from the convex cornea, to create images of the detail-rich, intricate structures of the iris. These unique structures are converted into digital templates that provide mathematical representations of the iris and yield unambiguous positive identification of an individual. The efficacy of iris recognition is rarely impeded by glasses or contact lenses, and iris technology has the smallest outlier group (those who cannot use or enroll) of all biometric technologies. It is the only biometric authentication technology designed for use in a one-to-many search environment. A key advantage of iris recognition is its stability, or template longevity: barring trauma, a single enrollment can last a lifetime [65]. Among the physiological characteristics, the iris is the best biometric, as it has all the capabilities of a good biometric.

1.3 Location of Iris in Human Eye


The iris is the colored part of the eye which is visible when the eye is open. In an eye image, the blackish round-shaped part is the pupil. The iris is the only internal organ which can be seen externally. It surrounds the pupil and lies inside the sclera, as shown in Figure 1.1.

Figure 1.1: Location of Iris


1.3.1 Color of the eye


The iris gives color to the eye, which depends on the amount of pigment present. If the pigment is dense, the iris is brown; if there is little pigment, the iris is blue; and in some cases there is no pigment at all, so the eye appears light. Different pigments color eyes in various ways to create eye colors such as gray, green, etc. In bright light, the iris muscles constrict the pupil, thereby reducing the amount of light entering the eye. Conversely, the pupil enlarges in dim light in order to allow a greater amount of light to reach the retina. Irises of different colors are shown in Figure 1.2 [66].

Figure 1.2: Different colors of Iris

1.3.2 Working of the Eye


Light passes through the front structures of the eye (i.e. the cornea, lens and so forth). These structures focus the light on the retina, a layer of light receptors at the back of the eye. These receptors translate the image into a neural message which travels to the brain via the optic nerve [67]. Light first passes through a layer of transparent tissue at the front of the eye called the cornea. The cornea bends the light and is the first element in the eye's focusing system. The light then passes through the anterior chamber, a fluid-filled space just behind the cornea. This fluid is called the aqueous humor and it is produced by a gland called the ciliary body. The light then passes through the pupil. The iris is a ring of pigmented muscular tissue that controls the size of the pupil. It regulates how much light enters the eye: the pupil grows larger in dim light and shrinks to a smaller hole in bright light. The light passes through the lens, which helps focus the light from the pupil onto the retina. Light from the lens passes through the vitreous body, a clear jelly-like substance that fills the back part of the eyeball, and is focused onto the retina, a layer of light-sensitive tissue at the back of the eye. The retina contains light-sensitive cells called photoreceptors, which translate the light energy into electrical signals. These electrical signals travel to the brain via the optic nerve. The retina is nourished by the choroid (a highly vascularized membrane that lies just behind the retina). Aside from the transparent cornea at the front of the eye, the eyeball is encased by a tough, white and opaque membrane called the sclera [68].

Figure 1.3: Structure of the eye

1.3.3 Anatomy and Structure of Iris


The iris is a circular and adjustable diaphragm around the pupil. It is located in the chamber behind the cornea. The iris is the extension of a large, smooth muscle which also connects to the lens via a number of suspensory ligaments. These muscles expand and contract to change the shape of the lens and to adjust the focus of images onto the retina [26]. A thin membrane behind the lens provides a light-tight environment inside the eye, preventing stray light from confusing or interfering with visual images on the retina. This is extremely important for clear, high-contrast vision with good resolution or definition. The most frontal chamber of the eye, immediately behind the cornea and in front of the iris, contains a clear watery fluid that facilitates good vision. It helps to maintain eye shape, regulates the intra-ocular pressure, provides support for the internal structures, supplies nutrients to the lens and cornea and disposes of the eye's metabolic waste. The rear chamber of the front cavity lies behind the iris and in front of the lens. It helps provide optical correction for the image on the retina. Some recent optical designs also use coupling fluids for increased efficiency and better correction.

1.4 Research on Iris Recognition


Apparently, the first use of iris recognition as a basis for personal identification goes back to efforts to distinguish inmates in the Parisian Penal System by visually inspecting their irises, especially the patterning of color. In 1936, ophthalmologist Frank Burch proposed the concept of using iris patterns as a method to recognize an individual [69]. By the 1980s, the idea had appeared in James Bond films but it still remained in science fiction and conjecture [70]. In 1985, Leonard Flom and Aran Safir, ophthalmologists, proposed the concept that no two irises are alike and were awarded a patent for the iris identification concept in 1987 [63]. Flom approached John Daugman to develop an algorithm to automate identification of the human iris. In 1993, the Defense Nuclear Agency began work to test and deliver a prototype unit which was successfully completed by 1995 with their combined efforts. In 1994 [64], Daugman was awarded a patent for his automated iris recognition algorithms.

1.5 Iris Recognition System


The iris recognition system consists of an automatic segmentation stage that is based on an edge detector and is able to localize the circular iris and pupil regions as well as occluding eyelids, eyelashes and reflections. The extracted iris region is then normalized into a rectangular block with constant dimensions to account for imaging inconsistencies. Features are extracted with different feature extraction methods to encode the unique pattern of the iris into a biometric template. The Hamming distance is employed for classification of iris templates, and two templates are declared to match if the Hamming distance between them is less than a specific threshold.
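As an illustration of this matching decision, the following is a minimal Python sketch (not the implementation developed in this thesis); the optional noise masks, the function names and the threshold value of 0.32 are assumptions introduced purely for the example.

    import numpy as np

    def hamming_distance(template_a, template_b, mask_a=None, mask_b=None):
        """Normalized Hamming distance between two binary iris templates.

        Bits excluded by either noise mask (eyelids, eyelashes, reflections)
        are ignored. Returns a value in [0, 1]; identical templates give 0.
        """
        a = np.asarray(template_a, dtype=bool).ravel()
        b = np.asarray(template_b, dtype=bool).ravel()
        valid = np.ones_like(a)
        if mask_a is not None:
            valid &= np.asarray(mask_a, dtype=bool).ravel()
        if mask_b is not None:
            valid &= np.asarray(mask_b, dtype=bool).ravel()
        disagreements = np.logical_xor(a, b) & valid
        return disagreements.sum() / max(int(valid.sum()), 1)

    def verify(probe, enrolled, threshold=0.32):
        """Verification (one-to-one): accept if the distance falls below the threshold."""
        return hamming_distance(probe, enrolled) < threshold

    def identify(probe, gallery, threshold=0.32):
        """Identification (one-to-many): return the best-matching enrolled identity, or None."""
        scores = {name: hamming_distance(probe, t) for name, t in gallery.items()}
        best = min(scores, key=scores.get)
        return best if scores[best] < threshold else None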


Chapter 2: Existing Iris Recognition Techniques

2.1 Background
A complete iris recognition system is composed of four parts: image acquisition, iris localization, feature extraction and matching. The image acquisition step captures the iris images. Infrared illumination is used in most iris image acquisition. The iris localization step localizes the iris region in the image. For most algorithms, assuming near-frontal presentation of the pupil, the iris boundaries are modeled as two circles which are not necessarily concentric. The inner circle is the pupillary boundary or iris inner boundary (i.e. between the pupil and the iris). The outer circle is the limbic boundary or iris outer boundary (i.e. between the iris and the sclera). The noise processing is often included in the segmentation stage. Possible sources of noise are eyelid occlusions, eyelash occlusions and specular reflections. Most localization algorithms are gradient based in order to find edges between the pupil & iris and the iris & sclera. The feature extraction stage encodes the iris image features into a bit vector code. In most algorithms, filters are utilized to obtain information about the iris texture. Then the outputs of the filters are encoded into a bit vector code. The corresponding matching stage calculates the distance between iris codes and decides whether it is a match (in the verification context) or recognizes the submitted iris from the subjects in the data set (in the identification context).

2.2 Iris Image Acquisition


Iris recognition has been an active research area for the last few years due to its high accuracy and the encouragement of both government and private entities to replace traditional security systems, which suffer from a noticeable margin of error. Early research, however, was hampered by the lack of iris images. Several free databases now exist on the internet for testing purposes. A well known database is the CASIA Iris Image Database (versions 1.0 and 3.0) provided by the Chinese Academy of Sciences [71]. The CASIA version 1.0 iris image database includes 756 iris images from 108 eyes collected in two sessions over a period of two months. The images, taken under almost perfect imaging conditions, are noise-free with a size of 320*280 pixels. The CASIA Iris Image Database version 3.0 includes 2655 iris images of size 320*280 pixels from 396 eyes. The free version of the University of Bath (BATH) iris image dataset contains 1000 iris images from 50 different eyes. Another iris database, from Multi-Media University (MMU), is also used for the experiments. The MMU iris database [72] contains 450 images from 45 people. The left and right eyes are captured five times each, which makes a total of 90 classes. Each image has a resolution of 320*240 pixels in grayscale.

2.3 Iris Localization


Iris localization is the most important step in iris recognition systems because all subsequent steps depend on its accuracy. In general, this step involves detecting edges using edge detectors, followed by boundary detection algorithms. The following section describes some commonly used edge detectors.

2.3.1 Edge Detectors


An edge operator is a neighborhood operation which determines the extent to which each pixel's neighborhood can be partitioned by a simple arc passing through the pixel. Pixels in the neighborhood on one side of the arc have one predominant value and pixels in the neighborhood on the other side of the arc have a different predominant value [73, 74]. Usually gradient operators, Laplacian operators and zero-crossing operators are used for edge detection. Gradient operators compute some quantity related to the magnitude of the slope of the underlying gray tone intensity surface of the image. Laplacian operators calculate some quantity related to the Laplacian of the underlying gray tone intensity surface. Zero-crossing operators determine whether or not the digital Laplacian or the estimated second directional derivative has a zero-crossing within the pixel [75].

2.3.1.1 Gradient Based

In this edge detection method, the assumption is that edges are the pixels with a high gradient. A fast rate of change of intensity in the direction given by the angle of the gradient vector is observed at edge pixels, and the magnitude of the gradient indicates the strength of the edge. Natural images do not have ideal discontinuities or perfectly uniform regions, so the magnitude of the gradient is calculated to detect the edge pixels. A threshold is fixed with respect to the magnitude; if the gradient magnitude is larger than the threshold, the corresponding pixel is an edge pixel. An edge pixel is described using two important features:

Edge strength: magnitude of the gradient
Edge direction: angle of the gradient
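As a small illustration (not taken from the thesis), given a grayscale image as a NumPy array, the edge strength, edge direction and a simple threshold test can be computed as follows; the threshold value is arbitrary and only for demonstration.

    import numpy as np

    def edge_strength_and_direction(image, threshold=50.0):
        # Estimate the gradient of the gray-tone intensity surface
        # (np.gradient uses central differences in the image interior).
        gy, gx = np.gradient(image.astype(float))
        magnitude = np.hypot(gx, gy)         # edge strength
        direction = np.arctan2(gy, gx)       # edge direction (radians)
        edge_pixels = magnitude > threshold  # pixels classified as edges
        return magnitude, direction, edge_pixels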

Strictly speaking, the gradient is not defined for a discrete function. Instead, the gradient that can be defined for the ideal continuous image is estimated using operators, of which the Roberts, Sobel and Prewitt operators are commonly used.

2.3.1.1.1 Roberts Operator

The Roberts method finds edges using the Roberts approximation to the derivative. It returns edges at those points where the gradient of the image is maximum. The Roberts operator provides a simple approximation to the gradient magnitude using the following equation [76]:

R = |Rx| + |Ry|                                                      2.1

where Rx and Ry are calculated using the following convolution filters:

Rx = [ 1   0          Ry = [ 0   1
       0  -1 ]              -1   0 ]

2.3.1.1.2 Sobel Operator

The Sobel operator is one of the most commonly used edge detectors. In this operator, the gradient is calculated over a 3 x 3 neighborhood of pixels. The Sobel gradient magnitude is computed by the following equation [76]:

Mag = sqrt(Sx^2 + Sy^2)                                              2.2

where Sx and Sy are the first order partial derivatives in the x and y directions respectively. If the 3 x 3 neighborhood of pixel (i,j) is as follows:

a1    a2    a3
a4   [i,j]  a5
a6    a7    a8

then Sx and Sy are computed using equations 2.3 and 2.4:

Sx = (a3 + c*a5 + a8) - (a1 + c*a4 + a6)                             2.3
Sy = (a1 + c*a2 + a3) - (a6 + c*a7 + a8)                             2.4

where the constant c = 2. These are implemented using the convolution masks:

Sx = [ -1   0   1          Sy = [  1   2   1
       -2   0   2                  0   0   0
       -1   0   1 ]               -1  -2  -1 ]

This operator places an emphasis on pixels that are closer to the center of the mask.
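A minimal sketch of applying the Sobel masks above by convolution is given below. It assumes NumPy and SciPy are available and is illustrative only; the function name sobel_magnitude and the boundary mode are choices made for the example, not part of the thesis implementation.

    import numpy as np
    from scipy.ndimage import convolve

    # Sobel masks corresponding to equations 2.3 and 2.4 with c = 2.
    SOBEL_X = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    SOBEL_Y = np.array([[ 1,  2,  1],
                        [ 0,  0,  0],
                        [-1, -2, -1]], dtype=float)

    def sobel_magnitude(image):
        """Gradient magnitude of equation 2.2: Mag = sqrt(Sx^2 + Sy^2)."""
        img = image.astype(float)
        sx = convolve(img, SOBEL_X, mode='nearest')
        sy = convolve(img, SOBEL_Y, mode='nearest')
        return np.hypot(sx, sy)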
2.3.1.1.3 Prewitt Operator

The Prewitt method [76] finds edges using the Prewitt approximation to the derivative. It returns edges at those points where the gradient of the image is maximum. Unlike the Sobel operator, this operator does not place any emphasis on pixels that are closer to the center of the masks. It uses equations 2.3 and 2.4 to compute the partial derivatives along the x and y directions with the constant c = 1. These are implemented using the following convolution masks:

Px = [ -1   0   1          Py = [  1   1   1
       -1   0   1                  0   0   0
       -1   0   1 ]               -1  -1  -1 ]

2.3.1.2 Laplacian Based or Zero Crossing Based

The Laplacian of Gaussian (LoG) method finds edges by looking for zero crossings after filtering the image with a LoG filter [76]. The edge points of an image can be detected by finding the zero crossings of the second derivative of the image intensity. However, the second derivative is very sensitive to noise, and this noise should be filtered out before edge detection. To achieve this, the Laplacian of Gaussian is used [77]. This method combines Gaussian filtering with the Laplacian for edge detection. The following equation is used to obtain the LoG:

LoG(x, y) = -(1 / (π σ^4)) [1 - (x^2 + y^2) / (2 σ^2)] e^(-(x^2 + y^2) / (2 σ^2))        2.5

where σ is the smoothing factor. In LoG edge detection, the following three steps are significant:

Filtering
Enhancement
Detection


A Gaussian filter is used for smoothing, and its second derivative is used for the enhancement step. The detection criterion is the presence of a zero crossing in the second derivative together with a correspondingly large peak in the first derivative. Pixels having a locally maximum gradient are considered edges by an edge detector in which zero crossings of the second derivative are used. To avoid detection of insignificant edges, only those zero crossings whose corresponding first derivative is above some threshold are selected as edge points. The edge direction is obtained from the direction in which the zero crossing occurs. In the LoG approach, there are two methods which are mathematically equivalent [77]: convolve the image with a Gaussian smoothing filter and compute the Laplacian of the result, or convolve the image with the single linear filter that is the LoG filter. In both cases, smoothing (filtering) is performed with a Gaussian filter, enhancement is done by transforming edges into zero crossings, and detection is done by detecting the zero crossings.
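The three LoG steps can be sketched in Python as follows, assuming SciPy's Gaussian-Laplace filter is available (a library choice made for illustration, not prescribed by the thesis). Zero crossings are taken where the filtered response changes sign between horizontally or vertically adjacent pixels, and weak crossings are suppressed with a threshold whose value is arbitrary and depends on the image intensity scale.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_edges(image, sigma=2.0, threshold=5.0):
        """Laplacian-of-Gaussian edge detection by zero-crossing search."""
        # Filtering + enhancement: smooth with a Gaussian and take the Laplacian.
        response = gaussian_laplace(image.astype(float), sigma=sigma)
        edges = np.zeros(response.shape, dtype=bool)
        # Detection: sign flip between a pixel and its right neighbour,
        # kept only if the jump across the crossing is strong enough.
        cross_x = np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
        strong_x = np.abs(response[:, :-1] - response[:, 1:]) > threshold
        edges[:, :-1] |= cross_x & strong_x
        # Same test between a pixel and its lower neighbour.
        cross_y = np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
        strong_y = np.abs(response[:-1, :] - response[1:, :]) > threshold
        edges[:-1, :] |= cross_y & strong_y
        return edges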
2.3.1.3 Canny Operator

This edge detection method is optimal for step edges corrupted by white noise. Canny [78] used three criteria to design his edge detector. The first is reliable detection of edges, with a low probability of missing true edges and a low probability of detecting false edges. The second is that the detected edges should be close to the true location of the edge. The third is that there should be only one response to a single edge [79]. The Canny method finds edges by looking for local maxima of the gradient of the image intensity, where the gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes the weak edges in the output only if they are connected to strong edges. This method is, therefore, less likely than the others to be fooled by noise and more likely to detect true weak edges.

The Canny operator works in a multi-stage process. First of all, the image is smoothed by Gaussian convolution. Then, a simple 2-D first derivative operator (somewhat like the Roberts Cross) is applied to the smoothed image to highlight regions of the image with high first spatial derivatives. Edges give rise to ridges in the gradient magnitude image. The algorithm then tracks along the top of these ridges and sets to zero all pixels that are not actually on the ridge top so as to give a thin line in the output (a process known as non-maximal suppression). The tracking process exhibits hysteresis controlled by two thresholds: T1 and T2 (with T1 > T2). Tracking can only begin at a point on a ridge higher than T1. Tracking then continues in both directions out from that point until the height of the ridge falls below T2. This hysteresis helps to ensure that noisy edges are not broken up into multiple edge fragments.
2.3.1.4 Hough Transform

The Hough transform is a suitable option for finding simple shapes (such as lines, circles and ellipses) in images. The simplest case of the Hough transform is the linear Hough transform. In the image, a straight line can be described as:
y = mx + c                2.6

It is plotted for each pair of values (x, y), where m is the slope of the line and c is the y-intercept. For computational purposes, however, it is better to parameterize the lines in the Hough transform with two other parameters, commonly called r and θ. The parameter r represents the smallest distance between the line and the origin, while θ is the angle of the locus vector from the origin to this closest point. Using this parameterization, the equation of the line can be written as:

r = x\cos\theta + y\sin\theta                2.7

It is, therefore, possible to associate with each line of the image a pair (r, θ), which is unique if θ belongs to [0, π] and r is real, or if θ belongs to [0, 2π] and r is greater than 0. The (r, θ) plane is sometimes referred to as Hough space [80]. It is well known that an infinite number of lines can go through a single point of the plane. If that point has coordinates (x₀, y₀) in the image plane, all the lines that go through it obey the following equation:

r(\theta) = x_0\cos\theta + y_0\sin\theta                2.8

This corresponds to a sinusoidal curve in the (r, ) plane, which is unique to that point. If the curves corresponding to two points are superimposed, the locations (in the Hough space) where they cross, correspond to lines (in the original image space) that pass through both points. More generally, a set of points that form a straight line will produce sinusoids which cross at the parameters for that line. Thus, the problem of detecting colinear points can be converted to the problem of finding concurrent curves. Hough transform algorithm uses an array called accumulator to detect the existence of a line. The dimension of the accumulator is equal to the number of unknown parameters of Hough transform problem. For each pixel and its neighborhood, Hough transform algorithm determines if there is enough evidence of an edge at that pixel. If so, it will calculate the parameters of that line, and then look for the accumulator's bin that the parameters fall into, and increase the value of that bin. By finding the bins with the highest value, the most likely lines can be extracted and their geometric definitions can be read off. The simplest way of finding these peaks is by applying some form of threshold.

2.3.2 Existing Iris Localization Methods


In 1993, Daugman [81] built an iris recognition system. The localization accuracy was 98.6% using an integro-differential operator to locate the boundaries of the iris. Wildes's [55] system used border detection based on the gradient and Hough transforms to locate the iris in the image. Cui [59] used a coarse-to-fine strategy and a modified Hough transform. Shen [57] applied wavelet analysis for localization of the iris.
2.3.2.1 Daugman's Method

Daugman [81] presented the first approach to computational iris recognition, including iris localization. An integro-differential operator is proposed for locating the inner and outer boundaries of an iris. The operator assumes that pupil and limbus are circular contours and performs as a circular edge detector. Detecting the upper and lower eyelids is also performed using the integro-differential operator by adjusting the contour search from a circular to an arcuate path [54]. The integro-differential operator is defined as

\max_{(r,\, x_0,\, y_0)} \left| G_{\sigma}(r) * \frac{\partial}{\partial r} \oint_{r,\, x_0,\, y_0} \frac{I(x, y)}{2\pi r}\, ds \right|                2.9

where I(x, y) is an image containing an eye. The integro-differential operator searches over the image domain (x, y) for the maximum in the blurred partial derivative, with respect to increasing radius r, of the normalized contour integral of I(x, y) along a circular arc ds of radius r and center coordinates (x₀, y₀). The symbol * denotes convolution and G_σ(r) is a smoothing function such as a Gaussian of scale σ, defined as:

G_{\sigma}(r) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(r - r_0)^2}{2\sigma^2}}                2.10

The integro-differential operator behaves as a circular edge detector. It searches for the gradient maxima over the 3D parameter space, so there are no threshold parameters required as in the Canny edge detector [78]. Daugman simply excludes the upper and lower most portions of the image, where eyelid occlusion is expected to occur.
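The following is a minimal sketch (not Daugman's or the thesis's implementation) of how such an integro-differential search can be realized: for each candidate center and radius it averages the intensity along a circular contour, smooths the radial derivative of those averages, and keeps the parameters that maximize it. The search ranges, sample count and smoothing width are assumed values.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def circular_mean(image, x0, y0, r, n_samples=64):
    """Average image intensity along a circle of radius r centered at (x0, y0)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].mean()

def integro_differential(image, centers, radii, sigma=2.0):
    """Return the (x0, y0, r) maximizing the blurred radial derivative of contour means."""
    best, best_score = None, -np.inf
    for (x0, y0) in centers:
        means = np.array([circular_mean(image, x0, y0, r) for r in radii])
        derivative = np.abs(gaussian_filter1d(np.gradient(means), sigma))
        idx = int(np.argmax(derivative))
        if derivative[idx] > best_score:
            best_score, best = derivative[idx], (x0, y0, radii[idx])
    return best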
2.3.2.2 Wildes's Method

Wildes [55] had proposed an iris recognition system in which iris localization is completed by detecting edges in iris images followed by use of a circular Hough transform [82] to localize iris boundaries. In a circular Hough transform, images are analyzed to estimate the three parameters of one circle ( x0 , y0 , r ) using following equations:

H(x_0, y_0, r) = \sum_{i} h(x_i, y_i, x_0, y_0, r)                2.11

where (x_i, y_i) is an edge pixel and i is the index of the edge pixel,

h(x_i, y_i, x_0, y_0, r) = \begin{cases} 1, & \text{if } g(x_i, y_i, x_0, y_0, r) = 0 \\ 0, & \text{otherwise} \end{cases}

where

g(x_i, y_i, x_0, y_0, r) = (x_i - x_0)^2 + (y_i - y_0)^2 - r^2                2.12

The location (x₀, y₀, r) with the maximum value of H(x₀, y₀, r) is chosen as the parameter vector for the strongest circular boundary. Wildes's system models the eyelids as parabolic arcs. The upper and lower eyelids are detected by using a Hough transform based approach similar to that described above; the only difference is that it votes for parabolic arcs instead of circles. One weak point of the edge detection and Hough transform approach is the use of thresholds in edge detection. Different settings of threshold values may result in different edges that in turn affect the Hough transform results significantly [58].
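As a sketch of the circular Hough voting in equations 2.11 and 2.12 (a minimal illustration, assuming an edge map has already been computed and candidate radii are supplied), each edge pixel votes for every circle of a given radius passing through it, and the accumulator maximum gives the strongest circular boundary:

import numpy as np

def circular_hough(edge_map, radii, n_angles=180):
    """Accumulate votes H(x0, y0, r) from edge pixels and return the best circle."""
    h, w = edge_map.shape
    accumulator = np.zeros((h, w, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for k, r in enumerate(radii):
        # every edge pixel votes for all centers at distance r from it
        x0 = np.round(xs[:, None] - r * np.cos(angles)[None, :]).astype(int)
        y0 = np.round(ys[:, None] - r * np.sin(angles)[None, :]).astype(int)
        valid = (x0 >= 0) & (x0 < w) & (y0 >= 0) & (y0 < h)
        np.add.at(accumulator[:, :, k], (y0[valid], x0[valid]), 1)
    y_best, x_best, k_best = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    return x_best, y_best, radii[k_best]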
2.3.2.3 Boles's Method

Boles et al. [52] proposed an iris recognition method. Iris localization starts by locating the pupil of the eye, which is done using an edge detection technique. Since the pupil is a circular shape, the edges defining it are connected to form a closed contour. The centroid of the detected pupil is chosen as the reference point for extracting the features of the iris. The iris outer boundary is also detected using the edge image.
2.3.2.4 Li Ma's Method

Ma et al. [53] estimated the pupil position using pixel intensity value projections and thresholding. The centroid of the specific region is calculated to obtain the center of the pupil. After that, a circular Hough transform is applied to detect the iris outer boundary.
2.3.2.5 Other methods

Some other methods have been proposed for iris localization, but most of them are minor variants of the integro-differential operator or combinations of edge detection and the Hough transform. For example, Cui et al. [59] computed a wavelet transform and then used the Hough transform to locate the iris inner boundary while using the integro-differential operator for the outer boundary. Tain et al. [60] used the Hough transform after preprocessing of the edge image. Masek et al. [56] implemented an edge detection method slightly different from the Canny operator and then used a circular Hough transform for iris boundary extraction. Rad et al. [61] used gradient vector pairs at various directions to coarsely estimate positions of the circle and then used the integro-differential operator to refine the iris boundaries. Kim et al. [51] used mixtures of three Gaussian distributions to coarsely segment eye images into dark, intermediate and bright regions and then used a Hough transform for iris localization. All previous research work on iris localization used only image gradient information, and the rate of correct iris extraction is not high in practice.

2.4 Iris Normalization


In this section, a brief description of different iris recognition systems with respect to iris normalization is provided. Iris normalization is a step in which the iris is unwrapped to a rectangular strip for feature extraction. Iris images of the same eye have different iris sizes due to changes in the distance between camera and eye. Illumination has a direct impact on pupil size and causes non-linear variations of iris patterns. A proper normalization technique is expected to transform the iris image to compensate for these variations.

2.4.1 Existing Methods


Existing techniques for iris normalization are explained in the succeeding sections.

2.4.1.1 Daugman's Method

Daugman's system [49] uses radial scaling to compensate for overall size as well as a simple model of pupil variation based on linear stretching. This scaling serves to map Cartesian image coordinates (x, y) to dimensionless polar coordinates (r, θ) according to the following equations:

x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_i(\theta)                2.13
y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_i(\theta)                2.14

where

x_p(\theta) = x_{p0}(\theta) + r_p \cos(\theta)                2.15
y_p(\theta) = y_{p0}(\theta) + r_p \sin(\theta)                2.16
x_i(\theta) = x_{i0}(\theta) + r_i \cos(\theta)                2.17
y_i(\theta) = y_{i0}(\theta) + r_i \sin(\theta)                2.18

This model is called the rubber sheet model, which assumes that the iris texture changes linearly in the radial direction. It maps the iris texture from the pupil boundary to the iris outer boundary into the interval [0, 1] and is cyclic over [0, 2π]. Here (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)) are the coordinates of the iris inner and outer boundaries in the direction θ, and (x_{p0}(θ), y_{p0}(θ)) and (x_{i0}(θ), y_{i0}(θ)) are the coordinates of the pupil and iris centers respectively. Daugman compensates for rotation in the matching process by circularly shifting the normalized iris in different directions.
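A minimal sketch of this rubber sheet unwrapping follows (illustrative only, assuming circular pupil and iris boundaries with known centers and radii; the strip dimensions are assumed values). It builds the boundary points of equations 2.15-2.18 and interpolates between them as in equations 2.13-2.14:

import numpy as np

def rubber_sheet(image, pupil, iris, rows=64, cols=512):
    """Unwrap the iris annulus into a rows x cols rectangular strip.

    pupil and iris are (x0, y0, radius) tuples for the inner and outer boundaries.
    """
    xp0, yp0, rp = pupil
    xi0, yi0, ri = iris
    theta = np.linspace(0.0, 2.0 * np.pi, cols, endpoint=False)
    r = np.linspace(0.0, 1.0, rows)
    # boundary points at every angle (equations 2.15 - 2.18)
    xp, yp = xp0 + rp * np.cos(theta), yp0 + rp * np.sin(theta)
    xi, yi = xi0 + ri * np.cos(theta), yi0 + ri * np.sin(theta)
    # linear interpolation between the two boundaries (equations 2.13 - 2.14)
    x = (1.0 - r)[:, None] * xp[None, :] + r[:, None] * xi[None, :]
    y = (1.0 - r)[:, None] * yp[None, :] + r[:, None] * yi[None, :]
    xs = np.clip(np.round(x).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(y).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]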

2.4.1.2 Wildes's Method

Wildes [55] proposed a technique in which the image is normalized to compensate for both scaling and rotation in the matching step. This approach geometrically warps a newly acquired image I_a(x, y) into alignment with a selected database image I_d(x, y) according to a mapping function (u(x, y), v(x, y)) such that, for all (x, y), the image intensity value at (x, y) - (u(x, y), v(x, y)) in I_a is close to that at (x, y) in I_d. More precisely, the mapping function (u, v) is taken to minimize the following error function:

errfn = \int_x \int_y \left( I_d(x, y) - I_a(x - u, y - v) \right)^2 dx\, dy                2.19

while being constrained to capture a similarity transformation of image coordinates (x, y) to (x', y'), i.e.

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} - s\, R(\phi) \begin{pmatrix} x \\ y \end{pmatrix}                2.20

where s is a scaling factor and R(φ) is a matrix representing rotation by φ. The parameters s and φ are recovered by an iterative minimization procedure [83].

2.4.1.3 Boles's Method

Boles [52] proposed the normalization of images at the time of matching. When two images are considered, one image is considered as a reference image. The ratio of the maximum diameter of the iris in this image to that of the other image is calculated. This ratio is used to make the virtual circles on which data for feature extraction is picked up. The dimensions of the irises in the images are scaled to have the same constant diameter regardless of the original size in the image.


2.4.1.4 Li Ma's Method

Ma [53] used a combination of the iris normalization methods proposed by Daugman [54] and Boles [52]. In this method, the normalization process is carried out by using the center of the pupil as a reference point.

2.4.1.5 Other Methods

Other methods of iris normalization are almost the same as proposed by Daugman. The normalization method makes the iris invariant to scale, translation and pupil dilation changes. The rectangular image after normalization is not rotation invariant. In general, circular shift in different directions is used for achieving rotation invariance during matching process.

2.5 Feature Extraction


Features are extracted using the normalized iris image. The most discriminating information in an iris pattern must be extracted. Only the significant features of the iris must be encoded so that comparisons between templates can be made.

2.5.1 Gabor Filter


A Gabor filter is constructed by modulating a sine/cosine wave with a Gaussian. This is able to provide the optimum conjoint localization in both space and frequency, since a sine wave is perfectly localized in frequency but not localized in space. Modulation of the sine with a Gaussian provides localization in space, though with loss of localization in frequency. Decomposition of a signal is accomplished using a quadrature pair of Gabor filters. A real part is specified by a cosine modulated by a Gaussian and an imaginary part is specified by a sine modulated by a Gaussian. The real and imaginary filters are also known as the even symmetric and odd symmetric components respectively. The centre frequency of the filter is specified by the frequency of the sine/cosine wave. The bandwidth of the filter is specified by the width of the Gaussian. Daugman [49, 54, 64, 81] makes use of a 2D version of Gabor filters in order to encode iris pattern data. A 2D Gabor filter over an image domain (x, y) is represented as:

G(x, y) = e^{-\pi\left[\frac{(x - x_0)^2}{\alpha^2} + \frac{(y - y_0)^2}{\beta^2}\right]}\, e^{-2\pi i\left[u_0(x - x_0) + v_0(y - y_0)\right]}                2.21

where (x₀, y₀) specify position in the image, (α, β) specify the effective width and length, and (u₀, v₀) specify modulation.

2.5.2 Log Gabor Filter


A disadvantage of the Gabor filter is that the even symmetric filter will have a DC component whenever the bandwidth is larger than one octave [84]. However, zero DC components can be obtained for any bandwidth by using a Gabor filter which is Gaussian on a logarithmic scale. It is known as the Log-Gabor filter. The frequency response of a Log-Gabor filter is given as:

G(f) = e^{-\frac{\left(\log(f / f_0)\right)^2}{2\left(\log(\sigma / f_0)\right)^2}}                2.22

where f₀ represents the centre frequency and σ gives the bandwidth of the filter.

2.5.3 Zero Crossings of 1D Wavelets


Boles et al. [52] made use of 1D wavelets [85] for encoding iris pattern data. The mother wavelet is defined as the second derivative of a smoothing function θ(x).

\psi(x) = \frac{d^2 \theta(x)}{dx^2}                2.23

The zero crossings of dyadic scales of these filters are then used to encode features. The wavelet transform of a signal f(x) at scale s and position x is given by:

W_s f(x) = f * \left( s^2 \frac{d^2 \theta_s}{dx^2} \right)(x) = s^2 \frac{d^2}{dx^2} (f * \theta_s)(x)                2.24

where

\theta_s = (1/s)\, \theta(x / s)                2.25

W_s f(x) is proportional to the second derivative of f(x) smoothed by θ_s(x), and the zero crossings of the transform correspond to points of inflection in (f * θ_s)(x). The motivation for this technique is that zero crossings correspond to significant features within the iris region.

2.5.4 Haar Wavelet


Lim et al. [50] also used the wavelet transform to extract features from the iris region. Both the Gabor transform and the Haar wavelet are considered as the mother wavelet. From multi-dimensional filtering, a feature vector with 87 dimensions is computed. Since each dimension has a real value ranging from -1.0 to +1.0, the feature vector is sign quantized so that any positive value is represented by 1 and any negative value by 0. This results in a compact biometric template consisting of only 87 bits. Lim et al. [50] compared the use of the Gabor transform and the Haar wavelet transform and showed that the recognition rate of the Haar wavelet transform was slightly better than that of the Gabor transform (i.e. by 0.9%).

2.6 Matching Algorithms


Once features are extracted, the template generated in the feature extraction process needs a corresponding matching metric, which gives a measure of similarity between two iris templates. This metric should give one range of values when comparing templates generated from the same eye (known as intra-class comparisons) and another range of values when comparing templates created from different irises (known as inter-class comparisons). These two cases should give distinct and separate values so that a decision can be made with high confidence as to whether two templates are from the same iris or from two different irises.

2.6.1 Normalized Hamming Distance


In iris recognition systems, the most widely used similarity metric is normalized Hamming distance. In information theory, the Hamming distance between two strings of equal length is the number of positions for which the corresponding symbols are
different. In other words, it is the number of digit positions in which the corresponding digits of two binary words of the same length differ. If the feature extraction module converts the features into binary format, then the Hamming distance is used to find a match. A threshold is defined on the normalized Hamming distance, and a distance less than the threshold is taken as a match: the smaller the normalized Hamming distance, the stronger the match. Normalized Hamming distance is defined as follows [76]:
HD = \frac{1}{n} \sum_{i=1}^{n} X_i \;\mathrm{XOR}\; Y_i                2.26

where X and Y are strings with length of n bits.
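A minimal sketch of this metric follows (illustrative only; the decision threshold shown in the usage comment is an assumed example value, not the one used in the thesis):

import numpy as np

def normalized_hamming(x, y):
    """Fraction of positions at which two equal-length binary codes differ."""
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    return np.count_nonzero(x ^ y) / x.size

# usage: a distance below a chosen threshold (e.g. 0.32, illustrative) is declared a match
# match = normalized_hamming(template_a, template_b) < 0.32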

2.6.2 Euclidean Distance


Euclidean distance between two points in p-dimensional space is the geometrically shortest distance along the straight line passing through both points. For two p-dimensional feature vectors x = (x₁, x₂, …, x_p) and y = (y₁, y₂, …, y_p), the Euclidean metric is defined as [86]:

d(x, y) = \left( \sum_{i=1}^{p} (x_i - y_i)^2 \right)^{\frac{1}{2}}                2.27

In matrix notation, this is written as:

d(x, y) = \sqrt{(x - y)^t (x - y)}                2.28
2.6.3 Normalized Correlation


Normalized correlation is also used as a classification metric. Correlation addresses the relationship between two different factors (variables). The statistic is called a correlation coefficient. A correlation coefficient can be calculated when there are two (or more) sets of scores for the same individuals or matched groups. A correlation coefficient describes the direction (positive or negative) and degree (strength) of the relationship between two variables. A higher correlation coefficient means a stronger relationship between the quantities. The coefficient is also used to obtain a p-value indicating whether the degree of relationship is greater than expected by chance.

Normalized correlation is advantageous over standard correlation since it is able to account for local variations in image intensity that corrupt the standard correlation calculation used by Wildes [55]. This is represented as:

\mu_1 = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} p_1(i, j)                2.29

\sigma_1 = \left( \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( p_1(i, j) - \mu_1 \right)^2 \right)^{\frac{1}{2}}                2.30

The normalized correlation between p₁ and p₂ is then defined as:

NormCorr(p_1, p_2) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} (p_1(i, j) - \mu_1)(p_2(i, j) - \mu_2)}{nm\, \sigma_1 \sigma_2}                2.31

where p₁ and p₂ are two images of size n by m pixels, μ₁ and σ₁ are the mean and standard deviation of p₁, and μ₂ and σ₂ are the mean and standard deviation of p₂.
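A minimal sketch of equation 2.31 follows (illustrative only; assumes the two images are already the same size):

import numpy as np

def normalized_correlation(p1, p2):
    """Normalized correlation (equation 2.31) between two equal-sized images."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    mu1, sigma1 = p1.mean(), p1.std()
    mu2, sigma2 = p2.mean(), p2.std()
    return np.sum((p1 - mu1) * (p2 - mu2)) / (p1.size * sigma1 * sigma2)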

Chapter 3: Proposed Methodologies

Iris localization is the most important step in iris recognition systems. All the subsequent steps (feature extraction, encoding and matching) depend on its accuracy [48]. If the iris is not correctly localized, then the performance of the system is degraded. In the iris localization step, the iris region in the image is separated by means of different algorithms. In these algorithms, assuming frontal presentation of the pupil, the iris boundaries are modeled as two circles, which are not necessarily concentric. The inner circle is the pupil boundary or iris inner boundary (i.e. between the pupil and the iris). The outer circle is the limbic boundary or iris outer boundary (i.e. between the iris and the sclera). Noise processing is often included in the segmentation stage. Possible sources of noise are eyelid occlusions, eyelash occlusions and specular reflections. Most localization algorithms are gradient based, involving finding the edges between pupil and iris and between iris and sclera. After localization, the next step is normalization of the iris. The iris controls the amount of light entering the eye. Its response to different light conditions is non-linear because of the distribution of iris muscles [67].

3.1 Proposed Iris Localization Method


In iris localization, pupil boundary is detected by using the following methods. The schematic diagram of iris localization system is shown in Figure 3.1. First step in the iris localization method is detection of pupil which is followed by localization of the pupil in which parameters of pupil are determined and non-circular boundary is calculated. After that, iris outer boundary is localized in which iris parameters are found. Then, eyelids are detected to completely localize the iris [48].

3.1.1 Pupil Boundary Detection

Detection of pupil boundary is the first step towards iris localization. Pupil parameters (center and radius) are calculated by assuming pupil as a circular region. Algorithms 1 and 2 are proposed to find the pupil parameters.
a. Algorithm 1
1. Read the image of iris.
2. Apply decimation algorithm.
3. Find a point in the pupil.
4. Initialize previous centroid with the point in pupil.
5. Repeat until single pixel accuracy is achieved:
   Select the region.
   Obtain centroid.
   Compare the previous centroid with the current one.
6. Calculate radius of the pupil.

[Figure 3.1: Schematic diagram of iris recognition system — the flow chart covers pupil detection, pupil localization, iris localization and eyelid detection; normalization via a reference point (pupil center, iris center, minimum distance, mid-point of iris and pupil centers, or dynamic size); feature extraction using PCA, bit planes, statistical features or wavelet features; and matching of test against training templates using Hamming or Euclidean distance against the saved feature database.]

Explanation of Algorithm 1 is given after the decimation algorithm.

Decimation Algorithm

Following equation is used to decimate the image. Before applying this equation, a parameter L is assigned as integer value, which is the size of the squared mask W.

D(i, j) = \frac{1}{L^2} \sum_{x=1}^{L} \sum_{y=1}^{L} I(x + i, y + j)\, W(x, y), \qquad i = 1, \ldots, M,\; j = 1, \ldots, N                3.1

where I(x, y) is the original image, D(i, j) is the decimated image of size M×N and W is the L×L mask of ones:

W = \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}                3.2

Explanation

After applying the decimation algorithm, the following formulas are used to find a point inside the pupil:

P_x = \arg\min_{col} D(x, y)                3.3
P_y = \arg\min_{row} D(x, y)                3.4

where D(i, j) is the decimated image. Once a point in the pupil is found, the next step is to make a binary image. To make the processing fast, a squared region is selected assuming the point (P_x, P_y) as the point of intersection of the two diagonals of the square. A threshold is selected adaptively based on the maximum value in the histogram of the region. The centroid of the region is obtained using the following equations:

C_x = \frac{M_x}{A}                3.5
C_y = \frac{M_y}{A}                3.6

where

M_y = \int_w x\, dA                3.7
M_x = \int_w y\, dA                3.8

and

A = \int_w dx\, dy                3.9

where A is the area of window w. The centroid of the binary image provides the center of the pupil. This procedure (i.e. selecting a squared region, obtaining the histogram, making a binary image, calculating the centroid) is repeated till single pixel accuracy is achieved. This point is the exact center of the pupil. Once the exact center is determined, the radius of the pupil is calculated by finding the average of the maximum number of consecutive non-zero pixels in four different directions from the center of the pupil:

Radius = mean\{\text{no. of consecutive non-zero pixels}\}                3.10
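A minimal sketch of the iterative-centroid idea of Algorithm 1 follows (illustrative only, not the thesis's MATLAB PupilFind module; the decimation step is omitted for brevity, and the window size, iteration limit and gray-level offset are assumed values):

import numpy as np

def find_pupil(image, win=60, n_iter=10, offset=15):
    """Rough pupil localization: iterate window binarization and centroid updates."""
    img = image.astype(float)
    cy, cx = np.unravel_index(np.argmin(img), img.shape)      # a point inside the pupil
    for _ in range(n_iter):
        y0, x0 = max(cy - win, 0), max(cx - win, 0)
        window = img[y0:cy + win, x0:cx + win]
        binary = window < window.min() + offset               # dark pixels ~ pupil
        ys, xs = np.nonzero(binary)
        new_cy, new_cx = int(round(ys.mean())) + y0, int(round(xs.mean())) + x0
        if (new_cy, new_cx) == (cy, cx):                       # single pixel accuracy
            break
        cy, cx = new_cy, new_cx
    pupil_mask = img < img[cy, cx] + offset
    runs = []                                                  # consecutive pupil pixels in 4 directions
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        n, y, x = 0, cy, cx
        while 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and pupil_mask[y, x]:
            n, y, x = n + 1, y + dy, x + dx
        runs.append(n)
    return (cx, cy), float(np.mean(runs))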
b. Algorithm 2


Algorithm 2 is for CASIA iris database version 3.0, in which each image has eight small white circles in the pupil [48]. These small circles together form a circular shape. The following algorithm is used to obtain the pupil parameters.
1. Read the image of iris.
2. Apply decimation algorithm.
3. Find a point in the pupil.
4. Apply edge detector.
5. Remove the small edges.
6. Repeat until exact pupil center is reached:
   Evolve lines in different directions.
   Find the edge intersected by the maximum number of lines.
   Adjust the location of the pupil center.
7. Calculate radius of the pupil.

In Algorithm 2, the steps are the same till a point in the pupil is obtained. The Canny edge detector is applied to the original image, as the pupil contains small circles. To remove the effect of the edges of
those circles, edges with small length are deleted so that an image with the pupil edge, containing no edge inside the pupil, is obtained. As the location of a point in the pupil is known, lines in different directions are evolved from this point. Points of intersection between the edges and these lines are calculated. Since these lines emerge from the point inside the pupil, an edge in circular form has the maximum number of intersecting lines. This edge is the pupil boundary; other edges are deleted. The average of the two intersection points on each line shifts the point in the pupil towards the center of the pupil. This process of evolving lines and averaging the two intersection points on each line to obtain a new center is repeated till single pixel accuracy is achieved. This is the exact center of the pupil.

3.1.2 Non-Circular Pupil Boundary Detection

Pupil boundary is not circular due to the non-linear behavior of iris muscles with respect to different illumination conditions, even if the images are acquired orthogonally to the eye. After finding the pupil center and radius, the following procedure is adopted to get the non-circular pupil boundary. Points with the calculated radius of the pupil on the circle are used to form the non-circular boundary. These points are the same number of degrees apart from each other, where the center of the pupil is assumed as origin. The following procedure is applied to each point. To find the exact boundary of the pupil, points on the pupil are forced to change their position towards the exact boundary points. This change is carried out by inspecting the maximum gradient on a line of length 25 pixels given by equation 3.13, whose mid-point is the point (x₁, y₁) on the circle. Let (x_c, y_c) be the center of the circle and r be its radius; then the equation of the circle is:

x^2 + y^2 - 2(x_c x + y_c y) = r^2 - x_c^2 - y_c^2                3.11

Therefore, the slope of the tangent to the circle at any point (x, y) is:

m = -\frac{x - x_c}{y - y_c}                3.12

The equation of the line passing through a point (x₁, y₁) and perpendicular to the tangent is:

y = \frac{y_1 - y_c}{x_1 - x_c}\, x + \frac{x_1 y_c - y_1 x_c}{x_1 - x_c}                3.13

The distance from the point to the position of the maximum gradient value is termed d (say). If the maximum gradient value is outside the circle then d is added to the point, otherwise it is subtracted from the point. After addition or subtraction, the distance from the neighbouring points is measured. If this distance is noticeably different, then the change is reverted and the new point is the mid-point of the neighbouring points. This new point is on the exact boundary of the pupil. The change of point, from circle to maximum gradient, is applied after dividing the pupil circular boundary into a specific number of points, PtPupilBoundary, given in equation 3.14.

PtPupilBoundary = round(\pi r)                3.14

All the points are adjusted to their new positions and then joined linearly. This joined curve is the non-circular boundary of the pupil. In Figure 3.2, the process of finding exact boundary of pupil is displayed.

[Figure 3.2: Finding the non-circular boundary of the pupil — lines perpendicular to the tangent at the estimated circular boundary, the actual pupil boundary and the iris boundary.]

3.1.3 Iris Boundary Detection

For iris localization, iris outer boundary detection is the most difficult step because the contrast between iris and sclera is low as compared to the contrast between iris and pupil. This contrast is so low that sometimes it is hardly possible to detect the boundary by human eye observation. Algorithm 3 is used to find the iris boundary.
Algorithm 3
1. Gaussian filter is applied to the image.
2. From the center of the pupil, two virtual circles depending upon the radius of the pupil are drawn; the boundary between iris and sclera lies within these circles.
3. An array of pixels is picked from the lines radially outwards within the virtual circles.
4. Each array is convolved with a 1D Gaussian filter.
5. On each of these convolved lines, three points with the highest gradient are chosen to draw the circle of the iris.
6. Redundant points are discarded using Mahalanobis distance.
7. Call the draw Circle module.

Explanation

To reduce the effect of sharp features in determining the iris/sclera boundary, a Gaussian filter of size 27×27 with standard deviation sigma of value three is applied to the image, and the filtered image is used for further estimation of the boundary. Different sizes of the filter were experimented with: a smaller size does not provide the image with sufficient blurring and a larger size filter blends the iris boundary too much. After that, a band of two circles is calculated within which the iris boundary falls. This band is used to reduce the computation time. The radii of the outer and inner circles of the band are based upon the radius of the pupil and the distance of the first crest along the horizontal line passing through the center of the pupil in the filtered image respectively. Let us assume the pupil center as origin of the coordinate axes in the image. In the lower left and lower right quadrants, a sequence of different one dimensional signals (radially outwards) is used to pick the boundary pixel coordinates which have significant gradient. The Mahalanobis distance [86] from these points to the center of the pupil is determined using the following formula.

Dist = \sqrt{(x - c)^t\, \Sigma^{-1} (x - c)}                3.15

where Σ is the covariance matrix and is defined as follows:

\Sigma = \int (x - c)(x - c)^t\, p(x)\, dx                3.16

where x are boundary points, c is the coordinates of the center of the pupil and p(x) is the probability of point x. The maximum number of points with almost the same Mahalanobis distance within a band of eight pixels is used as an adaptive threshold to select the points on the iris. This threshold relies on the fact that the iris and pupil centers are near to each other and the selected points lie on a circle. Therefore, the remaining (noisy) points are deleted. Parameters A, B and C of the iris circle are calculated from the selected points using the following equation:

x^2 + y^2 + Ax + By + C = 0                3.17

The center of the iris is (-A/2, -B/2) and the radius of the iris is:

r = \frac{1}{2}\sqrt{A^2 + B^2 - 4C}                3.18

This method is effectively applied to both datasets.
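A minimal sketch of fitting equation 3.17 to the selected boundary points by linear least squares follows (illustrative only; the input array layout is an assumption):

import numpy as np

def fit_circle(points):
    """Least-squares fit of x^2 + y^2 + A x + B y + C = 0 to boundary points.

    points is an (N, 2) array of (x, y) coordinates; returns the center and radius
    as in equations 3.17 and 3.18.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    design = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (A, B, C), *_ = np.linalg.lstsq(design, rhs, rcond=None)
    center = (-A / 2.0, -B / 2.0)
    radius = 0.5 * np.sqrt(A ** 2 + B ** 2 - 4.0 * C)
    return center, radius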

3.1.4 Eyelids Localization

After localizing the iris with non-circular pupil boundary and circular iris boundary, now eyelids are to be detected and removed for further processing. So the region of interest is inside the iris boundary. Eyelids outside the iris boundary have no effect on the system. Both upper and lower eyelids are checked for their presence inside the iris.
a. Upper Eyelid Detection

Upper eyelashes are normally heavy and affect the eyelid boundary detection process. Detection of upper eyelids is carried out by using following algorithm.
Algorithm 4
1. Iris is cropped vertically from the image.
2. Upper half image is taken for further processing.
3. A virtual parabola above the half pupil is drawn.
4. Data from the virtual parabola to the upper end of the image is taken.
5. Moving average filter is applied.
6. Points with the sharpest rate of change in intensity values are selected. Redundant points are deleted using three conditions.
7. If the points are greater than fifteen, then a least square fit parabola is applied on the remaining points; otherwise the eyelid does not cover the iris.

Explanation

Iris from the image is pruned from left and right boundaries. Upper image portion is not deleted whereas lower portion from center of pupil is discarded and the remaining part (i.e. upper semicircle of iris) of the image is used for upper eyelid processing. A virtual parabola near pupil upper boundary with following equation is drawn.

y^2 = 4ax                3.19

where a is some positive number representing the distance of the directrix from the vertex of the parabola and (x, y) is a point on the parabola. A parabola is the set of points that are equidistant from a fixed point and a fixed line; this fixed line is called the directrix [87]. The virtual parabola passes through three non-collinear points. Two points are near the left and right iris boundary and the third is three pixels above the pupil boundary on the vertical line through the pupil center. This virtual parabola makes the processing fast by letting a smaller number of points into further processing. One dimensional signals starting from the first row, going vertically downwards till the virtual parabola, are picked from the original image and are smoothed by applying a moving average filter of five taps. This smoothing reduces the effect of single eyelashes in the image. A maximum of three points on each signal are selected based on the rate of change in the intensity value. If the selected points are not in the iris region and are fewer than a significant number, then it is assumed that the iris is not occluded by the upper eyelid. Among these points, exact eyelid points are selected using the following criteria.

(a) P(x, y) < 120
The intensity value of image P(x, y) at the point (x, y) must be less than 120, as the eyelid is a darker part of the image and so has values in the range 0 to 119. If the value is 120 or higher, then that point will not be considered as an eyelid point.

(b) P(x, y) ≅ { P(x-1, y-1), P(x-1, y), P(x-1, y+1) }
Among the left three neighboring points (i.e. upper left, immediate left or lower left), at least one point should have almost the same intensity value as the point under consideration, because eyelids are horizontal convex-up or concave-up curves.

(c) P(x, y) ≅ { P(x+1, y-1), P(x+1, y), P(x+1, y+1) }
Among the right three neighboring points (upper right, immediate right or lower right), one point should have almost the same intensity value as the point under consideration.

If a point satisfies all criteria, then it is a candidate point for the parabola. In this way, points which are not on the eyelid boundary are deleted and the effect of eyelashes in finding the upper eyelid is minimized. Afterwards, a parabola is fitted recursively through the remaining points using the least square curve fit method, which determines the upper eyelid exactly.
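A minimal sketch of a least-squares parabola fit to such candidate eyelid points follows (illustrative only; numpy.polyfit is used here as a stand-in for the thesis's recursive least-square fit, and the quadratic-in-x form is an assumption):

import numpy as np

def fit_eyelid_parabola(points):
    """Least-squares parabola y = a*x^2 + b*x + c through candidate eyelid points.

    points is an (N, 2) array of (x, y) pixel coordinates; returns (a, b, c).
    """
    pts = np.asarray(points, dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)

# usage: coeffs = fit_eyelid_parabola(candidates)
#        eyelid_y = np.polyval(coeffs, np.arange(image_width))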
b. Lower Eyelid Detection

To detect the lower eyelid, the same algorithm is used but with minor differences, described below. The vertically cropped lower half iris image, from the center of the pupil, is used for lower eyelid detection. The third point for the virtual parabola is three pixels below the pupil boundary. The parabolic equation for the lower eyelid is:

y^2 = 4ax                3.20

where a is some positive number representing the distance of the directrix from the vertex of the parabola and (x, y) is a point on the parabola. One dimensional signals are picked in the opposite direction (i.e. from the last row up to the virtual parabola). The remaining algorithm is the same, and these changes are specified in the parameters of the function.
c. Eyelashes Removal

After localizing the iris and detection of eyelids, as a last step eyelashes are removed from the image. This step is done after iris normalization as shown in Figure 3.1. In the first part of eyelash removal, the histogram of the localized iris is taken. As eyelashes are
of low intensity values, the initial part of the histogram reflects the presence of eyelashes. If the number of pixels in the initial part of the histogram is within the specified threshold value, then eyelash removal is carried out; otherwise it is considered that the localized iris is free from eyelashes. Once the presence of eyelashes in the localized iris image is verified, the image is passed through a high pass filter whose cut-off frequency is defined by the maximum intensity value inside the initial part of the histogram. The resultant image is a completely localized iris image free from all noise (i.e. eyelids, pupil, sclera, etc.).

3.2 Proposed Normalization Methods


Once the iris is fully segmented, it is normalized to make it persistent and unvarying against the effect of camera-to-eye distance and variation in the size of the pupil within the iris. The iris is normalized using some reference point in the pupil. In general, the majority of methods [1, 49, 53, 62, 64, 81] use the pupil center as the reference point. The reference point acts as the center of a sweeping ray, like the center point in radars. The iris is sampled under the sweeping ray based upon the width of the iris at a particular ray position. For example, if the iris has a width of 128 pixels and the normalized image has a width of 64 pixels, then every second pixel is picked as iris data. If the iris has a width of 32 pixels and the normalized image has a width of 64 pixels, then every pixel is picked twice to keep the size of the normalized image constant. In this way the iris is normalized.

3.2.1 Normalization via Pupil Center


Before normalization, image pixels above the upper and below the lower eyelids are turned black, because these parabolic curves are ignored during the process of unwrapping the iris. Figure 3.3 shows a model of the iris with two non-concentric circles with different radii. The inner circle represents the boundary between pupil and iris, whereas the outer circle represents the boundary between iris and sclera. The triangle on the right side represents the same triangle (XIP) as in the circles, but zoomed. CP is a horizontal line segment. In this processing, a normalized image of size R × S pixels is obtained, where R and S are the numbers of rows and columns respectively.

In the previous processing, parameters of the pupil and iris are calculated. Coordinates of points P and I are known from the preprocessing step, since they represent the centers of the pupil and iris respectively. X is the point on the boundary of the iris (between iris and sclera) and is rotated throughout the outer circle in the counter-clockwise direction. The concerned part of the line (which is normalized to unity every time for unwrapping the iris) is between points A and X. For finding the length of line segment AX, the following mathematics is used, where θ is the angle between the ray PX and the line joining the pupil and iris centers:

\theta = \angle XPI                3.21
PA = PB = r_1                3.22
AX = d\cos\theta + \sqrt{r_2^{2} - d^{2}\sin^{2}\theta} - r_1                3.23

[Figure 3.3: Normalization using pupil center as reference point — pupil circle of radius r1 centered at P, iris circle of radius r2 centered at I at distance d from P, with boundary point X and intersection points A and B; the triangle XIP is shown zoomed on the right.]

On each line, R equidistant samples are picked and then an unwrapped normalized iris image of size R × S pixels is input to the next module for feature extraction. Curves at the left and right ends of the normalized iris represent the presence of the lower eyelid, whereas a centered parabola maps to the upper eyelid. A normalized image without any parabola implies that the iris corresponding to this image is not occluded by any eyelid. This mapping technique is applied when the pupil boundary is assumed to be a circle. When the pupil boundary is non-circular, the following changes are made in the above method of iris normalization. The pupil boundary is assigned the maximum grayscale value in the method of non-circular pupil boundary detection. The reference point is known, and the coordinates of point X are determined by the angle at which the iris will be normalized. The line joining the points P and X has a point with the maximum grayscale value. This value is
searched. Distance between its coordinates (maximum grayscale value) and reference point is normalized to unity. Subsequently, samples of the iris are picked up.

3.2.2 Normalization via Iris Center


Let I and P be the iris and pupil centers respectively. X is a point on the boundary of the iris at a certain angle θ, as depicted in Figure 3.4. A is the point of intersection between the line IX and the pupil circle. Now the line segment AX is normalized to unity and

samples are picked up from the iris.

[Figure 3.4: Normalization using iris center as reference point — pupil circle of radius r1 at P, iris circle of radius r2 at I, d = |IP|, boundary point X and intersection points A and B.]

To find the length of line segment AX, the following formulas are used:

AB = r_1 \sin\theta                3.24
IB = r_1 \cos\theta - d                3.25
AX = r_2 - \sqrt{r_1^{2} + d^{2} - 2 r_1 d \cos\theta}                3.26

where d is the distance between the pupil and iris centers.

Algorithm 5


1. Read the iris image.
2. Find circular parameters of pupil.
3. Find parameters of iris.
4. Take the reference point as iris center and find S number of points on the iris boundary.
5. Repeat for each point on the iris boundary:
   Find the point of intersection A between the pupil circle and the line joining the points X and I.
   Normalize the distance between points A and X.
   Pick up R number of equidistant sample points.

This algorithm results in a normalized iris image of size R × S pixels.

3.2.3 Normalization via Minimum Distance


A normalization method based on the minimum distance of the points on the pupil boundary from the ones on the iris boundary is proposed. In this method, S equidistant points are chosen on the pupil boundary and the corresponding points on the iris boundary are selected. These points are calculated using an angle difference of 2π/S radians (i.e. the point at zero degrees on the pupil boundary and the point at zero degrees on the iris boundary correspond to each other), where S is the number of columns of the normalized image. Similarly, points at 90 degrees on the pupil boundary and on the iris boundary are related to each other. The normalized iris is obtained based on the minimum distance between the corresponding points at the same angle. This minimum distance is divided into R equidistant points and iris samples are picked up from these points. Figure 3.5 shows the points A and X corresponding to angle θ from the horizontal line on the pupil and iris boundaries respectively. d is the distance between the iris and pupil centers, and α is the angle between the line joining the pupil and iris centers and the horizontal line. r1 and r2 are the radii of the pupil and iris respectively.
[Figure 3.5: Minimum distance between the points at the same angle — pupil circle of radius r1 at P, iris circle of radius r2 at I, corresponding points A and X at angle θ, with d = |PI|.]

In order to get the length of line segment AX, the following mathematical formula is applied.

AX = \sqrt{(d\cos\alpha + r_2\cos\theta - r_1\cos\theta)^2 + (d\sin\alpha + r_2\sin\theta - r_1\sin\theta)^2}                3.27

In the case when the iris and pupil centers are at the same position (i.e. I = P), the length of the line segment is obtained using equation 3.28:

AX = r_2 - r_1                3.28

The proposed normalization method is given in Algorithm 6.

Algorithm 6
1. Read the iris image.
2. Find circular parameters of pupil.
3. Find parameters of iris.
4. Find S number of points on the iris and pupil boundaries with 2π/S angle difference.
5. Repeat for each point on the iris boundary:
   Find the corresponding point A on the pupil boundary.
   Normalize the distance between points A and X.
   Pick up R number of equidistant sample points.

3.2.4 Normalization via Mid-point between Iris and Pupil Centers


Another method of normalization is proposed. In this method, the reference point is taken as the mid-point of the line joining the two centers (i.e. the pupil center and the iris center). Figure 3.6 shows the pupil center P, the iris center I and their mid-point M. The point X is determined by the angle θ, which changes from zero to 2π with a difference of 2π/S, where S is the length of the normalized image. A is the point of intersection between the pupil boundary circle and the line joining the points X and M. The distance from A to X is subdivided into R equal distances to pick up the data, where R is the width of the normalized image. Experiments with this reference point have also been conducted to find out which normalization method performs well.

[Figure 3.6: Mid-point M of the centers of iris (I) and pupil (P) as reference point.]

Algorithm 7 is used for normalization of iris via mid-point M as a reference point.


Algorithm 7
1. Read the iris image.
2. Find circular parameters of pupil.
3. Find parameters of iris.
4. Take the reference point as the mid-point of the pupil and iris centers.
5. Find S number of points on the iris boundary by finding the intersection of the circle and the line at S different angles.
6. Repeat for each point on the iris boundary:
   Find the point of intersection A between the pupil circle and the line joining the reference point and the point X on the iris boundary.
   Normalize the distance between points A and X.
   Pick up R number of equidistant sample points.

This algorithm results in a normalized iris image of size R × S pixels.

3.2.5 Normalization using Dynamic Size Method


In addition to above mentioned methods, another method of iris normalization has been implemented. In this method, size of the normalized image is dynamic. It is based on the radii of the pupil and iris. Samples of iris are picked up in circular form, from each point on the pupil boundary with an increment of one pixel in the radius till the first point on iris boundary. In this case, size of the normalized image is like a trapezium as shown in Figure 3.7. For elaboration purpose, each line in the dynamically normalized image is

representing single pixel boundary. The trapezium has two parallel edges, short parallel side is the data sampled from pupil boundary and gradual increase represents the data samples towards iris boundary. Algorithm 8 is proposed to achieve this type of normalization.
Algorithm 8
1. Read the iris image.
2. Find parameters of pupil.
3. Find parameters of iris.
4. Initialize radius r with the pupil radius.
5. Repeat till a point on the iris boundary is picked:
   Pick each point on the circle of radius r; the total number of points is approximately 2πr.
   Increment radius r by one pixel.

[Figure 3.7: Concentric circles at pupil center P and the dynamic iris normalized image.]

3.3 Proposed Feature Extraction Methods


Any classification method uses a set of features or parameters to characterize each object, where these features should be relevant to the task at hand. For supervised classification, a human expert determines categorization of object classes and also provides a set of sample objects with known classes. The set of known objects is called the training set because it is used by the classification programs to learn how to classify objects [88]. There are two phases to construct a classifier. In the training phase, the training set is
used to decide how the parameters ought to be weighted and combined in order to separate the various classes of objects. In the application phase, the weights determined in the training set are applied to a set of objects that do not have known classes in order to determine what their classes are likely to be. For unsupervised classification, only spectral features are extracted without use of ground truth data. Clustering is an unsupervised classification in which a group of the spectral values will regroup into a few clusters with spectral similarity. In the present case, features are extracted to make a template of the image. Efforts are made to use minimum number of features with maximum accuracy of the system.

3.3.1 EigenIris Method or Principal Component Analysis


Principal Component Analysis (PCA) or the Hotelling transform is a method of dimensionality reduction that combines the features of normalized iris images, identifying patterns in data and expressing the data so as to highlight their similarities and differences. Since it is hard to find patterns in high dimensional data (where the luxury of graphical representation is not available), PCA is a powerful tool for analyzing data. Once patterns have been extracted from the data, the data can be compressed (i.e. by reducing the number of dimensions) without much loss of information. In terms of information theory, the idea of using PCA is to extract the relevant information in a normalized iris image, encode it as efficiently as possible and compare the test iris encoding with a database of similarly encoded models. A simple approach to extract the information contained in an image is to somehow capture the variations in a collection of images, independent of any judgment of features, and use this information to encode and compare individual iris images [89]. In mathematical terms, the purpose of using PCA is to find the principal components of the distribution of iris textures, or the eigenvectors of the covariance matrix of the set of iris images, treating each image as a point (vector) in a very high dimensional space. The eigenvectors are ordered, each one accounting for a different amount of variation among the normalized iris images. These eigenvectors can be thought of as a set of features that together characterize the variation between iris images. Each image location contributes more or less to each eigenvector. Each individual iris can be represented exactly in terms of a linear combination of the eigenirises and can also be
approximated using only the best eigenirises, those that have the largest eigenvalues and which therefore account for the most variance within the set of normalized iris images. Algorithm 9 has been implemented for this purpose, as shown below:
Algorithm 9
1. Input all training images.
2. Image preprocessing:
   Call pupil segmentation module.
   Call iris localization module.
   Call eyelid detection module.
   Call iris normalization module.
3. Calculate eigenvalues and eigenvectors:
   Calculate mean of training images.
   Carry out image centering.
   Find the covariance matrix of centered images.
   Obtain eigenvalues and eigenvectors of the covariance matrix.
4. Sort the eigenvalues and corresponding eigenvectors in ascending order.
5. Carry out dimension reduction through selection of the highest eigenvalues and eigenvectors.
6. Project the image into the PCA subspace.
7. Carry out image recognition:
   Load test image.
   Repeat steps 2 to 6.
   Obtain Euclidean distance of the test projection from the training image projections.
   Find the closest match.
8. Display image with closest match.
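A minimal sketch of the eigeniris projection and matching described in Algorithm 9 follows (illustrative only, not the thesis's MATLAB implementation; it assumes the normalized iris images are already available as flattened vectors, and the number of retained components is an assumed parameter):

import numpy as np

def train_pca(train_vectors, n_components=50):
    """Compute the mean and top principal directions of flattened normalized irises.

    train_vectors: (n_samples, n_pixels) array, one flattened normalized iris per row.
    """
    mean = train_vectors.mean(axis=0)
    centered = train_vectors - mean
    # eigenvectors of the covariance matrix obtained via SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]                 # principal directions (eigenirises)
    projections = centered @ components.T          # training images in the PCA subspace
    return mean, components, projections

def recognize(test_vector, mean, components, projections):
    """Return the index of the closest training image by Euclidean distance."""
    test_projection = (test_vector - mean) @ components.T
    distances = np.linalg.norm(projections - test_projection, axis=1)
    return int(np.argmin(distances))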

3.3.2 Bit Planes


A binary image is a digital image that has only two possible values for each pixel. Binary images are also called bi-level or two-level images. The names black-and-white (B&W),
monochrome or monochromatic are often used for this concept, but may also designate any images that have only one sample per pixel, such as grayscale images. Binary images often arise in digital image processing as masks or as the result of certain operations such as segmentation, thresholding and dithering. Some input/output devices such as laser printers, fax machines and bi-level computer displays can only handle bi-level images. A bit plane of a digital medium (such as an image or sound) is the set of bits having the same position in the respective binary numbers [90]. For example, for a 16-bit data representation there are 16 bit planes: the first bit plane contains the set of the most significant bits and the 16th contains the least significant bits. It is possible to see that the first bit plane gives the roughest but the most critical approximation of the values of the medium. The higher the number of the bit plane, the smaller its contribution to the final image. So, adding bit planes gives a progressively better approximation [91]. Thus, bit planes of the normalized iris image are used as features of the iris.
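A minimal sketch of extracting bit planes from an 8-bit normalized iris image follows (illustrative only; which planes are kept as features is an assumed choice shown in the usage comment, not the thesis's selection):

import numpy as np

def bit_planes(normalized_iris):
    """Split an 8-bit grayscale image into its 8 binary bit planes.

    Returns an array of shape (8, H, W); index 0 is the least significant plane
    and index 7 the most significant.
    """
    img = normalized_iris.astype(np.uint8)
    return np.stack([(img >> k) & 1 for k in range(8)]).astype(np.uint8)

# usage: higher-order planes can be flattened into a binary feature vector, e.g.
# features = bit_planes(strip)[5:].reshape(-1)   # planes 5-7 (illustrative choice)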

3.3.3 Wavelets
Feature extraction is one of the most important parts of recognition systems. Different experiments have been conducted using Haar, Daubechies, Symlet, Biorthogonal and Mexican hat wavelets to extract features. Approximation coefficients as well as detail coefficients at different levels of the wavelets have been used as features. The CWT function is used to implement the Mexican hat wavelet to get features. Wavelets are applied on normalized iris images and the coefficients are then combined to make a one dimensional feature vector.
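A minimal sketch of such a wavelet feature vector follows (illustrative only; it uses the PyWavelets library in place of the MATLAB Wavelet Toolbox used in the thesis, and the wavelet name, decomposition level and sign-quantization step are assumed choices):

import numpy as np
import pywt  # PyWavelets, used here as a stand-in for the MATLAB Wavelet Toolbox

def wavelet_features(normalized_iris, wavelet="coif1", level=3):
    """Concatenate 2D wavelet coefficients of a normalized iris into a 1D vector."""
    coeffs = pywt.wavedec2(normalized_iris.astype(float), wavelet=wavelet, level=level)
    parts = [coeffs[0].ravel()]                      # approximation coefficients
    for (ch, cv, cd) in coeffs[1:]:                  # detail coefficients per level
        parts.extend([ch.ravel(), cv.ravel(), cd.ravel()])
    return np.concatenate(parts)

# a binary template can then be formed by sign quantization, e.g.
# template = (wavelet_features(strip) > 0).astype(np.uint8)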

a. Haar Wavelet
Any discussion of wavelets begins with Haar which is discontinuous and resembles a step function as shown in Figure 3.8. This wavelet has been used for extracting the features of normalized iris and comparing the results with other wavelets.

Figure 3.8: Haar Wavelet

b. Daubechies
Daubechies, called compactly supported orthonormal wavelets, make discrete wavelet analysis practicable. The names of the Daubechies family wavelets are written dbN, where N is the order, and db the surname of the wavelet. The db1 wavelet is same as Haar. Next nine members of the Daubechies family are shown in Figure 3.9.

Figure 3.9: Daubechies Wavelets

c. Coiflets
Coiflets were built by Daubechies at the request of Coifman [92]. The wavelet function has 2N moments equal to 0 and the scaling function has 2N-1 moments equal to 0. The
two functions have a support of length 6N-1. Coiflet wavelets of different lengths are shown in Figure 3.10.

Figure 3.10: Coiflet Wavelets

d. Symlets
The Symlets are nearly symmetrical wavelets proposed by Daubechies as modifications to the db family. The properties of the two wavelet families are similar. Shapes of Symlets are shown in Figure 3.11.

Figure 3.11: Symlets Wavelets

3.4 Matching
In order to match the feature vector, the following two commonly used metrics are used in the proposed iris recognition system.

3.4.1 Euclidean Distance


For obtaining the distance between feature vectors extracted by using PCA, the similarity measure used is Euclidean distance. Euclidean distance between two points in p-dimensional space is the geometrically shortest distance along the straight line passing through both points. Euclidean distance is defined in equation 2.27 and its matrix notation is given in equation 2.28.

3.4.2 Normalized Hamming Distance


Hamming distance is defined as the number of bits by which two n-bit vectors differ. For example, the Hamming distance between 001101 and 001110 is 2. To find the normalized Hamming distance, the Hamming distance is divided by the total number of bits. In the above example, the total number of bits is 6; therefore, the normalized Hamming distance is 2/6 ≈ 0.33, which means the two bit strings differ in a fraction of 0.33 of their positions. Normalized Hamming distance is used so frequently in the iris recognition area that it is commonly known simply as Hamming distance. In the feature extraction module, the features are converted into binary format so that it can be used efficiently to find a match. A threshold for matching two feature vectors is defined; a Hamming distance less than the threshold value is assumed to be a match. The smaller the Hamming distance, the stronger the match.

Chapter 4: Design & Implementation Details

Different modules have been implemented to ensure error-free and correct operation of the proposed system. MATLAB 7.04 is used as the tool for development of the algorithms. The system comprises the following four parts:

Iris localization
Normalization
Feature extraction
Iris matching

4.1 Iris Localization


Iris localization is the main part of the research work in which pupil boundary detection is the first step.

4.1.1 Circular Pupil Boundary Detection


A number of pupil detection modules have been developed based upon the properties of images in different databases. Parameters of the pupil are calculated using the following modules.

a. Detection of Pupil Boundary Module


A module named PupilFind detects the pupil circular boundary. This module uses a number of functions (mean2, min, ind2sub, max and sum) to determine the parameters of the pupil. A function known as Centroid has also been developed to obtain the centroid of a given region. The input of the module is an image containing an iris of defined size and the output contains the pupil radius and pupil center. In the initialization step, the size of the image is obtained as m rows and n columns. The bod variable is the border size, initialized with an integer which describes the number of pixels to exclude from the border in the computation. The wd variable is used for finding the size of the decimation mask; the size of the mask is (2wd+1)×(2wd+1). The winsize variable is initialized with an integer which is used for the size of the window for finding the centroid; the size of the window is (winsize+1)×(winsize+1).

Flow chart of this module is shown in Figure 4.1.

Figure 4.1: Flow chart for detection of pupil boundary module (input image and variable initialization, conversion to grayscale for colored images, decimation mask generation, convolving the mask with the image, scanning the image for the minimum value, iterative centroid calculation of a binarized square window until the centroid equals the pupil center, and scanning for black pixels to obtain the pupil radius)

The procedure adopted to find a point in the pupil using the mentioned parameters is as follows. The decimation mask is applied to the image, excluding the border. The position of the minimum intensity value in the masked image is determined, and the border width is added to get the exact position of the point. This minimum value always lies inside the pupil. The point is then used to find the centroid of the image using the function Centroid, which gives a new center of the pupil and a binarized image. Centroid is called iteratively until single-pixel accuracy is achieved in finding the center of the pupil. To calculate the radius of the pupil, the binarized image in the variable binarizewindow is used, in which the pupil is white and the remaining part is black. After completion of the iteration process, the pupil is in the center of binarizewindow. From the center, pixels are counted until a black pixel is found in the left, right, upward and downward directions, and the mean of these counts is taken as the radius of the pupil.
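A minimal sketch of this radius estimation step is given below; the variable names (binarizewindow, cx, cy) are illustrative and boundary checks are omitted for brevity:

% binarizewindow: binary window with the pupil as white (1) and background as black (0)
% (cy, cx): row and column of the pupil center inside the window
r = zeros(1, 4);
dirs = [0 -1; 0 1; -1 0; 1 0];           % left, right, up, down steps as (row, column)
for d = 1:4
    y = cy; x = cx;
    while binarizewindow(y + dirs(d,1), x + dirs(d,2)) == 1
        y = y + dirs(d,1);  x = x + dirs(d,2);
        r(d) = r(d) + 1;
    end
end
pupilRadius = mean(r);                   % mean of the four counts is taken as the radius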

b. Centroid Module
The centroid of a region is the center of mass of that region, i.e. the point at which the center of mass would be located if the region were constructed from material of constant density.

Image, winsize and PointInPupil are its input parameters, whereas the output parameters are newcenter and binarizewindow. A binary image is used to find the centroid so that the pupil has constant density. Equations 3.5 to 3.9 have been employed to find the centroid. Before calculating the center of mass of the window of size (winsize+1)×(winsize+1) centered at PointInPupil, the density of the area is made constant by converting the grayscale image into a binary image. Usually this window lies inside the image, but if the center of the window is at a corner then the maximum possible image area is taken for finding the centroid. The histogram of the window is obtained; the highest peak of this histogram is taken, and this grayscale value plus fifteen is used as the threshold for binarization. A margin of fifteen gray levels is added because pixels around the pupil edge have such grayscale values. This binary image is used for calculating the centroid. The actual center coordinates in the main image are found by adding Cx and Cy, computed using equations 3.5 and 3.6, to the x and y coordinates of the pupil respectively and subtracting winsize. These coordinates are returned as the output newcenter along with the binary image binarizewindow. This process is repeated until single-pixel accuracy is achieved. newcenter and the radius are combined in the variable pcr (i.e. pupil center & radius), which is the output of the module PupilFind. These parameters are fine tuned using the function FineTuneExactPupil. Fine tuning means that the parameters of the pupil are adjusted to the exact position by inspecting changes in the center and radius of the pupil pixel by pixel.
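A brief sketch of the thresholding and centroid computation is given below; it assumes the standard binary-image centroid (the exact form of equations 3.5 to 3.9 is given in Chapter 3) and the variable window is illustrative:

% window: grayscale sub-image of size (winsize+1) x (winsize+1) around PointInPupil
counts = imhist(uint8(window));          % 256-bin histogram of the window
[peakcount, peak] = max(counts);         % highest peak corresponds to the pupil gray level
bw = window <= (peak - 1) + 15;          % threshold = peak gray value plus fifteen
[rows, cols] = find(bw);                 % coordinates of the (white) pupil pixels
Cy = mean(rows);                         % centroid row
Cx = mean(cols);                         % centroid column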

c. Fine Tuning Module


The original image, pupil radius and pupil center are used as input parameters of this module (i.e. FineTuneExactPupil). The output of the module is the pupil parameters, but with higher accuracy. In this function, the existing parameters of the pupil are fine tuned in two ways. First, the current radius is varied from -5 pixels to +10 pixels (i.e. the pupil radius is varied from smaller than the current radius to up to 10 pixels larger). Second, these variations in radius are evaluated at every 10th degree. This module uses other modules to find the exact parameters of the pupil.

d. Confirm Pupil Module


Another module, known as ConfirmPupil, has been incorporated to study the change. It takes the parameters of the pupil and the original image as input parameters and returns a score which is an estimate of how good the input parameters are for the given image. This score is based on an inner and an outer band around the current radius of the pupil. The width of each band is three pixels and the number of sample points on each band is the same. The sum of the intensity values from the inner band is subtracted from the sum of intensity values of the original image on the outer band to get the score. The larger the score, the more accurate the pupil parameters. During fine tuning, the position of the pupil center is therefore checked 36 × 16 times (36 angular steps of 10 degrees for 16 radius values). The center and radius of the pupil are shifted to the new position where the score of the ConfirmPupil module is maximum. Steps for pupil localization are shown by different snapshots of the image in Figure 4.2.
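A simplified sketch of how such a band-difference score could be computed is shown below; the number of sample points and all variable names are illustrative:

% img: grayscale eye image converted to double; (cx, cy): candidate pupil center; r: candidate radius
theta = linspace(0, 2*pi, 60);           % sample points on each band (illustrative count)
score = 0;
for k = 1:3                              % each band is three pixels wide
    ro = r + k;  ri = r - k;             % outer and inner band radii
    xo = round(cx + ro*cos(theta));  yo = round(cy + ro*sin(theta));
    xi = round(cx + ri*cos(theta));  yi = round(cy + ri*sin(theta));
    score = score + sum(img(sub2ind(size(img), yo, xo))) ...
                  - sum(img(sub2ind(size(img), yi, xi)));
end
% a larger score means the ring just outside the candidate circle is much brighter
% than the ring just inside it, i.e. the circle sits close to the true pupil boundary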

Pupil Detection Method 2

e. Scanning for Pupil Radius Module


Another method of pupil detection has also been developed. Pupil parameters (i.e. pupil radius and pupil center) are obtained using the function named ScanForPupilRadius. The input parameter is the eye image, and the image size is obtained using the MATLAB function size. A variable slice is initialized with 10. The first and last 60 columns are not used for processing because the pupil lies inside the iris. The data of every 10th row is passed to a function called FindMaxNoOfZeros, which outputs the maximum number of consecutive zeros. These zeros are counted for every 10th row and the maximum of them is taken along with the row number that corresponds to this maximum. The function FindMaxNoOfZeros is then called on the 9 rows above and 9 rows below the previously obtained row. The row number corresponding to the maximum of these calculations is the x-coordinate of the center of the pupil. The same procedure is carried out column-wise to get the y-coordinate of the pupil. The maximum number of zero pixels gives the diameters along the coordinate axes, and the radius of the pupil is calculated by adding the number of zero pixels on the two diameters and dividing by four.
Figure 4.2: Steps for Pupil Localization, CASIA version 1.0 (original image, applying moving average filter, finding a point inside the pupil, calculating the centroid, retrieving the radius, fine tuning the parameters; example result: pupil radius r = 38 pixels, pupil center at x = 136, y = 183)


f. Finding Maximum Zeros Module


The input parameter of this module (i.e. FindMaxNoOfZeros) is an array containing one row or column of data from the image. The output of the module is a positive integer giving the number of consecutive zeros. The one-dimensional array is convolved with the filter [1 -1] to find the derivative of the data. Since the pupil is a smooth area, the first derivative converts that area into zeros. One additional condition is applied: values less than 0.3 are considered as zero.
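A minimal sketch of this module is given below, assuming the 0.3 cut-off is applied to the magnitude of the derivative:

% data: one row or column of the image as a numeric vector
d = conv(double(data), [1 -1], 'same');  % first derivative
z = abs(d) < 0.3;                        % derivative values below 0.3 are treated as zero
best = 0;  run = 0;
for i = 1:numel(z)                       % longest run of consecutive zeros
    if z(i), run = run + 1; best = max(best, run); else run = 0; end
end
maxZeros = best;                         % approximates the pupil extent along this line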

g. Draw Circle Module


Two modules for drawing circle have been utilized in order to identify correct parameters of pupil and iris boundary. Input parameters for both the modules are image on which to draw circle, center, radius of the circle and a positive integer newvalue. In the first module, points are obtained using equations 4.1 & 4.2 and pixel values corresponding to these points are changed to newvalue which show a circle in the image.

x = xc + r cos(θ)      (4.1)
y = yc + r sin(θ)      (4.2)

Where xc and yc are the coordinates of the center. The x and y coordinates are obtained using the radius r of the circle at specific angles. The angle θ varies from 0 to 2π at an interval of 2π/N, where N is the total number of points on the circle. A larger value of N draws a better circle, whereas a smaller value does not draw the circle properly; for example, if the value of N is four then only four points are drawn to serve as a circle. The second module uses the property of symmetry: a circle is symmetric about the coordinate axes and the lines y = x and y = -x. The coordinates are calculated for Segment 1 shown in Figure 4.3 and the coordinates in the remaining segments are derived from them. Coordinates of points in Segment 1 are symmetric to:

Segment 2 about the line y = x
Segment 6 about the line y = -x
Segment 8 about the x-axis
Segment 4 about the y-axis


Then using Segment 2, one can find Segment 3, Segment 5 and Segment 7 with symmetry about y-axis, x-axis and line y = -x respectively.
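A short sketch of this symmetry-based approach is given below; it computes the Segment 1 points only and mirrors them into the other seven segments (variable names are illustrative and boundary checks are omitted):

% (xc, yc): center, r: radius, img: image, newvalue: intensity used to mark the circle
for t = linspace(0, pi/4, ceil(r))       % Segment 1: angles from 0 to 45 degrees
    x = round(r*cos(t));  y = round(r*sin(t));
    pts = [ x  y;   y  x;  -y  x;  -x  y;     % segments 1, 2, 3, 4
           -x -y;  -y -x;   y -x;   x -y];    % segments 5, 6, 7, 8
    for k = 1:8
        img(yc + pts(k,2), xc + pts(k,1)) = newvalue;
    end
end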

Figure 4.3: Symmetry lines used for finding points on the circle

h. Finding Pupil in CASIA version 3.0 Iris Database


For finding the pupil in the CASIA version 3.0 iris database, the following method has been proposed. The images in this database contain eight small white circles, arranged in a round pattern, near the center of the pupil. To find the pupil in such images, the following modules have been designed.

(1) Find Pupil Module


Input to this module (i.e. PupilFindV3) is the image containing the iris and its output is the parameters of the pupil. A point in the pupil is obtained using the module described in (a). The edge image is taken and edges of small length are deleted to obtain an image in which there is no edge inside the pupil edge. As the location of a point in the pupil is known, horizontal and vertical lines are used to find the first crossing points in the left, right, upward and downward directions. The average of the first left and first right intersection points on the horizontal line from the point in the pupil is the new x-coordinate, and the average of the first upper and lower intersections is the new y-coordinate of the pupil. The number of lines originating from the new center coordinates is increased gradually from four to sixteen (i.e. the number of sectors is increased from four to sixteen) and the new center coordinates are the average obtained by bisecting these chords. Figure 4.4 shows the steps used to determine the boundary of the pupil in CASIA Iris Database version 3.0.

Figure 4.4: Steps involved in Pupil Localization, CASIA Version 3.0 (original image, finding a point inside the pupil, applying the Canny edge detector, removing edges of length greater than 90 pixels, calculating and drawing the pupil parameters; example result: pupil radius r = 59 pixels, pupil center at x = 152, y = 165)

(2) Finding Pupil in MMU Database


By observing an eye image, it can be seen that the pupil is a dark region compared to the iris, and the iris is darker than the sclera. A white spot is present inside the pupil in the images of the MMU database because of the image acquisition device. In order to detect the pupil boundary in this database, the following steps are performed:

1. The first significant peak of the histogram of the image is selected; it represents the majority of pupil-area pixels. This peak always lies in the low intensity values; in the case of the MMU dataset, it lies between index values 15 and 30 of the histogram.

2. In order to capture the full pupil region, the intensity threshold is shifted k local minima forward (a sketch of this step is given after this list). The value of k is determined through experiments; on the MMU iris database, the value of k is 7. A local minimum is defined as: (freq(x-1) >= freq(x)) && (freq(x) < freq(x+1)). That is, gray value x is called a local minimum if the frequency of the previous gray value is greater than or equal to its frequency and the frequency of the next gray value is strictly greater than its frequency.

3. The image is binarized with this threshold value. The resulting binary image has gaps inside the pupil due to the reflection of the light source during image acquisition; such gaps (closed regions surrounded by pupil area) are filled with white. To find the center of the pupil, the row and the column with the maximum number of connected black pixels are found and their crossover point is taken as the initially estimated center of the pupil.

4. The initially estimated center coordinates of the pupil are adjusted using the property of intersecting chords: chords passing through the center of a circle bisect each other. The radius is then obtained as

Radius = (Length of chord1 + Length of chord2) / 4

These steps are applied to each image in the database, and the result of the steps discussed above is depicted in Figure 4.5.
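A hedged sketch of the threshold-selection step (first histogram peak shifted forward by k = 7 local minima) is given below; the restriction of the peak search to the first 100 histogram bins is an assumption used only to keep the search in the low-intensity range:

% img: grayscale MMU eye image (uint8)
freq = imhist(img);                      % 256-bin histogram
[peakcount, p] = max(freq(1:100));       % first significant peak (low intensities)
k = 7;  x = max(p, 2);  found = 0;
while found < k && x < 255               % move k local minima forward from the peak
    x = x + 1;
    if freq(x-1) >= freq(x) && freq(x) < freq(x+1)
        found = found + 1;
    end
end
bw = img <= (x - 1);                     % binarize with the adapted threshold (gray level x-1)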


Figure 4.5: Steps involved in Pupil Localization for the MMU Database: (a) original image from the MMU iris database; (b) histogram of the image, of which the initial part is shown, used to find the threshold value; (c) and (d) binarized image obtained using the threshold value, with the reflection inside the pupil area shown in the left image and filled in the right image; (e) pupil center and radius calculated from the intersecting chords and their lengths; (f) original iris image with the pupil center and pupil boundary shown as a white circle.


4.1.2 Non-Circular Pupil Boundary Detection


The modules written to obtain the non-circular boundary of the pupil are IrregularPupil, InternalPoints and WindowRangeGp. Details of these modules are given below.

a. Irregular Pupil Points Module


The input parameters are the original image variable (imo), the pupil parameters variable (pcenter), the number of points variable (N) and the half-width variable (wid). wid pixels are checked inwards from the circular pupil boundary for the maximum gradient and the same number of pixels are checked outwards. The output is the image with the non-circular boundary and a two-dimensional array containing the coordinates of the points on the pupil boundary; the first column holds the x-coordinate of each point and the second column the corresponding y-coordinate. N defines the total number of points on the pupil boundary that may be moved inwards or outwards. The points are obtained using equations 4.1 and 4.2. Data of size 2*wid+1, perpendicular to the tangent at each point, is picked up and smoothed. The module WindowRangeGp is called to obtain the distance to the point of maximum gradient from the dark to the bright portion, as the pupil is darker than the iris. This distance is added radially to the point under consideration to get its new position. The selected N points are repositioned in this way and are then linearly interpolated using the module InternalPoints to get the non-circular boundary of the pupil.

b. Internal Points Module


Points are interpolated in this module. Some of the obtained points lie at incorrect positions because some eyelashes are near the pupil boundary. To avoid this problem, a criterion for ignoring a point is proposed based upon the distances between the points, since the distances between the points before repositioning were equal. If the distance from the point under consideration to the second point is less than 80% of the distance from the point to the first point, then the first point is considered as noise and is ignored. Similarly, percentages of 80%, 65%, 50% and 30% are used to ignore the subsequent points with reference to the point under consideration. For example, if the distance from the point to the third point is less than 65% of the distance between the point and the first point, then both the first and second points are ignored and the third point is joined with the point under discussion. This criterion keeps the pupil shape close to circular and avoids an unnecessary zigzag pattern in the detected pupil boundary.

c. Window Range Module


The input parameter of this module, named WindowRangeGp, is a one-dimensional array of numbers, whereas the output is the set of positions whose values exceed a specific statistical threshold. After taking the derivative of the input values, the positions of values greater than the mean of the values plus their standard deviation are stored in a variable out. This variable holds the positions of maximum gradient in the input array, which correspond to the edge boundary. Figure 4.6 (a) and (b) show the result of non-circular pupil boundary detection for CASIA iris database version 1.0 and version 3.0 respectively.
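A minimal sketch of this statistical-threshold rule is shown below (variable names are illustrative):

% data: 1-D array of intensity values sampled across the expected boundary
d = diff(double(data));                  % first derivative
thr = mean(d) + std(d);                  % mean of the values plus their standard deviation
out = find(d > thr);                     % positions of maximum gradient (candidate edge points)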

4.1.3 Iris Boundary Detection


Iris boundary is obtained by finding its parameters (i.e. iris center and radius). A module called IrisRadiusCenter has been implemented to localize the iris. It is a robust method which performs well on all datasets.

a. Detection of Iris Parameters Module


A module named IrisRadiusCenter has been developed to determine the parameters of the iris outer boundary. The input parameters for this module are the original image imo and the pupil parameters (center and radius) pcr; the output of the module is the iris center and radius. When obtaining the iris parameters, the sharp texture of the iris can divert the search towards a wrong result. For this reason, a Gaussian filter is applied to the image. The size of the filter is an important factor: it should be neither so small that the iris patterns remain sharp nor so large that the iris merges with the sclera. Through repeated experiments, the size of the Gaussian filter was selected as 25×25 with a sigma value equal to three, and the image is convolved with this filter. A band of circles such that the outer iris boundary lies within it is determined for fast computation. Two radii are required to form this donut-shaped band. The radius of the outer circle (rout) is based on the pupil radius; its value is π times the radius of the pupil. For the radius of the inner circle (rin), an estimated distance from the center of the pupil to the first valley along the horizontal line is taken. The positions of the valleys are calculated by taking the data values from the horizontal line passing through the center of the pupil, and information about maxima and minima (peaks and valleys) on this line is gathered using a function FindMaxMin1D. Starting from the center column, the location of the first valley, whichever comes first on the left or the right side, is taken as the radius of the inner circle rin. If this inner radius is less than 1.4 times the radius of the pupil, it is assigned a value equal to 1.4 times the pupil radius. A number of lines, noofline, is selected on which tentative points on the iris boundary are to be determined. If the difference between the two radii is less than fifteen pixels, the outer radius is set fifteen pixels away from the inner radius in order to ensure that the iris boundary lies within the band. Assuming that the center of the pupil is at the origin, the angles of these lines form two sets: the first set has noofline equally spaced lines in polar coordinates from the line θ = π/6 to the line θ = π/12, whereas the second set ranges from the line θ = π to the line θ = 6π/5 with the same number of equally spaced lines. These lines are virtually drawn between the circles. On each line the data is picked up and convolved with a 1-D moving average filter, and a maximum of three points is obtained when this filtered data is input to the function WindowRangeGp. These points are candidate points for the iris boundary. The Mahalanobis distance [86] is applied to these points using equation 3.15. The largest group of points with the same Mahalanobis distance within a band of eight pixels is used as an adaptive threshold to select the points on the iris; this relies on the fact that the iris and pupil centers are close to each other and that the selected points lie approximately on a circle. The remaining (noisy) points are therefore deleted. The parameters of the circle are obtained by solving the resulting simultaneous equations in MATLAB when the values of the points are substituted into the general equation 3.11.

Figure 4.6: Non-circular pupil boundary for (a) CASIA version 1.0 and (b) CASIA version 3.0 (original images with the drawn circular pupil, obtaining new points for the non-circular pupil, and joining the obtained points)

b. Finding Maximum & Minimum Module


In this module (known as FindMaxMin1D), the input is an array of numbers and the output is an array of the values -1, 0 and 1 of the same size as the input array. The value -1 marks the position of a valley and +1 marks the position of a peak; in other words, -1 and 1 represent the positions of local minima and maxima respectively. Figure 4.7, Figure 4.8 and Figure 4.9 show the steps involved for CASIA Iris Database version 1.0, CASIA Iris Database version 3.0 and MMU Iris Database version 1.0 respectively.
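A minimal sketch of such a peak/valley marker is given below:

% data: 1-D array of numbers; out marks local minima with -1 and local maxima with +1
data = double(data);
out = zeros(size(data));
for i = 2:numel(data)-1
    if data(i) < data(i-1) && data(i) < data(i+1), out(i) = -1; end   % valley
    if data(i) > data(i-1) && data(i) > data(i+1), out(i) = +1; end   % peak
end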

Figure 4.7: Steps for Iris Localization, CASIA version 1.0 (original image, low pass filtering and convolution, graph of the horizontal line passing through the pupil center showing the first trough, calculating the band of circles, finding points for the iris boundary, deleting extra points, adding the pupil boundary and displaying the iris boundary; example result: iris radius r = 96 pixels, iris center at x = 135, y = 179)

Figure 4.8: Steps for Iris Localization, CASIA version 3.0 (same processing steps; example result: iris radius r = 106 pixels, iris center at x = 150, y = 164)

Figure 4.9: Steps for Iris Localization, MMU Iris database (same processing steps; example result: iris radius r = 50 pixels, iris center at x = 123, y = 182)
c. Iris Localization Using Histogram Processing

Another method of iris localization has been designed and implemented based upon histogram processing. This method is used to find the outer iris boundary for the MMU iris dataset. After finding the pupil center and radius, two sub-images are converted to polar coordinates to obtain three points on the iris boundary. These sub-images are parts of the original image outside the pupil region. They are binarized using an adaptive threshold obtained from histogram processing: the maximum of the first hundred histogram entries is found and the threshold is set seven valleys (local minima) ahead of this maximum value. The binarization process gives a clear boundary between the sclera and the iris in the polar images. Three points are picked from these images and mapped back to the original image, as shown in Figure 4.10 (e). These three points are non-collinear, and it is a well-known fact that a unique circle passes through three non-collinear points. Once the three points are obtained, the parameters of the circle are computed from the general equation of a circle and the circle is drawn to depict the iris boundary.

Figure 4.10: Steps for Iris Localization, MMU iris database: (a) each right and left sector comprising 30 degrees is converted to polar coordinates assuming the origin at the pupil; (b) polar transformed images; (c) histograms of the respective images (number of pixels versus grayscale value), from which the threshold is obtained for both images; (d) binarized images; (e) three points picked using the binary images, one from the middle of the left image and two from the right image at the top and bottom; (f) circle drawn passing through the three points, localizing the iris.
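As a sketch of the final step illustrated in Figure 4.10 (f), the circle through three non-collinear points can be obtained from the general circle equation x^2 + y^2 + Dx + Ey + F = 0 (equation 3.11 presumably takes this or an equivalent form):

% (x1,y1), (x2,y2), (x3,y3): three non-collinear points on the iris boundary
A = [x1 y1 1; x2 y2 1; x3 y3 1];
b = -[x1^2 + y1^2; x2^2 + y2^2; x3^2 + y3^2];
c = A \ b;                               % c = [D; E; F]
xc = -c(1)/2;  yc = -c(2)/2;             % circle center
r  = sqrt(xc^2 + yc^2 - c(3));           % circle radius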

4.1.4 Eyelids Localization


To detect the eyelids in the iris region, the following modules have been developed. Upper and lower eyelids are determined by the function named Eyelids; however, the inputs are different for the upper and lower eyelids.

a. Eyelids Module
Inputs of the module are the original image imo and the parameters of the iris icrpcr (i.e. the centers and radii of the iris and pupil). The output can be chosen as: (a) an image with the eyelid marked as white pixels, or (b) an image filled black above the eyelid. First of all, the half-iris region is cropped. For the upper eyelid, this cropping is done using the MATLAB colon operator with the following command: ilidportion = imo(1:icrpcr(1)-1, icrpcr(2)-icrpcr(3):icrpcr(2)+icrpcr(3));


Here the first colon operator crops the image from row one to icrpcr(1), the x-coordinate of the iris center (i.e. it takes the upper half of the iris image), and the second colon operator crops the iris part only. For the lower eyelid, the following command is applied: ilidportion = imo(icrpcr(1):end, icrpcr(2)-icrpcr(3):icrpcr(2)+icrpcr(3)); where the first colon operator crops the lower half of the image and the second colon operator is the same as for the upper eyelid. As the iris has very sharp changes near the pupil, a virtual parabola is drawn through three points in the image to nullify this effect. These points are shown in Figure 4.11 (c). To draw the parabola passing through these points, the MATLAB function polyfit is used; its main purpose is to fit a polynomial through the input points. As a parabola is a polynomial of degree two, polyfit is called with the three points and the number two as parameters, and its output is the set of coefficients of a degree-two curve. Points on the curve are obtained by varying one variable. Once this virtual parabolic curve is drawn, points for the upper eyelid are picked on each column between the first row and the virtual parabolic curve, and points for the lower eyelid are picked from the last row upwards to the virtual parabola. The intensity values in an array, along with a string variable indicating upper or lower, are input to a function named MaxDiffEyelids. At most three points are output from the function based on the maximum difference with respect to minimum distance. Among the selected points some redundant points occur, which are deleted. If the remaining points are fewer than fifteen, the eyelid is taken as not covering the iris; otherwise a parabola is fitted statistically to the points with least square error.
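A brief sketch of the least-squares parabola fit with polyfit is given below; the point arrays xs and ys are illustrative:

% xs, ys: column and row coordinates of the candidate eyelid points (at least three)
p = polyfit(xs, ys, 2);                  % least-squares parabola (degree-two polynomial)
cols = 1:size(ilidportion, 2);           % evaluate the parabola on every column
rows = round(polyval(p, cols));          % eyelid row position for each column
% rows can then be used to mask the region above (upper eyelid) or below (lower eyelid)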

b. Eyelids Extreme Values


A module named MaxDiffEyelids has been implemented to obtain extreme points for eyelid localization. Its input is an array of numbers taken from the image and its output is the location of the points where the gradient is maximum while going from brighter to darker. A close view of the image shows that the upper portion of the upper eyelid in the image is mostly brighter. The positions of the maxima and minima are obtained by the function FindMaxMin1D, and the differences of the positions and of the corresponding values in the array are multiplied to obtain results weighted according to the change in intensity values. The positions of maximum weight in the array are the output of the module.


Figure 4.11: Steps for Upper Eyelid localization, CASIA version 1.0 iris database: (a) part of the original image; (b) after applying the moving average filter; (c) result of the horizontal Sobel filter; (d) after deleting the pupil edge from (c); (e) image in which points near the iris boundary are deleted; (f) least-squares fitted parabola; (g) image with the upper eyelid.


4.2 Normalization Methods


Normalization is necessary to compensate for the dilation and constriction of the human pupil under different illumination conditions, which changes the size of the iris. The camera-to-eye distance also changes the size of the iris. Before this step, the iris is localized, its parameters are stored in a variable icrpcr, and the image is turned black above the upper eyelid and below the lower eyelid. The following normalization modules have been developed for un-wrapping the iris into a rectangular array.

4.2.1 Normalization From Pupil Module

Inputs to this module are the variables icrpcr (iris and pupil parameters), imo (original image), widthrect (width of the rectangular strip) and noofpoints (length of the rectangular strip, which is the same as the number of points picked up from each circle on the iris). The output of the module is the normalized image in the variable nor. In the processing, the output variable is first initialized with zeros. A new variable Theta is defined and assigned noofpoints equally spaced angles on a circle. A point on the pupil boundary is calculated. Then a line passing through that point, with slope equal to the tangent of the angle Theta, is obtained. Afterwards, the point of intersection between the iris outer circle and the line is worked out. The distance between this point of intersection and the point on the pupil boundary is normalized to one, and widthrect equidistant points are used to pick up the grayscale intensity values of the iris. This data is assigned to the subsequent columns (starting from the first column onwards) of the output normalized iris image. After completion of this process for each angle, the normalized iris image is obtained.
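A simplified sketch of this un-wrapping is shown below. It is only an approximation of the described procedure: instead of intersecting a line with the iris outer circle, the iris-boundary point is taken at the same angle measured from the iris center, and nearest-neighbour sampling is used. The variable names follow the module description; boundary checks are omitted:

% pcx, pcy, pr: pupil center and radius; icx, icy, ir: iris center and radius (from icrpcr)
nor = zeros(widthrect, noofpoints);
Theta = linspace(0, 2*pi, noofpoints);
for j = 1:noofpoints
    xp = pcx + pr*cos(Theta(j));   yp = pcy + pr*sin(Theta(j));   % point on pupil boundary
    xi = icx + ir*cos(Theta(j));   yi = icy + ir*sin(Theta(j));   % point on iris boundary
    xs = round(linspace(xp, xi, widthrect));                      % widthrect equidistant samples
    ys = round(linspace(yp, yi, widthrect));
    nor(:, j) = double(imo(sub2ind(size(imo), ys, xs))).';        % one column of the strip
end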

4.2.2 Normalization From Iris Module

In this module, the normalized iris image is obtained using the center of the iris as the reference point. The input and output of the module are the same as those of the Normalization From Pupil module. Points on the boundary of the pupil are given the intensity value 255. Using the linspace function of MATLAB, a number of points is selected on the iris outer boundary. Intensity values from each point to the reference point are picked up, and the value 255, which represents the pupil boundary, is searched for within these values. The MATLAB function improfile computes pixel-value cross-sections along line segments in an image: it selects equally spaced points along the specified path and then uses interpolation to find the intensity value at each point. This function is used to obtain normalized pixel values from the pupil boundary to the iris boundary.

4.2.3 Normalization From Minimum Distance Module

In this module, the centers of both the iris and the pupil play a role. The input and output variables of this module are the same as those of the Normalization From Pupil module. noofpoints equally spaced points on the pupil circle are stored in a variable A, and the same number of points is picked from the iris boundary and stored in B. The first point in variable A refers to a point on the pupil boundary at 0 degrees, and the first point in variable B lies on the iris boundary at a 0 degree angle. The minimum distance between the two points is normalized to unity and widthrect equally spaced points are used to obtain data from the iris. Similarly, the second points in variables A and B refer to the next points on the pupil and iris boundaries respectively. The middle points in variables A and B represent the points along the negative x-axis.

4.2.4 Normalization From Mid-point Module

The input and output of this module are the same as discussed for module 4.2.1. Both the centers of the pupil and the iris are used to find the reference point in this case: the mid-point of the two centers is taken as the new reference point and normalization is carried out by finding the points of intersection of the circles with lines at different angles. Each line starts from the reference point and ends at the boundary of the image; it first intersects the pupil boundary and then the iris boundary. The distance between these points of intersection is normalized to unity and data values are picked up to form the normalized iris image.

4.2.5 Normalization With Dynamic Size Module

This is a very different type of normalization: the size of the normalized image differs from image to image. The first row of the normalized image holds the data values of the first circle just outside the pupil circle, the second row holds the data picked from the next circle towards the iris boundary, and so on until the first point on the iris boundary is reached. This makes the normalized iris region shaped like a trapezium; since the normalized image itself is rectangular, the remaining part is shown black. Normalized images produced by all the described normalization processes are shown in Figure 4.12.

Figure 4.12: Normalized images with different methods: (a) original image, (b) pupil as reference point, (c) iris as reference point, (d) minimum distance, (e) mid-point of the iris and pupil centers, (f) dynamic size.


4.3 Feature Extraction Methods


Features from the normalized images are extracted by the following methods.

4.3.1 Principal Component Analysis

Principal component analysis (PCA) is used for extracting features from a normalized iris image. It is a way of identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences. Since patterns in data can be difficult to find in high dimensions, PCA is a powerful tool for analyzing data. A module named PCA has been developed for obtaining features from the image. For the implementation of PCA, the following variables are initialized first: the total number of training samples is stored in a variable named samples, the number of images of each eye is stored in trained, and the number of dimensions up to which the variance in the data is taken into account is handled by the variable dimension. A module named PCAmean has been developed for calculating the mean of the training samples. During training, the mean is first subtracted from each image to remove the common features; then, because eigenvectors and eigenvalues cannot be computed for rectangular matrices, the data matrix is multiplied by its transpose to make it square. The eigenvectors and eigenvalues are obtained, and the eigenvectors corresponding to the highest eigenvalues are stored as features of the image. The minimum distance between the feature vectors of a test image and the trained dataset is used to match the test image with the corresponding class.
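A compact sketch of this eigen-decomposition approach is given below. It uses the common small-covariance-matrix trick (forming the samples × samples matrix); whether the thesis implementation forms this matrix or its transpose is not stated, so this is an assumption:

% X: matrix (double) with one normalized iris image per column (each image reshaped to a vector)
mu = mean(X, 2);                           % mean training image (cf. module PCAmean)
Xc = X - repmat(mu, 1, size(X, 2));        % subtract the mean from every sample
[V, D] = eig(Xc' * Xc);                    % eigenvectors of the small square matrix
[evals, idx] = sort(diag(D), 'descend');   % order by decreasing eigenvalue
U = Xc * V(:, idx(1:dimension));           % retained principal directions
features = U' * (double(x(:)) - mu);       % projection of a test image x onto the basis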

4.3.2 Bit planes

Every grayscale image is, in general, composed of unsigned integer values of type uint8. In the case of a colored RGB image, these values correspond to the levels of the three colors Red, Green and Blue at a particular position. For grayscale images, these integer values range from zero to 255, where zero corresponds to black and 255 represents white. A variable of type uint8 can therefore take a maximum of 256 = 2^8 values, stored in 8 bits, so the image is composed of 8 bit planes. Each bit plane has its own contribution to the image and, based on this concept, bit planes are used as features of the normalized iris image. In order to obtain a bit plane of the normalized iris image, the MATLAB function bitget is used. The syntax of the function bitget is as follows:

C = bitget(A, BIT);

It returns the value of the bit at position BIT in A. A must be an unsigned integer or an array of unsigned integers, and BIT must be a number between 1 and the number of bits in the unsigned integer class of A.
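For example, all eight bit planes of a normalized iris image can be extracted as follows (an illustrative usage rather than the exact code of the module):

% nor: normalized iris image of class uint8
planes = false([size(nor) 8]);
for b = 1:8
    planes(:, :, b) = logical(bitget(nor, b));   % bit plane b (b = 8 is the most significant)
end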

4.3.3 Wavelets

The Wavelet Transform (WT) is based on sub-band coding. It is easy to implement and reduces the computation time and resources required. The foundations of the WT go back to 1976 [93], when techniques to decompose discrete-time signals were devised. Similar work was done in speech signal coding, where it was named sub-band coding. In 1983, a technique similar to sub-band coding was developed which was named pyramidal coding [93]. Later, many improvements were made to these coding schemes, resulting in efficient multi-resolution analysis schemes. In the continuous WT, the signals are analyzed using a set of basis functions which relate to each other by simple scaling and translation. In the case of the discrete WT, a time-scale representation of the digital signal is obtained using digital filtering techniques: the signal to be analyzed is passed through filters with different cutoff frequencies at different scales. Filters are one of the most widely used signal processing functions. Wavelets can be realized by iteration of filters with rescaling. The resolution of the signal, which is a measure of the amount of detail information in the signal, is determined by the filtering operations, and the scale is determined by upsampling and downsampling (subsampling) operations. At each decomposition level, the half-band filters produce signals spanning only half the frequency band. This doubles the frequency resolution, as the uncertainty in frequency is reduced by half. In accordance with the Nyquist rule, if the original signal has a highest frequency of ω, which requires a sampling frequency of 2ω radians, then after half-band filtering it has a highest frequency of ω/2 radians and can be sampled at a frequency of ω radians, thus discarding half the samples with no loss of information. This decimation by 2 halves the time resolution, as the entire signal is then represented by only half the number of samples. Thus, while the half-band low pass filtering removes half of the frequencies and halves the resolution, the decimation by 2 doubles the scale.


The two-dimensional WT decomposes an image into subbands that are localized in frequency and orientation. An image is passed through a series of filter bank stages. The high-pass filter (wavelet function) and low-pass filter (scaling function) are finite impulse response filters; in other words, the output at each point depends only on a finite portion of the input. The filtered outputs are then downsampled by a factor of 2 in the horizontal direction. These signals are then filtered by an identical filter pair in the vertical direction. The decomposition of the image ends up with 4 subbands denoted LL, HL, LH and HH. Each of these subbands can be thought of as a smaller version of the image representing different image properties. The LL band is a coarser approximation to the original image. The LH and HL bands record the changes of the image along the horizontal and vertical directions respectively, and the HH band shows the high frequency components of the image. A second level of decomposition can then be conducted on the LL subband; under a frequency-based representation, only the high-frequency spectrum is affected (the so-called high-frequency phenomenon). This one-step decomposition is shown in Figure 4.13. After decomposition of an image, the LL subband is called the approximation coefficients whereas the remaining three subbands are called details: LH, HL and HH are known as the horizontal, vertical and diagonal details.

Figure 4.13: One step decomposition of an image

A module named wavelet_fn has been implemented to extract the features from the normalized iris images using different wavelets at different levels. The input parameters of this module are the normalized iris image, the wavelet name, the level of decomposition and the features to extract. The features to be extracted can be the approximation coefficients, any of the detail coefficients, or any combination of them at the specified level of the wavelet decomposition. The output of the module is a feature vector which is used in matching. The wavelets discussed previously have been incorporated in this module.
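A minimal sketch using the MATLAB Wavelet Toolbox is given below; the wavelet name 'coif1', the decomposition level and the binarization step are illustrative choices, not necessarily those finally adopted:

% nor: normalized iris image
level = 3;
[C, S] = wavedec2(double(nor), level, 'coif1');   % 2-D wavelet decomposition
cA = appcoef2(C, S, 'coif1', level);              % approximation coefficients at the last level
[cH, cV, cD] = detcoef2('all', C, S, level);      % horizontal, vertical and diagonal details
fv = cA(:)';                                      % feature vector from the approximation band
fv = fv > mean(fv);                               % optional binarization for Hamming-distance matching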

4.4 Matching
After extraction of features from the normalized iris images, a matching metric is required to find the similarity between two irises. This matching metric should have the property that the results of matching irises from the same class are clearly separated and distinct from the results of matching irises from different classes. The metrics used in the proposed system are the normalized Hamming distance and the Euclidean distance.

4.4.1 Euclidean Distance

When features are extracted using PCA, the similarity measure used is the Euclidean distance. The Euclidean distance between two points in p-dimensional space is the geometrically shortest distance, measured along the straight line passing through both points. Euclidean distance is defined in equation 2.27 and its matrix notation is given in equation 2.28.

4.4.2 Normalized Hamming Distance

This is a widely used similarity metric in iris recognition systems. In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols differ. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the number of errors that transformed one string into the other [94]. If the Hamming distance is divided by the total length of the strings, the result is called the normalized Hamming distance. In the feature extraction module, the features are converted to binary format and the normalized Hamming distance is then used to find the match. In the iris recognition community, the normalized Hamming distance is used so frequently that many researchers simply call it the Hamming distance [95]. A threshold is defined for finding a match: a Hamming distance less than the threshold value is taken as a match. The smaller the Hamming distance, the better the match.

Chapter 5: Results & Discussions

Different databases have been used to check the validity and efficiency of the proposed schemes. MATLAB 7.0.4 has been used as a tool for the implementation of methodologies. Results of each method applied to different datasets have been presented. First of all, the results of iris localization methods have been described followed by the results of normalization methods. After that, performance of feature extraction and recognition methods has been elaborated.

5.1 Databases Used for Evaluation

Different iris databases of different universities / institutes have been used for testing the implemented schemes. Two databases of Chinese Academy of Sciences, Institute of Automation (CASIA) Iris Database Version 1.0 and 3.0 [71], one from University of Bath (BATH), UK [96] and one from Multi Media University (MMU), Malaysia [72] have been used for evaluation. CASIA Version 3.0 (Interval) is the largest database which is publicly available via internet. Number of total images, file format, number of classes and dimension of images are given in Table 5.1 corresponding to each database name.
Table 5.1: Some attributes of the datasets

S. No.  Name of Iris Database          File Format  Number of images  Number of classes  Images per class  Dimensions of image in pixels (rows x columns)
a.      CASIA Version 1.0              bmp          756               108                7                 280 x 320
b.      CASIA Version 3.0 (Interval)   jpg          2655              396                1-26              280 x 320
c.      BATH (free version)            bmp          1000              50                 20                960 x 1280
d.      MMU Version 1.0                bmp          450               90                 5                 240 x 320 x 3


As the acquisition device as well as the environment is not the same for all databases, different types of pupil images are present in the databases. Some of the images are shown in Figure 5.1. Image (a) in Figure 5.1 belongs to CASIA version 1.0, in which the pupil has been automatically turned black so that any light reflection is removed. Figure 5.1 (b) shows an image taken from CASIA version 3.0, in which there are eight small white circles arranged in a round pattern inside the pupil. Figure 5.1 (c) is an image from the BATH iris database, in which the iris is not occluded by the eyelids in most images but there is a bright spot in the pupil of every image. The MMU version 1.0 iris image database contains images such as the one shown in Figure 5.1 (d); it also contains a bright spot in the pupil.

Figure 5.1: Images in different datasets: (a) CASIA version 1.0, (b) CASIA version 3.0 (Interval), (c) BATH, (d) MMU

As exact localization results are not available from the database providers, the results presented here have been obtained by visually inspecting the images.

5.2 CASIA Version 1.0

For a long time there was no public iris database, whereas many face and fingerprint databases were available. The lack of iris data for testing has been a main hurdle for carrying out research on iris biometrics. To promote this research, the National Laboratory of Pattern Recognition (NLPR), Institute of Automation (IA), Chinese Academy of Sciences (CAS) has provided a free iris database for the evaluation of iris recognition systems [71]. Most research work has been conducted on this database because it was the first database available via the internet, and it has been widely distributed to a large number of researchers and teams from many countries and regions of the world. The pupil regions of all iris images in CASIA version 1.0 were automatically detected and replaced with a circular region of constant intensity to mask out the specular reflections from the Near Infra Red (NIR) illuminators. The CASIA version 1.0 iris image database contains 756 images from 108 different eyes. For each eye, 7 images have been captured in two sessions: three samples were collected in the first session and four in the second session. Each iris image is in grayscale with a resolution of 280×320 pixels.

5.2.1 Pupil Localization

Accurate pupil localization is the main phase of iris localization. Pupil detection and finding the pupil parameters play a pivotal role here: once the pupil is localized correctly, the probability of correct iris localization increases. To find the pupil, first of all a point inside the pupil is searched for.

a. Point inside the pupil


The image acquisition setups are different for the different datasets, so different methods have been implemented to obtain the pupil center and radius. To locate a point inside the pupil, a robust method has been used in this research work which performs well for all datasets. A point inside the pupil is correctly detected for all images of CASIA version 1.0, as mentioned in Table 5.2. This perfect detection is due to the uniform intensity values inside the pupil. For finding a point inside the pupil, the size of the decimation filter and the border width are obtained adaptively, dependent on the dimensions of the image. The size of the decimation filter is taken as w×w, where w is equal to 10% of the total number of rows of the image; in the case of CASIA version 1.0, its value is 28 pixels. Similarly, the border width bw is 15% of the total rows, and its value is 42 pixels. The border width is used to exclude bw pixels from the search for a point inside the pupil. Thus, both w and bw depend on the size of the image.

b. Pupil Parameters
The pupil is first assumed to be a circle, and the circular boundary is later refined to a non-circular one. Therefore, the term pupil parameters refers to the center coordinates and the radius of the pupil. The pupil region was replaced by the dataset provider with a circular region of constant intensity to mask out the specular reflections from the NIR illuminators [97]. The results of finding the pupil parameters using the methods discussed in Section 4.1.1 for this database are given in Table 5.2. The pupil parameters are found with 100% accuracy; these parameters affect the accuracy of iris localization.

5.2.2 Non-circular Pupil Localization

When bright light is shone on the eye, the pupil automatically constricts; this is the pupillary reflex [65]. Furthermore, the pupil dilates if a person sees an object of interest. The oculomotor nerve, specifically the parasympathetic part coming from the Edinger-Westphal nucleus, terminates on the circular iris sphincter muscle, and when this muscle contracts it reduces the size of the pupil. The size of the iris sphincter muscle is not necessarily uniform, which is why the pupil has a non-circular shape. Moreover, non-orthogonal images (i.e. images acquired at an angle other than normal to the eye ball), or off-angle images, have non-circular pupils. The non-circular pupil boundary is calculated using the pupil parameters: a specific number of points on the circular pupil boundary is selected, where this number is calculated by equation 3.14. The results of correct non-circular pupil boundary detection are given in Table 5.2; an accuracy of 98.28% is achieved. These results depend on accurate pupil parameters and on the number of points on the circular pupil boundary. If the number of points on the pupil is smaller than the number given by equation 3.14, the accuracy of non-circular pupil localization decreases, because the distance between the selected points becomes large and, when the points are joined, the result no longer looks like a circle. Similarly, if the number of points is larger than this specific number, the accuracy does not increase, because the mutual distance between the points becomes very small and some points even coincide. The 1.72% incorrect pupil boundaries are due to long eyelashes and very rich iris texture near the pupil boundary, which are confused with the pupil boundary.

5.2.3 Iris Localization

The boundary between the iris and the sclera is called the iris boundary, and it is the most important parameter for iris localization. The method proposed in Section 4.1.3 has been applied to the database and its results are shown in Table 5.2. High accuracy in iris localization is very important because this part of the image contains the actual iris data used for recognition. The proposed method yields a correct iris localization rate of 99.6% for CASIA version 1.0. Since the pupil parameters are found with very high accuracy (100%), they play a key role in achieving such high accuracy in iris localization.

5.2.4 Eyelids Localization

With the iris outer and inner boundaries obtained, the eyelids are detected as described in Section 4.1.4. The CASIA version 1.0 iris database contains some images in which the eyelashes are very dense and long. The proposed method nevertheless gives a good response in finding the eyelids, as indicated by the results in Table 5.2. The upper eyelids are correctly localized with an accuracy of 98.91%, and the results achieved for the lower eyelids are accurate up to 97.8%.
Table 5.2: Results of Iris localization in CASIA version 1.0

S. No.  Name of Stage                      Total number of images  Accuracy
a.      Point Inside Pupil                 756                     100%
b.      Pupil Parameters                   756                     100%
c.      Non-Circular Pupil Localization    756                     98.28%
d.      Iris Localization                  756                     99.6%
e.      Upper Eyelids                      756                     98.91%
f.      Lower Eyelids                      756                     97.8%

Some of the correctly localized images are shown in Figure 5.2. The eyelids in these images are masked so that the normalized image does not contain the noisy portion. The iris and pupil centers are clearly marked in the images.

Figure 5.2: Some correctly localized images in CASIA version 1.0

5.3 CASIA Version 3.0

CASIA Version 3.0 includes three subsets, labeled CASIA Version 3.0 Interval, CASIA Version 3.0 Lamp and CASIA Version 3.0 Twins. The CASIA Version 3.0 Interval iris database contains a total of 2655 iris images from 249 subjects. All iris images are 8-bit gray-level JPEG files collected under NIR illumination. Almost all subjects are Chinese, except a few. CASIA Version 3.0 Interval is used for evaluation of the proposed methods and is from here onwards referred to as CASIA version 3.0; it is a superset of CASIA Version 1.0. Images of the left and right eyes are stored in separate folders, with a total of 498 (249 × 2) folders, of which 102 are empty. Therefore, the total number of classes is 396 (498 − 102), as given in Table 5.1.

5.3.1 Pupil Localization

Inner boundary of iris is termed as pupil boundary. In this database, pupil has eight white small circles like a revolver chamber. These circles need another technique for pupil localization. Finding exact location of pupil is a main step in iris localization. Pupil detection and its parameters have been obtained for determining pupil localization. Good localization of iris depends on exact localization of pupil because its center is used for further processing. For pupil detection, a point inside the pupil is searched using the proposed algorithm.

a. Point inside the pupil


The light reflections present inside the pupil differ across datasets, which is why different methods have been implemented to obtain the pupil center and radius. To locate a point inside the pupil, the proposed method has been used, in which the number of rows in the image determines the size of the decimation filter and the border width, taken as 10% and 15% of the total rows respectively. In the case of the CASIA version 3.0 (Interval) dataset, these values are 28 and 42 pixels, and a border width of 42 pixels is excluded when finding a point inside the pupil. Since the illumination and contrast of the images in this dataset vary widely, a point inside the pupil is found correctly in 99.92% of the images, whereas for the CASIA version 1.0 database this result was 100%.

b. Pupil Parameters
The coordinates of the pupil center and the length of the radius are determined while assuming the pupil to be a circle. Figure 5.1 (b) shows an image of this dataset; it has eight white circles inside the pupil. For eyes with a small pupil, these white circles lie on the boundary of the pupil, which makes it very difficult to find the pupil boundary. The results of finding the pupil parameters using the methods discussed in Section 4.1.1 are given in Table 5.3. The proposed method correctly calculated the pupil parameters of 2648 images out of 2653; in only two images was the point inside the pupil incorrectly detected. When calculating the accuracy for the database, all images were used rather than only these 2653 images, giving an accuracy of 99.69%, whereas 100% correct results were obtained for CASIA version 1.0. If the two images with an incorrect point inside the pupil are subtracted from the total number of images, the result for the pupil parameters increases from 99.69% to 99.81%.

5.3.2 Non-circular Pupil Localization

The pupil boundary is not an exact circle. Therefore, a specific number of equidistant points obtained from equation 3.14 is shifted to the true boundary of the pupil, and the points are then joined linearly to get the exact (non-circular) boundary of the pupil. In non-circular pupil localization, the pupil parameters play a vital role: if the circular boundary is incorrect (more than 13 pixels away from the exact pupil boundary), the non-circular boundary will not be determined correctly. The results of correct non-circular pupil boundary detection are given in Table 5.3. The accuracy achieved for the non-circular pupil boundary is 99.35% in the case of CASIA version 3.0, whereas it is 98.28% for CASIA version 1.0 [98].

5.3.3 Iris Localization

The iris boundary is relatively difficult to find because the contrast between the pupil and the iris is higher than the contrast between the iris and the sclera. This boundary defines the region inside the iris. The method proposed in Section 4.1.3 has been applied to the database and the results obtained are shown in Table 5.3. Since the pupil contains different light spots and reflections in different databases, different methods have been implemented for the pupil boundary, whereas for iris localization a generic method has been proposed which gives equally good results for all databases. Correct iris localization of up to 99.21% has been achieved for CASIA version 3.0.

5.3.4 Eyelids Localization

With the iris outer and inner boundaries determined, the results of the eyelid localization module described in Section 4.1.4 are presented here. In this module, the eyelids are modeled as parabolas. Obtaining the eyelid boundary, particularly the upper eyelid, is very difficult because of the presence of eyelashes; very dense eyelashes make the detection of the eyelid even more challenging. The proposed method performs well, and accuracies of up to 90.02% and 91.9% have been achieved for the upper and lower eyelids respectively, whereas the corresponding results for CASIA version 1.0 are 98.91% and 97.8%. CASIA version 3.0 has a higher percentage of blurred images than version 1.0; therefore, the overall results are better for CASIA version 1.0, particularly in the detection of the upper and lower eyelids [48].
Table 5.3: Results of Iris localization in CASIA version 3.0

S. No.  Name of Phase                      Total number of images  Accuracy
a.      Point Inside Pupil                 2655                    99.92%
b.      Pupil Parameters                   2655                    99.69%
c.      Non-Circular Pupil Localization    2655                    99.35%
d.      Iris Localization                  2655                    99.21%
e.      Upper Eyelids                      2655                    90.02%
f.      Lower Eyelids                      2655                    91.90%

Figure 5.3 contains some of the images in which iris is localized correctly. Parts of the image above upper eyelid and below lower eyelid have been masked because these parts contain noise and are not used for further processing.

Figure 5.3: Some correctly localized images in CASIA version 3.0


5.4 University of Bath Iris Database (free version)

The BATH iris dataset (free version) has 1000 high-resolution iris images from 50 different eyes, organized in 25 folders indexed numerically as 0001, 0002, etc. Within each folder there are two subfolders, L (left) and R (right), each containing 20 images of the respective eye. The free images are JPEG2000, compressed to 0.5 bits per pixel, in grayscale and with a resolution of 1280×960.

5.4.1 Pupil Localization

The pupil is very large in this dataset because of the high resolution of the images, and a white reflection of the light source is present inside it. The dataset also contains images of eyes wearing lenses. Pupil localization means finding the location of the pupil and its parameters, and its first step is detecting the pupil's position; for this purpose, a point inside the pupil is searched for using the proposed algorithm. Exact localization of the pupil plays a major role in iris localization because the pupil center is exploited in all further processing.

a. Point inside the pupil


Once a point inside the pupil is confirmed, it is easy to locate the pupil boundary because of the high contrast between the pupil and the surrounding iris. Light reflections inside the pupil differ between datasets; as a result, different methods have been implemented to obtain the pupil parameters. To locate a point inside the pupil, the proposed method uses the number of rows in the image to set the size of the decimation filter and the border width, taken as ten and fifteen percent of the total rows respectively. For this database, the decimation filter is 96 by 96 pixels and the border width is 144 pixels. 100% accurate results are achieved in finding a point inside the pupil for this database because, in all of these images, the pupil pixels have almost the same intensity values.
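A minimal sketch of this step is given below: the image is scanned in blocks whose size is about 10% of the number of rows, a border of about 15% of the rows is ignored, and the center of the darkest block is returned as the point inside the pupil. The function and parameter names are illustrative assumptions, not the exact thesis code.

```python
import numpy as np

def point_inside_pupil(gray, filt_frac=0.10, border_frac=0.15):
    """Locate a point inside the pupil by block-averaging (decimation) and
    picking the darkest block away from the image border."""
    rows, cols = gray.shape
    f = max(int(rows * filt_frac), 1)        # decimation filter size (e.g. 96 for 960 rows)
    b = int(rows * border_frac)              # excluded border width (e.g. 144 pixels)
    best, best_mean = None, np.inf
    for y in range(b, rows - b - f, f):
        for x in range(b, cols - b - f, f):
            m = gray[y:y + f, x:x + f].mean()
            if m < best_mean:                # pupil pixels form the darkest region
                best_mean, best = m, (x + f // 2, y + f // 2)
    return best
```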

b. Pupil Parameters
Different methods have been proposed for different databases for pupil parameter detection. For this database, the pupil parameters are acquired by finding a square inscribing the pupil: instead of fitting the pupil circle directly, a square sub-image tightly containing the pupil is extracted and, considering the pupil as a complete circle, the coordinates of the pupil center and the length of its radius are calculated. Figure 5.1 (c) shows an image of this dataset; it has a white spot in the pupil. The results of finding pupil parameters for this database are shown in Table 5.4. Due to the high resolution of the images and the use of this approach, the results attained for the BATH iris database are 100% correct.
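One way to realize the square-inscribing idea is sketched below: dark pixels near the previously found seed point are collected, their tight bounding box approximates the square that inscribes the pupil, and the circle parameters follow from the box. The threshold value and the proximity test are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def pupil_parameters(gray, seed, dark_thresh=70):
    """Estimate pupil center and radius from the tight bounding box of dark
    pixels around a seed point known to lie inside the pupil (a sketch)."""
    mask = gray < dark_thresh                  # dark pixels (pupil, possibly eyelashes)
    ys, xs = np.nonzero(mask)
    sx, sy = seed
    # Keep only dark pixels reasonably close to the seed to reject eyelashes.
    near = (np.abs(xs - sx) < gray.shape[1] // 4) & (np.abs(ys - sy) < gray.shape[0] // 4)
    xs, ys = xs[near], ys[near]
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2    # center of the tight square
    r = ((x1 - x0) + (y1 - y0)) // 4           # average half-side as circular radius
    return cx, cy, r
```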

5.4.2 Non-circular Pupil Localization

A closer view of the iris image shows that the pupil boundary is jagged. Therefore, a number of points from equation 3.14 are shifted to the genuine boundary of the pupil and then joined linearly to obtain its exact boundary. The correct localization rate of the non-circular boundary is given in Table 5.4. 98.8% accurate results have been achieved for the BATH iris database, whereas 98.28% and 99.35% are the corresponding results for CASIA version 1.0 and 3.0 respectively. Accuracy in non-circular pupil localization for the BATH database is therefore 0.52% better than CASIA version 1.0 and 0.55% worse than CASIA version 3.0. The advantage over CASIA version 1.0 arises because, in the BATH database, fewer images have very high-frequency iris patterns near the pupil boundary and long eyelashes are also absent near the pupil boundary.

5.4.3 Iris Localization

Images of this database have better contrast between the iris and the sclera than the other databases because of their high resolution. The iris boundary is localized by applying the proposed method: after finding the pupil parameters, candidate points for the iris boundary are selected using the procedure discussed in Section 4.1.3, and the results are given in Table 5.4. Iris localization accuracy of up to 99.4% has been obtained for the BATH iris database, while the results for CASIA versions 1.0 and 3.0 are 99.6% and 99.21% respectively.

5.4.4 Eyelids Localization

Iris and pupil boundaries have been processed, and the results of eyelid localization for the BATH iris database are given in Table 5.4. The accuracies for correct upper and lower eyelid detection are 84.5% and 96.6% respectively. The upper eyelid result is the worst among the databases because the images in the BATH database have very prominent upper eyelashes, and the image size also affects the accuracy. For CASIA versions 1.0 and 3.0, the correct upper eyelid detection percentages are 98.91% and 90.02% respectively, whereas the lower eyelids are detected well in these databases with accuracies of 97.8% and 91.9% respectively. The lower eyelid localization result for the BATH iris database is 4.7% higher than CASIA version 3.0 and 1.2% lower than CASIA version 1.0.
Table 5.4: Results of Iris localization in BATH (free version)

  S. No.   Name of Phase                      Total number of images   Accuracy
  a.       Point Inside Pupil                 1000                     100%
  b.       Pupil Parameters                   1000                     100%
  c.       Non-Circular Pupil Localization    1000                     98.80%
  d.       Iris Localization                  1000                     99.40%
  e.       Upper Eyelids                      1000                     84.50%
  f.       Lower Eyelids                      1000                     96.60%

Figure 5.4: Some correctly localized images in BATH Database free version


5.5 MMU Version 1.0

The MMU Version 1.0 iris database contains a total of 450 iris images captured with an LG IrisAccess2200 camera. This camera is semi-automated and operates at a range of 7-25 cm. The iris images were contributed by 100 volunteers of different ages and nationalities, coming from Asia, the Middle East, Africa and Europe, with 5 iris images per eye. Five left-eye iris images have been excluded from the database due to cataract disease.

5.5.1 Pupil Localization

Pupil localization is the main phase of iris localization; detecting the pupil and finding its parameters is the initial process. Exact localization of the iris depends mainly on accurate localization of the pupil, because the pupil center is used for finding the iris boundary. For pupil detection, a point inside the pupil is searched for using the algorithm given in Section 4.1.1. The images of this database are in color, so they are converted to grayscale as the first processing step.

a. Point inside the pupil


For finding the pupil parameters, a point inside the pupil is detected first. As the image acquisition devices differ between datasets, the appearance of the pupil in the image also differs; for instance, eight small white circles are present in the pupil in the CASIA version 3.0 dataset. As a result, different methods have been proposed to find the pupil parameters. To locate a point inside the pupil, the number of rows in the image is again used: the size of the decimation filter and the border width are taken as 10% and 15% of the total rows, which for this database are 24 and 36 pixels respectively. The border is excluded when finding the point inside the pupil because the pupil lies almost at the center of the image. The results are presented in Table 5.5; a point inside the pupil is detected with 100% accuracy. This very high accuracy is attained because the intensity level of the pupil is almost the same in every image of this database, even though a white spot is present inside the pupil. The results of finding a point inside the pupil are also 100% for the CASIA version 1.0 and BATH iris databases, and 99.92% for CASIA version 3.0.


b. Pupil Parameters
Pupil parameters include the coordinates of the pupil center and the length of its radius. An accurate pupil center is critical because it is used in finding the iris boundary. Figure 5.1 (d) shows an image of this dataset in which a spot due to the reflection of the light source is present in the pupil. The complete procedure used to find the pupil parameters for this database is shown in Figure 4.5. The accuracy achieved for the calculated pupil parameters is up to 98.44%, as shown in Table 5.5. These results are 1.42%, 1.25% and 1.56% lower than the pupil localization results for CASIA version 1.0, CASIA version 3.0 and the BATH iris database respectively. The inaccuracies in the MMU database are due to the large number of images in which the pupil is occluded by eyelids and long, dense eyelashes.

5.5.2 Non-circular Pupil Localization

The size of the pupil changes constantly even under constant illumination, and its boundary is not an exact circle. To localize it precisely, a specific number of points, based on the length of the pupil radius and obtained using equation 3.14, are shifted to the original boundary of the pupil. This shift yields the exact (non-circular) boundary of the pupil. The results of correct non-circular pupil boundary detection are given in Table 5.5. The non-circular boundary of the pupil has a correct localization rate of 96.6% for the MMU iris database, whereas 98.28%, 99.35% and 98.8% accurate results are achieved for CASIA version 1.0, CASIA version 3.0 and the BATH iris database respectively. Each image in the MMU database is the smallest of all the studied databases. The reasons for the lower non-circular boundary results are the large percentage of images with long eyelashes near the pupil boundary and the occlusion of the pupil by the upper eyelid and eyelashes.

5.5.3 Iris Localization

Another method has been proposed to localize the iris for this database [99], presented step by step in Figure 4.10. Correct localization of the iris is a challenging task because of the low contrast between the iris and the sclera. Using this method, the accuracy attained for iris localization is 96.86%. When the proposed method of iris boundary detection with minor changes (algorithm 3) is applied to this database, correct results of up to 99.77% are achieved. The results of iris localization are shown in Table 5.5. Although the image size is relatively small in this database, the proposed algorithm performs very well because it captures the gradient at the boundary of the iris. Correct iris localization of 99.6%, 99.21% and 99.4% has been obtained for CASIA version 1.0, CASIA version 3.0 and the BATH iris database respectively.

5.5.4 Eyelids Localization

The circular iris boundary and the non-circular pupil boundary have been obtained, and the results of the eyelid localization module described in Section 4.1.4 are presented in Table 5.5. The upper eyelid normally has eyelashes curving downwards, which cover part of the iris as well as the pupil; the eyelashes of the lower eyelid generally do not cover the iris. The eyelids are modeled as parabolas while detecting their boundaries. The results of correct upper and lower eyelid detection are 84.66% and 96.22% respectively for the MMU iris database. The lower eyelid localization results for the MMU database are 4.32% better than CASIA version 3.0 but 0.38% and 1.58% lower than the BATH and CASIA version 1.0 iris databases respectively. Similarly, the results of correct upper eyelid localization are slightly (0.16%) better than the BATH iris database and lower than CASIA versions 1.0 and 3.0. The upper eyelid results are low (i.e. 84.66%) because of the large number of images with long upper eyelashes which occlude the eyelid.
Table 5.5: Results of Iris localization in MMU version 1.0

  S. No.   Name of Phase                      Total number of images   Accuracy
  a.       Point Inside Pupil                 450                      100%
  b.       Pupil Parameters                   450                      98.44%
  c.       Non-Circular Pupil Localization    450                      96.60%
  d.       Iris Localization                  450                      99.77%
  e.       Upper Eyelids                      450                      84.66%
  f.       Lower Eyelids                      450                      96.22%


Some of the correctly localized iris images are shown in Figure 5.5. A comparison of all steps of iris localization is graphically represented in Figure 5.6; the accuracy of each step is given on the y-axis and the steps of iris localization are on the x-axis. The most important step in iris localization is iris boundary detection, which has an accuracy of more than 99.2% for all the databases, covering a total of 4861 images.

Figure 5.5: Some correctly localized images in MMU Database version 1.0

Figure 5.6: Comparison of steps in iris localization in different databases

5.6 Errors in Localization

During the experiments, the irises in many images could not be localized exactly, and errors occur in different phases of iris localization. In some images, the pupil receives an incorrect boundary because of a white spot inside it; sometimes the iris in the image has an incomplete boundary. These errors propagate to the subsequent phases of iris recognition. They are described in the following sections.

5.6.1 Errors in Circular Pupil Localization

In the first phase of iris localization, the pupil is localized by assuming it to be a complete circle. Two types of mistakes were found during this process: an inaccurate pupil center and an inaccurate pupil radius. Figure 5.7 depicts the inaccuracies in pupil localization. These errors are due to the non-circular shape of the pupil, locating a wrong point while searching for a point inside the pupil, eyelashes on the pupil boundary and an eyelid covering the pupil. In most cases, the pupil boundary is not an exact circle; if a circle is drawn on the boundary of the pupil, there is a high chance that it will either cover some part of the pupil or some part of the iris. Figure 5.7 (a) is from CASIA version 1.0 with incorrect circular localization of the pupil; in this case, some part of the iris is covered by the estimated pupil boundary. Figures 5.7 (b), (c) and (d) are from CASIA version 3.0, and Figures 5.7 (e) and (f) are from the MMU iris database. If the point inside the pupil is not found correctly, the pupil boundary will not be localized correctly, as shown in Figure 5.7 (b); in any image, this point is the key location for finding the circular pupil boundary. Bright white circles on the boundary of the pupil also produce inaccuracies in pupil localization. Long eyelashes near the pupil boundary (Figure 5.7 (c)), a pupil occluded by long eyelashes (Figure 5.7 (e)) and a half-open eye or a pupil covered by the eyelid (Figure 5.7 (f)) are other sources of error in this process. Translation of the center and adjustment of the radius can remove the majority of these errors.

Figure 5.7: Inaccuracies in circular pupil localization: (a) non-circular pupil, (b) wrong point inside the pupil, (c) long eyelashes near pupil boundary, (d) wrong length of pupil radius, (e) pupil occluded by eyelashes, (f) pupil occluded by upper eyelid

5.6.2 Errors in Non-circular Pupil Localization

After finding the parameters of circular pupil localization, a number of points on the circular boundary of the pupil, obtained using equation 3.14, are adjusted towards the exact boundary of the pupil; the adjusted points are then joined linearly to obtain the exact boundary. Errors in this phase are caused by long eyelashes near the pupil boundary, white spots in the pupil, very sharp iris features close to the pupil boundary and the position of an eyelid in the vicinity of the pupil. Some incorrect non-circular pupil localizations are shown in Figure 5.8. The image in Figure 5.8 (a) is from CASIA version 1.0, with an inaccuracy caused by long eyelashes near the pupil boundary; the same inaccuracy is also shown in Figure 5.8 (b), which is from CASIA version 3.0. The white circles projected by the capturing device on and near the pupil boundary mislead the non-circular pupil module, as shown in Figure 5.8 (c) from the CASIA version 3.0 iris database. Very sharp iris patterns near the pupil boundary turned out to be the main cause of incorrect non-circular pupil boundaries in the BATH iris database; one such image is shown in Figure 5.8 (d). A white spot in the vicinity of the pupil boundary and upper eyelids occluding the iris near the pupil boundary are the root causes of inaccuracies in the MMU iris dataset, as indicated in Figure 5.8 (e) and Figure 5.8 (f).

Figure 5.8: Inaccuracies in non-circular pupil localization: (a) long eyelashes near pupil boundary, (b) long eyelashes near pupil boundary, (c) white circle near pupil boundary, (d) very sharp pattern of iris near pupil boundary, (e) white spot near pupil boundary, (f) eyelid near pupil boundary

5.6.3 Errors in Iris Localization

Obtaining the iris boundary is a difficult task in images where the contrast between the iris and the sclera is very low. Human vision is remarkable: one can define a virtual circular boundary of the iris even when it blends into the sclera, but such detection using an algorithm is a challenging job. The proposed method meets this challenge, yet there is a small number of images on which it fails. The main sources of error in locating the iris boundary are long eyelashes parallel to the iris boundary, an incomplete iris in the image, very sharp iris patterns, extremely low contrast between the iris and the sclera, and another boundary outside the iris boundary caused by the reflection of light or the curvature of the eyeball. Some of the incorrectly localized images are shown in Figure 5.9. The images in the first and second rows of Figure 5.9 belong to the CASIA version 1.0 and 3.0 iris databases respectively, Figures 5.9 (g) and (h) are from the BATH free version dataset, and the last image is from the MMU version 1.0 database. The inaccurate iris boundary in Figures 5.9 (a), (b) and (i) is due to the presence of long eyelashes near the iris boundary. The iris boundary is not even visible on the right side in Figures 5.9 (a) and (c) and on the left side in Figure 5.9 (f), which is why it is not localized perfectly. In Figure 5.9 (d), the lens boundary is obtained instead of the iris boundary on the right side. The errors in Figures 5.9 (e) and (h) are due to sharp iris patterns which guide the algorithm towards a wrong iris boundary. Figure 5.9 (g) has an incorrect iris boundary because it contains a white shade concentric with the iris center. These inaccuracies could be removed by changing the parametric values in the modules.

Figure 5.9: Inaccuracies in iris localization: (a) long eyelashes, (b) long eyelashes, (c) iris boundary not visible (right side), (d) lens boundary, (e) sharp iris pattern, (f) iris boundary not visible (left side), (g) white shade inside the iris, (h) sharp iris pattern, (i) long eyelashes


5.6.4 Errors in Eyelids Localization

For finding the eyelids, the image portion between the vertical boundaries of the iris is processed. As the eyelid shape is parabolic, two parabolas are calculated, one for the upper and one for the lower eyelid. Points are selected as already discussed and parabolas are fitted through them. The length and density of the eyelashes affect the proposed method. There is a wide variety of eyelids in the images; for example, in some images the upper eyelid is covered with eyelashes to such an extent that the boundary of the eyelid over the iris is occluded, and some images show the same situation for the lower eyelid. It has been observed that the probability of the iris being occluded by the upper eyelid is higher than by the lower eyelid. The main causes of inaccurate eyelid localization are the selection of incorrect points due to multiple eyelashes, very dense eyelashes, eyelashes parallel to the eyelids and a bright layer on the eyelid. Some inaccurately localized eyelids are shown in Figure 5.10 along with the reason for each inaccuracy. Each of these images can be converted to a correctly localized image by varying the parameters in the eyelid detection module.

Figure 5.10: Inaccuracies in eyelid localization: (a) multiple eyelashes, (b) very dense eyelashes, (c) multiple eyelashes, (d) multiple eyelashes, (e) very dense eyelashes, (f) eyelashes parallel to lower eyelid


5.7 Comparison with Other Methods

The best iris localization results using the proposed method reach 99.6% on the CASIA version 1.0 iris database, which is the most widely used iris database in research. The results of the proposed iris localization scheme are compared with those of other researchers in terms of accuracy and computational complexity.

5.7.1 Accuracy

When the proposed method is compared with existing methods, it performs better in both accuracy and execution time; in terms of correct localization it has shown the best results. The Hough transform has been used by most researchers for iris localization: edge detection followed by a Hough transform is a standard machine vision technique for fitting simple contour models to images [100]. For the CASIA version 1.0 iris database, the results are summarized in Table 5.6. After applying a Canny edge detector to the image, the Hough transform localizes the iris boundary correctly with an accuracy of 83.45%, and correct pupil localization reaches 97.48%, as given in Table 5.7. The average time consumed per image is 129.3 seconds using the Hough transform. Masek's implementation of Daugman's method gives an iris localization accuracy of 82.54% and a pupil localization accuracy of 99.07%. The results of pupil localization for CASIA version 1.0 are given in Table 5.7.
Table 5.6: Results of iris localization for CASIA version 1.0

                                            Time (seconds)
  Method                  Accuracy          Mean     Min      Max
  Daugman [81]            98.6%             6.56     6.23     6.99
  Wildes [55]             99.9%             8.28     6.34     12.54
  Masek [56]              82.54%            17.5     6.3      43.3
  Cui et. al. [59]        99.34%            0.24     0.18     0.33
  Hough Transform         83.45%            129.3    77.1     192.3
  Shen et. al. [57]       Not mentioned     3.8      -        -
  Zaim [101]              92.7%             5.83     -        -
  Zhu et. al. [102]       88%               0.5      -        -
  Narote et. al. [103]    97.22%            0.96     -        -
  Proposed                99.6%             0.33     0.24     0.41

It is obvious from the results given in Table 5.6 that the proposed system has higher accuracy than Daugman's, Masek's, Cui's, the Hough transform, Zaim's, Zhu's and Narote's iris localization methods. The average time used by the proposed system is very low compared to all other systems except Cui's. The maximum time spent to localize an iris is 0.41 seconds, which is almost 17 times less than Daugman's, 30 times less than Wildes' and 105 times less than Masek's, whereas it takes only 0.08 seconds more than Cui's method. Comparing minimum time usage, it is approximately 26 times faster than Daugman, Wildes and Masek, and 321 times faster than the Hough transform method. It has also been observed that the accuracy of the proposed system is slightly lower (by 0.3%) than that of Wildes' method, but Wildes' method is very time consuming: its average time is 8.28 seconds per image, whereas the proposed system uses an average of only 0.33 seconds, 25 times faster than Wildes'. It is more than 19, 53, 391, 11, 17, 1.5 and 2.9 times quicker than the Daugman, Masek, Hough transform, Shen, Zaim, Zhu and Narote methods respectively, whereas Cui's method takes 0.09 seconds less but its accuracy is also lower than that of the proposed method.

The accuracy of pupil localization for the CASIA version 1.0 iris dataset is compared with other methods in Table 5.7. All methods perform circular localization of the pupil, while the proposed method has also been extended to non-circular boundary detection of the pupil. Correct results with 100% accuracy in circular pupil localization have been obtained using the proposed method. Narote et. al. [103] and Mehrabian et. al. [104] have also reported 100% results for finding the pupil parameters. The Hough transform and Masek's implementation of Daugman's method produce results with accuracies of 97.48% and 99.07% respectively. The result of non-circular pupil localization is 98.28% for this database.
Table 5.7: Results of Pupil localization for CASIA version 1.0

  Methods                    Accuracy
  Mehrabian et. al. [104]    100%
  Hough Transform            97.48%
  Narote et. al. [103]       100%
  Masek [56]                 99.07%
  Proposed                   100% (circular), 98.28% (non-circular)

In view of the above results, the proposed method of iris localization performs very well in terms of accuracy and efficiency on CASIA version 1.0, the most widely used iris image database. For CASIA version 3.0, the results of iris localization are shown in Table 5.8. This database contains a larger proportion of blurred and defocused images than CASIA version 1.0. Wildes' method yields a correct iris localization rate of 89.09%, and Masek's method an accuracy of 82.56%. From the tabulated values, it is clear that the iris localization results of the proposed method are the best for this database.
Table 5.8: Results of iris localization for CASIA version 3.0

  Methods        Accuracy
  Masek [56]     82.56%
  Wildes [55]    89.09%
  Proposed       99.21%


The proposed algorithm has also been applied successfully to the BATH iris database; the iris localization results are compared with those of other researchers in Table 5.9. Kennell et. al. [105] applied binary morphology and local statistics to obtain pupil and iris boundary localization with accuracies of 96% and 92.5% respectively on the same database. Grabowski et. al. [106] achieved iris localization on the BATH database with 96% correct results by finding zero-crossing points in the first derivative of the image histogram. Guang-Zhu et. al. [107] used the properties of local areas in the image and reported a segmentation accuracy of 98% for the same database. The proposed method performs well compared to the methods of Kennell, Grabowski and Guang-Zhu: it shows 6.9%, 3.4% and 1.4% better iris boundary localization results respectively, and for pupil boundary localization it shows 4% higher accuracy than Kennell's method, while the others did not report pupil boundary accuracy.
Table 5.9: Results of iris localization for BATH iris database

  Methods                     Accuracy
  Kennell et. al. [105]       96% (Pupil boundary), 92.5% (Iris boundary)
  Grabowski et. al. [106]     96.0% (Iris boundary)
  Guang-Zhu et. al. [107]     98.0% (Iris boundary)
  Proposed                    100% (Pupil boundary), 99.4% (Iris boundary)

For the MMU version 1.0 iris database, results are compared with the methods listed in Table 5.10. Teo et. al. [108] reported 98% iris localization accuracy on this database; the same accuracy has been achieved using the proposed histogram processing method [99]. Wildes' and Masek's methods give correct iris localization of 92.66% and 96.7% respectively, whereas the best iris localization results of 99.77% have been achieved using the proposed gradient-based method.



Table 5.10: Results of iris localization for MMU Iris Dataset

  Methods                                   Accuracy
  Teo et. al. [108]                         98.0%
  Wildes [55]                               92.66%
  Masek [56]                                96.7%
  Proposed (histogram processing) [99]      98.0%
  Proposed (gradient processing) [48]       99.77%

5.7.2 Computational Complexity

If the methods are compared with respect to their computational complexity, it is evident from the tabulated results that the proposed method has lower complexity. The Generalized Hough Transform (GHT) is useful for detecting or locating translated two-dimensional objects; however, a weakness of the GHT is its storage requirement and hence its increased computational complexity [109]. In the Hough transform, every point in the edge image is considered as a possible center and, for each candidate radius, a virtual circle is drawn; points lying on the circle of a specific radius vote into the corresponding layer of the Hough space. The point with the maximum number of votes becomes the center of the circle, and the corresponding layer gives its radius. The Hough space is therefore four-dimensional (x, y, z, v, where x and y are the coordinates of a point in the image, z indexes the radii searched and v is the vote value at position (x, y, z)), which makes the method inefficient. Let r and c be the rows and columns of the image, n the number of points in the edge image and rad the number of radii used in the Hough space; the computational complexity of the Hough transform is then O(n·rad). As the number of edge points and the number of radii searched increase, the time and the number of operations performed increase accordingly, and the same computation is required again when obtaining the iris outer boundary. As far as memory consumption is concerned, it is O(r·c·rad), because the image dimensions multiplied by the number of radii must be kept in the work space along with other parameters for most of the iris localization process.

For iris localization, Daugman used an integro-differential operator to find the boundaries of the iris and pupil, which acts as a circular edge detector. Let n be the number of points selected on each arc/circle for finding the iris boundaries. The Integro-Differential Operator (IDO) first sums the image points lying on the arc, then takes the difference of subsequent sums, followed by convolution with a Gaussian; the last step is to find the location of the maximum value over the 3D search space. Let rad be the number of radii in the domain of the IDO and a the size of the Gaussian; the computational complexity of the operator is then O(n·rad·a), whereas its memory consumption is less than that of the Hough space.

The computational cost of the proposed algorithm is calculated as follows. Let n be the number of points obtained along each radially outward line from the pupil center for finding the circle. Outliers are deleted from the n points to reduce their number; the difference between the points on each line contributes towards selecting a point, and a maximum of three points is selected on each line. Only 38 lines are processed, so at most 114 (38 × 3) points are selected. Therefore, the computational complexity is O(k), where k is a constant, and the time required to achieve iris localization is less than that of the other algorithms.
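To make the Hough-transform cost concrete, the following minimal sketch shows circular Hough voting: every edge point votes for every candidate radius, so the work grows as O(n·rad) and the accumulator occupies O(rows·cols·rad) memory. This is an illustrative sketch written for this discussion, not the implementation that was timed in Table 5.6.

```python
import numpy as np

def hough_circle(edge_points, shape, radii, n_theta=90):
    """Circular Hough voting over a 3-D accumulator (rows x cols x radii)."""
    rows, cols = shape
    acc = np.zeros((rows, cols, len(radii)), dtype=np.uint32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for (x, y) in edge_points:                      # n edge points
        for zi, r in enumerate(radii):              # rad candidate radii
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < cols) & (cy >= 0) & (cy < rows)
            # Accumulate votes for candidate centers at this radius.
            np.add.at(acc[:, :, zi], (cy[ok], cx[ok]), 1)
    cy_best, cx_best, z_best = np.unravel_index(np.argmax(acc), acc.shape)
    return cx_best, cy_best, radii[z_best]          # most-voted circle
```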

5.8 Normalization

All the normalization methods perform correctly. This process is not only a transformation from rectangular to polar coordinates but also a compensation for the varying width of irises. The methods have been explained in the previous chapters. Five different normalization methods have been implemented: three use a reference point (the pupil center, the iris center, or the mid-point of the pupil and iris centers), while the other two are normalization using minimum distance and normalization using dynamic size. The per-image time of four of these methods is given in Figure 5.11. Normalization using the pupil center as the reference point takes 0.05 seconds per image for all the databases, normalization using the mid-point of the pupil and iris centers takes 0.07 seconds per image, and normalization using minimum distance takes 0.03 seconds per image for all databases.

Figure 5.11: Time comparison of Normalization methods

The differences among the reference-point methods lie only in the choice of reference point, while the minimum distance method exploits the property of the minimum distance between two points, and the dynamic size method depends on the pupil radius and the minimum width of the iris. If the pupil and iris centers coincide, normalization using the pupil center, the iris center, the mid-point or the minimum distance all produce the same normalized iris image. For dynamic size normalization, the time consumption depends on the width of the iris and increases as the iris becomes wider. Pupil and iris radii are given in Table 5.11. The BATH iris database has the largest average iris width, which is the main reason that dynamic size normalization took the most time there (0.107 seconds per image). As the iris widths are almost the same in CASIA versions 1.0 and 3.0, the normalization time is almost the same for both (0.022 seconds per image), and the iris width is smallest in the MMU database, so it took the minimum time (0.007 seconds per image). The time utilized in normalizing an iris image using the iris center as the reference point is given in Figure 5.12.
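The simplest of these variants, normalization with the pupil center as the reference point, amounts to unwrapping the iris ring into a fixed-size rectangular strip. A minimal sketch is given below, assuming the pupil and iris boundaries are circular and concentric for illustration; the 64 × 256 output size matches the resolution used later in Section 5.9.2, and all names are illustrative.

```python
import numpy as np

def normalize_iris(gray, cx, cy, r_pupil, r_iris, radial_res=64, angular_res=256):
    """Unwrap the iris ring into a radial_res x angular_res strip using the
    pupil center as the reference point (a sketch, not the thesis code)."""
    h, w = gray.shape
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    # Fractions between the pupil boundary (0) and the iris boundary (1).
    fracs = np.linspace(0.0, 1.0, radial_res)
    strip = np.zeros((radial_res, angular_res), dtype=gray.dtype)
    for i, f in enumerate(fracs):
        r = r_pupil + f * (r_iris - r_pupil)   # radius grows from pupil to iris boundary
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, w - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, h - 1)
        strip[i, :] = gray[ys, xs]
    return strip                               # 64 x 256 normalized iris image
```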

Figure 5.12: Time comparison of normalization using iris center as reference point

The average iris radii in the CASIA version 1.0, CASIA version 3.0, BATH and MMU iris databases are 102.21, 101.37, 232.80 and 51.75 pixels respectively. A comparison of pupil and iris radii is tabulated in Table 5.11. The average iris width is largest in the BATH database at 136.46 (232.80 - 96.34) pixels and smallest in the MMU database at only 26.74 (51.75 - 25.01) pixels; the average iris width in the BATH database is thus more than five times that of the MMU database. CASIA versions 1.0 and 3.0 have approximately the same average iris width.
Table 5.11: Radii of pupil and iris in the databases

                         Pupil Radii (pixels)               Iris Radii (pixels)
  Database Name          Average   Minimum   Maximum        Average   Minimum   Maximum
  CASIA version 1.0      45.90     30        64             102.21    83.35     142.92
  CASIA version 3.0      42.88     24.37     91.70          101.37    75.73     147.96
  BATH                   96.34     59        164            232.80    162.28    285.61
  MMU                    25.01     17        36             51.75     42.49     60.82


5.9 Feature Extraction and Matching

The iris image is localized and then normalized using the proposed methods. Features of the normalized iris images are extracted using the methods mentioned in the text and matching is carried out. Euclidean distance and Hamming distance have been used as matching classifiers. Principal Component Analysis, bit planes and wavelets have been implemented as features of the normalized iris image.

5.9.1 Principal Component Analysis

Principal Component Analysis (PCA) is a way of identifying patterns in data and expressing the data so as to highlight their similarities and differences. Since patterns are hard to find in high-dimensional data, where the luxury of graphical representation is not available, PCA is a powerful analysis tool. Once patterns have been extracted from the data, PCA is also a good choice for compressing the data (i.e. reducing the number of dimensions) without much loss of information. In terms of information theory, the idea is to extract the relevant information in an iris image, encode it as efficiently as possible and compare a test iris encoding with a database of similarly encoded models. A simple approach to extracting the information contained in an iris image is to capture the variations in a collection of iris images, independent of any judgment of features, and use this information to encode and compare individual irises [89]. The main use of PCA is to reduce the dimensionality of a data set while retaining as much information as possible; it computes a compact and optimal description of the data set. The first principal component is the combination of variables that explains the greatest amount of variation; the second principal component explains the next largest amount of variation and is independent of the first. The mean m of the training set is calculated and each image is centered by subtracting the mean from it, producing a dataset whose mean is zero. Next, the covariance of this dataset is calculated. As the covariance matrix is square, its eigenvalues and eigenvectors can be computed, and these provide information about the patterns in the data. The eigenvalues are ordered from highest to lowest, and the corresponding eigenvectors provide the data components in order of significance. This arrangement allows the less significant components to be discarded: some information is lost, but if the discarded eigenvalues are small, little information is lost and the final dataset has fewer dimensions than the original. Finally, the retained eigenvectors are arranged as rows (with the most significant eigenvector at the top) and multiplied by the transpose of the centered image; this new data matrix is the projection of the iris image into eigen-iris space.

During the research work, PCA has been implemented and results on the mentioned databases have been obtained. Three different sets of experiments have been carried out. In the first, the number of dimensions is varied from 64 down to one while keeping the number of training images constant, and the effect of dimension reduction on the correct recognition rate is studied. In the second set, the number of training images is altered while keeping the PCA dimension constant, and the correct iris recognition rate is determined. In the third set, the number of classes is increased and the effect of this increase is analyzed.
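The eigen-iris procedure just described can be sketched in a few lines. The snippet below assumes the normalized iris images have already been flattened into row vectors of a training matrix; it is a simplified illustration (for large images one would normally work with the smaller Gram matrix instead of the full covariance), and the function names are illustrative.

```python
import numpy as np

def train_pca(train_vectors, n_components):
    """PCA on flattened normalized iris images (rows = training samples)."""
    mean = train_vectors.mean(axis=0)
    centered = train_vectors - mean
    # Covariance of the centered data; eigh returns eigenvalues in ascending order.
    cov = np.cov(centered, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                 # most significant eigenvectors first
    basis = vecs[:, order[:n_components]]          # keep the leading components
    return mean, basis

def project(vector, mean, basis):
    """Project a flattened normalized iris image into eigen-iris space."""
    return (vector - mean) @ basis
```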

a. Experiment Set 1 (Dimension Reduction)


This set of experiments has been repeated on the images obtained by all the proposed normalization methods. Experiments have been conducted by reducing the dimensions of the eigen-irises, and the results are discussed for the case of three training images. There are fourteen categories of normalized iris images, described as follows:

Normalized 1: Normalization of iris images by considering the pupil center as the reference point, without eyelid localization.

Normalized 2: Normalization of iris images by considering the iris center as the reference point, without eyelid localization.

Normalized 3: Normalization of iris images by taking the mid-point of the pupil center and iris center as the reference point, without eyelid localization.

Normalized 4: Normalization of iris images by utilizing the minimum distance between the iris and pupil boundaries, without eyelid localization.

Normalized 5: Normalization of iris images by the dynamic size model, without eyelid localization.


Similarly, the same normalizations have been carried out with eyelid localization in Normalized 6 to Normalized 10, and in Normalized 11 to Normalized 14 the normalization uses the non-circular pupil boundary. Dynamic size normalization is not combined with the non-circular pupil boundary, because in that scheme the size of the normalized image grows with the radius moving outward from the pupil to the iris; the zigzag boundary of the pupil is therefore not considered in this case. The best results for CASIA version 1.0 are given in Table 5.12, whereas complete and detailed results for all normalization methods are given in Appendix I. For CASIA version 1.0, the best accuracy of 59.16% is produced with images of category Normalized 2 (i.e. normalization using the iris center as the reference point, without eyelid localization) when the PCA dimension is one; the worst result of 47.23% is obtained when 64 PCA dimensions are used. This shows that as the PCA dimension is reduced, the accuracy increases, because the structure of the iris in the normalized image is better separated in a lower-dimensional space. Each PCA dimension vector has 64 elements, so with 64 dimensions the representation has 4096 (64 × 64) elements. The time required to train on the complete database is 1.17 seconds, and recognition takes 2.27 seconds for the CASIA version 1.0 iris database when the PCA dimension is one.
Table 5.12: Iris recognition rate with Normalized 2 using PCA for CASIA version 1.0

  Dimensions of PCA   Accuracy   Training Time (Seconds)   Recognition Time (Seconds)
  64                  47.23%     3.14                      11.24
  61                  46.89%     2.94                      8.72
  58                  46.89%     2.87                      8.31
  55                  46.72%     2.86                      5.82
  52                  47.06%     2.78                      5.75
  49                  47.06%     2.69                      5.36
  46                  47.73%     2.63                      5.3
  43                  47.73%     2.28                      5.06
  40                  47.9%      2.24                      4.8
  37                  48.07%     2.09                      5.03
  34                  47.9%      2.01                      4.85
  31                  48.4%      1.91                      5.63
  28                  48.57%     1.86                      5.4
  25                  48.07%     1.84                      4.13
  22                  49.24%     1.77                      3.88
  19                  49.08%     1.65                      3.59
  16                  49.75%     1.48                      3.18
  13                  49.08%     1.42                      3.19
  10                  48.91%     1.33                      2.91
  7                   47.9%      1.28                      2.63
  4                   50.76%     1.22                      2.46
  1                   59.16%     1.17                      2.27

Results of PCA in terms of accuracy and time consumption for CASIA version 3.0 are shown in Figure 5.13. The CASIA version 3.0 iris database has 396 classes with varying numbers of images per class (ranging from 1 to 26); only those classes (246) which have seven or more images are included in these results. Three images per class have been used for training and the remaining images as test images. An accuracy of 59.29% has been achieved with only one PCA vector; the time required to train the database is 3.4 seconds and recognition is completed in 10.7 seconds. When the number of PCA dimensions is increased to 64, training requires 18.98 seconds and recognition 82.52 seconds. These results are obtained for the Normalized 4 category (i.e. normalization of iris images by utilizing the minimum distance between the iris and pupil boundaries, without eyelid localization).

Figure 5.13: Results of Normalized 4 using PCA for CASIA version 3.0 iris database

The best results for the MMU iris database using PCA are given in Table 5.13. The number of training images is kept constant at three. A maximum accuracy of 70.67% is achieved with only one PCA vector, while 62.44% is the minimum iris recognition rate for this database. Training and recognition times increase with the PCA dimension because of the larger memory consumption and the greater number of computations at high dimensions.
Table 5.13: Accuracy with Normalized 2 using PCA for MMU iris database

  Dimensions of PCA   Accuracy   Training Time (Seconds)   Recognition Time (Seconds)
  64                  62.44%     3.47                      5.42
  61                  62.89%     3.34                      3.33
  58                  62.89%     3.22                      4.97
  55                  63.11%     3.12                      4.67
  52                  62.89%     2.99                      4.28
  49                  62.89%     2.89                      2.96
  46                  63.33%     2.77                      4.35
  43                  63.11%     2.68                      4.22
  40                  63.56%     2.55                      3.97
  37                  63.11%     2.45                      3.65
  34                  63.33%     2.34                      3.49
  31                  63.56%     2.23                      2.49
  28                  63.56%     2.11                      2.31
  25                  62.89%     2.01                      2.95
  22                  63.78%     1.89                      2.96
  19                  63.33%     1.72                      2.44
  16                  63.56%     1.6                       1.75
  13                  64%        1.52                      1.71
  10                  62.89%     1.42                      1.63
  7                   64.44%     1.44                      1.42
  4                   67.11%     1.33                      1.46
  1                   70.67%     1.04                      1.26

For the BATH iris database, Normalized 4 performs best with an accuracy of 72.9%. The time consumed to train the database with three images per class is 0.68 seconds, and recognition over the complete database of 1000 images requires 4.95 seconds. Results are shown in Figure 5.14. The database is trained on only 150 images, while the total number of test images is 850.

Figure 5.14: Results of Normalized 4 using PCA for BATH iris database

b. Experiment Set 2 (Training Images)


In this set of experiments, the number of training images is increased gradually to find out which normalization method performs better in terms of iris recognition accuracy. As shown in Figure 5.15 for CASIA version 1.0, the best results have been achieved with the Normalized 2 method (i.e. normalization of iris images using the iris center as the reference point, without eyelid localization) when the number of training images is 1, 3 or 4. Accuracy in percentage versus the number of training images is presented in Figure 5.15 for all normalization methods. Normalized 1 (i.e. normalization using the pupil center as the reference point, without eyelid localization) performs better when the number of training images is 2, 5 or 6.

Figure 5.15: PCA using different training images on CASIA version 1.0

The same set of experiments has also been conducted on CASIA version 3.0. The number of classes included in the experiments is 246; these are the classes which have more than six images. Results of PCA are shown in Figure 5.16: the normalized categories are given on the x-axis and accuracy on the y-axis, and each normalized category has six bars corresponding to the number of training images (from one to six). Normalized category 4 (i.e. normalization of iris images by utilizing the minimum distance between the iris and pupil boundaries, without eyelid localization) has the highest accuracy for every number of training images, with the best accuracy of 91.06% achieved when the number of training images is six.

Figure 5.16: PCA using different training images on CASIA version 3.0

Figure 5.17: PCA using different training images on MMU


Results for different numbers of training images using PCA on the MMU and BATH iris databases are shown in Figure 5.17 and Figure 5.18 respectively. The MMU iris database has five images in each folder; up to four images of each class are used in training, and the best accuracy achieved is 86.67% for normalized category 2. For the BATH iris database, a maximum of seven images out of twenty has been used to obtain the PCA results. Accuracy is directly proportional to the number of training images, and the best result of 83.5% has been achieved for the Normalized 4 category when the number of training images is seven.
Figure 5.18: PCA using different training images on BATH

The results of these experiments show that PCA performs better with normalization category 2 (i.e. normalization of iris images using the iris center as the reference point, without eyelid localization) for the CASIA version 1.0 and MMU datasets, and with normalization category 4 (i.e. normalization of iris images by utilizing the minimum distance between the iris and pupil boundaries, without eyelid localization) for the CASIA version 3.0 and BATH iris databases.


c. Experiment Set 3 (Training Classes)


In this set of experiments, the number of classes is increased, but only for normalization categories two (for CASIA version 1.0 and MMU) and four (for CASIA version 3.0 and BATH), while keeping the number of training images constant at three. The accuracy, training time and testing time for this set of experiments on all the databases are shown in Figure 5.19, Figure 5.20 and Figure 5.21 respectively. Accuracies of 87.5%, 80%, 71.14% and 67.14% have been achieved for BATH, MMU, CASIA version 1.0 and CASIA version 3.0 respectively when ten classes are used; these decrease to 71.9%, 72.4%, 58.29% and 61.71% when the number of classes reaches 50. This decrease in accuracy is caused by the increase in the number of test images.

Figure 5.19: Accuracy of PCA on all databases using three training images

The time consumed in training the PCA using three images of each class is the same for all the databases, because the same number of images from each database is used. As the number of classes increases, the training time also increases, as shown in Figure 5.20.


Figure 5.20: Training time of PCA on all databases using three training images

The recognition time for the BATH database is higher than for all other databases because the number of test images in each class is seventeen, whereas for the MMU database it is only two; this is why the MMU iris database consumes the least time.

Figure 5.21: Recognition time of PCA on all databases using three training images

It is clear from these results that the best PCA accuracy is achieved for normalization categories 2 and 4. Therefore, in subsequent experiments, normalization is performed either by using the iris center as the reference point (without eyelid localization) or by utilizing the minimum distance between the iris and pupil boundaries (without eyelid localization).


5.9.2 Bit planes

A bit plane of an image is the set of bits occupying the same position in the binary representation of each pixel. For example, for 16-bit data there are 16 bit planes: the first bit plane contains the most significant bits and the 16th contains the least significant bits. The first bit plane gives the roughest but most important approximation of the image, and the higher the index of a bit plane, the smaller its contribution to the final value [91]. Adding bit planes therefore gives a progressively better approximation: each successive bit plane contributes half the value of the previous one, so if a bit is set to 1, half of the previous plane's contribution is added, otherwise nothing is added. In Pulse Code Modulation (PCM) sound encoding, the first bit of a sample denotes its sign, in other words it defines half of the whole range of amplitude values, while the last bit defines the precise value. Replacing more significant bits results in more distortion than replacing less significant bits; lossy media compression that operates on bit planes therefore has more freedom to alter the less significant planes and must preserve the more significant ones [110]. Bit plane is sometimes used as a synonym of bitmap; technically, however, the former refers to the location of the data in memory and the latter to the data itself. One practical question when using bit planes is whether a given plane is random noise or contains significant information. One way to assess this is to compare each pixel (x, y) with its three adjacent pixels (x-1, y), (x, y-1) and (x-1, y-1): if the pixel agrees with at least two of these three neighbours, it is not considered noise [111].

The result of extracting a bit plane is a binary image, i.e. a digital image that has only two possible values for each pixel. Binary images are also called bi-level or two-level images; the names black-and-white (B&W), monochrome or monochromatic are often used for the same concept, although they may also designate images with only one sample per pixel, such as grayscale images. Binary images often arise in digital image processing as masks or as the result of operations such as segmentation, thresholding and dithering. Some input/output devices (such as laser printers, fax machines and bi-level computer displays) can only handle bi-level images, and the interpretation of a pixel's binary value is device-dependent: some systems interpret the bit value 0 as black and 1 as white, while others use the reverse. A binary image is usually stored in memory as a bitmap, a packed array of bits. Binary images can be interpreted as subsets of the two-dimensional integer lattice Z2; the field of morphological image processing was largely inspired by this view.
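Extracting a bit plane from an 8-bit normalized iris image, and the simple noise test mentioned above, can be sketched as follows. The plane numbering is an assumption: the general description above counts from the most significant bit, while the feature discussion below treats bit plane 2 as lying next to the least significant bit, so the sketch numbers planes from the least significant bit to match that usage.

```python
import numpy as np

def bit_plane(image, plane):
    """Extract one bit plane of an 8-bit grayscale image as a binary array.
    Assumption: plane 1 is the least significant bit, plane 8 the most."""
    return (image.astype(np.uint8) >> (plane - 1)) & 1

def fraction_informative(plane_img):
    """Noise test from the text: a pixel agreeing with at least two of its
    three causal neighbours (left, top, top-left) is treated as signal."""
    p = plane_img.astype(np.int32)
    agree = ((p[1:, 1:] == p[1:, :-1]).astype(int) +
             (p[1:, 1:] == p[:-1, 1:]).astype(int) +
             (p[1:, 1:] == p[:-1, :-1]).astype(int))
    return (agree >= 2).mean()             # fraction of non-noise pixels
```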

a. Results on BATH
The BATH iris database has 1000 images from 50 different eyes. All the images are grayscale, stored in bmp format with a size of 1.2 MB and a resolution of 1280 x 960 pixels, and some of the subjects are wearing lenses. The presented algorithm is equally good at localizing the iris in eye images with lenses, even though a lens introduces an extra circle around the iris. The resolution of the images is very high, so discriminative features can be extracted from them easily. Since the features are bit planes of the normalized strip and the iris code is in Boolean format, the matching decision is very efficient. Experiments using the proposed algorithm have been conducted, and the iris localization results for the complete database reach 99.4%, as shown in Table 5.4. Recognition has been evaluated in two modes: (1) identification mode, in which the correct recognition rate is calculated, and (2) verification mode, in which the FAR (False Accept Rate) and FRR (False Reject Rate) are measured.

Identification results for the different feature types are given in Table 5.14, where the recognition rate is reported against the number of images enrolled for training. It is clear from Table 5.14 that feature type 4, corresponding to bit plane 5, performs best in the first two experiments, in which the number of enrolled images is one and two. Feature type 3, corresponding to bit plane 4, gives results close to feature type 4 (the difference between the two in the first and second experiments is 0.7% and 0.1% respectively). The maximum difference from the other features in experiment one is 54.2%, corresponding to feature type 1, and in experiment two this maximum difference reduces to 50.9%, again for feature type 1. Feature type 3 gives the best results when the number of enrolled images is greater than two and less than six, and when more than five images are enrolled, feature types 3 and 4 give the same highest recognition rate. Feature type 1 shows the worst results in every experiment because it corresponds to bit plane 2, which is next to the least significant bit; this bit plane does not prove to be an appropriate feature because of the very high-frequency components in it, which do not capture the discriminative features of the iris. Feature types 3 and 4 perform better than the others because both carry middle-frequency components; with three and four training images their recognition rates are 96.7% and 99.6% respectively, and with six or more training images the results of feature types 3 and 4 remain the same. As the number of enrolled images increases, the overall recognition rate increases and the difference between the best and worst recognition rates decreases. Features based on bit planes 2 to 7 have been analyzed, and bit plane 4 gives the best results for both small and large numbers of training images. Comparing all the features, the correct recognition rate increases with the feature type up to feature type 4 and then decreases for the last two feature types. It can be concluded that feature types 3 and 4 are better than 1, 2, 5 and 6, so the corresponding bit planes 4 and 5 have the better discriminating power for iris images. If the number of enrolled images is 50, the total number of test images is 950, and the numbers of misclassified irises for feature types 1 to 6 are 584, 213, 49, 42, 94 and 241 respectively. For feature types 3 and 4, only six out of twenty images (i.e. 30.0%, which is less than the 42.85%, or three out of seven, normally used in the literature) are used for training to obtain a 96.6% recognition rate. If six images of each eye are used in training, feature types 1 to 6 misclassify 317, 61, 4, 4, 29 and 112 irises. For feature types 3 and 4, only four images are misclassified, because these images have different illumination from those included in training; these features are therefore sensitive to illumination.
Table 5.14: Results of recognition for BATH iris dataset

Correct Recognition Rate (%) using Feature Types (FT)

Enrolled images   FT1 (bp*=2)   FT2 (bp*=3)   FT3 (bp*=4)   FT4 (bp*=5)   FT5 (bp*=6)   FT6 (bp*=7)
       1             41.6          76.9          95.1          95.8          90.6          75.9
       2             45.5          88.1          96.3          96.4          93.8          79.3
       3             51.5          92.3          96.7          96.4          94.1          82.7
       4             56.4          92.2          99.6          96.6          94.2          86.8
       5             61.5          92.9          99.6          96.6          94.2          86.7
       6             68.3          93.9          99.6          99.6          97.1          88.8
* bp = bit plane
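As a concrete illustration of the bit plane features used above, the following minimal Python sketch (not the thesis code; the plane indexing is an assumption, with bit plane 1 taken as the least significant bit) extracts one bit plane of an 8-bit normalized iris strip as a Boolean iris code:

```python
import numpy as np

def bitplane_code(norm_iris, plane=5):
    """Return the requested bit plane of an 8-bit normalized iris strip as a
    flat Boolean iris code (plane counted from 1 at the least significant bit,
    an assumption about the thesis's numbering)."""
    strip = np.asarray(norm_iris, dtype=np.uint8)
    bits = (strip >> (plane - 1)) & 1        # keep only the selected bit of each pixel
    return bits.astype(bool).ravel()         # e.g. a 64x256 strip gives 16384 bits

# Example with a random stand-in for a normalized strip
strip = np.random.randint(0, 256, (64, 256), dtype=np.uint8)
code = bitplane_code(strip, plane=5)
print(code.shape)  # (16384,)
```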

In verification mode, the Receiver Operating Characteristic (ROC) curves are obtained and are shown in Figure 5.22 for all feature types. An ROC curve plots FAR against FRR and shows the overall performance of an algorithm. FAR is the probability that a non-authorized person is accepted as authorized, and FRR is the probability that an authorized user is rejected as non-authorized by the system. The Equal Error Rate (EER) is the point where the ROC curve crosses the line of slope 1, i.e. the point where FAR is equal to FRR. In the case of six training images, the EER (in percentage) is 0.262, 0.1, 0.049, 0.041, 0.096 and 0.17 for feature types 1 to 6 respectively. This also shows that feature type 4 distinguishes the irises better than the other feature types. Based upon these results, feature type 4, corresponding to bit plane 5 of the normalized iris images, outperforms the other bit plane features. Therefore, the correct iris recognition rate on the other databases is obtained with bit plane 5 only. Results on different sizes of normalized images, obtained by varying the threshold, for the BATH iris database are given in Appendix II. By threshold, we mean the maximum normalized Hamming distance allowed for two irises to be declared a match: if the normalized Hamming distance between two iris codes exceeds this threshold, the irises are considered to be unmatched and from different eyes.

Figure 5.22: ROC curves for different features with six enrolled images
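The matching threshold and the error measures just described can be sketched as follows. This is only a hedged illustration, not the thesis implementation; the threshold value 0.36 is an example taken from the BATH results:

```python
import numpy as np

def hamming_distance(code1, code2):
    """Normalized Hamming distance between two Boolean iris codes of equal length."""
    c1, c2 = np.asarray(code1, dtype=bool), np.asarray(code2, dtype=bool)
    return np.count_nonzero(c1 ^ c2) / c1.size

def is_match(code1, code2, threshold=0.36):
    """Declare a match only if the distance does not exceed the threshold."""
    return hamming_distance(code1, code2) <= threshold

def equal_error_rate(genuine_dists, impostor_dists):
    """Approximate the EER by sweeping the threshold and returning the error
    rate at the point where FRR (genuine rejected) and FAR (impostor accepted)
    come closest to each other."""
    genuine = np.asarray(genuine_dists, dtype=float)
    impostor = np.asarray(impostor_dists, dtype=float)
    best_frr, best_far = 1.0, 0.0
    for t in np.linspace(0.0, 1.0, 1001):
        frr = float(np.mean(genuine > t))
        far = float(np.mean(impostor <= t))
        if abs(frr - far) < abs(best_frr - best_far):
            best_frr, best_far = frr, far
    return (best_frr + best_far) / 2.0
```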

b. Results on CASIA version 1.0


After obtaining a correct iris recognition rate of 99.6% using bit planes of the normalized iris images of the BATH database, the same method has been applied to other databases. The BATH iris database contains very clear and high resolution iris images. Based on the results of iris recognition on the BATH database, bit plane 5 has been selected as the Feature Vector (FV). This FV has been used for obtaining the results on the CASIA version 1.0 iris database. Three normalized images of each class have been used for training and the remaining images have been used as test images. The size of the normalized image has been varied with respect to the width of the iris in order to study the effect of iris width on the recognition rate. The size of each normalized image is 64×256, where 64 and 256 are the radial and angular resolution of the iris respectively. The effects of normalized iris image resolution on CASIA version 1.0 are shown in Table 5.15. The correct iris recognition rate increases as the iris width increases up to a certain image resolution and then decreases again. A maximum accuracy of 94.11% has been achieved in this scenario when the image resolution is 50×256. This means that the FV (i.e. bit plane 5) is affected by the width of the iris image.


Table 5.15: Effect of image resolution on accuracy on CASIA version 1.0

Experiment No.   Image Resolution   Accuracy   Threshold
      1              40×256          91.93%      0.47
      2              41×256          91.93%      0.47
      3              42×256          91.93%      0.47
      4              43×256          92.60%      0.47
      5              44×256          92.77%      0.47
      6              45×256          92.94%      0.47
      7              46×256          93.27%      0.47
      8              47×256          93.61%      0.47
      9              48×256          93.10%      0.47
     10              49×256          93.61%      0.47
     11              50×256          94.11%      0.47
     12              51×256          93.78%      0.47
     13              52×256          93.78%      0.47
     14              53×256          93.94%      0.47
     15              54×256          93.78%      0.47
     16              55×256          93.44%      0.47
     17              56×256          93.94%      0.47
     18              57×256          93.61%      0.47
     19              58×256          93.61%      0.47
     20              59×256          93.61%      0.47
     21              60×256          93.44%      0.47
     22              61×256          93.61%      0.47
     23              62×256          93.44%      0.47
     24              63×256          93.61%      0.47
     25              64×256          93.61%      0.47

The reason for this low-high-low behaviour of accuracy against iris width is that the maximum discriminatory information captured by the FV is obtained when the iris width is 50 pixels. If the iris width is less than 50 pixels in the case of CASIA version 1.0, the binary bit plane 5 does not contain the information required for classification, and the same is true when the width increases beyond 50 pixels. Complete results with different threshold values at this resolution of the normalized iris images are given in Table 5.16. The best result of 94.11% has been achieved with eight false rejects and 27 false accepts.
Table 5.16: Results with 50×256 image resolution on CASIA version 1.0

Threshold   Number of False Rejects   Number of False Accepts   Accuracy
  0.30              296                        0                 50.25%
  0.31              296                        0                 50.25%
  0.32              296                        0                 50.25%
  0.33              296                        0                 50.25%
  0.34              296                        0                 50.25%
  0.35              296                        0                 50.25%
  0.36              295                        0                 50.42%
  0.37              292                        0                 50.92%
  0.38              285                        0                 52.10%
  0.39              270                        0                 54.62%
  0.40              253                        0                 57.47%
  0.41              239                        0                 59.83%
  0.42              215                        1                 63.69%
  0.43              171                        1                 71.09%
  0.44              126                        4                 78.15%
  0.45               75                       10                 85.71%
  0.46               34                       20                 90.92%
  0.47                8                       27                 94.11%
  0.48                2                       41                 92.77%
  0.49                0                       44                 92.60%

c. Results on CASIA version 3.0


Bit plane five (i.e. feature type 4) has been used as the FV for CASIA version 3.0 and an accuracy of 99.64% has been achieved. Results of iris recognition for this database using bit plane five are shown in Figure 5.23. These results are for the classes which have seven or more images, where three images of each class have been used as training images and recognition is carried out on the remaining images. Varying the normalized iris image resolution produces the highest accuracy of 99.64% when the iris width is 49 pixels. If the complete image is taken, the result is 99.5%, so the accuracy improves by 0.14%. The reason for this improvement is that the optimal width, for which bit plane five carries the best discriminating information, is 49 pixels. This means that the iris contains more information towards the pupil boundary than near the outer iris boundary. In other words, information near the outer iris boundary is less useful for classification because the iris muscles are connected in that portion. Detailed results using a normalized image width of 49 pixels and bit plane five as the FV for CASIA version 3.0 are given in Table 5.17. These results have been obtained by changing the threshold and calculating the FRR, FAR and total number of errors.


(Plot: accuracy (%) of iris recognition on CASIA version 3.0 using bit plane 5, against iris width in pixels.)

Figure 5.23: Results of iris recognition on CASIA version 3.0 using bit plane 5

A maximum iris recognition rate of 99.64% has been achieved with FRR and FAR of 0.001% and 0.002% respectively. This indicates that the information needed for classification lies in the pupillary part of the iris, i.e. only 49/64 × 100 = 76.5% of the iris width is sufficient to obtain a reasonable recognition accuracy. In other words, even if a quarter of the iris is occluded by eyelids or eyelashes, an accuracy of more than 99.6% can still be achieved.
Table 5.17: Result of CASIA version 3.0 when normalized iris width is 49 pixels

Threshold   False Reject Rate (%)   False Accept Rate (%)   Accuracy (%)
  0.22           0.002143                0.002143               99.57
  0.23           0.002143                0.002143               99.57
  0.24           0.002143                0.002143               99.57
  0.25           0.001429                0.002143               99.64
  0.26           0.001429                0.002857               99.57
  0.27           0.001429                0.003571               99.5
  0.28           0.001429                0.003571               99.5
  0.29           0.001429                0.005                  99.35
  0.30           0.001429                0.006429               99.21
  0.31           0.001429                0.007143               99.14
  0.32           0.001429                0.013571               98.5
  0.33           0.001429                0.020714               97.78
  0.34           0.001429                0.027857               97.07
  0.35           0.001429                0.043571               95.5
  0.36           0.001429                0.057857               94.07
  0.37           0.000714                0.074286               92.5
  0.38           0.000714                0.105714               89.35
  0.39           0.000714                0.135714               86.35


d. Results on MMU
Experiments using bit plane five as the feature vector have been conducted on the MMU iris database. Three images of each eye have been used for training and the remaining images have been utilized as test images. Two sets of experiments have been applied to this dataset. In the first set, the database is trained with the enrollment of three images of each class, and the effects of varying the iris width and the threshold value have been studied. In the second set of experiments, the database is trained with three images of the same class and the average of the three training images is also included as another training image. The results of correct iris recognition against iris width are shown in Figure 5.24. An accuracy of 96.66% has been achieved using three training images when the iris width is 57 pixels (i.e. the resolution of the normalized image is 57×256) at a threshold of 0.43. Adding the average of the three training images improves the overall accuracy of the iris recognition system from 96.66% to 97.55%. In general, the accuracy for the MMU iris database increases at every iris width; the minimum increase of 0.67% in accuracy has been noted for two iris widths, i.e. 52 pixels and 54 pixels, whereas the maximum increase of 1.55% in accuracy is observed when the width of the iris is 58 pixels.

(Plot: accuracy (%) against iris width (pixels) on MMU, with and without the average training image.)

Figure 5.24: Iris recognition rate using bit plane 5 on MMU iris database


An important point is that the iris width discussed above is the width in the normalized iris image and not the actual width of the iris. The average width of the iris in the MMU iris database is 26.74 (51.75 - 25.01) pixels, as given in Table 5.11. Details of the iris recognition results for the second set of experiments are shown in Table 5.18.
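The second set of experiments adds the pixel-wise average of the training strips as an extra enrolled sample. A minimal sketch of that step (the function name and the exact averaging are illustrative assumptions):

```python
import numpy as np

def enroll_with_average(training_strips):
    """Return the gallery for one eye: the normalized training strips plus
    their pixel-wise average as an additional training sample."""
    stack = np.stack([np.asarray(s, dtype=np.float64) for s in training_strips])
    average = np.rint(stack.mean(axis=0)).astype(np.uint8)
    return list(training_strips) + [average]
```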
Table 5.18: Results of iris recognition with image resolution 58×256 on MMU

Threshold   FRR (%)   FAR (%)   Accuracy (%)
  0.30       31.77     31.77       68.22
  0.31       30.88     30.88       69.11
  0.32       28.22     28.22       71.77
  0.33       25.11     25.11       74.88
  0.34       23.11     23.11       76.88
  0.35       20.22     20.22       79.77
  0.36       17.11     17.11       82.88
  0.37       13.77     13.77       86.22
  0.38       11.55     11.55       88.44
  0.39        7.55      7.55       92.44
  0.40        5.77      5.77       94.22
  0.41        3.55      3.33       96.44
  0.42        2.44      1.77       97.56
  0.43        2.88      1.11       97.11
  0.44        4.22      0.44       95.78
  0.45        5.11      0          94.89
  0.46        6.44      0          93.56
  0.47        6.44      0          93.56
  0.48        6.44      0          93.56
  0.49        6.44      0          93.55

5.9.3 Wavelets

Experiments have been conducted on different wavelets. Optimal features have been determined using the Daubechies 2 wavelet on CASIA version 1.0 and these features are then used to obtain the results for the other wavelets. In all these experiments, the wavelet coefficients are quantized. Since not all the coefficients of a wavelet transform carry the information required for recognition, coefficient optimization has been carried out by defining a threshold value. This threshold is chosen so that the image quality and the coefficients required for recognition are not compromised. The coefficients below the threshold are made zero and those above it are made one, which helps in reducing the overall computational burden. The threshold for the wavelet coefficients is zero: all


the values less than zero are made zero and positive values are made one. After this process, each value in FV is either zero or one which makes it binary.

a. Results on CASIA version 1.0 using Daubechies 2


Many different combinations of FV have been used to find the best features. When an image is decomposed using wavelet of level one, it is converted into four sub-images (i.e. Approximation Coefficients (AC), Horizontal Details (HD), Vertical Details (VD) and Diagonal Details (DD)). For further decomposition to level two, AC of level one is used as image which is decomposed to obtain four sub-images of level two. Similarly, AC of level two is used for decomposition into level three and so on. Results of iris recognition on CASIA version 1.0 using Daubechies 2 have been given in Figure 5.25 with many combinations of FVs. Different FVs are used for obtaining the results using two types (original and enhanced) of normalized images.
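A minimal sketch of this decomposition and of the feature vector built from the level-3 horizontal and vertical details, binarized at zero as described above. It uses the PyWavelets package; the exact decomposition settings (boundary mode, etc.) used in the thesis are not specified, so this is only illustrative, and it covers the discrete wavelets only (not the Mexican hat):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fv(norm_iris, wavelet="db2", level=3):
    """Decompose a normalized iris strip to `level` levels and return the binary
    feature vector [HD level, VD level] (coefficients >= 0 become 1, else 0)."""
    coeffs = pywt.wavedec2(np.asarray(norm_iris, dtype=np.float64), wavelet, level=level)
    # coeffs[0] is AC at the coarsest level; coeffs[1] holds (HD, VD, DD) of that level
    hd, vd, _dd = coeffs[1]
    fv = np.concatenate([hd.ravel(), vd.ravel()])
    return (fv >= 0).astype(np.uint8)       # binary FV, compared with Hamming distance
```

Other FV combinations from Figure 5.25 (e.g. appending DD 3 or AC 3) would simply concatenate the corresponding sub-bands.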

(Bar plot: accuracy (%) of iris recognition on CASIA version 1.0 using Daubechies 2 for each feature-vector combination (AC 3, HD 3, VD 3, DD 3, their concatenations, and the level-2 details), for original and enhanced normalized images.)
Figure 5.25: Results of iris recognition using Daubechies 2 on CASIA version 1.0

Enhancement is carried out by subtracting the background from the original normalized image. A decimation algorithm with decimation factor 16 is applied to estimate the background of the


normalized image. To make both images the same size, the decimated image is resized back to the size of the normalized image and the subtraction is carried out. The FVs used for recognition are AC 3 (Approximation Coefficients of level 3), VD 3 (Vertical Details of level 3), HD 3 (Horizontal Details of level 3), DD 3 (Diagonal Details of level 3) and so on. When two or more FVs are combined (e.g. AC 3 and HD 3), this means concatenation of the vectors AC 3 and HD 3; the other FVs in Figure 5.25 follow the same convention. The best result of 99.33% has been achieved with the combination of HD 3 and VD 3 when the number of training images is three out of seven for each iris. The same accuracy has been obtained when the FV is the concatenation of AC 3, HD 3 and VD 3.
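A hedged sketch of this background-subtraction enhancement; the exact decimation and interpolation used in the thesis are not specified, so area-based downsampling and bilinear upsampling via OpenCV are only stand-ins:

```python
import cv2
import numpy as np

def enhance_normalized(norm_iris, factor=16):
    """Estimate the slowly varying background by decimating the strip by `factor`,
    resizing it back to the original size, and subtracting it from the original."""
    strip = np.asarray(norm_iris, dtype=np.float64)
    h, w = strip.shape
    small = cv2.resize(strip, (max(1, w // factor), max(1, h // factor)),
                       interpolation=cv2.INTER_AREA)        # stands in for decimation
    background = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    enhanced = strip - background
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```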

(Bar plot: accuracy (%) for each feature-vector combination, for original and enhanced normalized images, when the average of the training images is included in training.)

Figure 5.26: Results of iris recognition including average training images


The reason for getting the best results with the combination of HD and VD is that the features in the normalized iris images lie mainly in the horizontal and vertical directions. The reason for the minimum accuracy when using original images with the FV AC 3 is that these coefficients are the low frequency components of level 3, and low frequency values do not contain discriminating information because the patterns of the iris are best described by middle frequency components. The same FVs are used to find the accuracy of iris recognition when the average of the training images is also included as a training image, and this process is repeated with enhanced images; the results are presented graphically in Figure 5.26. The minimum and maximum correct iris recognition rates for CASIA version 1.0 using original normalized images are 54.62% and 99.33%, corresponding to AC 3 and [AC 3, HD 3, VD 3] respectively. When the normalized images are enhanced and the same training process is applied (i.e. three images of each iris plus the average of these three images), minimum and maximum correct iris recognition rates of 93.61% and 98.99% have been achieved, corresponding to DD 3 and [AC 3, VD 3] respectively. The FV combining HD 3 and VD 3 gives an accuracy of 98.82%, which is only 0.17% less than the maximum. Based upon the results obtained with the different combinations of features, [HD 3, VD 3] gives the best overall results, so the concatenation of HD 3 and VD 3 is used to find the iris recognition results with the other wavelets.

b. Results using other wavelets on CASIA version 1.0


The best results after applying the different wavelets are given in Table 5.19. All the results have been obtained with the FV formed by the combination of horizontal and vertical details of level three [HD 3, VD 3] (with a few variants also reported). The resolution of the normalized images giving the best accuracy and the corresponding threshold values are also given, along with the length of the FV and the time consumed to complete the results for 34 different resolutions (i.e. from 31×256 to 64×256). The applied wavelets include Haar, Daubechies 2, Daubechies 4, Daubechies 6, Daubechies 8, Daubechies 10, Biorthogonal 5.5, Biorthogonal 6.8, Coiflet 1, Coiflet 3, Coiflet 4, Coiflet 5, Symlet 2, Symlet 4, Symlet 8 and Mexican Hat.
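The sweep just described can be organized as below. This is only a hypothetical driver: `evaluate_accuracy` (training on three images per eye and reporting the recognition rate for a given wavelet and iris width) and `build_fv` are assumed helpers, and the continuous Mexican hat wavelet is omitted because it is handled differently from the discrete families:

```python
import pywt

# Discrete wavelet names as used by PyWavelets
CANDIDATE_WAVELETS = ["haar", "db2", "db4", "db6", "db8", "db10",
                      "bior5.5", "bior6.8", "coif1", "coif3", "coif4", "coif5",
                      "sym2", "sym4", "sym8"]

def sweep(dataset, build_fv, evaluate_accuracy):
    """For every wavelet and every iris width (31..64 rows of the 64x256 strip),
    record the best accuracy and the width at which it was obtained."""
    best = {}
    for name in CANDIDATE_WAVELETS:
        wavelet = pywt.Wavelet(name)          # raises ValueError for unknown names
        for rows in range(31, 65):
            acc = evaluate_accuracy(dataset, wavelet, rows, build_fv)
            if acc > best.get(name, (0.0, None))[0]:
                best[name] = (acc, rows)
    return best
```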


Table 5.19: Results of iris recognition with different wavelets on CASIA version 1.0

S. No.   Wavelet       FV                  Resolution (pixels)   Accuracy (%)   Threshold   FV Length (elements)   Time (sec.)
  1.     Haar          HD 3, VD 3               49×256              98.82         0.35              448               284.11
  2.     Db2           HD 3, VD 3               55×256              99.33         0.34              612               615.00
  3.     Db2           HD 3, VD 3, DD 3         41×256              99.33         0.38              714               633.24
  4.     Db4           HD 3, VD 3               54×256              98.15         0.30              912               466.21
  5.     Db6           HD 3, VD 3               48×256              97.98         0.38             1230               585.32
  6.     Db8           HD 3, VD 3               47×256              98.49         0.35             1710               733.88
  7.     Db10          HD 3, VD 3               31×256              98.82         0.39             1920               892.92
  8.     Bior5.5       HD 3, VD 3               45×256              97.48         0.34             1230               906.95
  9.     Bior6.8       HD 3, VD 3               45×256              97.31         0.36             1840              1024.35
 10.     Bior6.8       AC 3, HD 3, VD 3         48×256              98.49         0.39             2760              1069.04
 11.     Bior6.8       HD 3, VD 3, DD 3         44×256              98.32         0.39             2760              1114.53
 12.     Coif1         HD 3, VD 3               45×256              98.66         0.40              720               295.34
 13.     Coif3         HD 3, VD 3               50×256              97.82         0.45             3800              1054.43
 14.     Coif3         HD 3, VD 3               45×256              98.49         0.37             1840              1025.47
 15.     Coif4         HD 3, VD 3               48×256              98.66         0.4              2704              1210.32
 16.     Coif5         HD 3, VD 3               46×256              99.49         0.4              3534              1438.16
 17.     Sym2          HD 3, VD 3               55×256              98.66         0.34              612               616.55
 18.     Sym4          HD 3, VD 3               43×256              97.98         0.36              760               290.13
 19.     Sym8          HD 3, VD 3, DD 3         49×256              98.49         0.37             2565               818.60
 20.     Mexican Hat   HD 2, VD 2               32×256              97.82         0.46             8192               990.19

After Image Enhancement

S. No.   Wavelet       FV                  Resolution (pixels)   Accuracy (%)   Threshold   FV Length (elements)   Time (sec.)
  1.     Haar          HD 3, VD 3               60×256              98.82         0.33              512               322.60
  2.     Db2           HD 3, VD 3               46×256              99.33         0.39              612               196.60
  3.     Db2           HD 3, VD 3, DD 3         37×256              99.33         0.41              714               210.11
  4.     Db4           HD 3, VD 3               35×256              98.66         0.37              912               428.81
  5.     Db6           HD 3, VD 3               45×256              98.66         0.40             1230               551.99
  6.     Db8           HD 3, VD 3               43×256              98.99         0.40             1620               696.16
  7.     Db10          HD 3, VD 3               30×256              98.82         0.39             1920               457.30
  8.     Bior5.5       HD 3, VD 3               53×256              97.82         0.35             1230               391.98
  9.     Bior6.8       HD 3, VD 3               33×256              97.82         0.37             1748               527.44
 10.     Bior6.8       AC 3, HD 3, VD 3         38×256              96.97         0.25             2622               578.31
 11.     Bior6.8       HD 3, VD 3, DD 3         45×256              98.32         0.39             2760               577.90
 12.     Coif1         HD 3, VD 3               39×256              98.82         0.40              648               239.78
 13.     Coif3         HD 3, VD 3               51×256              98.15         0.45             3800               562.20
 14.     Coif3         HD 3, VD 3               35×256              98.82         0.37             1748               543.64
 15.     Coif4         HD 3, VD 3               30×256              98.82         0.38             2392               718.70
 16.     Coif5         HD 3, VD 3               32×256              99.66         0.39             3306               964.85
 17.     Sym2          HD 3, VD 3               46×256              98.82         0.39              544               195.53
 18.     Sym4          HD 3, VD 3               43×256              98.49         0.37              836               281.43
 19.     Sym8          HD 3, VD 3, DD 3         48×256              98.49         0.4              2565               406.72
 20.     Mexican Hat   HD 2, VD 2               37×256              98.32         0.46             9472              1028.12

Optimum features have been evaluated for these wavelets. The best iris recognition rate of 99.49% has been achieved using the Coiflet wavelets. This accuracy corresponds to a normalized iris image resolution of 46×256 pixels and an FV length of 3534 elements. The time utilized for the complete database over the thirty four different resolutions is 1438.16 seconds; the average time per resolution is therefore 42.29 (=1438/34) seconds, and per image it reduces further to about 0.07 seconds. This is the average time (per image) for training the database and recognizing the test images. When the same wavelets are applied after enhancing the images, the results improve further and the best iris recognition rate of 99.66% has been achieved using the Coiflet 5 wavelet. In this case, less than 50% of the normalized iris image has been used and the image used for finding the FV is smaller. Therefore, the length of the FV (3306 elements) is 228 elements less than the length of the FV without image enhancement. Similarly, the time


consumed while getting the best results with the smaller normalized images is also reduced, from 1438.16 seconds to 964.85 seconds. A maximum accuracy of 98.82% has been obtained by using the Haar wavelet, whose FV length is smaller due to the nature of the wavelet; it is the only wavelet which produces its best results with a relatively large iris width of 60 pixels. Among the Daubechies wavelets, Daubechies 2 performs better than the others, with a best iris recognition accuracy of 99.33% for two combinations of FVs (HD 3, VD 3 and HD 3, VD 3, DD 3); the length of the FV [HD 3, VD 3, DD 3] is 714 elements, which is larger than the 612 elements of [HD 3, VD 3]. Using the biorthogonal wavelets, the best accuracy of 98.49% has been attained with the FV combination of AC 3, HD 3, VD 3. The results of the Coiflet wavelets have already been discussed. Among the Symlet wavelets, Symlet 2 presented the best results with an iris recognition accuracy of 98.82% and a relatively small FV of 544 elements. The Mexican hat wavelet has also been applied to the CASIA version 1.0 iris database: the iris recognition results obtained with this wavelet are above 97.3% with original images, and when the images are enhanced by subtracting the background the accuracy improves to 98.32%. The same experiments have been conducted with a small variation in the training set: the average of the three training images is also included as a training image in the database, so the number of enrolled images per iris increases by one. This process is also repeated after enhancing the images, and the results of these experiments are given in Table 5.20. From these results it is concluded that including the average image in the training set improves the overall results, which are raised further when the normalized images are enhanced. The qualitative behaviour of the results is almost the same as that obtained without including the average of the training images. The same six wavelet families with their different variations are applied for this set of experiments. In most of the cases, the FV is the combination of the horizontal and vertical details of level three. The resolution of the normalized iris images ranges (row-wise) from 30 to 64 pixels in order to find the best iris width. As mentioned earlier, the elements of the FV are zero or one, and making the FV binary reduces the computational time. The time utilized for all these


resolutions in training and testing processes is presented in the last columns of Table 5.19 and Table 5.20.
Table 5.20: Iris recognition results on CASIA version 1.0 including average image

S. No.   Wavelet       FV                  Resolution (pixels)   Accuracy (%)   Threshold   FV Length (elements)   Time (sec.)
  1.     Haar          HD 3, VD 3               63×256              98.82         0.36              512               404.26
  2.     Db2           HD 3, VD 3               45×256              98.49         0.36              544               707.43
  3.     Db2           HD 3, VD 3, DD 3         32×256              98.32         0.4               612               739.80
  4.     Db4           HD 3, VD 3               53×256              98.15         0.30              912               672.71
  5.     Db6           HD 3, VD 3               32×256              97.82         0.37             1066               794.16
  6.     Db8           HD 3, VD 3               48×256              98.66         0.38             1710              1033.43
  7.     Db10          HD 3, VD 3               31×256              98.66         0.39             1920              1006.85
  8.     Bior5.5       HD 3, VD 3               45×256              97.31         0.34             1230              1016.36
  9.     Bior6.8       HD 3, VD 3               30×256              97.31         0.34             1656              1178.91
 10.     Bior6.8       AC 3, HD 3, VD 3         48×256              98.49         0.39             2760              1237.85
 11.     Bior6.8       HD 3, VD 3, DD 3         45×256              97.98         0.39             2760              1259.76
 12.     Coif1         HD 3, VD 3               47×256              98.82         0.40              720               286.10
 13.     Coif3         HD 3, VD 3               51×256              98.32         0.44             3800              1230.45
 14.     Coif3         HD 3, VD 3               55×256              98.49         0.36             1932              1186.10
 15.     Coif4         HD 3, VD 3               47×256              98.49         0.4              2704              1400.07
 16.     Coif5         HD 3, VD 3               45×256              99.66         0.39             3534              1689.90
 17.     Sym2          HD 3, VD 3               45×256              98.49         0.36              544               713.91
 18.     Sym4          HD 3, VD 3               46×256              98.15         0.38              836               306.67
 19.     Sym8          HD 3, VD 3, DD 3         49×256              98.49         0.37             2565               958.91
 20.     Mexican Hat   HD 2, VD 2               35×256              97.98         0.46             8960              1439.47

After Image Enhancement

S. No.   Wavelet       FV                  Resolution (pixels)   Accuracy (%)   Threshold   FV Length (elements)   Time (sec.)
  1.     Haar          HD 3, VD 3               63×256              99.16         0.37              512               455.31
  2.     Db2           HD 3, VD 3               37×256              99.33         0.38              476               230.98
  3.     Db2           HD 3, VD 3, DD 3         37×256              99.33         0.41              714               250.63
  4.     Db4           HD 3, VD 3               50×256              98.82         0.37              912               629.86
  5.     Db6           HD 3, VD 3               32×256              98.66         0.39             1148               837.93
  6.     Db8           HD 3, VD 3               46×256              99.16         0.39             1620              1076.61
  7.     Db10          HD 3, VD 3               50×256              98.82         0.4              2112               522.78
  8.     Bior5.5       HD 3, VD 3               44×256              97.65         0.34             1230               454.28
  9.     Bior6.8       HD 3, VD 3               34×256              98.15         0.36             1748               622.23
 10.     Bior6.8       AC 3, HD 3, VD 3         39×256              97.14         0.24             2622               676.25
 11.     Bior6.8       HD 3, VD 3, DD 3         45×256              98.15         0.39             2760               720.39
 12.     Coif1         HD 3, VD 3               39×256              99.16         0.39              648               297.84
 13.     Coif3         HD 3, VD 3               51×256              98.15         0.45             3800               669.03
 14.     Coif3         HD 3, VD 3               43×256              98.82         0.38             1840               625.75
 15.     Coif4         HD 3, VD 3               41×256              98.66         0.4              2600               843.38
 16.     Coif5         HD 3, VD 3               43×256              99.83         0.34             3420              1095.35
 17.     Sym2          HD 3, VD 3               37×256              98.66         0.38              476               231.23
 18.     Sym4          HD 3, VD 3               45×256              98.49         0.38              836               317.57
 19.     Sym8          HD 3, VD 3, DD 3         52×256              98.66         0.39             2565               469.47
 20.     Mexican Hat   HD 2, VD 2               37×256              98.66         0.46             9472              1494.26

The Haar wavelet performs better when almost all of the iris width (63 rows out of 64) is used; its best iris recognition accuracies are 98.82% with the original images and 99.16% with the enhanced images. Among the Daubechies wavelets, Daubechies 8 outperforms the other Daubechies wavelets with the highest accuracy of 99.16% when the results are obtained using enhanced normalized iris images, and all the Daubechies results have an accuracy above 97.8%. The information discrimination power of the Daubechies 10 wavelet is very high because it uses less than half of the iris width while still reaching an accuracy of 98.82%. Daubechies 6 also utilizes only about 50% of the normalized iris image and performs well, with accuracies of 97.82% (original images) and 98.66% (enhanced images). The minimum FV length among all the wavelets is obtained by Daubechies 2 and Symlet 2, but the results of the Symlet wavelets are lower than those of the Daubechies wavelets. Similarly, the Mexican hat and biorthogonal wavelets provide good discrimination capacity, but the Coiflet 5 wavelet gives the best results, with the highest iris recognition accuracy of 99.83% with image enhancement and 99.66% using the original images. Coiflet is a discrete wavelet which is more symmetrical than the Daubechies wavelet, which makes it a suitable choice for iris recognition. Complete results with the Coiflet 5 wavelet on CASIA version 1.0 are given in Table 5.21; it uses only 67.18% of the normalized iris width. Only one image is falsely rejected and no false accept is noted when the threshold value is 0.34. False rejects decrease and false accepts increase as the threshold increases.


Table 5.21: Results with Coiflet 5 wavelet at image resolution 43×256

Threshold   False Reject   False Accept   FRR (%)   FAR (%)   Accuracy (%)
  0.30           38              0          6.39      0.00        93.61
  0.31           31              0          5.21      0.00        94.79
  0.32           15              0          2.52      0.00        97.48
  0.33            3              0          0.50      0.00        99.50
  0.34            1              0          0.17      0.00        99.83
  0.35            1              5          0.17      0.84        98.99
  0.36            0              9          0.00      1.51        98.49
  0.37            0             14          0.00      2.35        97.65
  0.38            0             14          0.00      2.35        97.65
  0.39            0             14          0.00      2.35        97.65
  0.40            0             14          0.00      2.35        97.65

In view of the above, the Coiflet 5 wavelet with the FV formed by concatenating HD 3 and VD 3 is the best wavelet for iris recognition. The same wavelet with the same FV has therefore been applied to the other iris databases.
(ROC curve: FRR versus FAR for the Coiflet 5 wavelet at image resolution 43×256.)

Figure 5.27: ROC using Coiflet 5 wavelets for CASIA version 1.0


The ROC curve for the Coiflet 5 wavelet, shown in Figure 5.27, gives an EER of 0.0017.

c. Results on CASIA version 3.0


The Coiflet 5 wavelet has been applied to find the iris recognition results on the CASIA version 3.0 iris image database, and the results are shown in Figure 5.28. In the first experiment, three out of seven images per class are used to train the database. Enhanced images have been used in the second experiment. The average of the three training images is included as a training image in the third experiment, whereas enhanced normalized images are used in the fourth experiment. A maximum iris recognition accuracy of 96.59% has been achieved on CASIA version 3.0. The main reason for the accuracy remaining below 97% is that a large number of images in this dataset are blurred or defocused.
(Plot: accuracy (%) of iris recognition on CASIA version 3.0 using the Coiflet 5 wavelet for each of the four experiments.)

Figure 5.28: Iris recognition results on CASIA version 3.0 using Coiflet 5 wavelet

d. Results on MMU
The Coiflet 5 wavelet is used to find the iris recognition rate on the MMU iris database with the FV formed from the horizontal and vertical details of level three. Four types of experiments have been conducted. In the first experiment, three original iris images of each class are


used for training and the remaining images are used as test images. In the second experiment, the first experiment is repeated with enhanced normalized iris images. The third experiment includes the average of the three training images as an enrolled image, with the remaining images used as test images. The fourth experiment uses the enhanced images after background subtraction. An iris recognition rate of 98.22% has been achieved for the first experiment, and the remaining experiments resulted in an accuracy of 98.44%, as shown in Figure 5.29. The difference between original and enhanced normalized iris images appears in the threshold value, which changes from 0.4 to 0.32. The length of the FV in all the experiments is 3534 elements, and the best results have been achieved with a resolution (number of rows) of the normalized image between 46 and 50 pixels.
(Plot: accuracy (%) of iris recognition on MMU using the Coiflet 5 wavelet for each of the four experiments.)

Figure 5.29: Results of Coiflet 5 wavelet on MMU iris database

e. Results on BATH
The Coiflet 5 wavelet performs best on this database with a 100% iris recognition rate. With three training images, 100% accuracy has been achieved in all the conducted experiments (with and without enhancement of images, and with the average of the training images included in the training process), as shown in Figure 5.30. In this case, the threshold value decreases from 0.32 to 0.3 after enhancement of the images. The length of the binary FV is 3306 elements in all experiments. An iris width of only 30 pixels is necessary for obtaining the best results, which indicates that after normalization less than 50% of the image is sufficient to get a very high iris recognition rate. Therefore, if half of an iris is occluded by eyelids, that iris can still be identified correctly; similarly, localization of the eyelids is an overhead if they cover less than half of the iris.

(Bar plot: accuracy (%) for each of the four experiments on BATH; all reach 100%.)

Figure 5.30: Results of Coiflet 5 wavelet on BATH iris database


Chapter 6: Conclusions and Future Research Work

With the current emphasis on security and surveillance, intelligent personal identification has become an important consideration. The iris has been widely studied for personal identification because of its rich structure and non-contact capture mode, and it has proved to be the most reliable and accurate among the biometric traits. The main components of an iris recognition system are image acquisition, iris localization, feature extraction and matching.

6.1 Design & Implementation Methodologies

Normally, iris localization takes more than half of the total time used to recognize a person through an iris recognition system. The system has been designed so that maximum localization accuracy is achieved. The iris is localized by first finding the boundary between the pupil and the iris, using different methods for different databases; different methods have been implemented for pupil localization because the databases use different image capturing devices under different environments (illumination conditions). The irregular boundary of the pupil has been obtained starting from its circular boundary. Each iris image has three prominent areas: (a) pupil, (b) iris and (c) sclera and eyelids. While inspecting the histogram of an iris image, it has been observed that in general it has three overlapping parts: the first part carries the information of the pupil, the second part is related to the iris and the last part corresponds to the sclera and the outer part of the iris. In order to localize the iris, a new method based on the gradient of the intensity values has been designed and implemented, and it performs well on all the databases. After localizing the iris, the next step is to compensate for the variation in the size of the iris due to the camera-to-eye distance and pupil dilation and constriction. For normalizing the iris, five different methods have been implemented: four of the five depend on the selection of a reference point (e.g. pupil center, iris center, mid-point of the pupil and iris centers), whereas the last method depends on the width of the iris. For feature extraction, bit planes and different combinations of wavelet coefficients have been investigated in order to obtain maximum accuracy. Coefficients of the Haar, Daubechies, Symlet, Coiflet, Biorthogonal and Mexican hat wavelets have been used. In


addition, width of the iris is varied from thirty one to sixty four pixels to find out its effect on iris recognition.
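To illustrate the histogram observation made above (the darkest of the three overlapping parts belonging to the pupil), a rough hedged sketch of finding a point inside the pupil is given below. The thesis uses different, database-specific methods, so the percentile cut-off here is purely illustrative, and dark eyelashes or shadows can pull the centroid away from the pupil:

```python
import numpy as np

def pupil_seed_point(eye_img, dark_fraction=0.05):
    """Take the darkest pixels (first lobe of the histogram) and return their
    centroid as a point assumed to lie inside the pupil."""
    img = np.asarray(eye_img, dtype=np.uint8)
    cutoff = np.percentile(img, 100 * dark_fraction)   # cut inside the first histogram lobe
    ys, xs = np.nonzero(img <= cutoff)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())              # (column, row) inside the pupil
```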

6.2 Performance of the Developed System

In this thesis, mainly the performance of iris localization methods on different datasets has been analyzed. A point inside the pupil is obtained to find the location of the pupil in the image; this point has been observed correctly in 100% of the images of the CASIA version 1.0, BATH and MMU iris databases, and in 99.93% of the images of CASIA version 3.0. The exact boundary of the pupil is obtained by a divide and conquer approach: a specified number of points are selected radially, repositioned with respect to the maximum gradient, and then linearly joined to obtain the exact boundary of the pupil. The worst result attained for completely correct pupil localization is 99.3% on CASIA version 3.0 and the best result of 99.8% has been achieved for CASIA version 1.0. For the outer iris boundary, a band is calculated within which the outer boundary lies. One dimensional signals are picked along the radial direction within this band, in sequence at different angles, to obtain the outer circle of the iris. Redundant points are discarded based on their distance from the center of the pupil, because the distance between the center of the pupil and the center of the iris is very small. The domain of directions is the left and right lower half quadrants when the pupil center is at the origin of the axes. This proposed method performs very well on all the databases: the highest accuracy is 99.7% on MMU version 1.0 and the lowest is 99.21% on CASIA version 3.0, whereas the results of correct iris localization on CASIA version 1.0 and BATH are 99.6% and 99.4% respectively. Eyelids are detected by fitting parabolas to points satisfying different criteria. Experimental results show that the proposed method is most effective on CASIA version 1.0, with accuracies of 98.91% for the upper eyelid and 97.8% for the lower eyelid. Upper eyelid localization accuracies of 84.5%, 84.66% and 90.02% have been achieved for the BATH, MMU and CASIA version 3.0 iris image databases respectively. In the case of lower eyelid localization, correct localization outcomes of up to 96.22%, 96.6% and 91.9% have been attained for the MMU, BATH and CASIA version 3.0 iris datasets respectively.
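A hedged sketch of the radial maximum-gradient search described above (the angle set, band limits and outlier rejection in the thesis are database dependent and are not reproduced here):

```python
import numpy as np

def boundary_radius(img, cx, cy, angle, r_min, r_max):
    """Sample a 1-D intensity signal radially outwards from (cx, cy) at `angle`
    and return the radius of the strongest intensity change within [r_min, r_max)."""
    radii = np.arange(r_min, r_max)
    xs = np.round(cx + radii * np.cos(angle)).astype(int)
    ys = np.round(cy + radii * np.sin(angle)).astype(int)
    inside = (xs >= 0) & (xs < img.shape[1]) & (ys >= 0) & (ys < img.shape[0])
    signal = img[ys[inside], xs[inside]].astype(np.float64)
    if signal.size < 3:
        return None
    grad = np.abs(np.diff(signal))                  # gradient along the radial signal
    return int(radii[inside][np.argmax(grad) + 1])  # radius at the maximum gradient

# The outer iris boundary would be estimated from such radii taken at several
# angles in the left and right sectors, discarding radii inconsistent with the rest.
```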


Five different normalization methods have been proposed and implemented: (1) normalization of the iris using the pupil center as reference point, (2) using the iris center, (3) using the mid-point of the iris and pupil centers, (4) normalization using minimum distance and (5) dynamic size normalization. The results of these normalized images have been analyzed. The minimum time consumed for normalization is 0.007 seconds per image, for the MMU iris database with the dynamic size normalization method, and the maximum time utilized in normalization, 18.38 seconds, occurs for the BATH iris database using normalization with the iris center as reference point. The time consumed per image of every database is 0.05 seconds for normalization with the pupil center as reference point and 0.07 seconds for normalization using the mid-point of the iris and pupil centers; the minimum distance normalization method consumes 0.033 seconds per image for every dataset. Bit planes have been used as features of the normalized iris images. Experiments on bit planes two to seven have been conducted and the best results are obtained on bit plane five: a correct iris recognition rate of up to 99.64% has been achieved on CASIA version 3.0, and the other databases have also given encouraging performance with accuracies of 94.11%, 97.55% and 99.6% on MMU, CASIA version 1.0 and BATH respectively. Different wavelet transforms have been used for iris recognition. The best feature vector, determined by analyzing a large number of features, is the combination of the horizontal and vertical details of level three, and the Coiflet 5 wavelet outperforms all the other wavelets, with best iris recognition accuracies of 99.83%, 96.59%, 98.44% and 100% on CASIA version 1.0, CASIA version 3.0, MMU and BATH iris databases respectively.
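As a rough sketch of the first of these schemes (a fixed-size unwrapping around the pupil center; the thesis additionally handles non-circular boundaries and the other reference points, which are not shown here):

```python
import numpy as np

def normalize_iris(img, cx, cy, r_pupil, r_iris, radial_res=64, angular_res=256):
    """Unwrap the annulus between the pupil and outer iris boundaries into a
    radial_res x angular_res strip, using the pupil centre as reference point."""
    strip = np.zeros((radial_res, angular_res), dtype=np.uint8)
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, radial_res)
    for j, theta in enumerate(thetas):
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        strip[:, j] = img[ys, xs]          # nearest-neighbour sampling along each radius
    return strip
```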

6.3 Future Research Work

Research in the following directions can enable researchers to build an error free human iris identification system. Images acquired from the cameras should be checked in an iris image quality phase. Iris image quality can be determined by evaluating parameters such as focus, occlusion, area of the iris, lighting, the image capturing environment and other factors. System performance


can be improved by using a quality metric in the matching or by omitting poor quality images. There is no generally accepted measure of iris image quality, so developing an iris image quality metric is an open problem. Iris localization is a very active research area and many methods have been proposed to segment the iris in images. Two segmentation topics deserving further research are: first, the pupil and iris boundaries cannot be approximated as circles when the images are acquired off-angle, i.e. when the eye is not orthogonal to the capturing device; and second, the segmentation of the iris from noisy parts of the eye such as eyelids, eyelashes, specular reflections and head hair, particularly with certain hair styles. When the iris is occluded by such noise, iris localization is a real challenge. Many feature extraction methods have been proposed by different researchers for analyzing iris texture, but there is no general agreement on which form of features gives the best results; finding the features, or combination of features, which perform best is another possible area of future research. Recognition of human beings using high resolution iris images while the subject is on the move is another area of research: video of the subject can be acquired and the frames in which the iris images are clear can be used for recognition. The iris cannot be used as a biometric in eyes with diseases such as cataract, glaucoma, albinism and aniridia. To identify people with such diseases, multimodal biometric systems are needed. Therefore, it is recommended to research multi-biometric technologies using different combinations of biometrics such as iris and face, iris and ear, iris and fingerprint, or iris and hand geometry. This will not only accommodate people with the diseases mentioned above but will also improve the results of the system and protect it from intruders, spoof attacks and so on. Some researchers [18, 112-114] have already worked in this direction, but a complete system is still needed.



Appendix I
Results of PCA for database CASIA version 1.0 with different normalization methodologies are presented below, where Dim stands for number of dimensions of PCA, Ttime is the time utilized for training in seconds and Rtime is time used for recognition in seconds and Accuracy is in percentage of the total images in the database. Normalized 1 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 57.31 1.16 2.29 4 50.25 1.22 2.48 7 49.41 1.26 2.65 10 49.24 1.34 2.95 13 49.58 1.39 3.19 16 49.41 1.47 3.13 19 50.08 1.57 3.56 22 49.08 1.64 3.67 25 48.74 1.73 3.98 28 49.58 1.81 5.52 31 48.74 1.95 5.69 34 49.08 2.00 4.62 37 48.74 2.07 4.80 40 48.40 2.22 4.75 43 48.74 2.28 4.88 46 49.24 2.35 5.03 49 49.08 2.45 5.25 52 48.40 2.52 5.41 55 48.74 2.65 5.69 58 48.57 2.78 8.03 61 48.74 2.87 8.43 64 47.73 2.93 10.53 Normalized 2 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 59.16 1.17 2.27 4 50.76 1.22 2.46 7 47.90 1.28 2.63 10 48.91 1.33 2.91 13 49.08 1.42 3.19 16 49.75 1.48 3.18 19 49.08 1.65 3.59 22 49.24 1.77 3.88


Appendix I 25 28 31 34 37 40 43 46 49 52 55 58 61 64 48.07 48.57 48.40 47.90 48.07 47.90 47.73 47.73 47.06 47.06 46.72 46.89 46.89 47.23 1.84 4.13 1.86 5.40 1.91 5.63 2.01 4.85 2.09 5.03 2.24 4.80 2.28 5.06 2.63 5.30 2.69 5.36 2.78 5.75 2.86 5.82 2.87 8.31 3.14 8.72 2.94 11.24

Normalized 3 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 58.82 1.19 2.30 4 51.26 1.23 2.42 7 48.74 1.30 2.71 10 49.24 1.40 3.06 13 49.41 1.46 3.30 16 48.24 1.54 3.24 19 47.90 1.57 3.62 22 48.24 1.68 3.80 25 48.24 1.86 4.12 28 47.73 1.97 5.58 31 47.23 1.94 5.89 34 47.56 2.21 6.34 37 47.39 2.27 5.00 40 47.73 2.28 4.85 43 47.06 2.47 5.09 46 46.55 2.53 5.17 49 47.06 2.64 5.47 52 47.23 2.73 5.60 55 47.56 2.82 5.73 58 47.56 2.97 8.45 61 47.39 3.14 8.48 64 47.56 3.31 11.43 Normalized 4 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 58.49 1.19 2.36 4 50.42 1.26 2.56


Appendix I 7 10 13 16 19 22 25 28 31 34 37 40 43 46 49 52 55 58 61 64 47.23 47.39 47.23 47.39 47.56 47.90 47.23 46.72 47.06 47.06 46.72 46.22 45.88 46.22 46.22 46.55 45.71 46.22 46.39 46.05 1.31 2.73 1.37 3.04 1.45 3.25 1.48 3.16 1.56 3.53 1.64 3.70 1.75 3.96 1.80 5.33 1.91 5.63 2.00 4.70 2.08 5.02 2.18 4.74 2.26 4.92 2.37 5.14 2.47 5.34 2.65 5.75 2.88 5.79 3.00 8.93 3.08 9.49 3.20 12.50

Normalized 5 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 57.65 0.38 1.56 4 48.07 0.48 1.66 7 48.57 0.57 1.82 10 47.56 0.64 1.87 13 47.90 0.74 2.13 16 47.90 0.85 2.41 19 47.73 0.91 2.69 22 47.06 1.04 2.76 25 46.72 1.32 2.98 28 46.39 1.26 3.33 31 46.72 1.34 3.06 34 46.55 1.65 5.88 Normalized 6 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 54.45 1.20 2.36 4 50.25 1.27 2.47 7 48.24 1.31 2.73 10 48.07 1.41 3.07 13 47.90 1.48 3.31 16 48.07 1.55 3.24


Appendix I 19 22 25 28 31 34 37 40 43 46 49 52 55 58 61 64 48.40 49.08 48.40 48.74 47.90 47.90 47.56 48.07 47.06 47.56 47.56 47.56 47.39 46.72 46.89 47.06 1.65 3.70 1.75 3.83 1.85 4.16 1.94 5.99 2.03 6.20 2.14 6.62 2.25 5.21 2.35 4.88 2.47 5.12 2.55 5.24 2.68 5.48 2.75 5.60 2.86 5.83 2.96 8.83 3.08 9.36 3.19 12.57

Normalized 7 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 53.11 1.18 2.35 4 48.24 1.25 2.57 7 46.22 1.32 2.74 10 45.88 1.39 2.97 13 46.39 1.48 3.29 16 46.89 1.56 3.29 19 46.55 1.66 3.51 22 46.72 1.61 3.69 25 46.55 1.69 3.94 28 46.05 1.78 5.29 31 45.88 1.92 5.70 34 45.55 2.14 4.94 37 45.88 2.25 5.31 40 46.05 2.35 4.94 43 45.88 2.44 5.09 46 46.22 2.55 5.20 49 46.22 2.68 5.47 52 45.38 2.75 5.67 55 45.55 2.85 5.76 58 45.21 2.97 8.87 61 45.55 3.07 9.43 64 45.71 3.19 12.50 Normalized 8 ------------------------------------------------------Dim. Accuracy Ttime Rtime


Appendix I 1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46 49 52 55 58 61 64 53.78 49.08 47.56 47.39 47.90 47.06 46.39 47.06 46.55 46.39 46.72 47.39 47.56 47.06 47.23 46.72 46.72 46.55 46.22 46.39 45.71 45.71 1.19 2.35 1.27 2.47 1.33 2.74 1.41 3.06 1.46 3.30 1.56 3.25 1.65 3.70 1.74 3.79 1.84 4.14 1.95 5.96 2.04 6.24 2.13 6.62 2.25 5.25 2.34 4.92 2.46 5.11 2.57 5.18 2.66 5.48 2.77 5.57 2.88 5.84 2.96 8.82 3.07 9.37 3.18 12.56

Normalized9 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 53.61 1.19 2.36 4 49.92 1.25 2.57 7 48.07 1.31 2.74 10 46.89 1.39 2.98 13 48.40 1.48 3.29 16 48.07 1.56 3.25 19 47.73 1.65 3.62 22 48.07 1.75 3.85 25 48.07 1.84 4.14 28 48.07 1.95 5.86 31 47.06 2.06 6.23 34 46.72 2.13 4.93 37 47.56 2.26 5.30 40 47.23 2.36 4.91 43 47.06 2.46 5.10 46 46.89 2.56 5.24 49 47.39 2.68 5.50 52 47.56 2.76 5.66 55 48.07 2.87 5.79


Appendix I 58 61 64 47.06 47.39 47.06 2.97 8.89 3.08 9.42 3.11 10.28

Normalized10 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 56.97 0.25 1.16 4 49.58 0.37 1.23 7 48.07 0.39 1.30 10 48.24 0.43 1.40 13 47.90 0.47 1.47 16 48.91 0.50 1.75 19 48.91 0.53 1.86 22 48.74 0.57 1.95 25 48.40 0.61 2.03 28 47.39 0.65 2.17 31 46.89 0.70 2.05 34 47.73 0.74 3.44 Normalized11 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 52.77 1.17 2.26 4 50.08 1.22 2.38 7 46.55 1.24 2.60 10 47.73 1.30 2.96 13 47.90 1.39 3.19 16 47.73 1.47 3.16 19 47.39 1.52 3.50 22 47.23 1.59 3.63 25 47.73 1.68 3.92 28 47.90 1.76 5.34 31 47.73 1.87 5.56 34 47.73 1.95 5.89 37 48.24 2.05 4.89 40 47.23 2.13 4.67 43 48.24 2.23 4.85 46 47.23 2.31 4.99 49 47.23 2.42 5.23 52 46.72 2.49 5.36 55 46.55 2.61 5.55 58 46.89 2.68 7.88 61 46.89 2.78 8.16 64 46.72 2.88 10.18


Appendix I Normalized12 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 53.95 1.15 2.24 4 49.92 1.22 2.50 7 46.39 1.25 2.61 10 46.72 1.32 2.89 13 46.72 1.38 3.12 16 47.06 1.44 3.11 19 47.23 1.53 3.46 22 47.23 1.61 3.67 25 46.72 1.68 3.92 28 46.89 1.77 5.28 31 47.39 1.89 5.56 34 47.23 1.96 4.66 37 47.06 2.07 4.96 40 46.89 2.15 4.71 43 47.90 2.26 4.86 46 48.24 2.32 5.07 49 48.24 2.43 5.23 52 47.73 2.51 5.40 55 47.56 2.60 5.58 58 46.72 2.76 7.96 61 46.55 2.79 8.24 64 46.22 2.90 10.22 Normalized13 Dim. Accuracy 1 53.28 4 49.24 7 46.72 10 48.74 13 47.90 16 48.24 19 47.56 22 47.39 25 47.90 28 47.73 31 47.73 34 47.90 37 47.73 40 47.90 43 47.73 46 47.73 49 47.39 52 47.23 Ttime Rtime 1.16 2.25 1.20 2.38 1.26 2.64 1.32 2.98 1.41 3.18 1.44 3.09 1.52 3.51 1.60 3.63 1.69 3.92 1.79 5.36 1.87 5.55 1.96 5.89 2.08 4.89 2.14 4.68 2.23 4.88 2.39 5.10 2.42 5.20 2.51 5.36


Appendix I 55 58 61 64 46.72 46.72 46.72 46.72 2.60 5.56 2.69 7.88 2.79 8.22 2.90 10.22

Normalized14 ------------------------------------------------------Dim. Accuracy Ttime Rtime 1 54.29 1.16 2.24 4 49.92 1.20 2.46 7 48.91 1.28 2.63 10 47.90 1.31 2.91 13 49.24 1.40 3.12 16 48.74 1.44 3.12 19 48.40 1.52 3.46 22 48.07 1.61 3.73 25 48.57 1.70 3.92 28 48.91 1.78 5.30 31 48.24 1.91 5.58 34 48.40 1.99 4.66 37 48.57 2.04 4.93 40 48.40 2.15 4.69 43 47.90 2.23 4.85 46 47.56 2.32 4.99 49 47.90 2.42 5.22 52 48.57 2.49 5.37 55 48.24 2.60 5.51 58 48.57 2.70 7.92 61 48.74 2.78 8.21 64 49.58 2.87 10.10



Appendix II
Results of bit plane 5 for BATH iris database with different number of rows of normalized iris image are presented below. Experiments have been conducted by changing total number of rows in normalized images starting from 20 to 64, results of only 50 to 64 number of rows are given here. Other rows do not produce better results. Total number of images used in the experiment is 1000 and size of each normalized image is 64 by 256. Threshold value is changed from 0.3 to 0.49 to obtain false reject and false accept along with total errors occurred during matching and at the end maximum accuracy is given with corresponding threshold value and number of errors. Image Rows = 50 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 38 4 42 0.31 31 4 35 0.32 21 5 26 0.33 13 5 18 0.34 7 6 13 0.35 7 6 13 0.36 1 6 7 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Accuracy = 99.30 Minimum Number of Errors = 7

************************************************************************ Image Rows = 51 -----------------------------------------------------------------------------------------------------------153-

Appendix II Threshold 0.30 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48 0.49 False Reject 39 33 21 13 7 7 1 0 0 0 0 0 0 0 0 0 0 0 0 0 False Accept 4 4 5 5 6 6 6 7 7 7 7 7 7 7 7 7 7 7 7 7 Total Errors 43 37 26 18 13 13 7 7 7 7 7 7 7 7 7 7 7 7 7 7

At Threshold = 0.36 Accuracy = 99.30

Minimum Number of Errors = 7

************************************************************************ Image Rows = 52 ---------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 40 4 44 0.31 33 4 37 0.32 20 5 25 0.33 13 5 18 0.34 7 6 13 0.35 7 6 13 0.36 1 6 7 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7


Appendix II 0.47 0.48 0.49 0 0 0 7 7 7 7 7 7

At Threshold = 0.36 Accuracy = 99.30

Minimum Number of Errors = 7

************************************************************************ Image Rows = 53 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 41 4 45 0.31 33 4 37 0.32 20 5 25 0.33 13 5 18 0.34 7 6 13 0.35 7 6 13 0.36 1 6 7 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Minimum Number of Errors = 7 Accuracy = 99.30 ************************************************************************ Image Rows = 54 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 41 4 45 0.31 34 4 38 0.32 19 5 24 0.33 13 5 18 0.34 7 6 13 0.35 5 6 11


Appendix II 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48 0.49 1 0 0 0 0 0 0 0 0 0 0 0 0 0 6 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7

At Threshold = 0.36 Accuracy = 99.30

Minimum Number of Errors = 7

************************************************************************ Image Rows = 55 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 4 46 0.31 35 4 39 0.32 20 5 25 0.33 14 5 19 0.34 7 5 12 0.35 4 6 10 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Minimum Number of Errors = 6 Accuracy = 99.40


Appendix II ************************************************************************ Image Rows = 56 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 4 46 0.31 35 4 39 0.32 21 5 26 0.33 14 5 19 0.34 7 5 12 0.35 4 6 10 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Accuracy = 99.40 Minimum Number of Errors = 6

************************************************************************ Image Rows = 57 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 3 45 0.31 34 4 38 0.32 23 5 28 0.33 14 5 19 0.34 7 5 12 0.35 4 6 10 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7


Appendix II 0.44 0.45 0.46 0.47 0.48 0.49 0 0 0 0 0 0 7 7 7 7 7 7 7 7 7 7 7 7

At Threshold = 0.36 Minimum Number of Errors = 6 Accuracy = 99.40 ************************************************************************ Image Rows = 58 Threshold False Reject False Accept Total Errors ----------------------------------------------------------------------------------------------------------0.30 43 3 46 0.31 34 4 38 0.32 23 5 28 0.33 14 5 19 0.34 7 5 12 0.35 4 6 10 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Minimum Number of Errors = 6 Accuracy = 99.40 ************************************************************************ Image Rows = 59 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 3 45 0.31 36 4 40 0.32 23 5 28 0.33 15 5 20 0.34 7 5 12 0.35 4 6 10


Appendix II 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48 0.49 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 7 7 7 7 7 7 7 7 7 7 7 7 7 6 7 7 7 7 7 7 7 7 7 7 7 7 7

At Threshold = 0.36 Accuracy = 99.40

Minimum Number of Errors = 6

************************************************************************ Image Rows = 60 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 3 45 0.31 38 3 41 0.32 24 4 28 0.33 14 5 19 0.34 7 5 12 0.35 4 6 10 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Accuracy = 99.40 Minimum Number of Errors = 6


Appendix II ************************************************************************ Image Rows = 61 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 3 45 0.31 38 3 41 0.32 25 4 29 0.33 13 5 18 0.34 8 5 13 0.35 3 6 9 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Accuracy = 99.40 Minimum Number of Errors = 6

************************************************************************ Image Rows = 62 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 42 3 45 0.31 39 3 42 0.32 24 4 28 0.33 14 4 18 0.34 8 5 13 0.35 3 6 9 0.36 0 6 6 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7


Appendix II 0.44 0.45 0.46 0.47 0.48 0.49 0 0 0 0 0 0 At Threshold = 0.36 Accuracy = 99.40 7 7 7 7 7 7 7 7 7 7 7 7 Minimum Number of Errors = 6

************************************************************************ Image Rows = 63 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 45 3 48 0.31 38 3 41 0.32 26 3 29 0.33 16 4 20 0.34 7 5 12 0.35 3 4 7 0.36 1 3 4 0.37 0 7 7 0.38 0 7 7 0.39 0 7 7 0.40 0 7 7 0.41 0 7 7 0.42 0 7 7 0.43 0 7 7 0.44 0 7 7 0.45 0 7 7 0.46 0 7 7 0.47 0 7 7 0.48 0 7 7 0.49 0 7 7 At Threshold = 0.36 Accuracy = 99.60 Minimum Number of Errors = 4

************************************************************************ Image Rows = 64 ----------------------------------------------------------------------------------------------------------Threshold False Reject False Accept Total Errors 0.30 48 2 50 0.31 38 3 41 0.32 27 3 30 0.33 15 4 19


Appendix II 0.34 0.35 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48 0.49 6 3 1 0 0 0 0 0 0 0 0 0 0 0 0 0 At Threshold = 0.36 Accuracy = 99.60 4 5 3 7 7 7 7 7 7 7 7 7 7 7 7 7 10 8 4 7 7 7 7 7 7 7 7 7 7 7 7 7 Minimum Number of Errors = 4



References
[1] A. Basit, M. Y. Javed, and M. A. Anjum, "Efficient iris recognition method for human identification," in International Conference on Pattern Recognition and

Computer Vision (PRCV 2005), vol. 1, 2005, pp. 24-26.


[2] [3] B. Miller, "Vital signs of identity," Spectrum IEEE, vol. 31, pp. 22-30, 1994. A. K. Jain, A. Ross, and S. Prabhakar, "Introduction to Biometric recognition,"

IEEE Transaction on Circuits and Systems for Video Technology, vol. 14, pp. 420, 2004. [4] [5] [6] A. Jain, L. Hong, and S. Pankati, "Biometric Identification," Communications of

the ACM, vol. 43, pp. 91-98, 2000.


T. Ruggles, "Comparison of Biometric Techniques," http://www.biometricconsulting.com/bio.htm, 1998. M. A. Anjum, M. Y. Javed, and A. Basit, "Face Recognition using Double Dimension Reduction," in International Conference on Pattern Recognition and

Computer Vision (PRCV 2005), 2005, pp. 43-46.


[7] M. A. Anjum, M. Y. Javed, and A. Basit, "A New Approach to Face Recognition Using Dual Dimension Reduction," International Journal of Signal Processing, vol. 2, pp. 1-6, 2005. [8] M. A. Anjum, M. Y. Javed, A. Nadeem, and A. Basit, "Face Recognition using Scale Invariant Algorithm," in IASTED, International Conference Applied

Simulation & Modeling, 2004, pp. 309-312.


[9] B. Moghaddam, W. Wahid, and A. Pentland, "Beyond Eigenfaces: Probabilistic Matching for Face Recognition," in 3rd IEEE International Conference on

Automatic Face and Gesture Recognition, 1998.


[10] A. Nefian and M. Hayes, "An embedded HMM-based approach for face detection and recognition," in IEEE international Conference on Acoustics, Speech, and

Signal Processing, 1999.


[11] Z. M. Hafed and M. D. Levine, "Face Recognition Using the Discrete Cosine Transform," International Journal of Computer Vision, vol. 43, pp. 167-188, 2001.

-163-

References [12] R. Chellappa, S. Sirohey, C. Wilson, and C. Barnes, "Human and machine recognition of faces: A survey," Technical Report CAR-TR-731, CS-TR-3339,

University of Maryland, 1994.


[13] [14] [15] [16] S. Z. Li and J. Lu, "Face Recognition Using the Nearest Feature Line Method,"

IEEE Transactions on Neural Networks, vol. 10, pp. 439-443, 1999.


H. Moon and P. J. Phillips, "Computational and performance aspects of PCAbased face-recognition algorithms," Perception, vol. 30, pp. 303-321, 2001. Y. Wang, C. Chua, and Y. Ho, "Facial feature detection and face recognition from 2D and 3D images," Pattern Recognition Letters, vol. 23, pp. 1191-1202, 2002. A. Bronstein, M. Bronstein, and R. Kimmel, "Expression-invariant 3D face recognition," in 4th International Conference Audio and Video based Biometric

Person Authentication, 2003, pp. 62-70.


[17] K. Iwano, T. Hirose, E. Kamibayashi, and S. Furui, "Audio-visual person authentication using speech and ear images," in ACM Workshop on Multimodal

User Authentication, 2003, pp. 85-90.


[18] C. Sanderson, S. Bengio, H. Bourlard, J. M. R. Collobert, M. BenZeghiba, F. Cardinaux, and S. Marcel, "Speech and face based biometric authentication," in

International Conference on Multimedia and Expo, 2003.


[19] P. Aleksc and A. Katsaggelos, "An audio-visual person identification and verification system using FAPs as visual features," in ACM Workshop on

Multimodal User Authentication, 2003, pp. 80-84.


[20] K. Chang, K. Bowyer, and V. Barnabas, "Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics," IEEE Trans. Pattern Analysis

and Machine Intelligence, vol. 25, pp. 1160-1165, 2003.


[21] X. Chen, P. Flynn, and K. W. Bowyer, "Visible-light and infrared face recognition," in ACM Workshop on Multimodal User Authentication, 2003, pp. 48-55. [22] T. Hazen, E. Weinstein, and A. Park, "Towards robust person recognition on handheld devices using face and speaker identification technologies," in 5th

international conference on Multimodal Interfaces, 2003, pp. 289-292.


[23] Computer Business Review, 1998.

-164-

References [24] [25] S. Prabhakar and A. K. Jain, "Decision-level fusion in fingerprint verification,"

Pattern Recognition, vol. 35, pp. 861-874, 2002.


R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzalez-Marcos, "Biometric Identification through Hand Geometry Measurements," IEEE Trans. on Pattern

Analysis & Machine Intelligence, vol. 22, pp. 1168-1171, 2000.


[26] [27] [28] [29] http://www.eyedesignbook.com/ch3/eyech3-i.html accessed, 2007. Industry Information: Biometrics, 1996. A. K. Jain, F. D. Griess, and S. D. Connell, "On-line signature verification,"

Pattern Recognition, vol. 35, pp. 2963-2972, 2002.


R. Plamondon and G. Lorette, "Automatic signature verification and writer identification - the state of the art," Pattern Recognition, vol. 22, pp. 107-131, 1989. [30] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: A comprehensive survey," IEEE Transactions on Pattern Analysis and Machine

Intelligence, vol. 22, pp. 63-84, 2000.


[31] [32] I. Yoshimura and M. Yoshimura, "Off-line writer verification using ordinary characters as the object," Pattern Recognition, vol. 24, pp. 909-915, 1991. F. Borowski, "Voice activity detection for speaker verification systems,"

Proceedings of SPIE - The International Society for Optical Engineering, vol.


6937, 2008. [33] W. M. Campbell, J. P. Campbell, D. A. Reynolds, E. Singer, and P. A. TorresCarrasquillo, "Support vector machines for speaker and language recognition,"

Computer Speech and Language, vol. 20, pp. 210-229, 2006.


[34] B. Xiang, U. V. Chaudhari, J. Navrtil, G. N. Ramaswamy, and R. A. Gopinath, "Short-time Gaussianization for robust speaker verification," presented at ICASSP, IEEE International Conference on Acoustics, Speech and Signal 2002. [35] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing: A Review Journal, vol. 10, pp. 19-41, 2000. [36] S. V. Stevenage, M. S. Nixon, and K. Vince, "Visual Analysis of Gait as a Cue to Identity," Applied Cognitive Psychology, vol. 13, pp. 513-526., 1999.

-165-

References [37] [38] L. Wang, W. Hu, and T. Tan, "Recent developments in human motion analysis,"

Pattern Recognition, vol. 36, pp. 585-601, 2002.


P. S. Huang, C. J. Harris, and M. S. Nixon, "Statistical approach for recognizing humans by gait using spatial-temporal templates," IEEE International Conference

on Image Processing, vol. 3, pp. 178-182, 1998.


[39] C.-Y. Yam, M. S. Nixon, and J. N. Carter, "Gait Recognition by Walking and Running: a Model-Based Approach," in Asian Conference on Computer Vision

(ACCV-2002), 2002, pp. 1-6.


[40] [41] M. S. Nixon and J. N. Carter, "Automatic recognition by gait," Proceedings of the

IEEE, vol. 94, pp. 2013-2024, 2006.


A. Kale, A. Sundaresan, A. N. Rajagopalan, N. P. Cuntoor, A. K. RoyChowdhury, V. Kruger, and R. Chellappa, "Identification of humans using gait,"

IEEE Transactions on Image Processing, vol. 13, pp. 1163-1173, 2004.


[42] A. J. Hoogstrate, H. Van Den Heuvel, and E. Huyben, "Ear identification based on surveillance camera images," Science and Justice - Journal of the Forensic

Science Society, vol. 41, pp. 167-172, 2001.


[43] B. Moreno, A. Sanchez, and J. F. Velez, "On the Use of Outer Ear Images for Personal Identification in Security Applications," in IEEE 33rd Annual

International Carnahan Conference on Security Technology, 1999, pp. 469-476.


[44] R. Purkai and P. Singh, "A test of individuality of human external ear pattern: Its application in the field of personal identification," Forensic Science International, vol. 178, pp. 112-118, 2008. [45] [46] A. Iannarelli, Ear Identification, Forensic Identification Series: Paramont Publishing, Freemont, Califoria, 1989. A. J. Hoogstrate, H. Van den Heuvel, and E. Huyben, "Ear Identification Based on Surveillance Cameras Images " http://www.forensic-evidence.com/site/

ID/IDearCamera.html, 2003.
[47] [48] http://www.UNDBiometricsDatabase.html, accessed, 2005. A. Basit and M. Javed, Y., "Localization of iris in gray scale images using intensity gradient," Optics and Lasers in Engineering, vol. 45, pp. 1107-1114, 2007.

-166-

References [49] [50] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and

Systems for Video Technology, vol. 14, pp. 21-30, 2004.


S. Lim, K. Lee, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, pp. 61-70, 2001. [51] [52] J. Kim, S. Cho, and J. Choi, "Iris recognition using wavelet features," Journal of

VLSI Signal Processing, vol. 38, pp. 147-156, 2004.


W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Trans. Signal Processing, vol. 46, pp. 1185-1188, 1998. [53] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris texture analysis," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, pp. 1519 1533, 2003. [54] [55] [56] J. G. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recognition, vol. 36, pp. 279-291, 2003. R. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of

the IEEE, vol. 85, pp. 1348-1363, 1997.


L. Masek and P. Kovesi, "Biometric Identification System Based on Iris Patterns " in The School of Computer Science and Software Engineering: The University of Western Australia, 2003. [57] Y. Z. Shen, M. J. Zhang, J. W. Yue, and H. M. Ye, "A new iris locating algorithm," in International Conference on Artificial Reality and Telexistence--

Workshops (ICAT'06), 2006, pp. 438-441.


[58] [59] H. Proenca and L. A. Alexandre, "Ubiris: A noisy iris image database," in 13th

International Conference on Image Analysis and Processing, 2005, pp. 970-977.


J. Cui, Y. Wang, T. Tan, L. Ma, and Z. Sun, "A fast and robust iris localization method based on texture segmentation," in SPIE Defense and Security

Symposium, vol. 5404, 2004, pp. 401-408.


[60] C. Tian, Q. Pan, Y. Cheng, and Q. Gao, "Fast Algorithm and Application of Hough Transform in Iris Segmentation," in 3rd International Conference on

Machine Learning and Cybernetics, 2004, pp. 3977-3980.

-167-

References [61] A. Rad, R. Safabakhsh, N. Qaragozlou, and M. Zaheri, "Fast iris and pupil localization and eyelid removal using gradient vector pairs and certainty factors," in Irish Machine Vision and Image Processing Conference, 2004, pp. 82-91. [62] [63] [64] [65] [66] [67] [68] [69] [70] [71] L. Ma, Y. Wang, and T. Tan, "Iris recognition based multi-channel Gabor filtering," in The fifth Asian conference on computer vision, 2002, pp. 23-25. L. Flom and A. Safir, "Iris recognition system," U.S. Patent 4 641 349, 1987. J. Daugman, "Biometric Personal Identification System Based on Iris Analysis,"

US patent 5 291 560, 1994.


"Iris Recognition," http://en.wikipedia.org/wiki/Iris_recognition accessed, 2007. http://www.chinahistoryforum.com/index.php?showtopic=21366&st=0 accessed 2008. P. Kronfeld, Groos anatomy and embryology of the eye, H. Davson ed: The Eye, Academic Press, London, 1962. "Eyes," http://www.ratbehavior.org/Eyes.htm accessed, 2007. A. K. Bachoo and J. R. Tapamo, "A segmentation method to improve iris-based person identification," in 7th AFRICON Conference in Africa, 2004, pp. 403-408. "BMIris," http://ctl.ncsc.dni.us/biomet%20web/BMIris.html accessed, 2007. "CASIA-Iris Image Database version 1.0 and 3.0," Chinese Academy of Sciences Institute of Automation China, http://www.sinobiometrics.com accessed 2003 and 2006. [72] [73] [74] [75] [76] "Multimedia University, Iris database," http://persona.mmu.edu.my/~ accessed, 2006. J. S. Lim, Two-Dimensional Signal and Image Processing: Englewood Cliffs, NJ, Prentice Hall, 1990. J. R. Parker, Algorithms for Image Processing and Computer Vision: John Wiley & Sons, Inc. New York, 1997. "Zero crossing," http://www.ii.metu.edu.tr/~ion528/demo/lectures/6/1/index.html accessed, 2007. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second ed: Prentice Hall, Upper Saddle River, New Jersey, 2002.

-168-

References [77] [78] [79] [80] [81] "Image prcessing algoritms,"

http://www.ii.metu.edu.tr/%7Eion528/demo/demochp.html accessed, 2007. J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on

Pattern Analysis and Machine Intelligence, vol. PAMI-8, pp. 679-698, 1986.
"Canny," http://homepages.inf.ed.ac.uk/rbf/HIPR2/canny.htm accessed, 2007. "Hough Transform," http://en.wikipedia.org/wiki/Hough_transform accessed, 2007. J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine

Intelligence, vol. 5, pp. 1148-1161, 1993.


[82] [83] [84] P. V. C. Hough, "Method and means for recognizing complex patterns," U.S. Patent 3 069 654, 1962. J. R. Bergen, P. Anandan, K. Hanna, and R. Hingorani, "Hierarchical modelbased motion estimation," in Euro. Conf. Computer Vision, 1991, pp. 5-10. D. J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," Journal of the Optical Society of America, vol. 4, pp. 2379-2394, 1987. [85] [86] [87] [88] [89] [90] [91] C. Burrus, R. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet

Transforms: Prentice Hall, New Jersy, 1998.


R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, second ed: WileyInterscience publication, 2001. A. W. Goodman, Analytic goemetry and the calculus, fourth ed. New York: Collier Macmillan International Editions, 1980. "Methods for classification," http://sundog.stsci.edu/rick/SCMA/node2.html accessed, 2007. M. Turk and A. Pentland, "Eigenfaces for Face Recognition," Journal of

Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.


"Bit Plane," PC Magazine, 2007. "Bit-plane," http://en.wikipedia.org/wiki/Bit-plane accessed, 2008.

-169-

References [92] "Coiflets," http://documents.wolfram.com/applications/wavelet/FundamentalsofWavelets/1.4 .5.html accessed, 2007. [93] [94] [95] G. Strang and T. Nguyen, Wavelets and Filter Banks: Wellesley Cambridge Press, 1995. E. W. Weisstein, "Hamming Distance ": From MathWorld--A Wolfram Web Resource http://mathworld.wolfram.com/HammingDistance.html accessed, 2007. K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris biometrics: A survey," Computer Vision and Image Understanding, vol. 110, pp. 281-307, 2008. [96] [97] "University of Bath Iris image database," UK http://www.bath.ac.uk/eleceng/research/sipg/irisweb/database.htm accessed, 2006. P. J. Phillips, K. W. Bowyer, and P. J. Flynn, "Comments on the CASIA version 1.0 iris dataset," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 29, pp. 1-2, 2007. [98] A. Basit, M. Y. Javed, and S. Masood, "Non-circular Pupil Localization in Iris Images," presented at 4th International Conference on Emerging Technologies (IEEE ICET 2008), Rawalpindi, Pakistan, 2008. [99] K. Masood, M. Y. Javed, and A. Basit, "Iris Recognition using Wavelets," in

International Conference on Emerging Technologies (ICET 2007) 2007, pp. 253256. [100] J. Illingwroth and J. Kittler, "A survey of Hough transform," Computer Vision,

Graphics and Image Processing, vol. 44, pp. 87-116, 1998.


[101] A. Zaim, "Automatic segmentation of iris images for the purpose of identification," in IEEE International Conference on Image Processing (ICIP-

2005) 2005, pp. III-273-6.


[102] R. Zhu, J. Yang, and R. Wu, "Iris recognition based on local feature point matching," in International Symposium on Communications and Information

Technologies (ISCIT '06). Bangkok, 2006, pp. 451-454.

-170-

References [103] S. P. Narote, A. S. Narote, L. M. Waghmare, and A. N. Gaikwad, "An automated segmentation method for iris recognition," in TENCON 2006, IEEE Region 10

Conference. Hong Kong, 2006, pp. 1-4.


[104] H. Mehrabian and P. Heshemi-Tari, "Pupil boundary detection for iris recognition using graph cuts," in International Conference on Image and Vision Computing

New Zealand (IVCNZ -2007). New Zealand, 2007, pp. 77-82.


[105] L. R. Kennell, R. W. Ives, and R. M. Gaunt, "Binary morphology and local statistics applied to iris segmentation for recognition," in IEEE International

Conference on Image Processing (ICIP-2006), 2006, pp. 293-296.


[106] K. Grabowski, W. Sankowski, M. Napieralska, M. Zubert, and A. Napieralski, "Iris recognition algorithm optimized for hardware implementation," in IEEE

Symposium on Computational Intelligence and Bioinformatics and Computational Biology, 2006, pp. 1-5.
[107] X. Guang-Zhu, Z. Zai-feng, and M. Yi-de, "An image segmentation based method for iris feature extraction," The Journal of China universities of posts and

telecommunications, vol. 15, pp. 96-117, 2008.


[108] C. Teo and H. Ewe, "An efficient one dimensional fractal analysis," in 13th

WSCG International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. Czech Republic, 2005, pp. 157-160.
[109] A. A. Kassim, T. Tan, and K. H. Tan, "A comparative study of efficient generalized Hough transform techniques," Image and Vision Computing, vol. 17 pp. 737-748, 1999. [110] C.-Y. Cho, H.-S. Chen, and J.-S. Wang, "Smooth Quality Streaming With BitPlane Labelling," Visual Communications and Image Processing, Proceedings of

the SPIE, vol. 5690, pp. 2184-2195, 2005.


[111] T. Strutz, "Fast Noise Suppression for Lossless Image Coding," in Picture Coding

Symposium (PCS'2001), 2001.


[112] K. I. Chang, "New multi-biometric approaches for improved person identification," PhD dissertation, University of Notre Dame, 2004.

-171-

References [113] K. I. Chang, K. W. Bowyer, and P. J. Flynn, "An evaluation of multimodal 2D+3D face biometrics," IEEE Transactions on Pattern Analysis and Machine

Intelligence, vol. 27, pp. 619-624, 2005.


[114] N. Poh, S. Bengio, and J. Korczak, "A multi-sample multi-source model for biometric authentication," in IEEE International Workshop on Neural Networks

for Signal Processing, 2002, pp. 375-384.

-172-
