
Real-Time Iris Recognition System

Name1, Name2, Name3

Department of E&TC, College Name, N.M.U., Jalgaon, Maharashtra, INDIA, mailid@gmail.com

Abstract


Iris detection is one of the most accurate and secure means of biometric identification, while also being one of the least invasive. Fingerprints can be faked: dead people can "come to life" through a severed thumb. Thieves can don a nifty mask to fool a simple face recognition program [1]. The iris, by contrast, has many properties that make it an ideal biometric component. It varies very little over a lifetime, yet varies enormously between individuals; irises differ not only between identical twins, but also between the left and right eye of the same person. Because of the hundreds of degrees of freedom the iris provides, and the ability to measure its texture accurately, the false accept probability can be estimated at 1 in 10^31. Another characteristic that makes the iris difficult to fake is its responsive nature: comparisons of measurements taken a few seconds apart will detect a change in iris area if the lighting is adjusted, whereas a contact lens or picture will exhibit zero change and be flagged as a false input. [2]

Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of the irides of an individual's eyes, whose complex random patterns are unique and can be seen from some distance. Not to be confused with another, less prevalent, ocular-based technology, retina scanning, iris recognition uses camera technology with subtle infrared illumination to acquire images of the detail-rich, intricate structures of the iris. Digital templates encoded from these patterns by mathematical and statistical algorithms allow the identification of an individual or of someone pretending to be that individual. Databases of enrolled templates are searched by matcher engines at speeds measured in millions of templates per second per (single-core) CPU, with infinitesimally small false match rates. A key advantage of iris recognition, besides its speed of matching and its extreme resistance to false matches, is the stability of the iris as an internal, protected, yet externally visible organ of the eye.

Key Words: Iris; Recognition; Localization; Segmentation; Real-Time System.

1.1 Acquiring the Image

We begin with a 320x280 pixel photograph of the eye, taken from 4 centimeters away using a near-infrared camera. The near-infrared spectrum emphasizes the texture patterns of the iris, making the measurements taken during iris recognition more precise. All images tested in this program were taken from the Chinese Academy of Sciences Institute of Automation (CASIA) iris database. [3]

1. Introduction
Iris recognition has been used for many decades and shown to be a highly accurate method of identifying people by the unique patterns of the human iris, compared with other biometric systems [9]. The human iris is an internal organ of the eye, well protected from the external environment. It is the annular part between the pupil and the sclera, and has distinct characteristics such as freckles, coronas, stripes, furrows, crypts, and so on. It is an inner organ that is nevertheless visible from outside; hence, an iris image can be captured without physical contact. Each eye has its own iris pattern that is stable throughout one's life, making it an excellent biometric for an identification system that offers speed, reliability, and automation. The structure of the human eye is shown in Figure 1.

Figure 1. Right view of the human eye

Figure 2. Near-infrared image of an eye from CASIA

Iris recognition is the process of recognizing a person by analyzing the random pattern of the iris (Figure 1). The automated method of iris recognition is relatively young, existing in patent only since 1994.

2. METHODOLOGY
To achieve automated iris recognition, there are three main tasks. First, we must locate the iris in a given image. Second, it is necessary to encode the iris information into a format amenable to calculation and comparison, for example a binary string. Finally, the data must be storable, so that encodings can be loaded and compared. [4]

An iris recognition system mainly includes four parts: iris image acquisition, iris image pre-processing, iris feature extraction and encoding, and iris matching and recognition. Pre-processing further includes iris positioning, normalization, and image enhancement.

Iris image acquisition: Iris image acquisition is the first step in iris recognition. The iris is a very small organ, around ten-odd millimeters in diameter. Iris colors also differ greatly between individuals; dark brown irises in particular have very obscure texture, so a special acquisition device must be used to capture iris images with rich texture. At acquisition time, the user stands 10-50 cm from the device and stares at the acquiring window with eyes wide open so that a clear iris image can be acquired.

Iris positioning: As the most important step in the whole recognition process, iris positioning accurately determines the inner and outer boundaries of the iris, ensuring that features are extracted from a similar area each time without too much deviation; positioning speed and accuracy determine the practicability and feasibility of the entire system. Boundary positioning methods fall into two main types. One comprises algorithms based on a circular iris model, including methods based on the grey gradient (for example the integro-differential method) and methods based on binarized boundary points (such as least squares and the Hough transform); the other comprises algorithms based on a non-circular iris model, including ellipse fitting and active contour methods.

Iris normalization: During acquisition, the images obtained differ in size and appear rotated or translated under the influence of factors such as focal length, eye size, eye translation and rotation, and pupil contraction. For ease of comparison, an iris recognition system normally normalizes each original image to the same size and corresponding position, eliminating the effect of translation, scaling, and rotation on recognition.

Iris image enhancement: The acquisition device itself can render iris images unevenly illuminated; this is normally treated through histogram equalization. Various noises also interfere during acquisition; noise due to, for example, reflection of light is usually removed through homomorphic filtering. If the acquired image is dim and unclear, the recognition capability of the system is greatly reduced; a reconstruction-based super-resolution method is usually used to improve the image. In short, image enhancement is intended to reduce the effect of uneven illumination and noise on the recognition capability of the system.

Iris feature extraction and encoding: A suitable algorithm extracts distinctive detail features from the iris image and records them in an appropriate form, producing the iris encoding and finally creating a feature template.
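As a concrete illustration of the enhancement step, the sketch below implements plain histogram equalization in NumPy. The function name and behavior are our own illustrative choices, not the paper's implementation; a real system would likely call a library routine.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image given as a NumPy array."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map each gray level through the normalized cumulative distribution.
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

After equalization, the gray levels of a low-contrast iris image are spread over the full 0-255 range, which makes the subsequent edge and texture steps more reliable.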

Figure 3. Image Recognition Flow Diagram

As shown in Figure 3, an iris recognition method commonly consists of three main modules:

- Image acquisition: Image acquisition is the first step of the whole process. The aim is to acquire an image of the eye, which can be done with tools such as a CCD camera or an infrared camera. The most important thing is to obtain a good image that the iris recognition system can process well.

- Pre-processing and feature extraction: Some researchers consider this the core of the iris algorithm. Pre-processing includes iris localization, feature extraction, and encoding.

- Matching: Matching refers to the final decision for recognition or identification of a person. This involves not only matching and comparing irises, but also extracting the information from the iris. The matching stage can use algorithms such as the Hamming distance or the Euclidean distance.
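To make the matching step concrete, here is a minimal sketch of the fractional Hamming distance between two binary iris codes. The optional occlusion mask parameter is our own addition, anticipating the iris masks discussed later in the paper.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Fraction of disagreeing bits between two binary iris codes.
    Bits excluded by an (optional) occlusion mask are ignored."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    valid = np.ones_like(code_a) if mask is None else np.asarray(mask, dtype=bool)
    disagree = (code_a ^ code_b) & valid   # XOR finds differing bits
    return disagree.sum() / valid.sum()
```

Identical codes give 0.0 and unrelated codes cluster near 0.5; in Daugman-style systems the acceptance threshold is commonly reported to be around 0.32.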

2.1. Iris Location


When locating the iris there are two potential options. The software could require the user to select points on the image, which is both reliable and fairly accurate; however, it is also time consuming and implausible for any real-world application. The other option is for the software to auto-detect the iris within the image. This process is computationally complex and introduces a source of error due to the inherent difficulties of computer vision. However, because the software then requires less user interaction, it is a major step towards a system suitable for real-world deployment, and thus became a priority for extending the program specification. Locating the iris is not a trivial task, since its intensity is close to that of the sclera and it is often obscured by eyelashes and eyelids. [5]

This step relates directly, for better or worse, to the accuracy of iris recognition. From the point of view of feature extraction, available methods can be divided into three types: phase-analysis methods, such as Daugman's phase encoding; zero-crossing detection methods, such as Boles' 1D wavelet zero-crossing encoding; and texture-analysis methods, such as Wildes' Laplacian pyramid algorithm. [10]

Matching and identification: Iris recognition is a typical pattern-matching problem, i.e. matching the feature of an acquired image against the iris feature templates in the database to judge whether the two irises belong to the same person. Matching algorithms normally correspond to the feature extraction algorithms used; the main matching measures are the Hamming distance and the Euclidean distance. The matching process can operate in two modes. The first is identification: the feature to be recognized is matched against all stored feature templates to spot the matching identity among many, i.e. one-to-many matching. The second is verification: the feature to be recognized is matched against the identity template declared by the user to judge whether they belong to the same person, i.e. one-to-one matching. Compared with identification, verification has a much smaller search range and much higher speed. [6][8]

Edge Detection: Since the picture was acquired using an infrared camera, the pupil is a very distinct black circle. The pupil is in fact so black relative to everything else in the picture that a simple edge detection should find its outside edge very easily. Furthermore, the threshold of the edge detection can be set very high, ignoring smaller, less contrasting edges while still retrieving the entire perimeter of the pupil. The best edge detection algorithm for outlining the pupil is Canny edge detection. This algorithm uses horizontal and vertical gradients to deduce edges in the image. After running Canny edge detection on the image, a circle is clearly present along the pupil boundary.
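The gradient idea behind this step can be sketched as follows. This is a deliberately reduced stand-in for Canny: central-difference gradients plus a single high threshold, with the Gaussian smoothing, non-maximum suppression, and hysteresis of real Canny omitted.

```python
import numpy as np

def gradient_edges(img, thresh):
    """Mark pixels whose gradient magnitude exceeds a (high) threshold.
    A simplified sketch of the gradient stage of Canny edge detection."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    mag = np.hypot(gx, gy)
    return mag > thresh
```

With a high threshold, only the strong dark-to-bright transition at the pupil boundary survives, while weaker texture edges are suppressed.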

Consequently, the pupil information does not directly determine the same parameters for the iris. However, it does give a good starting point: the pupil center. Most modern iris detection algorithms use random circles to determine the iris parameters. Starting at the pupil, these algorithms guess potential iris centers and radii, then integrate over the circumference to determine whether the candidate circle lies on the border of the iris. While this is highly accurate, the process can consume a lot of time. This module explains an alternative approach which is much more lightweight, but has higher error rates.

3.2 Iris Radius Approximation

The first step in finding the actual iris radius is to find an approximation of it, which can then be fine-tuned to obtain the actual iris parameters. To find this approximation, a single edge of the iris must be located. Knowing that eyes are most likely to be distorted in the top and bottom parts due to eyelashes and eyelids, the best choice for finding an unobstructed edge is along the horizontal line through the pupil center. Having decided where to attempt to detect the iris edge, the question of how to do it arises. Some type of edge detection should clearly be used. For any edge detection it is a good idea to blur the image beforehand to suppress noise, but too much blurring can dilate the boundaries of an edge or make it very difficult to detect. Consequently, a smoothing filter such as the median filter should be applied to the original image. This type of filter eliminates sparse noise while preserving image boundaries. The image may need its contrast increased after the median filter. [7]

Figure 5 a) & b). The original image after running through a median filter.

A median filter works by assigning to each pixel the median value of its neighbors. Now that the image is prepped, the edge detection can be done. Since there is such a noticeable rising edge in luminance at the edge of the iris, filtering with a Haar wavelet should act as a simple edge detector. The area of interest is not the entire horizontal line through the iris, but the portion of that line to the left of the pupil, so that the rise in luminance at the transition from iris to sclera is the only major step. [13]
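A direct, unoptimized sketch of the median filter just described (pure NumPy; a real system would use a vectorized library implementation):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood;
    borders are handled by reflection padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

An isolated noise pixel is outvoted by its neighborhood and removed, while a genuine edge, where half the window lies on each side, is left in place.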

Figure 4. Canny edge detected image of the eye

3. Iris Recognition: Detecting the Iris


3.1 Iris Detection

With the information on the pupil discovered, locating the iris can now begin. It is important to note that the pupil and iris are not concentric.

a. Haar wavelet

Figure 7. The unrolled iris before and after edge detection

b. Area of interest

4.1 Iris Information Extraction

To extrapolate the iris's center and radius, two chords of the actual iris through the pupil must be found. This can be accomplished easily with the information gained in the previous step. By examining points on the strip whose x values are offset by half of the strip's length, a chord of the iris through the pupil center is formed. It is important to pick the vectors for these chords so that they are maximally independent of each other, while also staying far from areas where eyelids or eyelashes may interfere.

c. Area of interest filtered with a Haar wavelet

Figure 6 a, b & c. By filtering the area of interest with a Haar wavelet, all rises in luminance are transformed into high-valued components of the output. The sharpness of the change in luminance affects the overall height of the component. [11][12]
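The Haar-wavelet filtering of the area of interest amounts to a 1D correlation with a step kernel; a minimal sketch follows, where the kernel width is an arbitrary illustrative choice.

```python
import numpy as np

def haar_step_response(profile, width=8):
    """Correlate a 1D luminance profile with a Haar step kernel
    (width -1s followed by width +1s); a rising edge yields a large
    positive peak in the response."""
    kernel = np.concatenate([-np.ones(width), np.ones(width)])
    return np.correlate(profile, kernel, mode='valid')
```

The index of the maximum response, offset by the kernel half-width, locates the rising iris-to-sclera edge along the scanned line.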

4. Iris Translation

Having acquired an approximate radius, padding this value slightly should produce a circle, centered on the pupil, which contains the entire iris. Furthermore, with the perimeter of the pupil known, an annulus may be formed which should have the majority of its area filled by the iris. This annulus can then be unrolled into Cartesian coordinates through a straightforward discretized transformation (r → y, θ → x). The details of this procedure are described in Step 3. If the iris were perfectly centered on the pupil, the unrolled image would have a perfectly straight line along its top; if the iris is off center even a little, this line will be wavy. The line represents the distance of the iris edge from the pupil center, and it is this line which will help determine the iris's center and radius. Consequently, an edge detection algorithm must be run on the strip to determine the line's exact location. Once again Canny edge detection is used. However, before the edge detection is run, the image should undergo some simple pre-processing to increase the contrast of the line, allowing a higher edge-detection threshold that eliminates extraneous data. [13]
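The unrolling transform (r → y, θ → x) can be sketched with nearest-neighbor sampling as below; the function name and angular resolution are our own illustrative choices.

```python
import numpy as np

def unroll_annulus(img, cx, cy, r_in, r_out, n_theta=360):
    """Unroll the annulus between radii r_in and r_out, centered on the
    pupil at (cx, cy), into a rectangle: radius maps to rows, angle to
    columns (nearest-neighbor sampling)."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.arange(r_in, r_out)
    rr, tt = np.meshgrid(radii, thetas, indexing='ij')
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]
```

On a perfectly centered iris, each unrolled row samples a single radius, so the iris boundary appears as a straight horizontal line; any off-center shift shows up as waviness.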

Figure 8. The points selected on the strip to form the chords of the iris through the pupil.

The center of the iris can be computed by examining the shift vectors of the chords. By looking at both sides of a chord and comparing their lengths, an offset can be computed; if the center were shifted by this vector, the two halves of the chord would be equalized. Doing this with both chords yields two different offset vectors. However, simply shifting the center through both of these vectors would overcompensate for some components of the shift, because the vectors are not orthogonal. Thus, the center should be shifted through the first vector plus the component of the second that is orthogonal to the first. [8]

Figure 9. The change vectors (black) represent the shift of the pupil (black circle) needed to find the iris center. By just adding the vectors (blue vector), the result (blue circle) is offset by any component the two change vectors share. Consequently, by adding the orthogonal component (gray vector) of one vector to the other (red vector), the actual iris center (red circle) is found.
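The vector combination above amounts to adding the first offset vector and only the component of the second orthogonal to it; a minimal sketch (the function name is ours):

```python
import numpy as np

def combine_shifts(v1, v2):
    """Total center shift: the first chord's offset plus only the component
    of the second that is orthogonal to the first, so the shared component
    is not applied twice."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    parallel = (v2 @ v1) / (v1 @ v1) * v1   # projection of v2 onto v1
    return v1 + (v2 - parallel)
```

When the two chord vectors are already orthogonal, this reduces to plain vector addition.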

In this way a user is not penalized in the overall authentication for blocking a portion of their iris; this takes advantage of the fact that only a fraction of the iris is needed for identification. The algorithm described in this module does not currently produce a mask. One way a mask could be produced is during the iris extrapolation: after unrolling the iris, discontinuities in the line across the top caused by eyelids or eyelashes could be detected, and a mask returned showing them. Another way a mask could be generated is by running an edge detector around the edge of the computed iris; all messy sections of the perimeter could then be found and masked.

6. Iris Recognition: Gabor Filtering


To understand Gabor filtering, we must first start with Gabor wavelets. A Gabor wavelet is formed from two components, a complex sinusoidal carrier and a Gaussian envelope:

g(x, y) = s(x, y) · w_r(x, y)

The complex carrier takes the form:

s(x, y) = exp(j(2π(u0·x + v0·y) + P))

We can visualize the real and imaginary parts of this function separately, as shown in the figure.

Figure 10. The pupil center and perimeter, along with the original estimate of the iris perimeter and the determined iris center and perimeter.

The diameter of the iris can now be estimated by simply averaging the two chord diameters. While this is not a perfect estimate (that would require a single chord through the iris center), it is a very good approximation.

5. Possible Improvements

Currently the algorithm examines only two static vectored chords. While these were chosen to work best within the constraints of maximal orthogonality and minimal eyelash/eyelid interference, there are still many cases where they do not work. It is possible for someone to have the entire upper half of their iris covered and still have the iris code generation work, since less than half of the iris surface is needed for an identification. Of course, if half of the eye is shut, the chords will not intersect the iris edge and no result will be produced. Instead of static chord vectors, an adaptive algorithm could start from the eye center and work progressively through angles to find points near the eyelid interference region; orthogonality would then be maximized for that eye while still finding points on the iris edge. Most modern iris detection algorithms also produce what is known as an iris mask. This mask represents the portion of the iris obstructed by the eyelid or eyelashes, which can then be ignored when doing the iris code comparison.

The real part of the carrier is given by:

Re(s(x, y)) = cos(2π(u0·x + v0·y) + P)

and the imaginary part by:

Im(s(x, y)) = sin(2π(u0·x + v0·y) + P)

The parameters u0 and v0 represent the frequencies of the horizontal and vertical sinusoids respectively, and P represents an arbitrary phase shift. The second component of a Gabor wavelet is its envelope; the resulting wavelet is the product of the sinusoidal carrier and this envelope. The envelope has a Gaussian profile and is described by the following equation:

w_r(x, y) = K · exp(−π(a²·(x − x0)r² + b²·(y − y0)r²))

where:

(x − x0)r = (x − x0)·cos(θ) + (y − y0)·sin(θ)
(y − y0)r = −(x − x0)·sin(θ) + (y − y0)·cos(θ)

The parameters are: K, a scaling constant; (a, b), the envelope axis scaling constants; θ, the envelope rotation constant; and (x0, y0), the peak of the Gaussian envelope. Putting it all together, we multiply s(x, y) by w_r(x, y), producing a wavelet like the one shown below. [14]
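The equations above translate directly into NumPy; in the sketch below the grid extent and default parameter values are arbitrary choices for illustration.

```python
import numpy as np

def gabor_wavelet(size, u0, v0, P=0.0, K=1.0, a=1.0, b=1.0, theta=0.0,
                  x0=0.0, y0=0.0):
    """Complex 2D Gabor wavelet: sinusoidal carrier times rotated Gaussian
    envelope, with coordinates sampled on [-0.5, 0.5] x [-0.5, 0.5]."""
    xs = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(xs, xs)
    carrier = np.exp(1j * (2 * np.pi * (u0 * x + v0 * y) + P))
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    envelope = K * np.exp(-np.pi * (a**2 * xr**2 + b**2 * yr**2))
    return carrier * envelope
```

At the envelope peak the wavelet magnitude equals K, and it decays toward the borders of the grid, which is what localizes the sinusoid in space.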

Figure 11. A 1D Gabor wavelet

Any given iris has a unique texture that is generated through a random process before birth. Filters based on Gabor wavelets turn out to be very good at detecting patterns in images. We use a fixed-frequency 1D Gabor filter to look for patterns in our unrolled image. First, we take a one-pixel-wide column from the unrolled image and convolve it with a 1D Gabor wavelet. Because the Gabor filter is complex, the result has a real and an imaginary part, which are treated separately. We only want to store a small number of bits for each iris code, so the real and imaginary parts are each quantized: if a given value in the result vector is greater than zero, a one is stored; otherwise a zero. Once all the columns of the image have been filtered and quantized, we can form a new black-and-white image by putting all of the columns side by side. The real and imaginary parts of this image (a matrix), the iris code, are shown in Figures 12 and 13.
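The encoding step just described can be sketched as follows; the filter frequency and envelope width are arbitrary illustrative values, not those of any published system.

```python
import numpy as np

def iris_code(unrolled, freq=0.2, sigma=4.0):
    """Quantize each column of the unrolled iris with a fixed-frequency
    1D Gabor filter: the signs of the real and imaginary responses give
    two bit matrices (the iris code)."""
    n = unrolled.shape[0]
    t = np.arange(n) - n // 2
    wavelet = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)
    real_bits = np.empty_like(unrolled, dtype=bool)
    imag_bits = np.empty_like(unrolled, dtype=bool)
    for col in range(unrolled.shape[1]):
        resp = np.convolve(unrolled[:, col], wavelet, mode='same')
        real_bits[:, col] = resp.real > 0
        imag_bits[:, col] = resp.imag > 0
    return real_bits, imag_bits
```

Two codes produced this way can then be compared bit-by-bit with the fractional Hamming distance described earlier.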

REFERENCES

[1] S. Sanderson, J. Erbetta. Authentication for secure environments based on iris scanning technology. IEE Colloquium on Visual Biometrics, 2000.
[2] J. Daugman. How iris recognition works. Proceedings of 2002 International Conference on Image Processing, Vol. 1, 2002.

[3] E. Wolff. Anatomy of the Eye and Orbit. 7th edition. H. K. Lewis & Co. Ltd, 1976.
[4] R. Wildes. Iris recognition: an emerging biometric technology. Proceedings of the IEEE, Vol. 85, No. 9, 1997.
[5] J. Daugman. Biometric personal identification system based on iris analysis. United States Patent 5,291,560, 1994.
[6] J. Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, 1993.

Figure 12. Real Part of Iris Code

[7] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride. A system for automated iris recognition. Proceedings IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121-128, 1994.
[8] W. Boles, B. Boashash. A human identification technique using images of the iris and wavelet transform. IEEE Transactions on Signal Processing, Vol. 46, No. 4, 1998.

Figure 13. Imaginary Part of Iris Code

Now that we have an iris code, we can store it in a database, in a file, or even on a card. What happens, though, when we want to compare two iris codes?

7. Future of Iris Recognition

As evident from the results, it is relatively easy to create an algorithm that detects and recognizes irises to a calculated degree of confidence. In addition, a little research (hint: Google "John Daugman") makes clear that more sophisticated algorithms exist that give zero false acceptance, something many other authentication techniques simply cannot deliver. One specific algorithm, patented by Dr. Daugman, is currently the most accepted and widely used in iris recognition systems. This embodiment uses robust algorithms in each part of the implementation (pupil and iris detection, masking, Gabor correlation) and has experimentally proven to be extremely accurate.

[9] S. Lim, K. Lee, O. Byeon, T. Kim. Efficient iris recognition through improvement of feature vector and classifier. ETRI Journal, Vol. 23, No. 2, Korea, 2001.
[10] S. Noh, K. Pae, C. Lee, J. Kim. Multiresolution independent component analysis for iris identification. The 2002 International Technical Conference on Circuits/Systems, Computers and Communications, Phuket, Thailand, 2002.
[11] Y. Zhu, T. Tan, Y. Wang. Biometric personal identification based on iris patterns. Proceedings of the 15th International Conference on Pattern Recognition, Spain, Vol. 2, 2000.
[12] C. Tisse, L. Martin, L. Torres, M. Robert. Person identification technique using human iris recognition. International Conference on Vision Interface, Canada, 2002.
[13] Chinese Academy of Sciences Institute of Automation. Database of 756 Greyscale Eye Images. http://www.sinobiometrics.com Version 1.0, 2003.
[14] C. Barry, N. Ritter. Database of 120 Greyscale Eye Images. Lions Eye Institute, Perth, Western Australia.
