

Eyes Detection by Pulse Coupled Neural Networks


Maminiaina Alphonse Rafidison, Andry Auguste Randriamitantsoa, Paul Auguste Randriamitantsoa

Telecommunication - Automatic Signal Image Research Laboratory, High School Polytechnic of Antananarivo, University of Antananarivo, Ankatso BP 1500, Antananarivo, Madagascar

Abstract
This paper presents a new fast and robust method for eye detection using Pulse Coupled Neural Networks (PCNN). Unlike a traditional neural network, there is no training step; owing to this feature, the response time of the algorithm is around three milliseconds. The approach has two components: face area detection based on segmentation, and eye detection based on edges, both performed by a PCNN. The largest region made up of pixels of value one is taken as the human face area. The segmented face zone, which becomes the input of the PCNN for edge detection, then undergoes a vertical gradient operation. The two centers of gravity of the closed edges nearest to the horizontal line corresponding to the peak of the horizontal projection of the vertical gradient image are taken as the eyes.


Keywords: Pulse Coupled Neural Networks, Face Detection, Eyes Detection, Image Segmentation, Edge Detection.

1. Introduction
In recent decades, the image processing domain has evolved exponentially, and its current state is completely different from its initial one. Today, image processing research is largely oriented towards object recognition, and especially face recognition. Eye detection is an important phase for ensuring good face recognition performance. In this paper, an eye detection method based on Pulse Coupled Neural Networks is proposed. It is divided into two parts: face detection first, followed by eye detection. The next section presents the Pulse Coupled Neural Network, then the details of the proposed algorithm, followed by the test phase, its performance measurement and its prospects.


2. Pulse Coupled Neural Networks Model

The architecture of a Pulse Coupled Neural Network (PCNN) is rather simpler than that of most other neural network implementations. A PCNN does not have multiple layers: it receives its input directly from the original image and produces a resulting pulse image. The network consists of multiple nodes coupled with their neighbours within a definite distance, forming a grid (a 2D vector). Each PCNN neuron has two input compartments: linking and feeding. The feeding compartment receives both an external and a local stimulus, whereas the linking compartment receives only a local stimulus. When the internal activity becomes larger than an internal threshold, the neuron fires and the threshold sharply increases; it then decays until the internal activity once again becomes larger.

Fig. 1 Pulse Coupled Neural Networks Structure

This process gives rise to the pulsing nature of the PCNN, forming a wave signature which is invariant to rotation, scale, shift or skew of an object within the image. This last feature makes the PCNN a suitable approach for feature extraction in very high resolution imagery, where the view angle of the sensor may play an important role. The PCNN system can be defined by the following expressions:

F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1] + S_{ij}   (1)

L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1]   (2)

where S_{ij} is the input stimulus to neuron (i, j), and F_{ij} and L_{ij} are respectively the values of the feeding and linking compartments. Each of these neurons communicates with its neighbouring neurons by means of the weights given by the M and W kernels. Y is the output of a neuron from the previous iteration, while V_F and V_L are normalizing constants. The outputs of the feeding and linking compartments are combined to create the internal state of the neuron, U_{ij}:

U_{ij}[n] = F_{ij}[n] (1 + \beta L_{ij}[n])   (3)

A dynamic threshold \Theta_{ij} is also calculated, as follows:

\Theta_{ij}[n] = e^{-\alpha_\Theta} \Theta_{ij}[n-1] + V_\Theta Y_{ij}[n-1]   (4)

In the end, the internal activity U_{ij} is compared with \Theta_{ij} to produce the output Y_{ij}:

Y_{ij}[n] = 1 if U_{ij}[n] > \Theta_{ij}[n], and 0 otherwise   (5)

The result of PCNN processing depends on many parameters. For instance, the linking strength β affects segmentation and, together with M and W, scales the feeding and linking inputs, while the normalizing constants scale the internal signals. Moreover, the dimension of the convolution kernel affects the propagation speed of the autowave. With the aim of developing an edge-detecting PCNN, many tests have been made changing each parameter [1][3].
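To make equations (1)-(5) concrete, here is a minimal NumPy sketch of the iteration loop. The 3x3 kernel, the decay and normalizing constants, and the helper name pcnn_iterate are illustrative assumptions; the paper's own parameter values are not reproduced here.

import numpy as np
from scipy.signal import convolve2d

def pcnn_iterate(S, n_iter=3, beta=0.2, alpha_F=0.1, alpha_L=1.0, alpha_T=0.5,
                 V_F=0.5, V_L=0.2, V_T=20.0):
    """Run n_iter iterations of a basic PCNN (eqs. 1-5) over the stimulus image S.
    All parameter values are illustrative placeholders, not the authors' settings."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # shared 3x3 coupling kernel for M and W
    S = np.asarray(S, dtype=float)
    F = S.copy()                                 # feeding, initialized to the input image
    L = S.copy()                                 # linking, initialized to the input image
    Y = np.zeros_like(S)                         # pulse output
    T = np.full_like(S, 2.0)                     # dynamic threshold, initialized to 2
    outputs = []
    for _ in range(n_iter):
        conv = convolve2d(Y, K, mode='same')     # neighbourhood firing from previous step
        F = np.exp(-alpha_F) * F + V_F * conv + S    # eq. (1)
        L = np.exp(-alpha_L) * L + V_L * conv        # eq. (2)
        U = F * (1.0 + beta * L)                     # eq. (3)
        T = np.exp(-alpha_T) * T + V_T * Y           # eq. (4)
        Y = (U > T).astype(float)                    # eq. (5)
        outputs.append(Y.copy())
    return outputs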

3. Proposed Method

The method does not depend on the input image format; in the case of a colour image, conversion to grayscale is required. Two steps are followed: face detection, then eyes detection. The following figure briefly presents the flow chart of our algorithm.


Fig. 2 Face/Eyes detection method

3.1 Face Detection

Searching for the face area focuses on skin detection, since skin is the dominant part of the top portion of an image of a person. Once the grayscale image is obtained as input, the PCNN is configured with the following parameters: the weight matrix given in (6); the initial values of the matrices, where the linking matrix L, the feeding matrix F and the stimulus S are set equal to the input image, the output Y is initialized by the convolution of a null matrix of the same R×C size as the input image with the weight matrix, and the first value of the dynamic threshold is an R-by-C matrix filled with the value two; the delay constants; and the normalizing constants.

The PCNN is then ready for iteration. For skin segmentation, a segmented image is already obtained at the first iteration, but to get a good result the operation is repeated three times (n = 3). Fig. 4, Fig. 5 and Fig. 6 show the PCNN outputs for the original image below.

Fig. 3 Original image
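As a usage sketch only, the three-iteration skin segmentation could be driven as follows; the file name and the reuse of the hypothetical pcnn_iterate helper from Section 2 are assumptions.

import numpy as np
from PIL import Image

# Hypothetical input file; any face image converted to grayscale will do.
gray = np.asarray(Image.open("face.jpg").convert("L"), dtype=float) / 255.0

# Three iterations, as in the paper; the last binary output is used for segmentation.
outputs = pcnn_iterate(gray, n_iter=3)
segmented = outputs[-1]          # binary image in which the skin/face pixels pulse together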


Fig. 4 PCNN output for the first iteration (n = 1)

Fig. 5 PCNN output for the second iteration (n = 2)

Fig. 6 PCNN output for the third iteration (n = 3)

Once the original image of size R×C has been segmented, the sums of pixel values of the binary output P are calculated per row and per column:

H(i) = \sum_{j=1}^{C} P(i, j), \quad i = 1, \ldots, R   (7)

V(j) = \sum_{i=1}^{R} P(i, j), \quad j = 1, \ldots, C   (8)
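A short sketch of equations (7)-(9) might look as follows; the 50%-of-maximum threshold used to delimit the peak bands is an illustrative assumption, not a value taken from the paper.

import numpy as np

def face_rectangle(P, frac=0.5):
    """P: binary PCNN output (R x C). Returns (r1, r2, c1, c2) bounding the face area.
    The band threshold 'frac' of the projection maximum is an illustrative choice."""
    H = P.sum(axis=1)                        # eq. (7): sum of pixel values per row
    V = P.sum(axis=0)                        # eq. (8): sum of pixel values per column
    rows = np.where(H >= frac * H.max())[0]  # row band around the horizontal-projection peak
    cols = np.where(V >= frac * V.max())[0]  # column band around the vertical-projection peak
    return rows.min(), rows.max(), cols.min(), cols.max()

# The face area (eq. 9) is the intersection of the two bands:
#   face = P[r1:r2 + 1, c1:c2 + 1]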

Fig. 7 Vertical projection of Fig. 6 (sum of pixel values per image column)


The vertical projection graph presents peak values over a range of columns, and the horizontal projection likewise over a range of rows.

Fig. 8 Horizontal projection of Fig. 6 (sum of pixel values per image row)

The face area is the intersection region of the two bands, that is, the rectangular area described in Fig. 9:

Face = \{(i, j) : r_1 \le i \le r_2,\; c_1 \le j \le c_2\}   (9)

where [r_1, r_2] is the row band and [c_1, c_2] the column band given by the projection peaks.

Fig. 9 Face detection method

With our experimental image, we obtain the following picture:

Fig. 10 Face area

3.2 Eyes Detection

After detecting the face, the next step is to localize the irises. First, the content of the face rectangle is extracted from the segmented image at the output of the PCNN (Fig. 6). Then each region is processed so that it is well delimited; in Matlab this task is performed by the imclose function. Fig. 11 presents the result of this operation.

Fig. 11 Region customization

The image with closed regions becomes the input of the PCNN for edge detection. The network uses the same parameters as in the segmentation step. Three iterations are enough to obtain a good edge detection result, and the figures below show the output of each iteration.


Fig. 12 First iteration

Fig. 13 Second iteration

Fig. 14 Third iteration

The PCNN has thus played two important roles: segmentation and edge detection. The closed edges are then filled with white (the Matlab imfill function), and the difference between the filled image and the output of the last PCNN iteration is computed, which yields the eye region candidates of Fig. 15.

Fig. 15 Eyes regions candidates
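The region-closing, edge-filling and subtraction steps could be approximated with SciPy as sketched below; binary_closing and binary_fill_holes stand in for Matlab's imclose and imfill, the 3x3 structuring element is an illustrative choice, and the sketch reuses the hypothetical pcnn_iterate helper for the edge-detection pass.

import numpy as np
from scipy import ndimage

def eye_region_candidates(face_rect):
    """face_rect: binary face rectangle cropped from the segmented PCNN output (Fig. 6).
    Rough analogue of the paper's imclose / PCNN edge detection / imfill / difference steps."""
    structure = np.ones((3, 3), dtype=bool)                       # illustrative structuring element
    closed = ndimage.binary_closing(face_rect, structure=structure)   # ~ imclose: delimit each region
    # Edge detection with the same PCNN sketch as before, three iterations.
    edges = pcnn_iterate(closed.astype(float), n_iter=3)[-1].astype(bool)
    filled = ndimage.binary_fill_holes(edges)                     # ~ imfill: fill the closed edges
    return filled & ~edges                                        # difference with the last PCNN output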

Once the eye region candidates are found, the center of gravity of each region is computed. For a 2D continuous domain, knowing the characteristic function f(x, y) of a region, the raw moment of order (p + q) is defined as:

M_{pq} = \iint x^p y^q f(x, y) \, dx \, dy   (10)

Adapted to a scalar (grayscale) image with pixel intensities I(x, y), the raw image moments are calculated by:

M_{pq} = \sum_{x} \sum_{y} x^p y^q I(x, y)   (11)

The two first-order raw moments M_{10} and M_{01}, together with the zero-order moment M_{00}, are used to calculate the centroid of each region, whose position is defined as:

\bar{x} = M_{10} / M_{00} \quad \text{and} \quad \bar{y} = M_{01} / M_{00}   (12)
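Equations (10)-(12) translate directly into NumPy for binary candidate regions; the use of scipy.ndimage.label to separate the regions is an implementation choice, not something specified in the paper.

import numpy as np
from scipy import ndimage

def region_centroids(candidates):
    """Return the (x_bar, y_bar) centroid of each labelled candidate region (eqs. 10-12)."""
    labels, n = ndimage.label(candidates)
    centroids = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)     # pixel coordinates of region k
        m00 = len(xs)                        # zero-order moment of a binary region
        m10 = xs.sum()                       # first-order moments
        m01 = ys.sum()
        centroids.append((m10 / m00, m01 / m00))   # eq. (12)
    return centroids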

Now the question is how to identify which of these candidates are the eyes. To answer it, the vertical gradient of the segmented face area image is calculated, approximated here by a forward difference:

G_y(i, j) = P(i + 1, j) - P(i, j)   (13)

Fig. 16 Vertical gradient of the segmented face area

We use the same principle as in face detection, calculating the sum of the gray levels of the vertical gradient image per row [2]. We then take the peaks and draw the corresponding horizontal line.
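Combining the vertical gradient (13) with the peak-row and distance criteria given below in (14) and (15), the final selection step might be sketched as follows; np.gradient as the gradient operator and the "two closest centroids" rule are assumptions about the exact implementation, and the centroid list is the one produced by the previous sketch.

import numpy as np

def select_eyes(face_gray, centroids):
    """face_gray: grayscale segmented face area; centroids: list of (x_bar, y_bar).
    Returns the two centroids closest to the peak row of the vertical-gradient projection."""
    Gy = np.gradient(face_gray.astype(float), axis=0)   # vertical gradient, eq. (13)
    profile = Gy.sum(axis=1)                             # horizontal projection per row, [2]
    y_line = int(np.argmax(profile))                     # row of the peak, eq. (14)
    # Distance of each centroid to the horizontal line y = y_line, eq. (15).
    dist = [abs(y - y_line) for (x, y) in centroids]
    order = np.argsort(dist)[:2]                         # the two closest regions are the eyes
    return [centroids[i] for i in order]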
Fig. 17 Horizontal projection of Fig. 16 (sum of the vertical gradient per image row)

y_e = \arg\max_i \sum_j G_y(i, j)   (14)

where y_e is the row of the horizontal line carrying the relevant information in the top part of the image; the two centers of gravity of the closed regions nearest to this line are the eyes. The distance between the line y_e and a center of gravity (\bar{x}_k, \bar{y}_k) is calculated by:

d_k = |\bar{y}_k - y_e|   (15)

Fig. 18 Line and centers of gravity positions

Finally, the eyes are detected with improved precision.

Fig. 19 Eyes detected

4. Results and Performance

All tests were performed on colour images of different dimensions. Since the algorithm does not use an image database for training, eye detection is very fast. It shows a weakness, however, when the person wears glasses, because the iris is then not detected correctly. Samples of the experimental results are shown in the pictures below (Fig. 20 and Fig. 21):

Fig. 20 Multiple detection

Fig. 21 Testing results

An approximate measure of performance was obtained by running our algorithm on the image database tests used by the methods listed in Table 2, together with some images from the internet. The following table (Table 1) shows the results of the testing:

Table 1: Performance measurement

                    With glasses   Without glasses   Total
Face detection         98.4%           99.6%          99%
Eyes detection         97.6%           99.4%         98.5%

With a 98.5% success rate, we can say that our method is powerful. A comparison with other algorithms was also carried out, and Table 2 reports the results. The algorithm can be used for face recognition or for reading facial expressions.

Table 2: Comparison results

No.  Method                                                  Eyes detection success rate
1    Choi and Kim [4]                                        98.7%
2    Proposed method                                         98.5%
3    S. Asteriadis, N. Nikolaidis, A. Hajdu, I. Pitas [5]    98.2%
4    Song and Liu [6]                                        97.6%
5    Kawaguchi and Rizon [7]                                 96.8%
6    Eye detection based on Haar cascade classifier          96.5%
7    Zhou and Geng [8]                                       95.9%

5. Conclusion

In this paper, we proposed a method for eye detection using Pulse Coupled Neural Networks (PCNN), which are inspired by the human visual cortex. The algorithm has two parts: face detection, based on segmentation, and eye detection, based on edge detection. The method is very fast because it relies on a few iterations rather than on learning from an image database. The running time of the algorithm is about three milliseconds, which is acceptable for real-time applications, and even less for a grayscale image. The success rate is up to 99.4% for a picture of a person without glasses, against 97.6% with glasses. Our prospects turn to the extraction of other facial features such as the nose and the mouth: the line perpendicular to the segment joining the two irises and passing through its middle is traced over the PCNN output (Fig. 6); the first black region it crosses is the nose and the second one is the mouth.

Acknowledgments

The authors thank Mrs Hellen Ndiri Ayuku and Mr Arnou Georges Philippe Jean for the English language review.

References

[1] F. D. Frate, G. Licciardi, F. Pacifici, C. Pratola, and D. Solimini, "Pulse Coupled Neural Network for Automatic Features Extraction from Cosmo-Skymed and Terrasar-x Imagery", Tor Vergata Earth Observation Lab, Dipartimento di Informatica, Sistemi e Produzione, Tor Vergata University, Via del Politecnico 1, 00133 Rome, Italy, 2009.
[2] A. Soetedjo, "Eye Detection Based-on Color and Shape Features", (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 3, No. 5, 2011, pp. 17-22.
[3] T. Lindblad, J. M. Kinser, "Image Processing Using Pulse-Coupled Neural Networks", Second, Revised Edition, Springer, 2005.
[4] I. Choi, D. Kim, "Eye correction using correlation information", in Y. Yagi et al. (Eds.): ACCV 2007, Part I, LNCS 4843, pp. 698-707, 2007.
[5] Z. Zhou, X. Geng, "Projection functions for eye detection", Pattern Recognition, Vol. 37, pp. 1049-1056, 2004.
[6] J. Song, Z. Chi, J. Liu, "A robust eye detection method using combined binary edge and intensity information", Pattern Recognition, Vol. 39, pp. 1110-1125, 2006.
[7] T. Kawaguchi, M. Rizon, "Iris detection using intensity and edge information", Pattern Recognition, Vol. 36, pp. 549-562, 2003.
[8] S. Asteriadis, N. Nikolaidis, A. Hajdu, I. Pitas, "An Eye Detection Algorithm Using Pixel to Edge Information", Department of Informatics, Aristotle University of Thessaloniki, Box 451, 54124, Thessaloniki, Greece, 2010.

Maminiaina A. Rafidison was born in Moramanga, Madagascar, in 1984. He received his Engineer Diploma in Telecommunication in 2007 and his M.Sc. in 2011 at the High School Polytechnic of Antananarivo, Madagascar. He is currently a consultant expert on Value Added Services (VAS) in the telecom domain at Mahindra Comviva Technologies and, in parallel, a Ph.D. student at the High School Polytechnic of Antananarivo. His current research concerns image processing, especially using neural networks.

Andry A. Randriamitantsoa received his Engineer Diploma in Telecommunication in 2008 at the High School Polytechnic of Antananarivo, Madagascar, and his M.Sc. in 2009. He currently works for the High School Polytechnic and obtained a Ph.D. in Automation and Computer Science in 2013. His research interests include automation, robust control and computer science.

Paul A. Randriamitantsoa was born in Madagascar in 1953. He is a professor at the High School Polytechnic of Antananarivo and head of the Telecommunication - Automatic Signal Image Research Laboratory.
