
A Face Detection Algorithm Combining Skin Color Segmentation and PCA

Liying Lang
College of Information and Electrical Engineering, Hebei University of Engineering, Handan, China
langliying@126.com

Weiwei Gu
College of Information and Electrical Engineering, Hebei University of Engineering, Handan, China
weiwei1982820527@126.com

Abstract- The main problems in face detection are illumination, pose, facial expression, and so on; face detection algorithms focus on solving these problems, and an algorithm that handles them well achieves a higher detection rate and a lower false detection rate. In this paper a new face detection algorithm named SCS-PCA is introduced, which combines skin color segmentation with Principal Component Analysis (PCA). Experimental results on the IMM face database and a self-built face database show that the algorithm is robust to variations in illumination, pose, etc., and is suitable for real-time face detection systems.

Keywords- skin color segmentation; PCA; color space; feature subspace

I.

INTRODUCTION

Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images. It is a very active research topic in computer vision and pattern recognition, and is widely applied in identity authentication, man-machine interfaces, visual communication, virtual reality, management of public security files, content-based retrieval and many other areas; it is also the first step in an automatic face recognition system. Early face detection methods can be broadly divided into knowledge-based methods, feature-based methods and template matching methods. Their main defect is sensitivity to noise, illumination and changes of face size, so they have a lower accuracy rate and a higher false alarm rate. In recent years, many studies have focused on face detection methods based on statistical learning theory, for instance Artificial Neural Networks (ANN) and Support Vector Machines (SVM). Statistical learning methods rely on statistical analysis and machine learning to find the features that separate human faces from non-faces. These learned characteristics are applied to face detection in the form of distribution models and discriminant functions; at the same time, eigenvector dimension reduction is usually used to improve the efficiency of computation and detection.

Researchers approach face detection from different points of view and use different kinds of information, for example the color information of the human face, face structure, texture, etc. But most face detection methods use only a single mode of characteristic, and experiments show that every kind of algorithm has limited reliability on a single channel. Skin color segmentation and the eigenface algorithm share the advantage of being robust to illumination, pose and similar variations, so this paper proposes an algorithm that combines skin color segmentation and PCA, named SCS-PCA. Its idea is: first, detect candidate human face regions through skin color segmentation; then search within the candidate regions and judge whether a human face is present according to the distance between the original image and its projection onto the eigenface space.

II. SKIN COLOR SEGMENTATION

One of the most significant features of the human face surface is skin color; in a color image, skin color forms a relatively concentrated, stable region, so it is effective for distinguishing human faces from background regions. Faces of different race, age and sex appear to have different skin colors, but the difference is mainly concentrated in brightness; in a color space with brightness removed, the skin color distributions of different faces cluster together. Based on this principle, it is feasible to segment an image by skin color. Skin color segmentation mainly involves two aspects: the color space and the skin color model.

A. Color Space

Skin color has its own characteristics, which take different forms in different color spaces; therefore the computer has different skin color identification ability and processing effect in different color spaces. The main color spaces are RGB, CMY/CMYK, YCbCr, HSI (HSV), YIQ, YUV and so on. This paper describes the skin color model in the YCbCr color space, because the YCbCr space has a composition principle similar to the human visual perception process, separates brightness and chroma very well, and is discrete, which makes clustering algorithms easy to implement. The conversion formula from the RGB space to the YCbCr space is as follows:

978-1-4244-4994-1/09/$25.00 2009 IEEE

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1687 & -0.3313 & 0.5000 \\ 0.5000 & -0.4187 & -0.0813 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix} \qquad (1)$$
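As an illustration, the per-pixel conversion of Eq. (1) can be vectorized with NumPy. This is a minimal sketch of the formula above, not code from the paper:

```python
import numpy as np

# Conversion matrix and offset vector from Eq. (1)
M = np.array([[ 0.2990,  0.5870,  0.1140],
              [-0.1687, -0.3313,  0.5000],
              [ 0.5000, -0.4187, -0.0813]])
OFFSET = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB image (values 0-255) to YCbCr."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb @ M.T + OFFSET
```

For a pure white pixel (255, 255, 255) the formula yields Y = 255 and Cb = Cr = 128, since the chroma rows of the matrix sum to zero.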

B. Skin Color Model

A skin color model expresses, in algebraic (analytic) or look-up-table form, whether a pixel's color belongs to skin color, or how similar a pixel's color is to skin color. This paper uses the Gaussian skin color model, which does not make a simple binary skin/non-skin decision; instead it computes a probability value for each pixel, forming continuous data, and thus obtains a skin color probability map in which the color is classified according to the magnitude of the value. This method overcomes the shortcomings of the geometric model and avoids the difficulty, present in the neural network model, of accurately extracting non-skin color samples. The two-dimensional Gaussian skin color distribution function is:

$$P(Cb, Cr) = \exp\left[-0.5\,(x - M)^{T} C^{-1} (x - M)\right] \qquad (2)$$

where $x$ is the value of the sample pixel in the YCbCr color space, $M$ is the mean of the skin color samples in the YCbCr color space, and $C$ is the covariance matrix of the skin color similarity model; $M$ and $C$ are obtained by statistical computation over a large number of skin color samples:

$$x = [Cb, Cr]^{T}, \quad M = E(x), \quad C = E\left((x - M)(x - M)^{T}\right) \qquad (3)$$

III.

FEATURE SUBSPACE METHOD

Face detection based on a feature subspace maps face images into a certain feature space, and then distinguishes face patterns from non-face patterns according to their distribution in that subspace. The most commonly used algorithm is Principal Component Analysis (PCA), also known as the eigenface method. PCA is a widely used face recognition algorithm and also a classical face detection algorithm. By reducing the dimension, PCA finds the projection directions that maximize the total scatter of all face images. Precisely because this projection maximizes the total scatter, it preserves the image differences caused by changes of illumination, facial expression, etc. PCA is therefore optimal for reconstructing face images, but not optimal for distinguishing and classifying faces; this is the main reason why PCA is commonly used in face detection.

The process of implementing face detection with PCA is as follows. Suppose N face images compose the training sample set; after vectorization they form the vector set $\{x_1, x_2, \ldots, x_N\}$, where each vector lies in the N-dimensional sample space $R^{n}$. The total population scatter matrix over this space is defined as:

$$S_T = \frac{1}{N} \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^{T} = \frac{1}{N} \bar{X} \bar{X}^{T} \qquad (4)$$

where $x_k$ is a training sample, $\mu \in R^{n}$ is the average of all samples, and the structure matrix is $\bar{X} = [x_1 - \mu, x_2 - \mu, \ldots, x_N - \mu]$. Constructing $R = \bar{X}^{T} \bar{X}$, it is very easy to obtain the eigenvalues of $R$; sorting them in descending order $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_N$, with corresponding orthonormal eigenvectors $v_i$ $(i = 1, 2, \ldots, N)$, the orthonormal eigenvectors $e_i$ of $S_T$ can be drawn as:

$$e_i = \frac{1}{\sqrt{\lambda_i}} \bar{X} v_i \quad (i = 1, 2, \ldots, N)$$

Select the first M eigenvectors to build the feature subspace $E = \{e_1, e_2, \ldots, e_M\}$; projecting a face image onto this subspace yields a set of coordinate coefficients. When judging whether a target sub-window $x$ contains a human face, the sub-window is projected onto the feature subspace to get its coefficient vector $y = E^{T}(x - \mu)$; its reconstructed sub-window is $x' = \mu + E y$, and the reconstruction signal-to-noise ratio is defined as:

$$r(x) = 10 \lg \frac{\|x\|^{2}}{\|x - x'\|^{2}} \qquad (5)$$

Set a threshold T; if $r(x) > T$, the sub-window is held to contain a human face.

IV. SCS-PCA ALGORITHM

Face detection based on skin color is stable: it is not influenced by changes of scale, expression, posture and so on. But in images with complex backgrounds its detection effect is not ideal, and its false detection rate is usually quite high. Because PCA chooses the projection with the largest total scatter, it also retains the image differences generated by changes of illumination, facial expression, etc. Moses once pointed out that the differences generated by changes of illumination, pose, viewing angle and so on are always greater than the differences between face classes. Thus PCA is best for reconstructing face images, but not optimal for face identification and classification; this is precisely the main reason why PCA is commonly used in face detection. Skin color segmentation and PCA equally have the advantage of not being affected by changes of illumination, facial expression, etc. Therefore, combining the two algorithms can effectively resolve the main problems in face detection, such as facial expression and pose variation.
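The combination described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the Cb/Cr skin range stands in for the Gaussian model of Section II.B, and the window size, skin-coverage fraction and SNR threshold are assumed values chosen only for demonstration; the paper's full search over candidate regions is replaced by a fixed grid scan.

```python
import numpy as np

def train_eigenfaces(X, M):
    """Build the mean face and first M eigenfaces from an n x N matrix X
    whose columns are vectorized training faces (Eq. (4))."""
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu                                 # centered samples
    lam, V = np.linalg.eigh(Xc.T @ Xc)          # small N x N problem R
    order = np.argsort(lam)[::-1][:M]           # largest M eigenvalues
    E = Xc @ V[:, order] / np.sqrt(lam[order])  # orthonormal eigenfaces e_i
    return mu, E

def reconstruction_snr(x, mu, E):
    """Eq. (5): r(x) = 10 * lg(||x||^2 / ||x - x'||^2)."""
    y = E.T @ (x - mu)                          # projection coefficients
    x_rec = mu + E @ y                          # reconstructed sub-window x'
    return 10.0 * np.log10(np.sum(x**2) / (np.sum((x - x_rec)**2) + 1e-12))

# Illustrative Cb/Cr skin range (an assumption, not from the paper).
CB_RANGE, CR_RANGE = (77, 127), (133, 173)

def detect_faces(ycbcr, mu, E, snr_threshold=5.0, win=16, min_skin=0.5):
    """Rough skin-color rejection followed by PCA confirmation, scanning
    a fixed grid of win x win sub-windows."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    mask = ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))
    hits = []
    for i in range(0, mask.shape[0] - win + 1, win):
        for j in range(0, mask.shape[1] - win + 1, win):
            if mask[i:i+win, j:j+win].mean() < min_skin:
                continue                        # not enough skin pixels
            x = ycbcr[i:i+win, j:j+win, 0].reshape(-1, 1)  # luma window
            if reconstruction_snr(x, mu, E) > snr_threshold:
                hits.append((i, j))             # candidate confirmed by PCA
    return hits
```

Note the standard eigenface trick in `train_eigenfaces`: the eigenproblem is solved for the small N x N matrix $R = \bar{X}^T \bar{X}$ rather than the large n x n scatter matrix, and the eigenfaces are recovered as $\bar{X} v_i / \sqrt{\lambda_i}$.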

The algorithm process is: first, use skin color segmentation for rough detection to obtain candidate face regions, relaxing the threshold appropriately so that missed detections are avoided during the rough detection while a certain degree of false detection is allowed; then take the candidate face regions as input, find the eigenface space by the PCA method, express each sample in the face database with eigenface vectors, and for a new sample compute the space distance between it and the face database. According to this space distance it can be judged whether or not the sample is a face image.

V. EXPERIMENTAL RESULTS AND ANALYSIS

We used the IMM face database and some face images downloaded from the network to carry out the experiments; the pictures below show part of the test images:

Figure 1. Part images of the IMM face database

Figure 2. Part images of the self-built face database

The IMM face database was created by the Department of Informatics and Mathematical Modelling of the Technical University of Denmark; it includes forty volunteers and two hundred forty images, covering changes of face pose, facial expression, scale and illumination. The images downloaded from the network were selected for our test purpose: we chose images with strong illumination effects and obvious changes of facial expression and pose as test images, and also considered multiple-face detection. The experimental results are shown in the table below:

TABLE I. EXPERIMENT RESULTS

Face database              Detection rate (%)   False detecting rate (%)
IMM face database          95.4                 2.3
Self-built face database   89.2                 4.5

As can be seen from Table I, the SCS-PCA algorithm proposed in this paper achieves a higher detection rate and a lower false detection rate on both the IMM face database and the self-built face database.

VI. CONCLUSION

In this paper, through in-depth study and analysis of two face detection methods, skin color segmentation and the eigenface algorithm, a novel face detection method was introduced that integrates these two kinds of classifiers. The experimental results show that the proposed method reduces the false detection rate while increasing the detection rate. The new algorithm is robust to variations in illumination, pose, etc., and is suitable for real-time face detection systems.