
Driver Fatigue Detection based on Eye State Analysis

Yong Du, Peijun Ma, Xiaohong Su, Yingjun Zhang
Department of Computer Science and Technology
Harbin Institute of Technology, Harbin, China 150001
duyong@hitsz.edu.cn

Abstract

In this paper, we present an effective vision-based driver fatigue detection method. Firstly, the interframe difference approach, combined with color information, is used to detect whether a face is present. If a face exists, the face area is segmented from the image based on a mixed skin tone model. Then we simulate the process of crystallization to obtain the location of the eyes within the face area. Next, the eye area, the average height of the pupil and the width-to-height ratio are used to analyze the eye state. Finally, driver fatigue is confirmed by analyzing the changes of the eye states. The experimental results show the validity of the proposed method.

Keywords: driver fatigue detection, face segmentation, eye location, eye state analysis

1. Introduction

The increasing number of traffic accidents due to a diminished driver vigilance level, resulting from sleep deprivation or monotonous driving scenes, has become a serious problem for society. Statistics show that between 10% and 20% of all traffic accidents are due to drivers with a diminished vigilance level [1]. Moreover, accidents related to a driver's declined level of vigilance are more serious than other types of accidents, since drowsy drivers often do not take any evasive action prior to a collision [2]. According to statistics of the National Highway Traffic Safety Administration (NHTSA), falling asleep while driving causes at least 100,000 automobile crashes annually in the United States, resulting in more than 40,000 nonfatal injuries and 1,550 fatalities per year [3][4]. Against this background, monitoring the driver's level of vigilance and preventing fatigued driving are essential for accident prevention.

The rest of the paper is organized as follows: Section 2 provides a brief survey of related research on driver fatigue detection. In Section 3, the outline of our proposed vision-based automatic driver fatigue detection system is presented. The face location method, including face existence detection, is described in Section 4. Section 5 describes a novel method for eye location and eye template generation. Section 6 gives a new driver fatigue analysis method that measures pupil height, eye area and the width-to-height ratio of the eye to identify the eye states. The experimental results for driver fatigue detection are shown in Section 7. Conclusions and future work are discussed in the last section.

2. Survey of Previous Work

In the past decade, many countries have begun to pay great attention to the driver safety problem. Researchers have been working on the detection of driver drowsiness using various techniques, such as physiological detection [5][6][7], driver behavior monitoring [8], vehicle running status analysis [9] and vision-based detection [10][11][12][13][14]. Among these techniques, vision-based driver fatigue detection is a natural, non-intrusive and convenient way to monitor the driver's vigilance. When driver fatigue occurs, visual behaviors can easily be observed from changes in the facial features, especially the eyes. It has been shown that the pattern of eye state changes is highly correlated with the driver's mental state [15]. The eyes give the most direct reaction when a driver is drowsy or inattentive, and eye blinking is commonly used by researchers as the basis for driver fatigue detection.

Zheng Pei calculated the ratio of eye closure during a period of time; this ratio can reflect the driver's vigilance level [16]. Wenhui Dong proposed a method to measure the distance between the eyelids and then judged the driver's status from this information [15]. Nikolaos P. used front-view and side-view images to precisely locate the eyes [10]. Edge detection and gray-level projection methods were also applied to eye location by Wen-Bing Horng [11]. Zutao Zhang located the face using a Haar-based algorithm and proposed an eye tracking method based on the Unscented Kalman Filter [12]. Abdelfattah Fawky presented a combination of algorithms, namely wavelet transform, edge detection and YCrCb transform, for eye detection [13]. Qiang Ji relied on IR illumination to locate the eyes [14]. The eyes always carry two kinds of information: the size of the opening and the duration of the different states. By analyzing how the eyes change under fatigue, we propose an efficient approach for driver fatigue detection.

3. Overall Algorithm

We present an algorithm for detecting driver fatigue by analyzing the changes of the eye states. The approach contains five phases:

(1) Judgement of face existence
(2) Face location based on a mixed skin tone model
(3) Eye location and eye template generation
(4) Eye state analysis based on pupil height, eye area and width-to-height ratio
(5) Confirmation of the driver's vigilance

The outline of the proposed approach is shown in Fig. 1. In the following sections each step is discussed in detail.

Fig. 1: Flow chart of the approach

4. Localization of the Face

When driving on the road, the position of the driver's eyes and the lighting conditions change constantly, so searching for and locating the driver's eyes directly in the whole image is not easy. Moreover, the background is usually complex and unpredictable, especially when the automobile is moving. Therefore, the first step is to detect the face in order to reduce the search range for eye detection; this also improves the accuracy and speed of eye location and reduces the interference of the background.

To reduce blind searching for the face, we calculate interframe differences to decide whether there is a moving object. When an object is detected and its local area contains skin color information in the YCbCr color space, we assume that a person is in front of the camera, as sketched below.
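A minimal sketch of this face-existence test is given below, assuming 8-bit RGB frames as NumPy arrays (channel order R, G, B). The motion threshold, the minimum pixel count and the Cb/Cr skin range are illustrative choices, not values reported in the paper; only the Cb/Cr conversion coefficients come from Eqs. (2)-(3) in the next subsection.

```python
import numpy as np

def face_may_exist(prev_frame, frame, motion_thresh=25, min_pixels=500):
    """Interframe difference plus a coarse YCbCr skin-colour check (sketch)."""
    gray_prev = prev_frame.mean(axis=2)
    gray_cur = frame.mean(axis=2)
    moving = np.abs(gray_cur - gray_prev) > motion_thresh   # noticeably changed pixels
    if moving.sum() < min_pixels:                           # no moving object at all
        return False

    r = frame[..., 0].astype(float)
    g = frame[..., 1].astype(float)
    b = frame[..., 2].astype(float)
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128           # Eq. (2)
    cr = 0.439 * r - 0.368 * g - 0.071 * b + 128             # Eq. (3)
    # Commonly used Cb/Cr skin box; the exact bounds are an assumption here.
    skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

    # A person is assumed present when the moving region overlaps skin-coloured pixels.
    return (moving & skin).sum() > min_pixels
```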
Although people of different races differ in skin color, the distribution of human skin color in the YCbCr color space can be approximated by a planar Gaussian distribution [15], so the face area can be segmented from an image by its skin color information. To improve the accuracy of detection, we use a mixed skin model based on the YCbCr and HSI color spaces. Because hue in the HSI color space is independent of brightness, the brightness factor can be excluded from the colors. This mixed skin model is more suitable for distinguishing skin from non-skin colors whether the face is lit or shadowed. The formulas for converting RGB to the corresponding H, Cb and Cr components are given as follows:

$$H = \begin{cases} \theta, & B \le G \\ 360^{\circ} - \theta, & B > G \end{cases} \qquad (1)$$

where

$$\theta = \arccos\left\{ \frac{\frac{1}{2}\left[(R-G)+(R-B)\right]}{\left[(R-G)^{2}+(R-B)(G-B)\right]^{1/2}} \right\}$$

$$Cb = -0.148R - 0.291G + 0.439B + 128 \qquad (2)$$

$$Cr = 0.439R - 0.368G - 0.071B + 128 \qquad (3)$$
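The conversions in Eqs. (1)-(3) translate directly into NumPy code. The sketch below assumes floating-point RGB values in [0, 255]; the denominator follows the standard HSI hue expression, and the clipping of the arccos argument is a numerical safeguard the paper does not discuss.

```python
import numpy as np

def rgb_to_h_cb_cr(r, g, b):
    """Compute the H (degrees), Cb and Cr components of Eqs. (1)-(3) per pixel."""
    r, g, b = (np.asarray(c, dtype=float) for c in (r, g, b))

    # Hue from the HSI model, Eq. (1); clip keeps arccos defined despite rounding.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)

    # Chrominance components, Eqs. (2)-(3).
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    cr = 0.439 * r - 0.368 * g - 0.071 * b + 128
    return h, cb, cr
```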
The three components (Cb, Cr and H) have different abilities to represent various skin colors and complement each other. Using this model, it is easy to separate the face region from the original image, as shown in Fig. 2(b). By performing vertical and horizontal projections on the skin pixels, the right, left, top and bottom boundaries of the face can be confirmed wherever the projection values exceed a threshold that has been set empirically, as shown in Fig. 2(c). The eyes normally lie in the upper half of the face region, as shown in Fig. 2(d), and this area is sufficient for detecting the eyes later.

Fig. 2: Location result of the eyes region ((a) original image, (b) face skin, (c) face region, (d) eyes region)
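One way to implement the projection step is sketched below: a binary skin mask is summed along rows and columns, and the first and last positions whose counts exceed a threshold give the face box. Expressing the threshold as a fraction of the image size is our assumption; the paper only states that the threshold is set empirically.

```python
import numpy as np

def face_box_from_projections(skin_mask, frac=0.1):
    """Return (top, bottom, left, right) of the face from a boolean skin mask (sketch)."""
    rows = skin_mask.sum(axis=1)          # horizontal projection: skin pixels per row
    cols = skin_mask.sum(axis=0)          # vertical projection: skin pixels per column
    row_ok = np.where(rows > frac * skin_mask.shape[1])[0]
    col_ok = np.where(cols > frac * skin_mask.shape[0])[0]
    if row_ok.size == 0 or col_ok.size == 0:
        return None                       # projections never exceed the threshold
    top, bottom = row_ok[0], row_ok[-1]
    left, right = col_ok[0], col_ok[-1]
    return top, bottom, left, right

# The eye search can then be restricted to the upper half of this box,
# i.e. rows top .. (top + bottom) // 2, as described in the text.
```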

5. Localization of the Eyes

According to thermodynamic principles, the equilibrium state of a closed system is described by the minimum of the Helmholtz free energy $E_f$ rather than by maximum entropy. In such a system, the particles of a set $s_i$ share the same free energy $E_f^i$. Following Boltzmann theory and the Metropolis rule, the probability $p$ of one particle moving from free energy level $E_f^i$ to $E_f^j$ is

$$p = e^{-\Delta E_f / kT} = e^{-(E_f^i - E_f^j)/kT} \qquad (4)$$

where $k$ is a constant and $T$ is the thermodynamic temperature. The larger the free energy gap between two particles, the smaller the probability that they reach the same level; for the same gap, the higher the temperature, the larger the probability. Based on this theory, we treat each pixel as a particle and define a vector norm of its properties to represent the free energy $E_f$. The free energy of pixel $(m,n)$ is

$$\Delta E_f^{(m,n)} = k_1(Cb_{m,n}-Cb_C)^2 + k_2(Cr_{m,n}-Cr_C)^2 + k_3(H_{m,n}-H_C)^2 + k_4(Gray_{m,n}-Gray_C)^2 \qquad (5)$$

where $Cb_{m,n}$, $Cr_{m,n}$, $H_{m,n}$ and $Gray_{m,n}$ are the average values of $Cb$, $Cr$, $H$ and $Gray$ over pixel $(m,n)$'s eight nearest neighbors, and $Cb_C$, $Cr_C$, $H_C$ and $Gray_C$ are the corresponding property values of the expected class. $k_1$, $k_2$, $k_3$ and $k_4$ are constants in the range 0 to 1 that sum to one, and the gray value is easily computed from the RGB components.
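A direct, unoptimised sketch of Eq. (5) is given below. The neighbourhood averages are taken over the eight nearest neighbours as described above; the expected-class values and the equal weights $k_1$-$k_4$ passed as defaults are illustrative placeholders that would have to be learned or tuned, not values from the paper.

```python
import numpy as np

def pixel_free_energy(cb, cr, h, gray, expected, k=(0.25, 0.25, 0.25, 0.25)):
    """Free energy map of Eq. (5). `expected` = (Cb_C, Cr_C, H_C, Gray_C) for the eye class."""
    def neighbour_mean(img):
        # Average of the 8 nearest neighbours of every pixel (edges padded by replication).
        p = np.pad(img.astype(float), 1, mode="edge")
        total = sum(p[1 + dy: p.shape[0] - 1 + dy, 1 + dx: p.shape[1] - 1 + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
        return total / 8.0

    channels = [neighbour_mean(c) for c in (cb, cr, h, gray)]
    return sum(ki * (c - ec) ** 2
               for ki, c, ec in zip(k, channels, expected))
```

Pixels with a low free energy are both similar to the expected eye class and consistent with their neighbourhood, so thresholding or minimising this map yields candidate eye regions.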
By sufficiently considering both the dissimilarity to the neighboring pixels and the similarity to the expected eye pixels, the specific region of the eyes can be obtained. After this step, the projection method is used again to detect the precise boundaries of the eyes; since this method is frequently adopted in eye detection, we omit the details. Our method searches from both the left and the right side of the face, so the two eyes are detected separately.

Fig. 3: Eyes detected ((a) left eye, (b) right eye)

We then segment the eyes from the image and use them to generate an eye template; in this way we obtain a rather stable eye template for the state analysis and reduce the influence of light reflections. The eye template is generated by binary processing, image alignment, and AND/OR operations, as illustrated in Fig. 4 and sketched below.

Fig. 4: Eye template generation process (binary processing, image alignment, AND/OR operations, eye template)
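Fig. 4 only names the operations, so the sketch below is one plausible reading: the two binarised, roughly aligned eye crops are combined with a bitwise AND to keep pixels dark in both, and with an OR to keep pixels dark in either. The binarisation threshold, the template size and the use of simple resampling as "alignment" are our assumptions.

```python
import numpy as np

def make_eye_template(left_eye, right_eye, size=(12, 30), thresh=80):
    """Combine two grey-level eye crops into binary templates (one reading of Fig. 4)."""
    def binarise_and_align(img):
        img = np.asarray(img, dtype=float)
        # Crude "alignment": sample the crop onto the common template grid.
        ys = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
        resized = img[np.ix_(ys, xs)]
        return resized < thresh            # dark (pupil/iris) pixels become True

    l, r = binarise_and_align(left_eye), binarise_and_align(right_eye)
    template_and = l & r                    # pixels dark in both eyes (stable structure)
    template_or = l | r                     # pixels dark in either eye (full extent)
    return template_and, template_or
```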
The positions of the two eyes are also recorded, so the eyes can be detected in the next frame based on these positions: the search area is obtained by expanding 6 pixels in all four directions from the eye centers found in the current frame. If the distance between the two eyes detected in the next frame changes greatly, eye tracking is regarded as having failed, and the face detection and eye location procedures are restarted.
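The tracking rule just described can be sketched as follows. The eye detector itself is treated as a supplied callback, and expressing "changes greatly" as a relative change in the inter-eye distance is our assumption, since the paper does not state the exact tolerance.

```python
import numpy as np

def track_eyes(prev_centers, detect_in_window, margin=6, max_rel_change=0.3):
    """Search for each eye near its previous centre; report failure if the
    inter-eye distance changes too much (sketch of the rule in the text)."""
    new_centers = []
    for (x, y) in prev_centers:
        # Expand the search window by `margin` pixels in all four directions.
        window = (x - margin, y - margin, x + margin, y + margin)
        c = detect_in_window(window)        # assumed detector: returns (x, y) or None
        if c is None:
            return None                     # eye lost -> restart face detection
        new_centers.append(c)

    old_d = np.hypot(prev_centers[0][0] - prev_centers[1][0],
                     prev_centers[0][1] - prev_centers[1][1])
    new_d = np.hypot(new_centers[0][0] - new_centers[1][0],
                     new_centers[0][1] - new_centers[1][1])
    if abs(new_d - old_d) > max_rel_change * old_d:
        return None                         # distance changed greatly -> tracking failure
    return new_centers
```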

6. Detection of Fatigue

To determine the driver's status, the eye states must first be recognized. Two factors affect the size of the eyes in the frames: human eyes differ in size, and so does the distance between the driver and the camera. We therefore normalize the eye template to a fixed size of 12×30 before feature extraction. For each eye template, the eye area, the average pupil height and the width-to-height ratio are the most important features for judging the eye state, as shown in Table 1.

The eye states are divided into three types: fully open, half open and closed. As Table 1 shows, eyes in different states present different features, and the experiments also confirmed that the three indicators (area, height and ratio) are effective for eye state recognition.

Table 1. Eye states and features (eye template images omitted)

  Eye state     Area (pixels)   Average height   Ratio
  Fully open    200             7.6              2.8750
  Half open     155             6.8              3.0000
  Closed        114             6.0              3.1667
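The reference values in Table 1 can be used directly to label a new template. The nearest-prototype rule below is our illustrative choice, since the paper only states that the three indicators are used; the feature scaling is likewise an assumption made so that no single feature dominates the comparison.

```python
import numpy as np

# Per-driver reference features from Table 1 (area in pixels, average pupil
# height, width-to-height ratio), measured on the 12x30 normalised template.
REFERENCE = {
    "full_open": (200, 7.6, 2.8750),
    "half_open": (155, 6.8, 3.0000),
    "closed":    (114, 6.0, 3.1667),
}

def classify_eye_state(area, height, ratio, reference=REFERENCE):
    """Assign the state whose Table 1 reference features are closest (illustrative rule)."""
    scales = np.array([200.0, 7.6, 3.1667])        # rough scales to balance the three features
    sample = np.array([area, height, ratio]) / scales
    best, best_d = None, np.inf
    for state, feats in reference.items():
        d = np.linalg.norm(sample - np.array(feats) / scales)
        if d < best_d:
            best, best_d = state, d
    return best

# Example: a template with area 150, pupil height 6.7 and ratio 3.0
# is labelled "half_open" under this rule.
```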
By analyzing how the driver's eye states change while driving, we identified two principles that indicate driver drowsiness. Firstly, if the driver's eyes remain closed for more than 4 consecutive frames, the driver is considered drowsy. Secondly, fatigue is confirmed if the driver's eyes only alternate between half open and closed over 8 consecutive frames (a sketch of this check follows). Before the system is put into use, it is trained in advance to obtain the state parameters of the individual driver, which improves the accuracy of the driver status analysis.
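The two principles translate directly into a check over the recent sequence of eye states. The minimal sketch below uses the state labels of the classifier sketched after Table 1 and the frame counts (4 and 8) stated in the text; reading "over 4 frames" as "more than 4 consecutive frames" is our interpretation.

```python
def is_drowsy(recent_states):
    """Apply the two fatigue rules to a chronological list of eye-state labels."""
    # Rule 1: eyes closed for more than 4 consecutive frames.
    run = 0
    for s in recent_states:
        run = run + 1 if s == "closed" else 0
        if run > 4:
            return True
    # Rule 2: over 8 consecutive frames the eyes only switch
    # between "half_open" and "closed" (never fully open).
    for i in range(len(recent_states) - 8 + 1):
        window = recent_states[i:i + 8]
        if all(s in ("half_open", "closed") for s in window):
            return True
    return False

# Example: is_drowsy(["full_open"] * 3 + ["closed"] * 5) returns True by Rule 1.
```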
7. Experimental Results

The test videos were acquired under natural driving conditions from a camera fixed on the car dashboard, and the proposed approach was evaluated on these videos. The experimental results are shown in Table 2.

Table 2. Results of fatigue detection

                        Video 1    Video 2
  Total frames           166        213
  Open eyes               61         68
  Half-open eyes          82         54
  Closed eyes             23         91
  False judgements        13         21
  Real dozing              3          5
  Correct warnings         3          5
  Tracking failures        2          0
  Precision rate         92.17%     90.14%
  Average precision rate: 91.16%

The input video format is 320×240 true color. The average correct rate of eye state recognition reaches 91.16%. The fully open and half open states sometimes cannot be well distinguished, which causes most of the false judgements, and fast movement of the driver's head can cause the eye tracking to fail.

The correct warning rate for driver dozing is 100%. This 100% correct warning rate does not imply absolute robustness of the algorithm: we count a fatigue detection as successful if the system detects the driver's drowsiness and raises an alarm at some point during a period of dozing.

8. Conclusion

In this paper, we present a new driver fatigue detection method for driving safety. The interframe difference approach is used to decide whether a face is present in the frame, and then a mixed skin color model is used to detect the face. Within the face region, we use a novel method that simulates the crystallization process to segment the eyes from the face, and the eyes are located precisely by performing projections. The two detected eyes are used to generate an eye template, and the eye area, the average height of the pupil and the width-to-height ratio are used to distinguish the eye states. Finally, driver fatigue is detected based on the rules we discovered. The experimental results show the validity of the proposed method for driver fatigue detection under realistic conditions. Future research will focus on obtaining more precise eye regions and improving the accuracy of eye state discrimination; we also plan to work on detecting eye flickering, which can describe the fatigue status more precisely.

9. References

[1] L.M. Bergasa, J. Nuevo, M.A. Sotelo, and M. Vazquez, "Real-time system for monitoring driver vigilance," Proc. IEEE Intelligent Vehicles Symposium, pp. 78-83, 2004.
[2] Qiong Wang, Jingyu Yang, Mingwu Ren, and Yujie Zheng, "Driver Fatigue Detection: A Survey," Proc. of the 6th World Congress on Intelligent Control and Automation, pp. 8587-8591, 2006.
[3] D. Royal, "Volume I - Findings report; national survey on distracted and drowsy driving attitudes and behaviours, 2002," The Gallup Organization, Washington, D.C., Tech. Rep. DOT HS 809 566, 2003.
[4] Luis M. Bergasa and Jesús Nuevo, "Real-Time System for Monitoring Driver Vigilance," IEEE Trans. Intelligent Transportation Systems, vol. 7, no. 1, pp. 63-77, 2006.
[5] S.K.L. Lal, A. Craig, et al., "Development of an Algorithm for an EEG-based Driver Fatigue Countermeasure," Journal of Safety Research, vol. 34, pp. 321-328, 2003.
[6] Akira Kuramori and Noritaka Koguchi, "Evaluation of Effects of Drivability on Driver Workload by Using Electromyogram," JSAE Review, vol. 25, pp. 91-98, 2004.
[7] Byung-Chan Chang, Jung-Eun Lim, Hae-Jin Kim, et al., "A Study of Classification of the Level of Sleepiness for the Drowsy Driving Prevention," Proc. SICE Annual Conference, pp. 3084-3089, 2007.
[8] Yoshihiro Takei and Yoshimi Furukawa, "Estimate of driver's fatigue through steering motion," Proc. IEEE International Conference on Systems, Man and Cybernetics, vol. 2, pp. 1765-1770, 2005.
[9] Erez Dagan, Ofer Mano, Gideon P. Stein, et al., "Forward Collision Warning with a Single Camera," Proc. IEEE Intelligent Vehicles Symposium, pp. 37-42, 2004.
[10] Nikolaos P., "Vision-based Detection of Driver Fatigue," Proc. IEEE International Conference on Intelligent Transportation Systems, 2000.
[11] Wen-Bing Horng, Chih-Yuan Chen, Yi Chang, et al., "Driver Fatigue Detection Based on Eye Tracking and Dynamic Template Matching," Proc. of the 2004 IEEE International Conference on Networking, Sensing and Control, pp. 7-12, 2004.
[12] Zutao Zhang and Jiashu Zhang, "A New Real-Time Eye Tracking for Driver Fatigue Detection," Proc. 2006 6th International Conference on ITS Telecommunications, pp. 8-11, 2006.
[13] Abdelfattah Fawky, Sherif Khalil, and Maha Elsabrouty, "Eye Detection to Assist Drowsy Drivers," IEEE, pp. 131-134, 2007.
[14] Qiang Ji, Zhiwei Zhu, and Peilin Lan, "Real-Time Nonintrusive Monitoring and Prediction of Driver Fatigue," IEEE Transactions on Vehicular Technology, vol. 53, no. 4, pp. 657-662, 2004.
[15] Wen-Hui Dong and Xiao-Juan Wu, "Driver Fatigue Detection Based on the Distance of Eyelid," Proc. IEEE Int. Workshop on VLSI Design & Video Technology, pp. 28-30, 2005.
[16] Zheng Pei, Song Zhenghe, and Zhou Yiming, "PERCLOS-based recognition algorithms of motor driver fatigue," Journal of China Agricultural University, pp. 104-109, 2004.

