
Driver Fatigue Detection Using Machine Vision Approach

Amandeep Singh
Department of Electronics and Communication Engineering, SLIET Longowal, amandeep.singh.sodha@gmail.com

Jaspreet Kaur
Department of Electronics and Communication Engineering, GNDU Amritsar, gill_jas8@hotmail.com

Abstract- Driver fatigue plays a vital role in a large number of accidents. In this paper, a real-time, machine vision-based system is proposed for the detection of driver fatigue, which can issue a warning early enough to avoid an accident. First, the face is located by a machine vision based object detection algorithm; then the eyes and eyebrows are detected and their count (four or less) is computed. By comparing the calculated number of black spots with a predefined value of four (two eyes and two eyebrows) over a particular time interval, driver fatigue can be detected and a timely warning issued whenever symptoms of fatigue appear. The main advantages of this system are its fast processing time and very simple equipment: it runs at about 15 frames per second with a resolution of 320*240 pixels. The algorithm is implemented on the MATLAB platform along with a camera and is well suited to real-world driving conditions, since it is non-intrusive, using a video camera to detect changes.
Keywords: Driver fatigue monitoring, Machine vision, Camera, Threshold value, Eye detection

I. INTRODUCTION

Fatigue and drowsiness are major factors responsible for road accidents. According to a survey report submitted by the National Highway Traffic Safety Administration (NHTSA) of the USA, about 385,000 crashes [1] take place on the roads, out of which 83 percent are because of fatigue. Recent statistics estimate that annually 750 deaths and 20,000 injuries can be attributed to fatigue-related crashes [1-3]. The development of technologies for detecting or preventing drowsiness while driving is a major challenge in the field of accident avoidance. Because the hazard of drowsiness is still present on the road, methods need to be developed and refined to counteract its effects. In recent years, many studies [4-8] have focused on this topic using visual behaviour. However, some of these studies use very complicated hardware, costing too much money and energy, and some use simple hardware but run at a relatively low speed. In this paper, a self-developed algorithm is proposed in which the total number of eyes and eyebrows is chosen as the measurement parameter, and a real-time system is proposed for monitoring driver fatigue using just one camera.

The aim of this paper is to develop a prototype drowsiness detection system. The focus is on designing a system that accurately monitors the open or closed state of the driver's eyes in real time. Detection of fatigue involves a sequence of images of the driver's face and the observation of eye movements and blink patterns. Localization of the eyes involves looking at the entire image of the face and determining the position of the eyes with a self-developed image-processing algorithm. Once the position of the eyes is located, the system can determine whether the eyes are open or closed and hence detect fatigue. The paper is organized as follows: after the introduction in section I, section II discusses face and eye detection, section III describes how the algorithm is applied to the image for decision making, section IV gives the simulation results, section V presents the performance evaluation, section VI gives the practical results, section VII highlights the challenges that were faced, and section VIII concludes and points out future directions.

II. FACE AND EYE DETECTION

The whole system consists of the following steps for the detection of the eyes (all steps of the algorithm are implemented on the MATLAB 7.5.0 (R2007b) [9] platform) and hence of driver fatigue:

a) Acquiring Quality Images


The manner in which the system is set up depends on the type of analysis and processing that is needed. The image capturing system should produce images of high enough quality that the required information can be extracted from them. Five factors contribute to overall image quality:
1. Resolution
2. Contrast
3. Depth of field
4. Threshold value
5. Noise

Here the eye detection procedure is explained. After a facial image is input, pre-processing is done to convert it into a binary image. This is done to reduce the processing time, since processing time increases with the size of the image to be processed. In the pre-processing phase, the top and sides of the face are detected to narrow down the area that is to be processed. Then the image is converted to a binary image so as to reduce the response time and memory consumption. After that, the image is scanned from the top left corner to the bottom right corner to find black bunches of pixels. A size range is fixed for the bunches, which decides which bunches are selected and which are rejected. The following sections describe the binarization process in brief. All images are generated in MATLAB using a camera with a USB interface.
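As a minimal sketch of this acquisition step, a frame could be grabbed in MATLAB as follows (this assumes the Image Acquisition Toolbox; the 'winvideo' adaptor name and the RGB24_320x240 format string are illustrative assumptions, not details given in the paper):

vid   = videoinput('winvideo', 1, 'RGB24_320x240');   % USB webcam object
frame = getsnapshot(vid);                              % acquire one 320x240 RGB frame
delete(vid);                                           % release the device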

b) Binarization
The first step in localizing the eyes is binarizing the picture. Binarization converts the RGB image into a binary image. To obtain the binary image, first only the red component is extracted from the original image, as shown in the following figures:

Fig. 1. Original image
Fig. 2. Selected area (which is to be processed)
Fig. 3. After selecting only the red component from the original image

The red component is used because it provides better results in binarization. Then, using a proper threshold value, the image is converted into a black and white image in order to detect the eyes and eyebrows. Here, the concept of the histogram has been used to calculate the best suited threshold value. Let H be the index with the maximum value in the histogram (the x-coordinate with the highest y value); then H*2/3 is a well suited threshold for differentiating the eye region from the skin around the eye.
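The binarization and threshold selection described above can be summarized in the following MATLAB sketch (the variable names are illustrative, and frame is assumed to be the RGB image captured in section a)):

red    = frame(:, :, 1);          % keep only the red component
counts = imhist(red);             % 256-bin histogram of the red channel
[peak, H] = max(counts);          % H = index of the histogram maximum
T      = round(H * 2 / 3);        % best suited threshold as per the H*2/3 rule
bw     = red > T;                 % black and white image; eyes and eyebrows appear black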

Fig. 4. Binary image with threshold value 50 under normal lighting

c) Removal of Noise
The removal of noise in the binary image is straightforward. Starting from the top at (x1, y1), the whole picture is scanned by incrementing x1 pixel by pixel and, in the same way, the y value, up to the end of the picture. After the whole image has been scanned, irregularities such as open areas and small black or white spots are removed using MATLAB [9] commands. The key to the success of this step is to stop at the left and right edges of the face; otherwise, the information about where the edges of the face are will be lost. Three MATLAB [9] commands are used for noise removal:
1. bwareaopen(ir1, 4) - removes connected regions whose size is less than four pixels (the value four is variable).
2. imopen(ir1, strel('disk', 2)) - morphological opening with a disk of 2 pixels (variable/user defined), which cleans up regions that are open at any end.
3. imfill(ir1, 'holes') - fills white holes in the given image (here ir1 is the variable carrying the image information).
For better understanding, these commands are referred to as filters, and the images are inverted. The following figures show the result of applying these filters:

Fig. 5. Binary picture after noise removal using the first filter
Fig. 6. Binary picture after noise removal using the first two filters
Fig. 7. Binary picture after noise removal using all three filters

As the above figures show, applying only the first two filters is sufficient; if the third filter is used, the results get distorted. Hence the third filter has not been used in the system.
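Accordingly, the noise-removal stage can be sketched in MATLAB as follows (ir1 denotes the inverted binary image, following the commands listed above):

ir1 = ~bw;                             % invert so the eye/eyebrow blobs become white
ir1 = bwareaopen(ir1, 4);              % first filter: drop regions smaller than 4 pixels
ir1 = imopen(ir1, strel('disk', 2));   % second filter: opening with a 2-pixel disk
% The third filter, imfill(ir1, 'holes'), distorts the results and is not applied.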


After the black blobs on the face have been removed, the eyes and eyebrows are found again. As seen below, this second pass accurately finds the eyes of the face.

Fig. 8. Binary picture after noise removal

d) Finding areas (black bunches) in the captured image
After the filtered black and white image has been obtained, the next step is to find the black areas whose size lies within a predefined range of pixels, that is, the size of the bunches. In this prototype, the size range for selecting a bunch is kept from 10 to 350 pixels. (This limit is controlled by the command used for it; in the future, additional scripts and protocols can be added to eliminate the manual changes and make the system robust to people with differently sized eyes.) Any black area outside this limit is rejected and treated as black background, because it can be neither a human eye nor an eyebrow.

e) Calculating the number of black bunches of pixels
After the selection of the bunches, their number is calculated using the MATLAB [9] commands given below:
1. For detecting a bunch and calculating its size:
y = strcat('y', num2str(i))
x = strcat('x', num2str(i))
[y, x] = find(lw == i)
[xs, ys] = size(x)
2. For calculating the number of black areas in the given image ir1:
[lw, numr] = bwlabel(ir1)
These MATLAB commands are used for the proper and smooth operation of the system.

f) Comparison with a predefined threshold value
After the number of bunches has been obtained, it is compared with a predefined count (in this case 4, for two eyes and two eyebrows). The system checks for an error after this comparison.
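Steps d) to f) can be sketched together with standard MATLAB functions as follows (regionprops is used here for brevity; the exact script of the prototype is not reproduced):

[lw, numr] = bwlabel(ir1);                       % label the black bunches
stats      = regionprops(lw, 'Area');            % area of every bunch in pixels
areas      = [stats.Area];
valid      = find(areas >= 10 & areas <= 350);   % keep bunches inside the size window
count      = numel(valid);                       % compared with the predefined value 4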

III. DECISION MAKING

After the comparison, the result is analyzed. If the count is less than the predefined value (four), the system starts a timer and waits for a predefined time period (1 second). If the count returns to four within this period, the timer is reset, since this was not a symptom of fatigue but a simple eye blink (assuming that an eye blink takes only about 0.166 second). On the other hand, if the count remains below four for one second, the system concludes that one or two black bunches (eyes) are missing, decides that the eyes are closed, and issues a warning signal. This signal is given to the parallel port of the PC for recording and, at the same time, sent to a buzzer interfaced through a microcontroller. The signal can also be given to other circuitry, such as seat-belt vibration, to alert the driver.
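A compact sketch of this decision logic is given below; captureFrame, countBunches and issueWarning are placeholders for the steps described in section II and for the buzzer/parallel-port interface, not functions from the original implementation:

EXPECTED = 4;             % two eyes and two eyebrows
FPS      = 15;            % frame rate of the system
missed   = 0;             % consecutive frames with fewer than four bunches
while true
    count = countBunches(captureFrame());   % placeholder for steps a) to f)
    if count < EXPECTED
        missed = missed + 1;
        if missed >= FPS                     % eyes closed for about 1 second
            issueWarning();                  % placeholder: buzzer via microcontroller
            missed = 0;
        end
    else
        missed = 0;                          % count is back to four: a simple blink
    end
end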

IV. SIMULATION RESULTS

The system was tested on 6 people and was successful with 5 of them, giving 81.3% accuracy under normal lighting conditions. The system gave erroneous results when it was presented with an image of a cat-eyed man. The light source used was a simple desk lamp, and the threshold value was kept at 45 because of the lighting conditions; under daylight conditions the system worked with the same accuracy with a threshold value of 90. Following the step-by-step procedure given in the flow chart, the whole algorithm has been implemented with the following steps:
1. Take snapshots with the camera hardware at a rate of 15 frames per second.
2. Select the part of the image that contains the eye area.
3. Extract only the red component from the original image to obtain the best results.
4. Convert the above image to a binary image using a proper threshold value chosen according to the illumination conditions.
5. Remove the noise using the filters as required.
6. Select the bunches of black pixels whose size falls within the predefined range of pixel area.
7. Calculate the number of black bunches and compare it with four.
8. Depending on the result of the comparison, issue an alarm if the number of bunches remains less than four for 1 second (15 continuous frames); otherwise reset the timer.
9. Capture a new image and repeat the process.
The figure below shows an example of the step-by-step result of finding the eyes:

Fig. 9. Result of the algorithm with a threshold value of 50

From these results, it is clear that a proper threshold value must be chosen; otherwise, the results are affected.

V. PERFORMANCE EVALUATION

The performance of the system varies with the threshold value, the illumination conditions and the distance of the driver from the camera. All these variations were recorded and the results obtained are given below. In the tables, D is the number of black bunches detected (out of four), R the number not detected, and A the resulting accuracy in percent; SN stands for small noise and N for noise.
TABLE I: Variation of performance with threshold under a 40 W light source

                       Distance of driver from camera
                  35 cm              40 cm              45 cm
S.No  Threshold   D     R   A        D     R   A        D      R   A
1     50          4     0   100      2     2   50       2+N    2   50
2     100         4     0   100      4     0   100      2      2   50
3     150         2     2   50       4+N   0   100      4+SN   0   100
4     200         2     2   50       0     4   0        0      4   0

TABLE II: Variation of performance with threshold under 40 W and 23 W light sources

                       Distance of driver from camera
                  35 cm              40 cm              45 cm
S.No  Threshold   D     R   A        D     R   A        D      R   A
1     50          0     4   0        0     4   0        0      4   0
2     100         2     2   50       0     4   0        0      4   0
3     150         2     2   50       2     2   50       2      2   50
4     200         0     4   0        0     4   0        0      4   0

TABLE III: Variation of performance with threshold under 40 W, 23 W and 15 W light sources

                       Distance of driver from camera
                  35 cm              40 cm              45 cm
S.No  Threshold   D     R   A        D     R   A        D      R   A
1     50          0     4   0        0     4   0        0      4   0
2     100         0     4   0        4     0   100      0      4   0
3     150         4     0   100      4+N   0   100      3+N    1   75
4     200         0     4   0        2     2   50       0      4   0


TABLE IV: Variation of performance with threshold under normal daylight

                       Distance of driver from camera
                  35 cm              40 cm              45 cm
S.No  Threshold   D     R   A        D     R   A        D      R   A
1     50          4     0   100      2+N   2   50       2+N    2   50
2     100         4     0   100      4+N   0   100      4+N    0   100
3     150         0     4   0        0     4   0        0      4   0
4     200         0     4   0        0     4   0        0      4   0

TABLE V: Variation of performance with threshold under normal daylight and a 23 W light source

                       Distance of driver from camera
                  35 cm              40 cm              45 cm
S.No  Threshold   D     R   A        D     R   A        D      R   A
1     50          0     4   0        0     4   0        0      4   0
2     100         0     4   0        0     4   0        2+N    2   50
3     150         3     1   75       3     1   75       0      4   0
4     200         0     4   0        0     4   0        0      4   0

TABLE VI: Variation of performance with threshold under normal daylight, 23 W and 15 W light sources

                       Distance of driver from camera
                  35 cm              40 cm              45 cm
S.No  Threshold   D     R   A        D     R   A        D      R   A
1     50          0     4   0        0     4   0        0      4   0
2     100         2     2   50       4+N   0   100      0      4   0
3     150         4     0   100      2+N   2   50       2      2   50
4     200         0     4   0        0     4   0        0      4   0

For a better understanding of all these variations, line graphs have been plotted from the above data for the various light conditions and threshold values; they are given below:

Fig. 10. Variation of performance with threshold under a 40 W light source

Fig. 11. Variation of performance with threshold under 40 W and 23 W light sources

Fig. 12. Variation of performance with threshold under 40 W, 23 W and 15 W light sources

Fig. 13. Variation of performance with threshold under normal daylight

Fig. 14. Variation of performance with threshold under normal daylight and a 23 W light source


Fig. 15. Variation of performance with threshold under normal daylight, 23 W and 15 W light sources

All the above results show that the output of the algorithm is best (100% accuracy) when the threshold lies between 50 and 100, under either normal illumination (daylight) or a 40 W light source at night, with the camera placed at a distance of 35 cm to 40 cm from the driver's face. If the light intensity or the distance is varied beyond these limits, the algorithm requires a change in the threshold value and also in the size window that is fixed for calculating the size of the black spots used to detect the eyes.

VI. PRACTICAL RESULTS

For the evaluation of the system performance, it has been tested on 10 people. The other parameters were: a threshold value of 100, normal daylight, and a camera distance of 35 cm from the driver.
TABLE VII: Performance of the system in terms of the warning signal, when tested on 10 persons

          P1    P2    P3    P4    P5    P6    P7    P8    P9    P10
C         20    25    25    15    10    25    30    10    15    45
F         03    01    04    01    01    00    02    01    00    20
W         03    01    04    01    01    00    02    01    00    21
F +ve     00    00    00    00    00    00    00    00    00    01
F -ve     00    00    00    00    00    00    00    00    00    00
C W       03    01    04    01    01    00    02    01    00    20
C (%)     100   100   100   100   100   100   100   100   100   100
P R (%)   100   100   100   100   100   100   100   100   100   95.23

Average precision: 99.52%

where
P1, P2, ..., P10 - persons under consideration
C - number of eye closures
F - number of times fatigue was present
W - number of warning signals
F +ve - number of extra warnings issued when fatigue was not present
F -ve - number of missed warnings when fatigue was present
C W - correct warnings
P R - percentage accuracy

The same results were obtained when the threshold value was kept at 100, a 40 W light source was used, and the distance of the driver from the camera was 40 cm.

VII. CHALLENGES

a) Obtaining the images
The first and probably most significant challenge faced was transferring the images obtained from the camera to the computer. Two issues are involved in this challenge:
1) Capturing and transferring the images in real time.
2) Having access to the memory location where the image is being stored.
Initially, a commercial frame grabber was considered. It was later found that this hardware would not be sufficiently compatible with the system and that it would be difficult to write software to process the images in conjunction with it. So a simple webcam connected to the PC via a USB interface is used.

b) Constructing an effective light source
Illuminating the face is an important aspect of the system. Initially, a light source consisting of 8 IR LEDs and a 9 V battery was constructed and mounted onto the camera. It was soon realized that this source was not strong enough. After reviewing other literature, the conclusion was that approximately 50 LEDs would be needed to build a strong enough light source. To reduce the cost, a desk lamp is used instead.

c) Determining the correct binarization threshold
Because of varying facial complexions and ambient light, it was very hard to determine the correct threshold for binarization. The initial idea was to choose a value that would result in the least amount of black blobs on the face. After observing several binary images of different people, it was concluded that a single threshold giving similar results for all people is impossible. Histogram equalization was attempted, but resulted in no progress. Finally, it was decided that the issue can be solved only by using a proper illumination source and intensity.

d) Case when the driver's head is tilted
If the driver's head is tilted, then eye localization and the calculations regarding the number of black bunches from the left side of the head to the right side turn out to be inaccurate. It is not realistic to assume that the driver's head will be perfectly straight at all times, so a little more attention is required in this direction.

e) Finding the eyes and eyebrows correctly
Successfully finding the eyes and eyebrows was a big challenge. Depending on the binarization of the face, they could not always be found correctly. The problem was with the black pixels on the face in the binary image: in some instances the nose, lips or some mark on the face were detected because of their black blobs in the binary image. After implementing the noise removal algorithm, this problem was eliminated.

VIII. CONCLUSION

A non-intrusive system to localize the eyes and monitor fatigue has been developed. Information about the head and eye positions is obtained through various self-developed image processing algorithms. During monitoring, the system is able to decide whether the eyes are open or closed. When the eyes have been closed for too long, a warning signal is issued. In addition, during monitoring, the system is able to automatically detect any eye localization error that might have occurred; in case of such an error, the system is able to recover and properly localize the eyes. The following conclusions have been made:
a) Image processing achieves highly accurate and reliable detection of drowsiness.
b) Image processing offers a non-intrusive approach to detecting drowsiness without annoyance and interference.


c) A drowsiness detection system developed around the principle of image processing judges the driver's alertness level on the basis of continuous eye closures.
d) The system works with a minimum accuracy of 81%, which should help reduce at least 50% of the accidents that take place because of driver drowsiness.

ACKNOWLEDGMENT
This work was supported by the Electronics and Communication Engineering Department of Sant Longowal Institute of Engineering and Technology (Deemed University), Longowal, Punjab, which provided excellent laboratories (Computer Lab, Machine Vision Lab and Digital Signal Processing Lab) and the MATLAB software for the development and testing of the algorithm.

REFERENCES
[1] US Department of Transportation, NHTSA, "Assessment of a Drowsy Driver Warning System for Heavy-Vehicle Drivers", Final Report, pp. 27-30, April 2009.
[2] Paul Stephen Rau, National Highway Traffic Safety Administration, United States, "Drowsy driver detection and warning system for commercial vehicle drivers: field operational test design, data analyses, and progress", 1996.
[3] Department of Consumer and Employment Protection, Govt. of Western Australia, "Driver fatigue safety bulletin 2006", Paper Number 05-0192, Jan. 2006.
[4] Ronald R. Knipling, Walter W. Wierwille, "Vehicle-Based Drowsy Driver Detection: Current Status and Future Prospects", IVHS America Fourth Annual Meeting, Atlanta, GA, April 17-20, 1994.
[5] Zhiwei Zhu, Qiang Ji, "Real Time and Non-intrusive Driver Fatigue Monitoring", Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA.
[6] N. G. Narole, P. R. Bajaj, "A Neuro-Genetic System Design for Monitoring Driver's Fatigue", International Journal of Computer Science and Network Security, Vol. 9, No. 3, March 2009.
[7] Ann Williamson, Tim Chamberlain, "Review of on-road driver fatigue monitoring devices", NSW Injury Risk Management Research Centre, University of New South Wales, April 2005.
[8] Mahesh M. Bundele, Rahul Banerjee, "Detection of Fatigue of Vehicular Driver using Skin Conductance and Oximetry Pulse: A Neural Network Approach", Proceedings of iiWAS2009.
[9] Release Notes for MATLAB R2007b, Summary of New Features; MATLAB Software Lab Manuals.

