
INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)

ISSN 0976-6367 (Print)
ISSN 0976-6375 (Online)
Volume 6, Issue 3, March (2015), pp. 01-11
IAEME: www.iaeme.com/IJCET.asp
Journal Impact Factor (2015): 8.9958 (Calculated by GISI)
www.jifactor.com
COMPUTER VISION BASED ADAPTIVE LIGHTING SOLUTIONS FOR SMART AND EFFICIENT SYSTEM

Yoel E. Almeida1, Ashray S. Bhandare2, Aishwary P. Nipane3

1,2,3 Department of Computer Engineering, Vidyavardhini's College of Engineering and Technology, Mumbai University, Mumbai, India
ABSTRACT

Signal processing is a key enabling technology that encompasses the fundamental theory, algorithms, and applications of signal analysis. Its outputs are represented in symbolic, mathematical, and abstract formats. In imaging science, signal processing takes several forms; one of these is image processing, which accepts an image, a video frame, etc. as input and produces either an image or a set of characteristic parameters (in mathematical or symbolic form) as output. Image processing is closely related to digital image processing and computer vision. Combining the concepts of digital image processing and computer vision with the image histogram gives an effective visualization of tonal distribution for studying the physical content of an image or a sequence of images. In the modern scientific world, images play a demanding role in the growing field of visualization of complex scientific and experimental data. This paper provides an intelligent solution to day-to-day lighting problems, such as invariant light intensities regardless of changes in environmental conditions, using the concepts of image processing and computer vision. The proposed system is more aware of environmental changes, which leads to a more intelligent and customizable system.

Keywords: Computer Vision, Tonal Distribution, Histogram Thresholding, Image Acquisition, Image Subtraction.
I. INTRODUCTION
Humans continually refine proposed scientific discoveries in order to make them work more efficiently and effectively, and they develop new technologies based on previous work done by others. As discoveries are proposed and implemented, drawbacks or limitations are found, which inspire further research in that particular field [1].

This paper contributes towards the development and advancement of existing lighting systems. The aim of existing smart lighting systems was energy efficiency. These systems required many sensors and hardware devices, which made the traditional lighting systems more dynamic and energy efficient.
Instead of using multiple, costly sensors, this project introduces the concept of optimum energy-resource management. It attempts to make the existing lighting system simpler in several technical respects:
- Optimum energy-resource management
- Reduced reliance on multiple sensors, which are replaced by a single sensor (a camera)
- Reduced communication setup otherwise required between multiple sensors to transfer data among them
This project uses the concepts of computer vision and image processing; image processing is largely used together with computer vision [2]. Computer vision is used for image acquisition and for extraction of features from the images using the MATLAB software. Once the image acquisition process is complete, each new incoming video frame is subtracted from the background frame to obtain a subtracted image containing the required object.
Because this image contains various types of noise and distortion, it is difficult to perform mathematical computations on it. To reduce these effects, a Gaussian filter is applied to obtain a smoothed image. To obtain the tonal distribution of the image, the concept of histogram thresholding is used; this gives a contrast between the object to be detected and the remaining background.
Calibration is performed to obtain a relationship between a region in the image frame and the real world. The required object is tracked in the image (camera) and its coordinates are mapped to those in the real world. The mapped region is further divided into the necessary sub-regions. Once the regions are properly mapped, information about the location of the object is passed on to the microcontroller, which is then used to control the intensity and state of the lights.
The main purpose of this research is to produce a dynamic, intelligent and aware Smart
Lighting System. The paper is organized as follows: section 2 describes the proposed method;
section 3 illustrates implementation and experiments; finally, section 4 concludes the research.

Fig. 1 Block Diagram


II. PROPOSED SYSTEM


There are many variants of smart home systems; most of these are based on expensive, tailor-made sensors that are often limited in their flexibility and customizability. The proposed system makes use of various computer vision and image processing techniques and requires a single imaging device instead of myriad expensive sensors. The proposed system is composed of the following blocks: the imaging device, which provides the essential information about the environment in which the system is deployed; the processing device, which performs computations on the data acquired from the imaging device; the microcontroller, which uses the results of the computations performed by the processing device to change the state of the lights; and the lighting devices, governed in accordance with the microcontroller output.
The system is divided into four main parts: image acquisition, feature extraction, calibration, and microcontroller integration.
A. Image Acquisition
Image acquisition means gathering visual information from the live feed received from the imaging device. Often this visual information contains disparities in the form of environmental noise and disturbances, which hinder object detection. The presence of noise in the acquired data makes it difficult to differentiate changes between successive image frames.
In order to perform mathematical computations on the image for object detection and to obtain consistent results, an average background template is generated by sampling multiple frames and taking their mean [3].
[AB] = ([BG]1 + [BG]2 + … + [BG]n) / n
where
AB = average background,
BG = background sample,
n = total number of background samples.
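The averaging step above can be sketched as follows; this is a minimal illustration using plain Python lists as stand-in grayscale frames (the 2x2 sample values are hypothetical, not from the paper).

```python
# Pixel-wise mean of a stack of equally sized grayscale background frames,
# mirroring [AB] = ([BG]1 + ... + [BG]n) / n.
def average_background(frames):
    """Return the per-pixel average of a list of 2-D grayscale frames."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

bg_samples = [
    [[10, 20], [30, 40]],
    [[14, 24], [34, 44]],
    [[12, 22], [32, 42]],
]
print(average_background(bg_samples))  # [[12.0, 22.0], [32.0, 42.0]]
```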
B. Feature Extraction
Features are the unique characteristics of an object that are useful in describing it and provide the information needed to perform the necessary mathematical operations. To extract features from the acquired visual data, the concept of background subtraction is used.
Background subtraction is a technique used to find and isolate new objects in successive frames [3]. This is accomplished by comparing a background template with each new frame; the difference between the frames gives the required object.
Subtracted Image = New Image − Average Background Image
The subtracted image contains various forms of noise, which make it hard to perform
computations and get desired results. Gaussian filters are used to smoothen out these disparities in
the subtracted image. By applying a Gaussian filter of suitable size over the subtracted image, it
becomes easier to locate the object in the current frame.
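The subtraction step can be sketched as a pixel-wise absolute difference between a new frame and the average background template; the toy 2x3 frames below are illustrative values, not data from the paper.

```python
# Background subtraction: pixels that changed relative to the average
# background template stand out as large differences.
def subtract_background(frame, background):
    """Return the per-pixel absolute difference of two grayscale frames."""
    return [[abs(f - b) for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[10, 90, 10], [10, 95, 10]]   # a bright object entered the scene
print(subtract_background(frame, background))  # [[0, 80, 0], [0, 85, 0]]
```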


The Gaussian low-pass filter is represented as:

G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))

Fig. 2 Graph of Gaussian filter
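A discrete kernel built from the Gaussian function can be sketched as below; the kernel size and sigma are free parameters chosen here purely for demonstration.

```python
import math

# Build a normalized 1-D Gaussian kernel by sampling the Gaussian
# function and dividing by the sum, so the kernel's weights total 1
# and filtering does not change overall brightness.
def gaussian_kernel(size, sigma):
    """Return a normalized 1-D Gaussian kernel of odd length `size`."""
    half = size // 2
    raw = [math.exp(-(x * x) / (2.0 * sigma * sigma))
           for x in range(-half, half + 1)]
    total = sum(raw)
    return [v / total for v in raw]

k = gaussian_kernel(5, 1.0)
print(round(sum(k), 6))  # 1.0 (normalized)
print(k[2] == max(k))    # True (peak at the center)
```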


The histogram of an image gives a graphical estimate of the tonal distribution of its pixels. From the Gaussian-smoothed image, the histogram is obtained, and a suitable threshold value that distinguishes the object from the rest of the image is chosen based on it. After applying this threshold, the final image is obtained in binary form. The process is given by the following mathematical representation:

binary(x, y) = 1 if g(x, y) > T, and 0 otherwise

where g(x, y) is the Gaussian-smoothed image and T is the chosen threshold.

In the final image, the constant background is represented by the black region and the required object by the white region. In order to detect the object, the image is scanned for the positions of the white pixels [3]. These positions are used to map the object from image coordinates to world coordinates.
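The binarization step can be sketched as follows; the sample pixel values and the threshold T = 50 are hypothetical, chosen only to illustrate the rule above.

```python
# Histogram-based thresholding: once a threshold T is read off the
# histogram, each pixel above T becomes white (1, object) and the rest
# become black (0, background).
def binarize(image, t):
    """Binarize a grayscale image against threshold `t`."""
    return [[1 if p > t else 0 for p in row] for row in image]

smoothed = [[5, 120, 8], [6, 130, 7]]
print(binarize(smoothed, 50))  # [[0, 1, 0], [0, 1, 0]]
```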
C. Calibration
The mapping of image coordinates to world coordinates requires a certain relationship between them. The boundary points of the region in the real world are mapped to those in the image. Once the boundary points are obtained, the region is further divided into sub-regions corresponding to the lighting system in the room.
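The sub-region idea can be sketched as a grid lookup: the calibrated image area is split into a grid of sub-regions, each corresponding to one light fixture, and an object's pixel coordinates map to a region number. The 640x480 frame and 2x2 grid below are assumptions for illustration only.

```python
# Map image coordinates (x, y) to a row-major sub-region index, where
# each sub-region corresponds to one light fixture in the room.
def region_number(x, y, img_w, img_h, cols, rows):
    """Return the grid cell index containing pixel (x, y)."""
    col = min(x * cols // img_w, cols - 1)
    row = min(y * rows // img_h, rows - 1)
    return row * cols + col

print(region_number(100, 100, 640, 480, 2, 2))  # 0 (top-left region)
print(region_number(500, 400, 640, 480, 2, 2))  # 3 (bottom-right region)
```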
D. Micro-Controller Integration
The task of controlling the lights and adapting to the presence of a person in a particular room
is governed by the microcontroller [4].
The information obtained in the calibration phase is used to change the state of lights.
III. IMPLEMENTATION AND EXPERIMENTATION
Fig. 6 shows the flow chart of the entire system. The implementation is performed as a sequence of four phases.

Fig. 6 Flow Chart


A. Image Acquisition and Average Background Generation Pre-process
Data obtained from the imaging device is a continuous real-time video feed [2]. This feed is sampled into individual frames that contain RGB components. Only a single component of each individual video frame is considered in order to reduce the computational overhead. This can be represented as:


imageFrame = videoFrame(:, :, 1);

where videoFrame is the individual frame obtained from the real-time video feed. The operation videoFrame(:, :, 1) selects a single component of videoFrame, and imageFrame stores that single-component information.
Background removal, or background subtraction, is used to minimize the effect of noise. In order to improve the efficiency of background subtraction, the background template is generated by sampling and stacking several background frames; their average is taken once the stacking process is complete. This process is represented as:

averageBackgroundTemplate = (1/N) Σ (i = 1 to N) backgroundFrame_i

where N is the number of background frames considered for generation of the average template, backgroundFrame_i denotes the i-th background frame, and averageBackgroundTemplate stores the result of averaging the stacked background frames.
The experimental result of average background generation is shown in Fig. 5, while Fig. 4 shows the original background image.

Fig. 4: Original background image

Fig. 5: Average background template



B. Feature Extraction and Object Detection


The average template generated in the preceding stage is constantly subtracted from the real-time video frames [2]. The subtracted image is shown in Fig. 6.

Fig. 6 Output of background subtraction


The output of background subtraction contains traces of noise in the form of minor speckles and minute spots. To remove the noise that persists in the background-subtracted images, a Gaussian filter is applied. The graph in Fig. 8 depicts the Gaussian filter applied to Fig. 6, and the output obtained after applying the filter is shown in Fig. 7.

Fig. 7: Output of Gaussian filter


The output obtained after applying the Gaussian filter is used for the feature extraction process. In order to extract useful features from the image, its histogram is plotted; this gives information about the distribution of pixel intensities, from which a distinction is made between the object and the background. The histogram for the image in Fig. 7 is shown in Fig. 8.

Fig. 8: Histogram of Gaussian smoothed image



Based on the histogram, a threshold value is determined that separates the background pixels from the object pixels. Using this threshold, the output image is converted into a binary image in which the object pixels become white and the background pixels become black.

Fig. 9: Output of histogram based thresholding


Using the binary image generated by thresholding, the object extraction process is carried out. The binary image is scanned in a bottom-up manner to locate the object: the locations of white pixels are stored and used to trace the object in successive frames. The output of this process is shown in Fig. 10.

Fig.10: Object extraction and tracking
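The bottom-up scan described above can be sketched as follows; the 3x3 binary image is a toy example, not data from the experiments.

```python
# Scan a binary image from the last row upward, left to right; the
# first white pixel (value 1) encountered gives the object's position.
def locate_object(binary):
    """Return (row, col) of the first white pixel found bottom-up, or None."""
    for r in range(len(binary) - 1, -1, -1):
        for c, p in enumerate(binary[r]):
            if p == 1:
                return (r, c)
    return None  # no object present in this frame

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 1, 1]]
print(locate_object(img))  # (2, 1)
```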


C. Calibration and Mapping
Regions are generated using calibration points, these image mapped points correspond to the
location of light fixtures in the real world coordinates. Fig.11 shows the vectors which store the
coordinates


Fig.11: Region coordinates vectors


The region number obtained after tracking the object from Fig. 10 is shown in Fig. 12.

Fig.12 Region numbers obtained


The visual representation of region generation is shown in Fig. 13.

Fig.13: Plot of regions over background image



Fig.14: MATLAB Workspace


The variables generated during computation are shown in Fig.14.
D. Micro-Controller Integration
The region numbers generated in the previous phase are passed to the microcontroller. As shown in Fig. 15, the microcontroller changes the state of the LEDs based on the region number: the LED corresponding to the new region fades in while that of the previous region fades out [4]. Fig. 12 shows the new region number detected, and the corresponding LED is lit in Fig. 15.
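The fade-in/fade-out behavior can be sketched as a linear cross-fade of duty cycles; this is an illustrative sketch, not the authors' firmware, and the step count and 8-bit duty range are assumptions.

```python
# On a region change, ramp the old region's LED duty cycle down while
# the new region's ramps up, step by step.
def crossfade_steps(steps=5, max_duty=255):
    """Yield (old_duty, new_duty) pairs for a linear LED cross-fade."""
    for i in range(steps + 1):
        new = max_duty * i // steps
        yield (max_duty - new, new)

pairs = list(crossfade_steps(5))
print(pairs[0])   # (255, 0): old LED fully on, new LED off
print(pairs[-1])  # (0, 255): old LED off, new LED fully on
```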

Fig 15 : System implementation prototype using LEDs


IV. CONCLUSION
Existing smart lighting systems consist of expensive cameras and sensors; although this makes the systems more energy efficient, it is costly in terms of hardware. To overcome this, the proposed system simplifies the lighting system by using only one camera and eliminating further hardware requirements. The proposed system also provides proper management of energy resources and gives optimum results. In conclusion, the project is capable of reducing the excessive use of multiple sensors, replacing them with a single sensor (a camera), and it simplifies the communication setup otherwise needed between multiple sensors to transfer data among them.
V. ACKNOWLEDGEMENT

This work is supported by Vidyavardhini's College of Engineering and Technology, affiliated to Mumbai University, India, and guided by Asst. Prof. Sunil Katkar. We are thankful to our college Principal and the Head of the Department of Computer Engineering for their support.
REFERENCES

1. Giao Pham Ngoc, Suk-Hwan Lee, Young Moon Yu, Ki-Ryong Kwon, "An Intelligent LED Illumination Control System Using Camera", Journal of Convergence Information Technology (JCIT), Vol. 8, Number 12, July 2013.
2. Oge Marques, Practical Image and Video Processing Using MATLAB, John Wiley & Sons, Inc., Hoboken, New Jersey, Wiley-IEEE, 2011.
3. Student Dave's Tutorials. (2012). Basic Image Processing With Matlab Code, [Online]. Available: http://studentdavestutorials.weebly.com/basic-image-processing-withmatlab.html
4. rogue.bh...@gmail.com. (Feb 13, 2012). Arduino/Wiring SoftPWM Library, [Online]. Available: https://code.google.com/p/rogue-code/wiki/SoftPWMLibraryDocumentation
5. MATLAB Documentation, [Online]. Available: http://in.mathworks.com/help/matlab/
6. Lalit Saxena, "Effective Thresholding of Ancient Degraded Manuscript Folio Images", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 5, 2013, pp. 285-291, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
7. Jyoti, Abhishek and Manisha, "Image Denoising Using Traditional Wavelet Thresholding", International Journal of Computer Engineering & Technology (IJCET), Volume 5, Issue 4, 2014, pp. 65-72, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
8. Manoj R. Tarambale and Nitin S. Lingayat, "The Performance of Various Thresholding Algorithms for Segmentation of Biomedical Image", International Journal of Advanced Research in Engineering & Technology (IJARET), Volume 5, Issue 4, 2014, pp. 119-130, ISSN Print: 0976-6480, ISSN Online: 0976-6499.
