A Project Report
On
"Smart System for Blind and Mute"
Bachelor of Technology
In
Electronics and Communication Engineering
CERTIFICATE
It is certified that the work contained in the project report titled “Smart System For Blind
and Mute” submitted by Dhananjay Bhardwaj, Harsh Gupta and Harsh Vats, has been
carried out under my supervision and this work has not been submitted elsewhere for a
degree.
Signature of Supervisor:
Dr. Jitendra Mohan
ECE Department,
JIIT NOIDA
December, 2019
Annexure C
DECLARATION
We declare that this written submission represents our ideas in our own words and where others'
ideas or words have been included, we have adequately cited and referenced the original sources.
We also declare that we have adhered to all principles of academic honesty and integrity. We
understand that any violation of the above will be cause for disciplinary action by the Institute and
can also evoke penal action from the sources which have thus not been properly cited or from whom
proper permission has not been taken when needed.
(Signature)
ABSTRACT
The project aims to provide a convenient and safe method for blind people to overcome their
difficulties in daily life. The Smart System for Blind is an application-based prototype that can
help the visually impaired move about and perform daily activities without relying too much
on others. It outlines a better navigation and gesture-sensing tool for the visually impaired
and makes them more independent. The glove is equipped with sensors that give information
about the environment. The integrated ultrasonic sensor HC-SR04 helps the blind move about
and alerts the user if there is an obstacle in front of them within a range of 2 cm to 300 cm.
To control various media-player functions, such as increasing or decreasing the volume, simple
hand gestures are recognized using OpenCV. For conveying navigation information to the
caretaker, a more secure and efficient technology called Li-Fi is used. The Smart System for
Blind will help blind people walk, estimate the distance to obstacles, perform daily routine
tasks autonomously with simple hand gestures, and navigate easily and securely using Li-Fi
communication. Throughout the project, several hardware tools are employed to implement
the final prototype.
ACKNOWLEDGMENT
December 2019
CONTENTS
Certificate ................................................................ ii
Declaration ................................................................ iii
Abstract ................................................................... iv
Acknowledgement ............................................................ v
List of Figures ............................................................ viii
2 Components 11
2.1 Detail of Components ................................................... 11
2.1.1 LED .................................................................. 11
2.1.2 LDR .................................................................. 11
2.1.3 LM 741 ............................................................... 12
2.1.4 Keypad ............................................................... 12
2.1.5 LCD .................................................................. 13
2.1.6 Buzzer ............................................................... 13
2.1.7 Microcontroller ...................................................... 14
3 Implementation 15
3.1 Hardware ............................................................... 15
3.1.1 LED .................................................................. 15
3.1.2 LDR .................................................................. 16
3.1.3 LM 741 ............................................................... 16
3.1.4 Keypad ............................................................... 16
3.1.5 LCD .................................................................. 16
3.1.6 Buzzer ............................................................... 16
3.1.7 Results .............................................................. 17
3.2 Software ............................................................... 18
3.2.1 Introduction ......................................................... 18
3.2.2 OpenCV ............................................................... 18
3.2.3 Block diagram of system .............................................. 19
3.2.4 Segment the hand region .............................................. 19
3.2.5 Motion detection and thresholding .................................... 20
3.2.6 Contour extraction ................................................... 20
3.2.7 Process flowchart .................................................... 20
3.2.8 Simulation results ................................................... 21
4 Application 22
4.1 Ease of work ........................................................... 22
4.2 Faster Communication ................................................... 22
4.3 Secure Communication ................................................... 23
4.4 Environmental Impact ................................................... 23
References
LIST OF FIGURES
CHAPTER 1
INTRODUCTION
AND
LITERATURE SURVEY
1.1 Background
Over the past years, blindness that is caused by diseases has decreased due to the success of
public health actions. However, the number of blind people that are over 60 years old is
increasing by 2 million per decade. Unfortunately, all these numbers are estimated to be
doubled by 2020 [1] .The need for assistive devices for navigation and orientation has increased.
The simplest and the most affordable navigations and available tools are trained dogs and the
white cane. Although these tools are very popular, they cannot provide the blind with all
information and features for safe mobility, which are available to people with sight [1]. Figure 1.1
shows the global population of blind persons and those with moderate and severe vision
impairment by region and age.
The World Health Organization (WHO) fact sheet reports that there are 285 million visually
impaired people worldwide, of whom 39 million are blind [2]. More than 1.3 million people are
completely blind and approximately 8.7 million are visually impaired in the USA [1]. Of these,
100,000 are students, according to the American Foundation for the Blind and the National
Federation of the Blind.
India is home to nearly 12 million of the world's 39 million blind people, almost one-third of
the total. The NPCB (National Programme for Control of Blindness) defines blindness as
vision of 6/60 or less and a visual field loss of 20 degrees or less in the better eye after
spectacle correction.
Figure 1.1: Global population of blind persons
Source: Global Prevalence of Vision Impairment and Blindness: Magnitude and Temporal Trends, 1990–2010.
1.2 Motivation
In our societies, on a daily life basis we usually adopt various measures to simplify things for
our elders, either we implement some technical solutions or provide them an ease of life
manually. But these solutions are way simpler for those who are not handicapped. In most of
the cases people fail to provide a proper assistance to blind people. This biggest challenge is to
provide them a 24x7 assistance which is next to impossible and most of the times impractical to
implement. How can they compete with the other people of this society? How can they navigate
from one place to another and that too without anyone’s help? To answer these questions, we
have come up with a solution for this section of our society. it will be now easier for them to
detect an object. It will also help them to navigate from one place to another. Figure 1.2 depicts
the need of a care taker in simple day to day activities of a visually impaired.
CHAPTER 2
COMPONENTS
2.1.1 LED
2.1.2 LDR
A Light Dependent Resistor (LDR), also termed a photoresistor, is a device whose resistance
is a function of the incident electromagnetic (EM) radiation; its behavior is therefore
light-sensitive. LDRs are also called photoconductors, photoconductive cells, or simply
photocells. Figure 2.2 shows an LDR sensor.
2.1.3 LM 741
2.1.4 Keypad
2.1.5 LCD
The 16x2 LCD display is a basic module commonly used in circuit prototyping. It provides a
display of 16 characters per line in 2 such rows. In this type of LCD, every character is
displayed in a 5x7 pixel matrix [6]. Figure 2.5 displays a 16x2 LCD.
2.1.6 Buzzer
The buzzer is an alerting device with two pins, one attached to VCC and the other to ground
(GND). When current is applied to the buzzer, its ceramic plate repeatedly expands and
contracts, producing sound. Figure 2.6 shows a buzzer with its two polar terminals.
2.1.7 Microcontroller
Arduino UNO
The Arduino UNO, shown in Figure 2.7, is a prototyping board with the ATmega328
functioning as the microcontroller. It has 14 digital input/output pins, 6 analog inputs, a
16 MHz quartz crystal, a USB connection, and other on-board components [7]. In the
proposed system, the Arduino UNO is directly connected to the transmitting LED and the
keypad. The digital and analog pins are used to receive data from the keypad, i.e. the
number pressed. The running program fetches the key pressed and changes the transmitted
signal accordingly.
Arduino Nano
The Arduino Nano, shown in Figure 2.8, is a small, complete, and breadboard-friendly
board based on the ATmega328P (Arduino Nano 3.x). It has more or less the same
functionality as the Arduino Duemilanove, but in a different package. It lacks only a DC
power jack, and works with a Mini-B USB cable instead of a standard one [8].
CHAPTER 3
IMPLEMENTATION
3.1 Hardware
The following section describes the working of the interfaced hardware. The proposed
system architecture is built by interfacing basic components such as an LED, an LDR, and
a buzzer, and is divided into two parts: the receiver circuit and the transmitter circuit.
Figure 3.1 shows the system architecture, and Figure 3.2 demonstrates the actual hardware
setup of the project.
3.1.1 LED
It is a simple LED, used as the transmission source by frequently changing its light intensity.
The reason for using led is because of its high flickering rate. A normal human eye cannot
detect its fast flickering and its nature of getting obstructed with an object provides security.
3.1.2 LDR
In the proposed system, the LDR acts as the receiving element, as it senses the changing light
intensity. One of its terminals is grounded and the other acts as the inverting input for the
op-amp. The changing voltage from the LDR is thus an input at the op-amp terminal.
3.1.3 LM 741
In the proposed system, the single op-amp LM741 IC works as a comparator. The op-amp
comparator compares the analogue voltage level at one pin with another voltage level, or some
preset reference voltage VREF, and produces an output signal based on this comparison. In
the system, the positive/non-inverting terminal is connected to a 10k potentiometer which is
connected to VCC, while the negative/inverting terminal is directly connected to the LDR.
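The comparator behaviour described above can be written compactly. This is the standard ideal-comparator model, not taken from the report; a real LM741 output swings close to, but not exactly to, the supply rails:

```latex
V_{out} =
\begin{cases}
V_{CC}, & V_{+} > V_{-} \\
0,      & V_{+} \le V_{-}
\end{cases}
```

Here $V_{+}$ is the reference set by the potentiometer (VREF) and $V_{-}$ is the voltage from the LDR, so the output toggles whenever the received light level crosses the reference.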
3.1.4 Keypad
The row and column pins of the keypad are directly connected to the Arduino. In the proposed
system, the keypad is the crucial element, as it acts as the connecting interface between the
blind person and the caretaker. Each numeric key on the keypad is assigned a corresponding
function that the blind person may wish to perform.
3.1.5 LCD
In the proposed system, the LCD is interfaced with the Arduino Nano at the receiver circuit
and displays the numeric key pressed by the person at the transmission side of the system.
3.1.6 Buzzer
In the proposed system, the buzzer is interfaced with the receiver circuit. Whenever
transmitted data is received, the buzzer beeps for a short duration. It functions as an alarm
that helps the caretaker stay aware whenever data is received.
Figure 3.2: Hardware setup
3.1.7 Results
With the Li-Fi communication system set up, a few tests are performed to check whether the
program is able to communicate via visible light. The following graphs are plotted from
instructions sent from the transmitter to the receiver side. Figure 3.3 displays the keypad
number pressed vs. time at the transmitter side, and Figure 3.4 displays the LDR voltage level
vs. time at the receiver side after a certain key is pressed.
Figure 3.4: Instructions received and the corresponding LDR voltage vs time at receiver side
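The transmit-receive chain tested above (keypad digit, blinking LED, LDR voltage, comparator, decoded digit) can be illustrated with a small software simulation. This is an illustrative sketch only, not the firmware used in the project; the 4-bit encoding and the threshold value are assumptions:

```python
# Simulated Li-Fi link: encode a keypad digit as on-off keying (OOK),
# "transmit" it as a list of light intensity levels, then threshold and
# decode at the receiver, mimicking the LDR + comparator stage.

BITS_PER_DIGIT = 4          # assumption: digits 0-9 fit in 4 bits
HIGH, LOW = 1.0, 0.0        # normalized LED intensity levels
THRESHOLD = 0.5             # receiver comparator reference (like VREF)

def transmit(digit):
    """Encode a digit (0-9) as intensity samples, MSB first."""
    bits = [(digit >> i) & 1 for i in range(BITS_PER_DIGIT - 1, -1, -1)]
    return [HIGH if b else LOW for b in bits]

def receive(samples):
    """Threshold each intensity sample and reassemble the digit."""
    bits = [1 if s > THRESHOLD else 0 for s in samples]
    digit = 0
    for b in bits:
        digit = (digit << 1) | b
    return digit

for d in range(10):
    assert receive(transmit(d)) == d
print("all digits decoded correctly")
```

In the real hardware the "samples" are the comparator output levels read back from the LDR, and framing (start/stop pulses) would be needed on top of this.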
3.2 Software
Computer Vision and Image Processing have been used in a number of tasks involving
automatic detection and monitoring. In this project, a computer-vision methodology, hand
gesture recognition, has been implemented to control a number of media-player functions,
such as Play/Pause or Volume Up/Down, with a simple hand gesture, thus easing the problem
of these functions having to be controlled manually by a visually impaired person.
3.2.1 Introduction
Gesture recognition has been an interesting problem in the computer vision community for a
long time, particularly because segmenting a foreground object from a cluttered background
is a challenging problem in real time. The most obvious reason is the semantic gap between a
human looking at an image and a computer looking at the same image: humans can easily
figure out what is in an image, but to a computer an image is just a 3-dimensional matrix of
numbers.
3.2.2 OpenCV
One of the most prominent and widely available tools for computer vision is OpenCV, "an
open source computer vision library" consisting of more than 2500 algorithms. It offers a
comprehensive set of both computer vision and machine learning algorithms and can be used
to detect and recognize faces, identify objects, track movements, and more. Figure 3.5
explains the most common use of OpenCV with Python.
3.2.3 Block Diagram of System
This project implements computer vision and gesture recognition techniques and develops
a vision based low cost input device for controlling the VLC player through gestures. In
this firstly, video from webcam is captured known as image acquisition and then image
pre-processing is done using convex hull algorithm. By using Python programming,
interfacing with computer is done to control VLC Media Player. The process of image
processing is based upon the treatment of digitalized images, in order to analyze their
content or manipulate them. This hand gesture recognition technique will not only replace
the use of mouse to control the VLC player but also provide an easy and efficient way for
visually impaired to listen music [9]. The whole proposed software system is explained in
Figure 3.6.
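The control step at the end of this pipeline can be as simple as a lookup from the counted fingers to a player action. A minimal sketch follows; the gesture-to-action assignment here is illustrative, since the report does not specify the exact mapping:

```python
# Hypothetical mapping from number of extended fingers to a VLC action.
GESTURE_ACTIONS = {
    1: "volume_up",
    2: "volume_down",
    5: "play_pause",
}

def action_for(finger_count):
    """Return the media-player action for a counted gesture, or None."""
    return GESTURE_ACTIONS.get(finger_count)

# The returned action name would then be forwarded to VLC, for example by
# synthesizing a hotkey press with a library such as pyautogui (not shown).
print(action_for(5))  # play_pause
```

Keeping this mapping in one table makes it easy to reassign gestures without touching the recognition code.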
3.2.4 Segment the Hand Region
The first step in hand gesture recognition is to find the hand region by eliminating all the
other unwanted portions in the video sequence. An efficient method to separate
foreground from background is by using the concept of running averages. The system
looks over a particular scene for 30 frames. After figuring out the background model
using running averages, the current frame which holds the foreground object (hand in our
case) in addition to the background is used. The absolute difference between the
background model (updated over time) and the current frame (hand) is calculated to
obtain a difference image that holds the newly added foreground object (hand). Running
average is calculated using the formula given in equation (1). [10]
B_t(x, y) = (1 − α) · B_{t−1}(x, y) + α · F_t(x, y)        (1)
where B is the background model, F is the current frame, and α is the learning rate that
controls how quickly the model adapts.
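The running-average update of equation (1) is what OpenCV's `cv2.accumulateWeighted` computes; a plain NumPy sketch of the same step (with an assumed learning rate α = 0.5) looks like this:

```python
import numpy as np

ALPHA = 0.5  # learning rate: how fast the background model adapts

def update_background(bg, frame, alpha=ALPHA):
    """One running-average step: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

# Warm up the model over 30 identical "empty scene" frames...
bg = np.zeros((4, 4))
for _ in range(30):
    bg = update_background(bg, np.full((4, 4), 10.0))

# ...then a frame containing the hand produces a large absolute difference.
frame_with_hand = np.full((4, 4), 10.0)
frame_with_hand[1:3, 1:3] = 200.0
diff = np.abs(frame_with_hand - bg)
print(diff.max() > 100)  # True: the hand region stands out
```

The absolute difference `diff` is exactly the difference image that the next section thresholds.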
3.2.5 Motion Detection and Thresholding
To detect the hand region from this difference image, the difference image must be
thresholded so that only the hand region becomes visible and all other unwanted regions are
painted black. Thresholding is the assignment of pixel intensities to 0s and 1s based on a
particular threshold level, so that the object of interest alone is captured from an image. If
x(n) represents the pixel intensity of the input image at a particular pixel coordinate, then the
threshold T decides how well the image is segmented into a binary image, as explained in
equation (2) [10].
y(n) = 1 if x(n) ≥ T, and y(n) = 0 otherwise        (2)
where T is the chosen threshold level.
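Equation (2) corresponds to `cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)` in OpenCV; in NumPy the same step (with an assumed threshold T = 25) is:

```python
import numpy as np

T = 25  # assumed threshold level on the difference image

def threshold_image(diff, t=T):
    """Binary threshold: pixels above t become 255 (hand), others 0."""
    return np.where(diff > t, 255, 0).astype(np.uint8)

diff = np.array([[5, 30],
                 [200, 10]], dtype=np.float64)
binary = threshold_image(diff)
print(binary)  # [[  0 255]
               #  [255   0]]
```

The choice of T trades off noise rejection against losing dim parts of the hand, so it is usually tuned for the lighting conditions.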
3.2.6 Contour Extraction
After thresholding the difference image, the contours in the resulting image are found. The
contour with the largest area is assumed to be the hand. A contour is the outline or boundary
of an object located in an image.
3.2.7 Process Flowchart
As shown in Figure 3.7, there are four intermediate steps to count the fingers, given a
segmented hand region [11].
3.2.8 Simulation Results
With the image processing tools set up, a few tests are performed to check whether the
program is able to detect hand gestures and control the various media-player functions.
The program was able to detect and count the number of fingers and to produce the
corresponding contour and thresholded images from the original image, as shown in
Figure 3.8.
After testing the finger counting, the program was checked to see whether the media-player
functions can be controlled by recognizing simple hand gestures. The program was
successfully able to detect and control media-player functions such as Play/Pause and
Volume Up/Down. This was done with the help of the convex hull algorithm and by counting
the convexity defects in the image. Figure 3.9 shows the recognition of a simple hand gesture
and, in this case, the lowering of the volume.
CHAPTER 4
APPLICATIONS
Figure 4.1: Distribution of blind and people with low vision around the globe.
Source: https://www.slideshare.net/InternationalCentreforEyeHealth/epidemiologyandvisualimp
4.2 Faster Communication
The mode used is Li-Fi, which uses the visible light spectrum ranging from 430 THz to
750 THz. This spectrum enables higher data rates, more bandwidth, and lower latency.
Figure 4.2 depicts the increase in data rate achieved with different LED technologies.
4.3 Secure Communication
As Li-Fi works on VLC (Visible Light Communication), it is one of the most secure modes
of transmission. Since light cannot pass through an opaque object, the area of transmission
can be confined using opaque objects.
4.4 Environmental Impact
In the race for less latency, more bandwidth, and better use of spectrum, RF technology has
been pushed far beyond the safest frequency regions. The current 5G technology is expected
to release small amounts of radiation, and radio-based systems cannot be used in sensitive
areas such as nuclear power plants. Li-Fi, on the other hand, uses a simple LED as its
transmitter, which also serves as a source of light. In the near future, every light source could
also act as a Li-Fi transmitter. Figure 4.3 depicts the increasing trend in demand for
high-speed data over the years.
CHAPTER 5
CONCLUSION AND
FUTURE WORK
Future work on this project would center on converting the data received on the receiver
side via visible light communication into an audio output. This would improve the system
so that if the caretaker misses the text displayed on the receiving end, voice instructions
would remind them of the message received from the visually impaired person.
Currently, the prototype consists of 3 LEDs through which the visible light communication
is carried out, but this results in a short transmission range. Using a large grid of LEDs
could enable a longer communication range and even more applications, such as the
transmission of images.
REFERENCES
[1] “American Foundation for the Blind (AFB).” The Grants Register 2019, 2018, 55–56. [Online]. Available:
https://doi.org/10.1007/978-1-349-95810-8_72. [Accessed Dec. 15, 2019].
[2] “Vision Impairment and Blindness.” World Health Organization. [Online]. Available: https://www.who.int/news-
room/fact-sheets/detail/blindness-and-visual-impairment. [Accessed Dec. 15, 2019].
[3] Sandip Das, Ankan Chakraborty, Debjani Chakraborty, Sumanjit Moshat, "PC to PC Data Transmission using
Visible Light Communication", 2017 International Conference on Computer Communication and Informatics
(ICCCI -2017), Jan. 05 – 07, 2017, Coimbatore, INDIA.
[9] Aekta Patel, “VLC Media Player Controlling Using Hand Gesture,” International Journal of Advanced Scientific
and Technical Research. [Online]. Available: http://www.rspublication.com/ijst/index.html. [Accessed Dec. 12,
2019].
[10] “Hand Gesture Recognition using Python and OpenCV - Part 1.” [Online]. Available:
https://gogul.dev/software/hand-gesture-recognition-p1. [Accessed Dec. 3, 2019].
[11] “Hand Gesture Recognition using Python and OpenCV - Part 2.” [Online]. Available:
https://gogul.dev/software/hand-gesture-recognition-p2. [Accessed Dec. 3, 2019].
APPENDIX I
APPENDIX II
SENSOR SCHEMATIC
LDR SENSOR