DIGITAL VOCALIZER
Delna Domini1, Sreejith S2, Aksa David3
B.Tech, Electronics and Communication, Holy Grace Academy of Engineering, Mala, India
Asst. Professor, Electronics and Communication, Holy Grace Academy of Engineering, Mala, India
Asst. Professor, Electronics and Communication, Holy Grace Academy of Engineering, Mala, India
Abstract: This paper presents a gesture recognition model known as the Digital Vocalizer. The Digital Vocalizer is an Arduino UNO based system designed to make the communication gap between the dumb, deaf and blind communities and normal people as small as possible. In this paper, by "normal people" we mean people who have the abilities to see, hear and talk. Deaf and dumb people make use of sign language or gestures to convey what they need to say, but these gestures are difficult to understand for a person who does not know sign language. We therefore propose a simple prototype that takes some of those gestures and converts them into visual and audio forms, so that the communication gap can be reduced as much as possible. For that, we use the Arduino UNO board, based on the ATmega328 controller, to interface all of the sensors and actuators. Sensors are fixed on the palm and fingers; the sensed values give information about parameters such as finger bend and hand tilt angle. These are converted into electrical signals and passed to the Arduino UNO, which then acts according to the gestures.
I. INTRODUCTION
Communication is a crucial part of human life; the lack of proper communication causes severe problems. Communication is mainly carried out by gestures and speech, so a complete coordination of the two is necessary. A number of hardware techniques are used for gathering information about body positioning; these are typically either image-based (using cameras, moving lights, etc.) or device-based (using instrumented gloves, position trackers, etc.), although hybrids are beginning to come about [1], [2], [3].
However, first we need to acquire the data; recognizing the data detected by the glove is considered the second step, and research on this is in progress. This paper deals with the data from a digital data glove for the recognition of gestures. The system converts these signs into visual and speech output; the conversion to speech is achieved by the audio processor.
II. BACKGROUND
A lot of research work has been done in the field of gesture recognition, and much is still in progress. The most recent research, "Recognition of Hand Gestures Using Range Images", is explained in reference [1]. Reference [2] gives a small framework of hybrid classifiers. Reference [3] gives a brief explanation of classification of the gesture. "Microcontroller and Sensors Based Gesture Vocalizer", reference [4], depicts a vocalizer using the 8051 microcontroller. A survey on gesture recognition has been done in reference [6]. A brief description of the data glove is given in reference [7]. Methods to improve recognition are framed in reference [8]. The requirements of a self-organized recognition system are explained in reference [9]. A free-hand tracking technology is explained in reference [10]. A real-time gesture recognition system is discussed in reference [11]. Reference [12] gives an analysis of a dynamic hand gesture recognition system. Reference [13] shows a system with multiple sensors, which gives more accurate values. Reference [14] explains a nonspecific-user hand gesture recognition technology. An algorithm and its application for 3D interaction are discussed in reference [15].
III. METHODOLOGY
The block diagram of the whole system is shown in Fig.1. The system consists of:
Data Glove
Tilt Detection
Bend Detection
Arduino Board
Audio Processor
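The flow in the block diagram (data glove → bend/tilt detection → Arduino → LCD and audio processor) can be sketched as a small classification routine. The threshold (500) and the message addresses (0x01, 0x02, 0x00) below are hypothetical placeholders chosen for illustration, not values taken from this paper.

```cpp
#include <array>
#include <cstdint>

// One snapshot of the glove: five flex readings (10-bit ADC counts) plus a tilt flag.
struct GloveSample {
    std::array<int, 5> flex;  // one reading per finger, 0..1023
    bool tiltedForward;       // derived from the accelerometer
};

// Map a sample to the 8-bit address of a stored voice message.
// A low divider voltage (below the assumed 500-count threshold) is read as a bent finger.
uint8_t classifyGesture(const GloveSample& s) {
    int bentFingers = 0;
    for (int v : s.flex)
        if (v < 500) ++bentFingers;
    if (bentFingers == 5 && s.tiltedForward) return 0x01;  // e.g. closed fist, tilted
    if (bentFingers == 0)                    return 0x02;  // e.g. open palm
    return 0x00;                                           // no known gesture
}
```

In the real system the returned address would be sent both to the LCD driver and to the aPR33A3; here it is simply returned so the comparison step can be seen in isolation.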
A. Data Glove
Flex sensors and an accelerometer (as tilt sensor) are placed on the data glove. The output of the accelerometer is detected by the tilt detection module, while the output of the flex sensors is detected by the bend detection module. The combined results of the two sensors are given to the Arduino UNO for further operations; this combined result gives the overall gesture of the hand.
I: Bend sensor
A 'Flex Sensor' or 'Bend Sensor' is a sensor that changes its resistance depending on the amount of bend applied to it. In this system, five flex sensors are placed on the data glove. They are stitched onto each finger of the hand glove and record the static and dynamic movements of the fingers. Fig.2 shows what a flex sensor looks like. They usually come in the form of a thin strip, 1"-5" long, that varies in resistance range.
Fig.4: Flex sensors as variable analog voltage dividers
As the amount of carbon increases, the resistance decreases. The circuit diagram of the flex sensor is shown below in Fig.5.
Fig.5: Circuit diagram of flex sensor
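The voltage-divider arrangement of Fig.4 can be worked through numerically. The component values below (a 10 kΩ fixed resistor, flex resistance of roughly 25 kΩ flat rising toward 100 kΩ at full bend) are illustrative assumptions, not figures from this paper; real flex sensors vary from part to part.

```cpp
// Voltage seen at the ADC pin of the divider: Vout = Vcc * Rfixed / (Rfixed + Rflex).
// Because Rflex rises with bend, the divider output falls as the finger bends.
double dividerVolts(double vcc, double rFixedOhms, double rFlexOhms) {
    return vcc * rFixedOhms / (rFixedOhms + rFlexOhms);
}

// 10-bit ADC count as the Arduino UNO would report it (0..1023 over 0..Vcc).
int adcCount(double volts, double vcc) {
    return static_cast<int>(volts / vcc * 1023.0 + 0.5);
}
```

With these assumed values, a flat finger gives about 1.43 V (ADC count 292) and a fully bent finger about 0.45 V (ADC count 93), so the bend detection module only needs a threshold between the two.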
II: Tilt sensor
In this system, an accelerometer is used to sense the tilting or slanting of the hand. An accelerometer is a device that can measure dynamic acceleration (vibration) and static acceleration (gravity); because of this, it can sense movement or lack of movement, tilt and even rotation. A microcontroller is required to read and record the tilt; here we are using the Arduino microcontroller. The unit of the accelerometer output is mg (milli-g, one thousandth of the acceleration due to gravity), so mapping of the readings is necessary before they can be displayed on the computer.
The ADXL335 accelerometer is used in this Digital Vocalizer. The output of the accelerometer is analog and ranges from 1.5 to 3.5 volts. This output is given to the Arduino; the ADXL335 interface with the Arduino is shown below in Fig.6.
Fig.6: ADXL335 interface with Arduino UNO
B. Gesture Detection
Gesture recognition is handled by the Arduino UNO. The main function of the Arduino UNO is to compare the values detected by the sensors with predefined values. According to that comparison, it sends binary addresses to the LCD and the audio processor. The Arduino UNO has two input sections: one from the flex sensors and one from the accelerometer.
C. Audio Processor
The audio processor speaks out the message for the recognized gesture. Here we are using the aPR33A3, which is capable of producing 8 voice messages. For that, the voice messages must first be recorded; to record these messages, a microphone can be used. A lot of microphones are available in the market; in this system, an electret microphone is used.
I: Electret microphone
A microphone is a transducer which converts sound energy into electrical signals. Transducers are devices which convert energy from one form to another; the working principle of a microphone is the opposite of a speaker. Microphones are available in different shapes and sizes. A commonly used type is the electret condenser microphone, shown in Fig.9. It is used in mobile phones, laptops, etc., and is mainly used to detect minor sounds or air vibrations. The top of the electret microphone is covered by a porous material, which filters out dust particles.
Pre-amplification can be used along with the microphone signal in order to drive the analog input of the Arduino; the circuit is shown in Fig.13. The output waveform of the microphone with the pre-amplification circuit is shown in Fig.14.
Fig.13: Pre-amplification circuit for electret microphone
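The accelerometer reading discussed under the tilt sensor (an analog voltage that must be mapped before use) can be converted to a tilt angle in a few lines. The 1.65 V zero-g offset and 330 mV/g sensitivity used here are typical ADXL335 datasheet figures at a 3.3 V supply, taken as assumptions; the exact values depend on the board and supply.

```cpp
#include <cmath>

// Convert one ADXL335 axis voltage to acceleration in g.
// zeroG is the output at 0 g; sens is volts per g (both board-dependent assumptions).
double voltsToG(double volts, double zeroG = 1.65, double sens = 0.330) {
    return (volts - zeroG) / sens;
}

// Static tilt angle of one axis relative to horizontal, in degrees.
// Valid only when the glove is stationary, so that gravity dominates the reading.
double tiltDegrees(double g) {
    if (g > 1.0) g = 1.0;    // clamp noise that would push asin out of range
    if (g < -1.0) g = -1.0;
    const double pi = std::acos(-1.0);
    return std::asin(g) * 180.0 / pi;
}
```

The Arduino would compare the resulting angle against predefined thresholds to set the tilt part of the gesture, alongside the flex-sensor readings.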
The 8-bit digital address enables the aPR33A3 to speak the particular words; in summary, it is necessary to know the 8-bit digital address of each word or sentence. The microcontroller sends these addresses to the aPR33A3, and each address locates the allophones of a word. The aPR33A3 gives an output signal which can be amplified to make it louder; the speaker takes the output of this amplifier as its input and speaks the voice message corresponding to the 8-bit address.
D. LCD Display
The audio processor provides for the communication of the dumb with normal people and with blind people as well. But the communication gap between the dumb and the deaf cannot be reduced with the help of the audio processor IC alone. To cover this case, a display can be attached along with the audio processor.
V. CONCLUSION
The Digital Vocalizer is designed to minimize the communication gap between the dumb, deaf and blind communities and the normal people who can see, hear and talk. Sign language is used by the dumb communities for their communication, but these sign languages cannot be used for communication with the blind as well as with normal people. Here we can use the Digital Vocalizer, which generates voice and a display according to these signs; the display also helps reduce the communication gap with the deaf communities.
The future enhancements:
1. A provision for the ZigBee standard in the "Digital Vocalizer".
2. A new design with the TTS256 to improve message storage capacity.
3. Accurate monitoring of the static and dynamic movements involved in the "Digital Vocalizer".
4. A whole-jacket system, which could be capable of conveying the movements of animals.
5. The replacement of virtual reality input devices such as joysticks with the "Digital Vocalizer" (data glove).
VI. REFERENCES
Vision and Pattern Recognition (CVPR'03), ISBN 1063-6919/03, pp. 1-6.
[8] Masumi Ishikawa and Hiroko Matsumura, "Recognition of a Hand-Gesture Based on Self-organization Using a Data Glove", ISBN 0-7803-5871-6/99, pp. 739-745.
[9] Shirke Swapnali, "Hand Gesture Recognition Using Accelerometer Sensor for Traffic Light Control System", ENTC Dept., SKNCOE, Pune, India.
[10] "Gloved and Free Hand Tracking Based Hand Gesture Recognition", 978-1-4673-5250-5/13 ©2013 IEEE, ICETACS 2013.
[11] Toshiyuki Kirishima, Kosuke Sato and Kunihiro Chihara, "Real-Time Gesture Recognition by Learning and Selective Control of Visual Interest Points", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 3, March 2005, pp. 351-364.
[12] Attila Licsár and Tamás Szirányi, "Dynamic Training of Hand Gesture Recognition System", Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), ISBN 1051-4651/04.
[13] Fan Wei, Chen Xiang, Wang Wen-hui, Zhang Xu and Yang Ji-hai, "A Method of Hand Gesture Recognition Based on Multiple Sensors", 978-1-4244-4713-8/10 ©2010 IEEE.
[14] Jayaraman D, "Nonspecific-User Hand Gesture Recognition by Using MEMS Accelerometer", Department of ECE, ICICES 2014, S.A. Engineering College, Chennai, Tamil Nadu, India.
[15] Jianfeng Liu, Zhigeng Pan and Xiangcheng Li, "An Accelerometer-Based Gesture Recognition Algorithm and its Application for 3D Interaction", ComSIS, Vol. 7, No. 1, Special Issue, February 2010.