
Automatic Sign Language Recognition
Group Members:
Umang Desai: 60002160020
Manthan Doshi: 60002160025
Sagar Goradia: 60002160040
Introduction and Stages Involved
 Sign language is an important part of life for deaf and mute people, who rely on it for
everyday communication with their peers. A sign language consists of a well-structured
code of signs and gestures, each of which has a particular meaning assigned to it. This
project aims to provide a medium of communication between physically challenged
people and people who cannot understand sign language.
 Stages involved in the project:
 The system will be trained on images from a sign language database training set, and
the accuracy of this classification will be evaluated on a separate testing set.
 We then propose to extend the system to work with live input from a webcam. This
will test the robustness of our system against variations in background, size, and
angle.
Hand gestures recorded using webcam → Image preprocessing → Feature extraction →
Classification and recognition using SVM (trained on the dataset) → Output in form of text
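The pipeline above can be sketched as a chain of stages. This is a minimal skeleton with hypothetical function names and placeholder bodies (the real preprocessing, feature extraction, and SVM stages are described on the following slides):

```python
import numpy as np

def preprocess(frame):
    # Placeholder stage: grayscale conversion and normalization (hypothetical).
    return frame.mean(axis=2) / 255.0

def extract_features(image):
    # Placeholder stage: flatten the image into a feature vector;
    # PCA/LDA would be applied here in the real system.
    return image.ravel()

def classify(features):
    # Placeholder stage: a trained SVM would map features to a sign label here.
    return "A"

frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a webcam frame
label = classify(extract_features(preprocess(frame)))
print(label)
```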
Image processing
 Before performing feature extraction, the images must be processed so that only the useful
information is retained, while redundant, distracting noise and superficial data are discarded.
 Simultaneously, skin color is detected using the YCbCr model.
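YCbCr skin detection can be sketched as below. The RGB-to-YCbCr conversion follows the standard ITU-R BT.601 formulas; the Cb/Cr thresholds (77–127 and 133–173) are commonly cited skin ranges, not necessarily the exact values used in this project:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Return a boolean mask of likely skin pixels using the YCbCr model.

    Thresholds (77-127 for Cb, 133-173 for Cr) are illustrative,
    commonly used skin-color ranges.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> YCbCr chrominance channels
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# A skin-toned pixel (RGB ~ (200, 140, 110)) falls in range; pure blue does not.
img = np.array([[[200, 140, 110], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask_ycbcr(img))  # → [[ True False]]
```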
Feature Extraction
 PCA - Principal Component Analysis:
PCA is a dimensionality reduction algorithm. It is unsupervised and keeps only the principal
components with the highest variance, discarding the remaining information in the image. The image
must be converted into a vector before applying PCA.
 LDA - Linear Discriminant Analysis:
Although PCA reduces dimensions, multi-class data must be reduced in a way that also preserves
inter-class separation. LDA is a supervised algorithm designed for exactly this purpose.

Together, these two algorithms reduce dimensionality, remove noise and redundancy, and extract the
principal features of the input, which aids classification and recognition of the image.
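PCA can be sketched in a few lines of NumPy via the SVD of the mean-centered data; in practice a library implementation (e.g. scikit-learn's PCA and LDA) would be used on the flattened image vectors:

```python
import numpy as np

def pca(X, k):
    """Project row-vector images X onto the top-k principal components.

    Minimal PCA via SVD of the mean-centered data matrix.
    Returns the projected data and the fraction of variance retained.
    """
    Xc = X - X.mean(axis=0)                          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                              # directions of highest variance
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # variance retained by k components
    return Xc @ components.T, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))   # 100 flattened "images" of 64 pixels each
Z, ratio = pca(X, k=10)
print(Z.shape)                   # reduced from 64 to 10 dimensions
```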
Support Vector Machine

 The feature vector produced in the previous step is fed into an image classifier. In this
project, we use a Support Vector Machine (SVM) for classification; by maximizing the
classification margin, the SVM helps maximize accuracy and avoid overfitting.
 The SVM will be trained on a training dataset containing several images per sign.
 Matching is then done by classifying the previously extracted features with the trained
SVM.
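The train-then-classify step can be sketched with scikit-learn's `SVC`. The feature vectors below are a synthetic stand-in for the PCA/LDA features of two sign classes, not data from the actual sign language database:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for extracted feature vectors of two sign classes.
rng = np.random.default_rng(1)
X_a = rng.normal(loc=0.0, size=(50, 8))   # 50 samples of hypothetical sign "A"
X_b = rng.normal(loc=3.0, size=(50, 8))   # 50 samples of hypothetical sign "B"
X = np.vstack([X_a, X_b])
y = np.array(["A"] * 50 + ["B"] * 50)

clf = SVC(kernel="rbf", C=1.0)            # RBF kernel is a common default choice
clf.fit(X, y)

sample = np.full((1, 8), 3.0)             # a feature vector near class "B"
print(clf.predict(sample))
```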
Conclusion

 In this system, a still hand-image frame is captured using a webcam.

 These frames are processed to obtain enhanced features.

 Feature extraction and classification algorithms are then used to translate

the sign language into English text.
Work planned for Semester 8
Thank You.
