SUBASHRI R
(1339003)
In partial fulfillment for the award of the degree
of
MASTER OF TECHNOLOGY
in
INFORMATION TECHNOLOGY
SUPERVISOR
Dr. S. Nagarajan, PhD
Professor,
Department of IT
Hindustan University
Padur.
EXTERNAL EXAMINER
ABSTRACT
We propose two novel local transform features: local gradient patterns (LGP) and binary
histograms of oriented gradients (BHOG). LGP assigns one if the neighboring gradient of a
given pixel is greater than the average of its eight neighboring gradients, and zero otherwise,
which makes the feature robust to local intensity variations along edge components. BHOG
assigns one if a histogram bin has a higher value than the average value of the total histogram
bins, and zero otherwise, which makes computation fast because no further post-processing or
SVM classification is required. Existing automatic people-counting methods are available, but
they are not accurate. Heterogeneous face recognition (HFR) involves matching two face
images from alternate imaging modalities, such as an infrared image to a photograph or a
sketch to a photograph. Charged with the task of outputting a measure of similarity between a
given pair of face images, an HFR system must still carry out the core tasks performed by
most face recognition systems: face detection, face counting, and face recognition.
Heterogeneous face recognition is evaluated in four HFR scenarios. Detection and
identification of human faces have been largely addressed with a focus on 2D still images.
Face images are represented either from given databases or from a camera, and image
matching is done using kernel prototypes. Our experimental results indicate that the proposed
LGP and BHOG features attain accurate detection performance and fast computation time,
respectively; the hybrid feature improves face and human detection performance considerably;
and automatic people counting is performed using a trellis optimization algorithm. The merits
of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four
different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to
photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.
ACKNOWLEDGEMENT
The author wishes to express his sincere thanks to Verghese, Pro Chancellor, Mr. Ashok
Verghese and Dr. Aby Sam, Directors of Hindustan University, for their support and
encouragement. The author wishes to express his sincere thanks and gratitude to his
supervisor, Dr. S. NAGARAJAN, for the
TABLE OF CONTENTS

CHAPTER NO.  TITLE

ABSTRACT
LIST OF FIGURES
1   INTRODUCTION
    1.1  SYNOPSIS
2   LITERATURE SURVEY
3   SYSTEM ANALYSIS
    3.1  EXISTING SYSTEM
    3.2  PROPOSED SYSTEM
    3.3  FEASIBILITY STUDY
4   REQUIREMENT SPECIFICATION
    4.1  SYSTEM REQUIREMENT SPECIFICATION
    4.2  SOFTWARE REQUIREMENTS
5   SOFTWARE DESCRIPTION
    5.1  FEATURES OF DOTNET
    5.2  THE DOT NET FRAMEWORK
    5.3  LANGUAGES SUPPORTED BY DOT NET
    5.4  OBJECTIVES OF DOT NET FRAMEWORK
    5.5  FEATURES OF SQL-SERVER
6   SYSTEM DESIGN
    6.1  ARCHITECTURE DIAGRAM
    6.2  DATAFLOW DIAGRAM
    6.4  SEQUENCE DIAGRAM
    6.5  COLLABORATION DIAGRAM
    6.6  CLASS DIAGRAM
7   TECHNIQUES AND ALGORITHM
8   MODULES
9   TESTING
    9.2  TYPES OF TESTING
10  CONCLUSION
11  REFERENCES

LIST OF ABBREVIATIONS

BHOG   Binary Histograms of Oriented Gradients
CLR    Common Language Runtime
FERET  Facial Recognition Technology
FRVT   Face Recognition Vendor Test
IL     Intermediate Language
LBP    Local Binary Pattern
LGP    Local Gradient Pattern
CHAPTER-1
INTRODUCTION
1.1 SYNOPSIS
The aim of the project is multiple human face detection, counting, and recognition using
LGP, BHOG, and the trellis optimization algorithm. Recognition of human actions is
established through the .NET Framework.
Face and human detection, counting, and recognition are important topics in the field of
computer vision.
They have been widely used for practical and real-time applications in many areas such as
digital media (cell phones, smart phones, digital cameras), intelligent user interfaces,
intelligent visual surveillance, and interactive games.
Conventional face and human detection methods usually take the pixel color (or intensity)
directly as the information cue. The challenges in designing automated face recognition
algorithms are numerous.
CHAPTER-2
LITERATURE SURVEY
Enhanced Local Texture Feature Sets for Face Recognition Under Difficult
Lighting Conditions
Difficult lighting conditions are a major obstacle for face detection and recognition; this
paper aims to solve that problem.
to kernel-based nonlinear mapping), and then, LDE is applied in such a new space. As a result of
diffusion mapping, the margin between samples belonging to different classes is enlarged, which
is helpful in improving classification accuracy. The experiments are conducted on five public
face databases: Yale, Extended Yale, PF01, Pose, Illumination, and Expression (PIE), and Facial
Recognition Technology (FERET). The results show that the performances of the proposed
ELDE are better than those of LDE and many state-of-the-art discriminant analysis techniques.
The task of face recognition has been actively researched in recent years. This paper
provides an up-to-date review of major human face recognition research. We first present an
overview of face recognition and its applications. Then, a literature review of the most recent
face recognition techniques is presented. Description and limitations of face databases which are
used to test the performance of these face recognition algorithms are given. A brief summary of
the face recognition vendor test (FRVT) 2002, a large scale evaluation of automatic face
recognition technology, and its conclusions are also given. Finally, we give a summary of the
research results.
CHAPTER-3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
Heterogeneous face recognition involves matching two face images from alternate imaging
modalities, such as an infrared image to a photograph or a sketch to a photograph, where the
gallery databases are populated with photographs but the probe images are often limited to
some alternate modality.
Multiple faces are detected and recognized using prototype random subspaces, LGP, and
BHOG, by making use of:
Operational Feasibility
Technical Feasibility
Economic Feasibility
Operational Feasibility:
The proposed system is beneficial since it provides an information system for analyzing face
detection and counting that will meet the organization's operating requirements.
In security applications, multiple face detection and counting is used, and faces are detected
and counted accurately without missing any values.
Technical Feasibility:
Technical feasibility centers on the existing computer system (hardware, software, etc.) and
to what extent it can support the proposed addition. For example, if the current computer is
operating at 80% capacity, additional hardware (RAM and processor) will increase capacity.
The normal software and hardware configuration is sufficient, so the system is feasible on
this criterion.
Economic Feasibility:
Economic feasibility is the most frequently used method for evaluating the effectiveness of a
candidate system. More commonly known as cost/benefit analysis, the procedure is to
determine the benefits and savings that are expected from a candidate system and compare
them with the costs. If the benefits outweigh the costs, the decision is made to design and
implement the system; otherwise, the system is dropped. The proposed system can be used to
analyze detection and counting without requiring any extra equipment or hardware, so it is
economically feasible to use.
CHAPTER-4
REQUIREMENT SPECIFICATION
4.1 SYSTEM REQUIREMENT SPECIFICATION
The software requirements specification is produced at the culmination of the
analysis task. The function and performance allocated to software as part of system
engineering are refined by establishing a complete information description as functional
representation of system behavior, an indication of performance requirements and design
constraints, appropriate validation criteria.
4.2.1 Hardware Requirements:
Processor: Pentium IV
Speed: 1.8 GHz
RAM: 512 MB
Hard Disk: 80 GB
4.2.2 Software Requirements:
Language: C#.NET
Operating System: Windows XP
Database: SQL Server
Definition:
UML is a general-purpose visual modeling language that is used to specify, visualize,
construct, and document the artifacts of the software system.
UML is a language:
It provides a vocabulary and rules for communication, and functions at both the conceptual
and physical levels of representation; hence it is a modeling language.
UML Specifying:
Specifying means building models that are precise, unambiguous, and complete. In
particular, the UML addresses the specification of all the important analysis, design, and
implementation decisions that must be made in developing and deploying a software-intensive
system.
UML Visualization:
The UML includes both graphical and textual representation. It makes easy to visualize
the system and for better understanding.
UML Constructing:
UML models can be directly connected to a variety of programming languages and it is
sufficiently expressive and free from any ambiguity to permit the direct execution of models.
UML Documenting:
UML provides a variety of documents in addition to raw executable code.
Goal of UML:
The primary goals in the design of the UML were:
Provide users with a ready-to-use, expressive visual modeling language so they can
develop and exchange meaningful models.
Telecommunications
Transportation
Defense/Aerospace
Retail
Medical Electronics
Scientific Fields
Distributed Web
Rules of UML
The UML has semantic rules for
Things:
Things are the data abstractions that are first-class citizens in a model. Things are of 4 types:
Structural Things
Behavioral Things
Grouping Things
Annotational Things
Relationships:
Relationships tie the things together. Relationships in the UML are
Dependency
Association
Generalization
Specialization
CHAPTER-5
SOFTWARE DESCRIPTION
5.1. Features of Dot net
5.2. The Dot Net framework
5.3. Languages supported by Dot Net
5.4. Objectives of Dot Net Framework
5.5. Features of SQL-Server
5.1. FEATURES OF DOTNET
Microsoft .NET is a set of Microsoft software technologies for rapidly building
and integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and
securely interoperate. There is no language barrier with .NET: there are numerous languages
available to the developer, including Managed C++, C#, Visual Basic, and JScript. The .NET
framework provides the foundation for components to interact seamlessly, whether locally or
remotely on different platforms. It standardizes common data types and communications
protocols so that components created in different languages can easily interoperate.
.NET is also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for
instance) and services (like Passport, .NET My Services, and so on).
Loading and executing programs, with version control and other such features.
The following features of the .NET framework are also worth description:
Managed Code
Managed code is code that targets .NET and contains certain extra information (metadata) to
describe itself. While both managed and unmanaged code can run in the runtime, only
managed code contains the information that allows the CLR to guarantee, for instance, safe
execution and interoperability.
Managed Data
With managed code comes managed data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use managed data by
default, such as C#, Visual Basic.NET, and JScript.NET, whereas others, namely C++, do not.
Targeting the CLR can, depending on the language you are using, impose certain constraints
on the features available. As with managed and unmanaged code, one can have both managed
and unmanaged data in .NET applications: data that does not get garbage collected but instead
is looked after by unmanaged code.
The common type system allows types in one language to interoperate with types in another
language, including cross-language exception handling. As well as ensuring that types are
only used in appropriate ways, the runtime also ensures that code does not attempt to access
memory that has not been allocated to it.
.NET provides updated versions of Visual Basic and C++ (as VB.NET and Managed C++),
but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features
that make it a powerful object-oriented programming language. These features include
inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured
exception handling, custom attributes and also supports multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
C# is Microsoft's new language. It is a C-style language that is essentially C++ for Rapid
Application Development. Unlike other languages, its specification is just the grammar of the
language: it has no standard library of its own and instead has been designed with the
intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into
the world of XML Web Services and dramatically improves the interoperability of Java-language
programs with existing software written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the
Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Other languages for which .NET compilers are available include
FORTRAN
COBOL
Eiffel
[.NET Framework stack: Windows Forms and XML Web Services on top of the Base Class
Libraries, which sit on the Common Language Runtime, which runs on the Operating System.]
C#.NET is also compliant with CLS (Common Language Specification) and supports
structured exception handling. CLS is set of rules and constructs that are supported by the
CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET
Framework; it manages the execution of the code and also makes the development process
easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components that are created
in C#.NET can be used in any other CLS-compliant language. In addition, we can use
objects, classes, and components created in other CLS-compliant languages in C#.NET. The
use of the CLS ensures complete interoperability among applications, regardless of the
languages used to create them.
that must be performed when an object is destroyed. The Finalize procedure is called
automatically when an object is destroyed. In addition, the Finalize procedure can be called
only from the class it belongs to or from derived classes.
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.
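The reclaim-on-unreachable behavior described above can be demonstrated in any
garbage-collected runtime. The sketch below uses Python rather than the CLR for brevity; the
Frame class and its contents are invented for the example, but the principle is the same: once
the last reference to an object is dropped, the collector releases its memory for reuse.

```python
import gc
import weakref

class Frame:
    """Stand-in for an image-holding object tracked by the runtime."""
    def __init__(self, data):
        self.data = data

frame = Frame([0] * 1000)
ref = weakref.ref(frame)   # a weak reference does not keep the object alive

del frame                  # drop the last strong reference
gc.collect()               # ask the collector to reclaim unreachable objects

print(ref() is None)       # True: the object's memory has been released
```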
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define multiple
procedures with the same name, where each procedure has a different set of arguments.
Besides using overloading for procedures, we can use it for constructors and properties in a
class.
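C# resolves overloaded procedures by their argument lists at compile time. A rough analogue
can be sketched in Python with functools.singledispatch, which selects an implementation by
the type of the first argument; the describe function here is purely illustrative, not project code.

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # fallback when no registered overload matches the argument type
    return "unknown"

@describe.register(int)
def _(value):
    return f"integer: {value}"

@describe.register(str)
def _(value):
    return f"string: {value}"

print(describe(3))       # integer: 3
print(describe("face"))  # string: face
```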
MULTITHREADING
C#.NET also supports multithreading. An application that supports multithreading can
handle multiple tasks simultaneously; we can use multithreading to decrease the time taken
by an application to respond to user interaction.
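A minimal sketch of the same idea, written in Python rather than C#.NET for brevity: several
workers run concurrently and the caller waits for all of them before using the results. The
detect_faces worker is an invented placeholder, not part of the project code.

```python
import threading

results = {}

def detect_faces(frame_id):
    # placeholder for per-frame face detection work
    results[frame_id] = f"processed frame {frame_id}"

threads = [threading.Thread(target=detect_faces, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for all workers before using the results

print(len(results))  # 4
```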
A database in Microsoft Access is made up of the following objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
A database is a collection of data about a specific topic.
VIEWS OF TABLE
We can work with a table in two types,
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view, where we can
specify what kind of data the table will hold.
Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that is asked of the data. Access gathers the data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset (if
you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the
latest information in the dynaset. Access either displays the dynaset or snapshot for us to
view, or performs an action on it, such as deleting or updating.
CHAPTER-6
SYSTEM DESIGN
6.1 ARCHITECTURE DIAGRAM
[Architecture: a client logs in to the local server; after 1) verification/authentication
(2: unauthorized attempts trigger alert generation), 3) a testing image is submitted to the
server and faces are detected, 4) faces are counted, and 5) filtering techniques are applied to
the image; image comparison then produces the final result.]
6.5 COLLABORATION DIAGRAM
[Collaboration diagram: Login, Face Detection, Face Counting, Filtering Techniques, Image
Comparison, and Server objects exchange the messages 1: Verification/Authentication,
2: Unauthorized, 3: Testing Image, 4: Face Counting, and 7: Training Image.]
6.7 CLASS DIAGRAM
A class diagram in the Unified Modeling Language (UML) is a type of static structure
diagram that describes the structure of a system by showing the system's classes, their
attributes, and the relationships between the classes.
[Classes: FaceDetection (Harscade(), Frame Grabber()); Face Recognition / PRS (Infrared(),
thermal(), forensic sketch(), viewed sketch()); Counting (detected-face count); Comparison
(global and local DB comparison: Harscade detection(), Getfiles(), getfiles_db()); Multiple
Face Rec.]
CHAPTER-7
TECHNIQUES AND ALGORITHM
Techniques & Algorithms:
AdaBoost Algorithm
BHOG is a feature descriptor used in computer vision and image processing for the purpose
of object detection. This method is similar to edge orientation histograms and shape contexts,
but differs in that it is computed on a dense grid of uniformly spaced cells and uses
overlapping local contrast normalization for improved accuracy.
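A minimal sketch of the BHOG binarization step, in Python for brevity: gradients over one
cell are voted into orientation bins, and each bin is then set to 1 if it exceeds the mean bin
value. The cell contents and n_bins value are illustrative choices; a full detector would add
dense cells and block normalization as described above.

```python
import math

def bhog_bits(cell, n_bins=8):
    """Binary histogram of oriented gradients for one grayscale cell.

    cell: 2D list of pixel intensities. Returns a tuple of 0/1 values,
    one per orientation bin: 1 if the bin exceeds the mean bin value.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]     # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi       # unsigned orientation
            b = min(int(ang / math.pi * n_bins), n_bins - 1)
            hist[b] += mag                           # magnitude-weighted vote
    mean = sum(hist) / n_bins
    return tuple(1 if v > mean else 0 for v in hist)

# vertical edge: the horizontal-gradient bin dominates
cell = [[0, 0, 255, 255]] * 4
print(bhog_bits(cell))
```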
The Support Vector Machine (SVM) classifier is a binary classifier that looks for an optimal
hyperplane as a decision function. Once trained on images containing some particular object,
the SVM classifier can make decisions regarding the presence of that object, such as a human
being, in additional test images.
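To make the hyperplane idea concrete, here is a toy linear classifier trained with sub-gradient
descent on the hinge loss, the objective underlying linear SVMs. The 2D data, learning rate,
and epoch count are invented for illustration; a real system would train a mature SVM
implementation on BHOG features.

```python
def train_linear_svm(samples, labels, lr=0.1, lam=0.01, epochs=200):
    """Toy linear SVM: hinge-loss sub-gradient descent in 2D."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):     # y is +1 or -1
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:
                # sample inside the margin: hinge step plus L2 shrinkage on w
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # correctly classified with margin: regularization only
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def svm_predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# +1 = "object present", -1 = "object absent" (synthetic 2D features)
X = [(2, 2), (3, 3), (2, 3), (-2, -2), (-3, -1), (-1, -3)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print([svm_predict(w, b, x) for x in X])
```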
Ada-boost Algorithm:
This algorithm is used to detect faces quickly. It serves the same classification purpose as the
support vector machine.
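A compact sketch of AdaBoost with one-dimensional threshold stumps: each round picks the
stump with the lowest weighted error, then re-weights the samples it misclassified so later
rounds concentrate on them. The toy feature values are invented; Viola-Jones-style fast face
detectors boost Haar-feature stumps in the same way.

```python
import math

def train_adaboost(xs, ys, rounds=3):
    """AdaBoost over threshold stumps on 1-D features (toy sketch)."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []                       # list of (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for thr in xs:               # candidate thresholds from the data
            for pol in (1, -1):
                preds = [pol if x >= thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = max(err, 1e-10)        # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((thr, pol, alpha))
        # boost the weight of misclassified samples, then renormalize
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def ada_predict(model, x):
    score = sum(a * (pol if x >= thr else -pol) for thr, pol, a in model)
    return 1 if score >= 0 else -1

xs = [1, 2, 3, 8, 9, 10]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([ada_predict(model, x) for x in xs])
```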
LGP assigns one if a neighboring gradient of a given pixel is greater than the average of its
eight neighboring gradients, and zero otherwise, which makes it robust to local intensity
variations along edge components.
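The LGP encoding defined in the abstract (a neighbor's bit is 1 when its gradient exceeds the
mean of the eight neighboring gradients) can be sketched for a single 3x3 patch as follows.
The gradient is taken here as the absolute intensity difference from the center pixel, and the
patch values are invented for illustration.

```python
def lgp_code(patch):
    """Local Gradient Pattern code for the center pixel of a 3x3 patch."""
    c = patch[1][1]
    # eight neighbors in clockwise order starting from the top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    grads = [abs(patch[y][x] - c) for y, x in coords]
    mean = sum(grads) / 8.0
    code = 0
    for g in grads:
        code = (code << 1) | (1 if g > mean else 0)   # bit = gradient > mean
    return code

patch = [[90, 90, 90],
         [90, 90, 10],
         [90, 90, 90]]
print(lgp_code(patch))   # only the right neighbor's gradient exceeds the mean
```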
LBP is a simple yet very efficient texture operator which labels the pixels of an image by
thresholding the neighborhood of each pixel and considering the result as a binary number.
Due to its discriminative power and computational simplicity, the LBP texture operator has
become a popular approach in various applications. It can be seen as a unifying approach to
the traditionally divergent statistical and structural models of texture analysis.
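For comparison with LGP, here is a minimal LBP sketch for one 3x3 patch: raw neighbor
intensities are thresholded against the center pixel rather than gradients against their mean.
The patch values are invented for illustration.

```python
def lbp_code(patch):
    """Local Binary Pattern code for the center pixel of a 3x3 patch."""
    c = patch[1][1]
    # eight neighbors in clockwise order starting from the top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for y, x in coords:
        code = (code << 1) | (1 if patch[y][x] >= c else 0)  # bit = >= center
    return code

patch = [[5, 9, 1],
         [4, 6, 7],
         [2, 3, 8]]
print(lbp_code(patch))
```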
CHAPTER-8
MODULES
Modules:
Credential Creation
Image Filter
Image Comparison
Credential Creation:
Custom authentication schemes should set the Authenticated property to true to indicate that
a user has been authenticated. When a user submits his or her login information, the Login
control first raises the LoggingIn event, then the Authenticate event, and finally the LoggedIn
event.
This module performs normal human face detection, counting, and recognition in video as
well as in live streaming using LBP, LGP, and the trellis optimization algorithm.
Image Filter
Four types of filters are used
1. Near infrared
2. Thermal infrared
3. Viewed sketch
4. Forensic sketch
Near Infrared:
The use of near infrared (NIR) imaging brings a new dimension to face detection and
recognition; an NIR-based face detection method is presented.
Thermal Infrared:
Thermal infrared imaging enables accurate identification under variable illumination
conditions.
Forensic Sketch:
Forensic sketching is a forensic technique that has been routinely used in criminal investigations.
Viewed Sketch:
The face recognition algorithm offers a novel way of helping criminal searches by accurately
matching facial features against the viewed sketch.
Image Comparison:
It compares the frames to find the exact human and his action.
CHAPTER-9
TESTING AND IMPLEMENTATION
9.1 TESTING:
The goal of software testing is to convince system developers and customers that the software
is good enough for operational use. Testing is a process intended to build confidence in the
software, and it must be planned and conducted systematically. Testing covers both
verification and validation.
Alpha Testing
Beta Testing
WHITE BOX TESTING:
It is also called glass-box testing. It is a test-case design method that uses the control
structure of the procedural design to derive test cases. Using white box testing methods, the
software engineer can derive test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
BLACK BOX TESTING:
It is also called behavioral testing. It focuses on the functional requirements of the software.
Black box testing enables a software engineer to derive sets of input conditions that will fully
exercise all functional requirements for a program.
ALPHA TESTING:
Alpha testing is the software prototype stage when the software is first able to run. It will not
have all the intended functionality, but it will have core functions and will be able to accept
inputs and generate outputs. An alpha test usually takes place in the developer's offices on a
separate system.
BETA TESTING:
The beta test is a live application of the software in an environment that cannot be controlled
by the developer. The beta test is conducted at one or more customer sites by the end user of
the software.
UNIT TESTING:
In this testing, we test each module individually and then integrate it with the overall system.
Unit testing focuses verification efforts on the smallest unit of software design: the module.
This is also known as module testing. Each module of the system is tested separately. This
testing is carried out during the programming stage itself. In this testing step, each module is
found to be working satisfactorily as regards the expected output from the module. There are
also validation checks for some fields. It is very easy to find and debug errors in the system.
VALIDATION TESTING:
At the culmination of black box testing, the software is completely assembled as a package,
interfacing errors have been uncovered and corrected, and a final series of software tests, that
is, validation tests, begins. Validation testing can be defined in many ways, but a simple
definition is that validation succeeds when the software functions in a manner that can be
reasonably expected by the customer. After validation tests have been conducted, one of two
possible conditions exists.
CHAPTER-10
CONCLUSION
We conclude that the proposed local transform features and their hybrid feature are very
effective for face detection in terms of performance and operating speed using LBP, LGP,
and BHOG. The trellis optimization method is used to estimate the count of faces. The
proposed method leads to excellent matching accuracies across four different HFR scenarios
(near infrared, thermal infrared, viewed sketch, and forensic sketch). Results were compared
against a leading commercial face recognition engine.
CHAPTER-11
REFERENCES
3. X. Chen, P. J. Flynn, and K. W. Bowyer, "IR and Visible Light Face Recognition,"
Computer Vision and Image Understanding, vol. 99, no. 3, pp. 332-358, 2005.
4. J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face Recognition using Kernel
Direct Discriminant Analysis Algorithms," IEEE Transactions on Neural Networks, vol. 14,
no. 1, pp. 117-126, 2003.
5. D. Pereira, "Face Recognition using Uncooled Infrared Imaging," Electrical Engineer
Thesis, Naval Postgraduate School, Monterey, CA, 2002.
6. C. K. Lee, "Infrared Face Recognition," MSEE Thesis, Naval Postgraduate School,
Monterey, CA, 2004.