
BIOMETRICS

is the process by which a person’s unique physical and other traits are detected and recorded by an
electronic device or system as a means of confirming identity. The term “biometrics” derives from the
word “biometry”, which refers to the statistical analysis of biological observations and phenomena.
Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than
token- and knowledge-based methods, such as identity cards and passwords.
Biometric identifiers, or modalities, are often categorized as either “physiological” or “behavioral”.
Physiological biometric identifiers are related to a person’s physicality and include: fingerprint
recognition, hand geometry, odour/scent, iris scans, DNA, palm print and facial recognition.
Behavioral characteristics are related to the pattern of behavior of a person and include: keystroke
dynamics, gait analysis, voice recognition, mouse use characteristics, signature analysis and
cognitive biometrics.
Biometric technologies are systems or applications that are designed to employ biometric data
derived from biometric identifiers or modalities. A biometric system is an automated process that: i)
collects or captures biometric data via a biometric identification device, such as an image scanner for
fingerprints or palm vein patterns or a camera to collect facial and iris scans, ii) extracts the data from
the actual submitted sample, iii) compares the scanned data against those captured for reference, iv)
matches the submitted sample with templates and v) determines or verifies whether the identity of the
biometric data holder is authentic.
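The five-step flow above can be sketched as a toy verification pipeline. The feature extraction and the cosine-similarity matcher below are illustrative stand-ins, not any particular vendor's method, and the sample vectors are invented:

```python
# Minimal sketch of the five-step biometric verification flow described above.
# Short feature vectors stand in for "biometric data"; names are illustrative.
import math

def extract_features(raw_sample):
    # Step ii: reduce the raw sample to a fixed-length feature vector.
    # Here we simply normalize the values; real systems use far richer features.
    norm = math.sqrt(sum(x * x for x in raw_sample)) or 1.0
    return [x / norm for x in raw_sample]

def match_score(probe, template):
    # Steps iii-iv: compare the probe against a stored template.
    # Cosine similarity of normalized vectors reduces to a dot product.
    return sum(p * t for p, t in zip(probe, template))

def verify(raw_sample, template, threshold=0.95):
    # Step v: accept or reject the claimed identity.
    return match_score(extract_features(raw_sample), template) >= threshold

enrolled = extract_features([3.0, 4.0, 0.0])   # captured at enrollment (step i)
print(verify([3.1, 3.9, 0.1], enrolled))       # genuine attempt -> True
print(verify([0.0, 1.0, 5.0], enrolled))       # impostor attempt -> False
```

A real system would add liveness checks and template protection; this sketch only shows how capture, extraction, comparison, matching and decision fit together.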
Biometric technologies therefore consist of both hardware and software. A biometric identification
device is hardware that gathers, reads and compares biometric data. Biometric data is a sample taken from an individual that is unique to that person. Software embedded within biometric
technologies includes a biometric engine that processes gathered biometric data. The software
typically works in tandem with the hardware to operate the biometric data capture process, extract
the data, and undertake comparison, including data matching.
Biometric technologies can also be classified further according to the type of biometrics being used in
the system. The technologies are typically used to either identify persons and their characteristic
against a database, such as criminal records, or to authenticate the identity of persons to grant them
access to computing resources, devices or facilities.

Facial Recognition:
People typically use faces to recognize other individuals. Advancements in computing over the past few decades have enabled computers to perform similar recognition automatically. Early face recognition algorithms
used simple geometric models, but the recognition process has now matured into a science of
sophisticated mathematical representations and matching processes. Major advancements and
initiatives in the past 10 to 15 years have propelled face recognition technology. Face recognition can
be used for both verification and identification.
Automated face recognition is a relatively new concept. Developed in the 1960s, the first semi-
automated system for face recognition required the administrator to locate features (such as eyes,
ears, nose, and mouth) on the photographs before it calculated distances and ratios to a common
reference point, which were then compared to reference data. In the 1970s, specific subjective
markers were used such as hair color and lip thickness to automate the recognition.
The problem with both of these early solutions was that the measurements and locations were
manually computed. In 1988, principal component analysis, a standard linear algebra technique, was applied to the face recognition problem. This was considered
somewhat of a milestone as it showed that less than 100 values were required to accurately code a
suitably aligned and normalized face image.
In 1991, scientists discovered that eigenface techniques could be used to detect faces in images, a discovery that enabled reliable real-time automated face recognition systems. Eigenfaces are mathematical constructs that reduce the statistical complexity of face image
representation. The technology first captured the public’s attention from the media reaction to a trial
implementation at the January 2001 Super Bowl, which captured surveillance images and compared
them to a database of digital mugshots. This demonstration initiated much-needed analysis on how to
use the technology to support national needs while being considerate of the public’s social and
privacy concerns. Today, face recognition technology is being used to combat passport fraud,
support law enforcement, identify missing children, and minimize benefit / identity fraud.
There are two predominant approaches to the face recognition problem: geometric (feature based)
and photometric (view based). As researcher interest in face recognition continued, many different
algorithms were developed, three of which have been well studied in face recognition literature:
Principal Components Analysis (PCA), Linear Discriminant Analysis (LDA), and Elastic Bunch Graph
Matching (EBGM).

Principal Components Analysis (PCA), commonly referred to as the use of eigenfaces, is the
technique that was pioneered in 1988. With PCA, the probe and gallery images must be the same
size and must first be normalized to line up the eyes and mouth of the subjects within the images.
The PCA approach is then used to reduce the dimension of the data by means of data compression
basics and reveals the most effective low dimensional structure of facial patterns. This reduction in
dimensions removes information that is not useful and precisely decomposes the face structure into
orthogonal (uncorrelated) components known as eigenfaces. Each face image may be represented
as a weighted sum (feature vector) of the eigenfaces, which are stored in a 1D array. A probe image
is compared against a gallery image by measuring the distance between their respective feature
vectors. The PCA approach typically requires the full frontal face to be presented each time;
otherwise the technique performs poorly. The primary advantage of this technique is that it can reduce the data needed to identify an individual to one one-thousandth of the data originally presented.
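The eigenface steps above can be sketched compactly in NumPy. Tiny random arrays stand in for aligned and normalized face images; the gallery size, image size, and number of retained components are arbitrary choices for illustration:

```python
# Hedged sketch of the PCA/eigenface pipeline described above.
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.random((5, 16))        # 5 "faces", each a flattened 4x4 image

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face

# SVD of the centered data yields the orthogonal (uncorrelated) components,
# i.e. the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:4]                  # keep a low-dimensional basis

def project(image):
    # Represent a face as a weighted sum (feature vector) of the eigenfaces.
    return eigenfaces @ (image - mean_face)

gallery_vecs = np.array([project(face) for face in gallery])

def identify(probe):
    # Nearest gallery face by the distance between feature vectors.
    distances = np.linalg.norm(gallery_vecs - project(probe), axis=1)
    return int(np.argmin(distances))

probe = gallery[2] + rng.normal(0.0, 0.01, 16)   # noisy re-capture of face 2
print(identify(probe))                            # -> 2
```

The dimension reduction here is 16 values down to 4 per face; on real images the same construction reduces thousands of pixels to under a hundred coefficients, as the text notes.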

Linear Discriminant Analysis (LDA) is a statistical approach for classifying samples of unknown
classes based on training samples with known classes. This technique aims to maximize between-
class (i.e., across users) variance and minimize within-class (i.e., within user) variance. When dealing with high-dimensional face data, this technique faces the small sample size problem, which arises when the number of available training samples is small compared to the dimensionality of the sample space.
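The "maximize between-class, minimize within-class variance" idea can be shown with a small two-class Fisher discriminant in NumPy. The two synthetic 2-D "users" below are invented for the sketch and are far lower-dimensional than real face data:

```python
# A small two-class Fisher/LDA sketch: the projection direction w maximizes
# between-class variance relative to within-class variance.
import numpy as np

rng = np.random.default_rng(1)
user_a = rng.normal([0.0, 0.0], 0.3, (50, 2))   # training samples from user A
user_b = rng.normal([2.0, 1.0], 0.3, (50, 2))   # training samples from user B

mean_a, mean_b = user_a.mean(axis=0), user_b.mean(axis=0)

# Within-class scatter: sum of the per-class scatter matrices.
sw = np.cov(user_a.T) * (len(user_a) - 1) + np.cov(user_b.T) * (len(user_b) - 1)

# Fisher's direction: w is proportional to Sw^-1 (mean_b - mean_a).
w = np.linalg.solve(sw, mean_b - mean_a)
threshold = w @ (mean_a + mean_b) / 2           # midpoint decision boundary

def classify(sample):
    return "B" if w @ sample > threshold else "A"

print(classify([0.1, -0.2]))   # near user A's mean -> 'A'
print(classify([1.9, 1.1]))    # near user B's mean -> 'B'
```

The small sample size problem appears exactly when `sw` becomes singular, i.e. when there are fewer training samples than dimensions; with 100 samples in 2-D the solve is well conditioned.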

Elastic Bunch Graph Matching (EBGM) relies on the concept that real face images have many
nonlinear characteristics that are not addressed by the linear analysis methods discussed earlier,
such as variations in illumination (outdoor lighting vs. indoor fluorescents), pose (standing straight vs.
leaning over) and expression (smile or frown). A Gabor wavelet transform creates a dynamic link
architecture that projects the face onto an elastic grid. The Gabor jet is a node on the elastic grid,
notated by circles on the image below, which describes the image behavior around a given pixel. It is
the result of a convolution of the image with a Gabor filter, which is used to detect shapes and to
extract features using image processing. [A convolution expresses the amount of overlap from
functions, blending the functions together.] Recognition is based on the similarity of the Gabor filter
response at each Gabor node. This biologically based method using Gabor filters is a process
executed in the visual cortex of higher mammals. The difficulty with this method is the requirement of
accurate landmark localization, which can sometimes be achieved by combining PCA and LDA
methods.
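The Gabor-filter convolution at the heart of EBGM can be illustrated directly. The kernel size, wavelength and the synthetic striped image below are invented for the sketch; a real system would sample many orientations and scales at each grid node to form the Gabor jet:

```python
# Illustrative sketch: building one Gabor filter and convolving it with a
# synthetic image patch, as in computing a single Gabor response at a node.
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, wavelength=4.0):
    # Real part of a Gabor filter: a Gaussian envelope times a cosine wave.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def response_at(image, kernel, row, col):
    # The convolution expresses the overlap of the kernel with the image
    # neighborhood around (row, col).
    half = kernel.shape[0] // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    return float(np.sum(patch * kernel))

# Vertical stripes with the same wavelength as the filter give a strong
# response for the matching orientation and a weak one for the orthogonal one.
image = np.cos(2 * np.pi * np.arange(32) / 4.0)[np.newaxis, :].repeat(32, axis=0)
strong = response_at(image, gabor_kernel(theta=0.0), 16, 16)
weak = response_at(image, gabor_kernel(theta=np.pi / 2), 16, 16)
print(abs(strong) > abs(weak))   # the filter tuned to the stripe orientation wins
```

Recognition in EBGM then compares such filter responses node by node across the elastic grid, which is why accurate landmark placement matters so much.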
Signature Recognition:
Writing is a form of human physical expression but concurrently an acquired skill. Signature recognition
requires an individual to supply a sample of text which serves as a base of measurement of their
writing. The purpose of the signature recognition process is to identify the writer of a given sample,
while the purpose of a signature verification process is to confirm or reject the sample. Writing
samples can be examined by way of two separate techniques.
The first technique is static. It requires the individual to supply their signature on paper, where it will
be digitized through an optical scanner or camera. The data are then run through a software algorithm that recognizes the text by analyzing its shape. This technique is referred to as an “off-line” mode of recognition.
Off-line handwriting recognition is an important form of biometric identification because signatures are
a socially accepted identification method which are commonly used for bank, credit card and various
business transactions. Off-line signature processing is typically used in office automation systems
that validate cheques, credit cards, contracts and historical documents.
Static, off-line handwriting recognition is performed after a text sample has been completed and
digitally captured. The optically captured image data is then converted into a bit pattern. Off-line
signature processing analyzes nearly 40 features in total, including the center of gravity, edges, and curves, for authentication. Off-line signature recognition can be a challenging task due to normal
variability in signatures and the fact that dynamic information regarding the pen path is not available.
Moreover, sample data is normally limited to only a small number of signatures per individual. Shape
matching is normally treated by determining and matching key points so as to avoid the problems
associated with the detection and parameterization of curves.
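One of the off-line features named above, the center of gravity, is easy to sketch. The tiny binarized bitmap below (1 = ink, 0 = background) is invented; real systems combine this with dozens of other shape features:

```python
# Computing the center of gravity of a binarized signature image, one of the
# ~40 off-line features mentioned above. The bitmap is a toy example.
import numpy as np

signature = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
])

rows, cols = np.nonzero(signature)               # coordinates of ink pixels
center_of_gravity = (rows.mean(), cols.mean())   # average ink position
print(center_of_gravity)
```

Because such features describe only static shape, two signatures with similar outlines but very different pen dynamics can still produce similar values, which is part of why off-line recognition is challenging.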
The second technique for signature recognition is dynamic. Dynamic signature recognition is a biometric
modality that uses, for recognition purposes, the anatomic and behavioral characteristics that an
individual exhibits when signing his or her name (or other phrase).
Dynamic signature devices should not be confused with off-line electronic signature capture systems
that are used to capture a graphic image of the signature and are common in locations where
merchants are capturing signatures for transaction authorizations.
Data such as the dynamically captured direction, stroke, pressure, and shape of an individual’s
signature can enable handwriting to be a reliable indicator of an individual’s identity (i.e.,
measurements of the captured data, when compared to those of matching samples, are a reliable
biometric for writer identification.)
The first signature recognition system was developed in 1965. Dynamic signature recognition
research continued in the 1970s focusing on the use of static or geometric characteristics (what the
signature looks like) rather than dynamic characteristics (how the signature was made). Interest in
dynamic characteristics surged with the availability of better acquisition systems accomplished
through the use of touch sensitive technologies.
In 1977, a patent was awarded for a “personal identification apparatus” that was able to acquire
dynamic pressure information.
Dynamic signature recognition uses multiple characteristics in the analysis of an individual’s
handwriting. These characteristics vary in use and importance from vendor to vendor and are
collected using contact sensitive technologies, such as PDAs or digitizing tablets, which acquire the
signature in real time.
Most of the features used are dynamic characteristics rather than static and geometric
characteristics, although some vendors also include these characteristics in their analyses. Common
dynamic characteristics include the velocity, acceleration, timing, pressure, and direction of the
signature strokes, all analyzed in the X, Y, and Z directions.
The X and Y positions are used to show the changes in velocity in the respective directions while the Z
direction is used to indicate changes in pressure with respect to time.
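The dynamic characteristics above fall out of simple finite differences over a time-stamped pen trace. The sample points and the 100 Hz rate below are invented; real tablets deliver (x, y, pressure) tuples at a device-specific rate:

```python
# Deriving velocity, acceleration, and pressure change from a toy pen trace.
import numpy as np

# Columns: x position, y position, pen pressure ("Z"); one row per sample,
# assumed captured at a uniform 100 Hz rate (dt = 0.01 s).
trace = np.array([
    [0.0, 0.0, 0.2],
    [1.0, 0.5, 0.4],
    [2.5, 1.5, 0.5],
    [4.5, 2.0, 0.3],
])
dt = 0.01

vel = np.diff(trace[:, :2], axis=0) / dt        # X and Y velocity
acc = np.diff(vel, axis=0) / dt                 # X and Y acceleration
pressure_rate = np.diff(trace[:, 2]) / dt       # change in pressure over time

print(vel[0])            # X and Y velocity between the first two samples
print(pressure_rate[0])  # pressure rising at the start of the stroke
```

Each derived sequence (velocity, acceleration, pressure rate) becomes a feature stream that the matcher compares against the enrolled signature's streams.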
Some dynamic signature recognition algorithms incorporate a learning function to account for the
natural changes or drifts that occur in an individual’s signature over time. The most popular pattern
recognition techniques applied for signature recognition are dynamic time warping, hidden Markov
models and vector quantization. Combinations of different techniques also exist.
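Of the techniques just named, dynamic time warping is the simplest to sketch: it aligns two feature sequences written at different speeds. The 1-D toy "pressure profiles" below are invented for illustration:

```python
# A compact dynamic time warping (DTW) sketch for signature matching.

def dtw_distance(a, b):
    # dp[i][j] = cost of the best alignment of a[:i] with b[:j].
    inf = float("inf")
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip in a
                                  dp[i][j - 1],      # skip in b
                                  dp[i - 1][j - 1])  # match step
    return dp[len(a)][len(b)]

genuine = [0.1, 0.5, 0.9, 0.5, 0.1]
slower = [0.1, 0.1, 0.5, 0.9, 0.9, 0.5, 0.1]   # same shape, written slower
forgery = [0.9, 0.1, 0.9, 0.1, 0.9]

print(dtw_distance(genuine, slower))    # small: warping absorbs the speed change
print(dtw_distance(genuine, forgery))   # large: the shapes genuinely differ
```

This tolerance to timing variation is exactly what helps with the intra-class variability of signatures: the same person rarely signs at the same speed twice.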
The characteristics used for dynamic signature recognition are almost impossible to replicate. Unlike
a graphical image of the signature, which can be replicated by a trained human forger, a computer
manipulation, or a photocopy, dynamic characteristics are complex and unique to the handwriting
style of the individual. Despite this major strength of dynamic signature recognition, the
characteristics historically have a large intra-class variability (meaning that an individual’s own
signature may vary from collection to collection), often making dynamic signature recognition difficult.
Recent research has reported that static writing samples can be successfully analyzed to overcome
this issue.

Fingerprint Matching:
Over the years, fingerprint identification has become one of the most well-known and publicized
biometric modalities. Because of their uniqueness and consistency over time, fingerprints have been
used for identification for over a century. The first recorded use of fingerprints for identification was in
1858. With the advances in computing capability that began in the 1960s, fingerprint identification has become a highly
automated technique. Fingerprints are examined for external characteristics, specifically the friction
ridge patterns unique to every individual.
Fingerprint identification is popular because of the inherent ease of acquisition and the numerous sources (10 fingers) available for collection. Fingerprint identification is mainly used by law
enforcement to catalogue criminals and by immigration agencies to track and issue travel documents.
The practice of using fingerprints as a method of identifying individuals has been in use since 1888
when Sir Francis Galton defined some of the points or characteristics from which fingerprints can be
identified. Consequently, “Galton Points” are the foundation for the science of fingerprint
identification, which has expanded and transitioned over the past century.
Fingerprint identification began its transition to automation in the late 1960s along with the
emergence of computing technologies. With the advent of computers, a subset of the Galton Points,
referred to as minutiae, has been utilized to develop automated fingerprint technology.
A fingerprint usually appears as a series of dark lines that represent the high, peaking portions of the friction ridge skin, while the valleys between these ridges appear as white space and are the low, shallow portions of the friction ridge skin. Fingerprint identification is based primarily on the minutiae,
or the location and direction of the ridge endings and bifurcations (splits) along a ridge path.
A variety of sensor types — optical, capacitive, ultrasound, and thermal — are used for collecting the
digital image of a fingerprint surface. Optical sensors take an image of the fingerprint and are the most common sensors in use today.
The two main categories of fingerprint matching techniques are minutiae-based matching and pattern
matching. Pattern matching simply compares two images to see how similar they are. Pattern
matching is usually used in fingerprint systems to detect duplicates. The most widely used
recognition technique, minutiae-based matching, relies on the minutiae points: specifically the
location and direction of each point.
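Minutiae-based matching on "location and direction of each point" can be sketched as a greedy pairing under tolerance thresholds. The point lists, tolerances, and scoring rule below are invented for illustration; deployed matchers also handle rotation, translation, and partial prints:

```python
# Illustrative minutiae-based matching: each minutia is (x, y, direction in
# degrees); two minutiae agree when both location and direction are close.
import math

def minutiae_match_score(probe, template, dist_tol=5.0, angle_tol=15.0):
    matched = 0
    used = set()
    for px, py, pa in probe:
        for i, (tx, ty, ta) in enumerate(template):
            if i in used:
                continue
            close = math.hypot(px - tx, py - ty) <= dist_tol
            angle_diff = abs((pa - ta + 180) % 360 - 180)   # wraps around 360
            if close and angle_diff <= angle_tol:
                matched += 1
                used.add(i)
                break
    # Normalize by the larger set so spurious extra minutiae lower the score.
    return matched / max(len(probe), len(template))

template = [(10, 12, 30), (40, 8, 125), (25, 33, 270)]
probe_same = [(11, 13, 35), (39, 9, 120), (26, 32, 265)]     # slight shifts
probe_other = [(60, 60, 10), (5, 50, 200), (45, 45, 90)]

print(minutiae_match_score(probe_same, template))    # -> 1.0
print(minutiae_match_score(probe_other, template))   # -> 0.0
```

A pattern-matching system, by contrast, would compare whole images (e.g. by correlation) rather than pairing discrete ridge endings and bifurcations.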

Multi-Factor Authentication:
Multi-factor authentication is a method of multi-faceted access control in which a user is granted access only after successfully presenting authentication factors from at least two of the three categories:
• knowledge factors (“things only the user knows”), such as passwords or passcodes;
• possession factors (“things only the user has”), such as ATM cards or hardware tokens; and
• inherence factors (“things only the user is”), such as biometrics.
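The category split above can be sketched as a toy two-factor check combining a knowledge factor (a password hash) with a possession factor (a one-time code from a token). The secrets and the toy code generator are invented for illustration; real deployments use salted password hashing and standard TOTP (RFC 6238):

```python
# Hedged sketch: access requires BOTH an independent knowledge factor and a
# possession factor. Secrets and the code scheme here are toy examples.
import hashlib
import hmac

STORED_HASH = hashlib.sha256(b"correct horse").hexdigest()
TOKEN_SECRET = b"device-secret"

def expected_code(interval):
    # Toy one-time code: HMAC of the current time interval, truncated.
    digest = hmac.new(TOKEN_SECRET, str(interval).encode(), "sha256").hexdigest()
    return digest[:6]

def two_factor_check(password, code, interval):
    knows = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), STORED_HASH)
    has = hmac.compare_digest(code, expected_code(interval))
    return knows and has          # both independent factors must pass

interval = 123456                 # stands in for the current time window
good_code = expected_code(interval)
print(two_factor_check("correct horse", good_code, interval))   # both factors pass
print(two_factor_check("correct horse", "xxxxxx", interval))    # password alone fails
```

Note that stealing the password alone, or the token alone, is insufficient, which is the point of requiring independent factors.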
Knowledge factors are the most commonly used form of authentication. In this form, the user is
required to prove knowledge of a secret in order to authenticate, such as a password.
A password is a secret word or string of characters that is used for user authentication. This is the
most commonly used mechanism of authentication. Many multi-factor authentication techniques rely
on password as one factor of authentication. Variations include both longer ones formed from
multiple words (a passphrase) and the shorter, purely numeric, personal identification number (PIN)
commonly used for ATM access. Traditionally, passwords are expected to be memorized.
Many secret questions, such as “Where were you born?”, are poor examples of a knowledge factor because they may be known to a wide group of people or be easily researched.
Possession factors include both connected and disconnected tokens. Connected tokens are devices
that are physically connected to the computer to be used, and transmit data automatically. There are
a number of different types, including card readers, wireless tags and USB tokens. Disconnected
tokens have no connections to the client computer. They typically use a built-in screen to display the
generated authentication data, which is manually typed in by the user.
Inherence factors are usually associated with the user, and typically include biometric methods,
including fingerprint readers, retina scanners or voice recognition.
Requiring more than one independent factor increases the difficulty of providing false credentials.
Two-factor authentication requires the use of two of three independent authentication factors, as
identified above. The number and the independence of the factors are important, since more independent factors imply a higher probability that the bearer of the identity credential actually holds that identity.
Multi-factor authentication is sometimes confused with “strong authentication”. However, “strong authentication” and “multi-factor authentication” are fundamentally different processes. Soliciting
multiple answers to challenge questions can typically be considered strong authentication, but,
unless the process also retrieves “something the user has” or “something the user is”, it is not
considered multi-factor authentication.

History of Biometrics
Stephen Mayhew
Introduction
The term “biometrics” is derived from the Greek words “bio” (life) and “metrics” (to measure).
Automated biometric systems have only become available over the last few decades, due to
significant advances in the field of computer processing. Many of these new automated techniques,
however, are based on ideas that were originally conceived hundreds, even thousands of years ago.
One of the oldest and most basic examples of a characteristic that is used for recognition by humans
is the face. Since the beginning of civilization, humans have used faces to identify known (familiar)
and unknown (unfamiliar) individuals. This simple task became increasingly more challenging as
populations increased and as more convenient methods of travel introduced many new individuals
into once-small communities. The concept of human-to-human recognition is also seen in
behavioral-predominant biometrics such as speaker and gait recognition. Individuals use these
characteristics, somewhat unconsciously, to recognize known individuals on a day-to-day basis.
Other characteristics have also been used throughout the history of civilization as a more formal
means of recognition. Some examples are:
• In a cave estimated to be at least 31,000 years old, the walls are adorned with paintings believed to
be created by prehistoric men who lived there. Surrounding these paintings are numerous handprints
that are felt to “have acted as an unforgettable signature” of its originator.
• There is also evidence that fingerprints were used as a person’s mark as early as 500 B.C.
“Babylonian business transactions are recorded in clay tablets that include fingerprints.”
• Joao de Barros, a Portuguese explorer and writer, wrote that early Chinese merchants used
fingerprints to settle business transactions. Chinese parents also used fingerprints and footprints to
differentiate children from one another.
• In early Egyptian history, traders were identified by their physical descriptors to differentiate
between trusted traders of known reputation and previous successful transactions, and those new to
the market.
• The 14th century Persian book “Jaamehol-Tawarikh” includes comments about the practice of
identifying persons from their fingerprints.
• In 1684 Dr. Nehemiah Grew published friction ridge skin observations in “Philosophical
Transactions of the Royal Society of London” paper.
• Dutch anatomist Govard Bidloo’s 1685 book, “Anatomy of the Human Body” also described friction
ridge skin details.
• In 1686, Marcello Malpighi, an anatomy professor at the University of Bologna, noted fingerprint
ridges, spirals and loops in his treatise.
In 1788, German anatomist and doctor J. C. A. Mayer wrote “Anatomical Copper-plates with
Appropriate Explanations” containing drawings of friction ridge skin patterns, noting that “Although the
arrangement of skin ridges is never duplicated in two persons, nevertheless the similarities are closer among some individuals.” Mayer was the first to declare that friction ridge skin is unique.
By the mid-1800s, with the rapid growth of cities due to the industrial revolution and more productive
farming, there was a formally recognized need to identify people. Merchants and authorities were
faced with increasingly larger and more mobile populations and could no longer rely solely on their
own experiences and local knowledge. Influenced by the writings of Jeremy Bentham and other
Utilitarian thinkers, the courts of this period began to codify concepts of justice that endure with us to
this day. Most notably, justice systems sought to treat first time offenders more leniently and repeat
offenders more harshly. This created a need for a formal system that recorded offenses along with
measured identity traits of the offender. The first of two approaches was the Bertillon system of
measuring various body dimensions, which originated in France. These measurements were written
on cards that could be sorted by height, arm length or any other parameter. This field was called anthropometrics.
The other approach was the formal use of fingerprints by police departments. This process emerged
in South America, Asia, and Europe. By the late 1800s a method was developed to index fingerprints
that provided the ability to retrieve records as Bertillon’s method did but that was based on a more
individualized metric: fingerprint patterns and ridges. The first such robust system for indexing
fingerprints was developed in India by Azizul Haque for Edward Henry, Inspector General of Police,
Bengal, India. This system, called the Henry System, and variations on it are still in use for classifying
fingerprints.
True biometric systems began to emerge in the latter half of the twentieth century, coinciding with the
emergence of computer systems. The nascent field experienced an explosion of activity in the 1990s
and began to surface in everyday applications in the early 2000s.
Timeline of Biometrics History
1858 – First systematic capture of hand images for identification purposes is recorded
Sir William Herschel, working for the Civil Service of India, recorded a handprint on the back of a
contract for each worker to distinguish employees from others who might claim to be employees
when payday arrived. This was the first recorded systematic capture of hand and finger images that
were uniformly taken for identification purposes.
1870 – Bertillon develops anthropometrics to identify individuals
Alphonse Bertillon developed “Bertillonage”, or anthropometrics, a method of identifying individuals based on detailed records of their body measurements, physical descriptions and photographs. Repeat criminal offenders often provided different aliases when arrested. Bertillon noted that although they could change their names, they could not change certain elements of their bodies. Police authorities throughout the world used his system, until its use quickly faded when it was discovered that some people shared the same measurements.
1883 – Twain writes about fingerprints in “Life on the Mississippi”
In “A Thumb-Print and What Came of It,” one of the stories serialized in Mark Twain’s “Life on the Mississippi”, the author wrote about fingerprints and described a process for taking them. In 1894’s “The Tragedy of Pudd’nhead Wilson”, Twain again mentions the use of fingerprints for identification. In the story, a man on trial calls for the comparison of his fingerprints to those left at the crime scene to prove his innocence.
1892 – Galton develops a classification system for fingerprints
Sir Francis Galton wrote a detailed study of fingerprints in which he presented a new classification
system using prints from all ten fingers. The characteristics (minutiae) that Galton used to identify
individuals are still used today. These details are often referred to as Galton’s details.
1896 – Henry develops a fingerprint classification system
Sir Edward Henry, Inspector General of the Bengal Police, was in search of a method of identification
to implement concurrently or to replace anthropometrics. Henry consulted Sir Francis Galton
regarding fingerprinting as a method of identifying criminals. Once the fingerprinting system was
implemented, one of Henry’s workers, Azizul Haque, developed a method of classifying and storing
the information so that searching could be performed easily and efficiently. Sir Henry later established the first British fingerprint files in London. The Henry Classification System, as it came to be
known, was the precursor to the classification system used for many years by the Federal Bureau of
Investigation (FBI) and other criminal justice organizations that perform tenprint fingerprint searches.
In July 1901 the Fingerprint Branch at New Scotland Yard (Metropolitan Police) was created using
the Henry System of Fingerprint Classification.
1903 – NY State Prisons begin using fingerprints
“The New York Civil Service Commission established the practice of fingerprinting applicants to prevent them from having better qualified persons take their tests for them.” This practice was adopted
by the New York state prison system where fingerprints were used “for the identification of criminals
in 1903. In 1904 the fingerprint system accelerated when the United States Penitentiary at
Leavenworth, Kansas, and the St. Louis, Missouri Police Department both established fingerprint
bureaus. During the first quarter of the 20th century, more and more local police identification
bureaus established fingerprint systems. The growing need and demand by police officials for a
national repository and clearinghouse for fingerprint records led to an Act of Congress on July 1,
1921, establishing the Identification Division of the FBI.”
1903 – Bertillon System collapses
Two men, determined later to be identical twins, were sentenced to the US Penitentiary at
Leavenworth, KS, and were found to have nearly the same measurements using the Bertillon system.
Although the basis of this story has been subsequently challenged, the story was used to argue that
Bertillon measurements were inadequate to differentiate between these two individuals.
1936 – Concept of using the iris pattern for identification is proposed
Ophthalmologist Frank Burch proposed the concept of using iris patterns as a method to recognize
an individual.
1960s – Face recognition becomes semi-automated
The first semi-automatic face recognition system was developed by Woodrow W. Bledsoe under
contract to the US Government. This system required the administrator to locate features such as
eyes, ears, nose and mouth on the photographs. This system relied solely on the ability to extract
useable feature points. It calculated distances and ratios to a common reference point that was
compared to the reference data.
1960 – First model of acoustic speech production is created
A Swedish Professor, Gunnar Fant, published a model describing the physiological components of
acoustic speech production. His findings were based on the analysis of x-rays of individuals making
specified phonic sounds. These findings were used to better understand the biological components of
speech, a concept crucial to speaker recognition.
1963 – Hughes research paper on fingerprint automation is published
1965 – Automated signature recognition research begins
North American Aviation developed the first signature recognition system in 1965.
1969 – FBI pushes to make fingerprint recognition an automated process
In 1969, the Federal Bureau of Investigation (FBI) began its push to develop a system to automate its
fingerprint identification process, which was quickly becoming overwhelming and required many man-hours. The FBI contracted the National Institute of Standards and Technology (NIST) to study the
process of automating fingerprint identification. NIST identified two key challenges: (1) scanning
fingerprint cards and identifying minutiae and (2) comparing and matching lists of minutiae.
1970s – Face Recognition takes another step towards automation
Goldstein, Harmon, and Lesk used 21 specific subjective markers such as hair color and lip thickness
to automate face recognition. The problem with both of these early solutions was that the
measurements and locations were manually computed.
1970 – Behavioral components of speech are first modeled
The original model of acoustic speech production, developed in 1960, was expanded upon by Dr.
Joseph Perkell, who used motion x-rays and included the tongue and jaw. The model provided a
more detailed understanding of the complex behavioral and biological components of speech.
1974 – First commercial hand geometry systems become available
The first commercial hand geometry recognition systems became available in the early 1970s,
arguably the first commercially available biometric device after the early deployments of fingerprinting
in the late 1960s. These systems were implemented for three main purposes: physical access
control; time and attendance; and personal identification.
1975 – FBI funds development of sensors and minutiae extracting technology
The FBI funded the development of scanners and minutiae extracting technology, which led to the
development of a prototype reader. At this point, only the minutiae were stored because of the high
cost of digital storage. These early readers used capacitive techniques to collect the fingerprint
characteristics. Over the next decades, NIST focused on and led developments in automatic methods
of digitizing inked fingerprints and the effects of image compression on image quality, classification,
extraction of minutiae, and matching. The work at NIST led to the development of the M40 algorithm,
the first operational matching algorithm used at the FBI. Used to narrow the human search, this
algorithm produced a significantly smaller set of images that were then provided to trained and
specialized human technicians for evaluation. Developments continued to improve the available
fingerprint technology.
1976 – First prototype system for speaker recognition is developed
Texas Instruments developed a prototype speaker recognition system that was tested by the US Air
Force and The MITRE Corporation.
1977 – Patent is awarded for acquisition of dynamic signature information
Veripen, Inc. was awarded a patent for a “Personal identification apparatus” that was able to acquire
dynamic pressure information. This device allowed the digital capture of the dynamic characteristics
of an individual’s signature. The development of this technology led to the testing of
automatic handwriting verification (performed by The MITRE Corporation) for the Electronic Systems
Division of the United States Air Force.
1980s – NIST Speech Group is established
The National Institute of Standards and Technology (NIST) developed the NIST Speech Group to
study and promote the use of speech processing techniques. Since 1996, under funding from the
National Security Agency, the NIST Speech Group has hosted yearly evaluations – the NIST Speaker
Recognition Evaluation Workshop – to foster the continued advancement of the speaker recognition
community.
1985 – Concept that no two irides are alike is proposed
Drs. Leonard Flom and Aran Safir, ophthalmologists, proposed the concept that no two irides are
alike.
1985 – Patent for hand identification is awarded
The commercialization of hand geometry dates to the early 1970s with one of the first deployments at
the University of Georgia in 1974. The US Army began testing hand geometry for use in banking in
about 1984. These deployments predate the concept of using the geometry of a hand for
identification as patented by David Sidlauskas.
1985 – Patent for vascular pattern recognition is awarded to Joseph Rice
The technology uses the subcutaneous blood vessel pattern to achieve recognition.
1986 – Exchange of fingerprint minutiae data standard is published
The National Bureau of Standards (NBS) – now the National Institute of Standards and Technology
(NIST) – published, in collaboration with ANSI, a standard for the exchange of fingerprint minutiae
data (ANSI/NBS-ICST 1-1986). This was the first version of the current fingerprint interchange
standards used by law enforcement agencies around the world today.
1986 – Patent is awarded stating that the iris can be used for identification
Drs. Leonard Flom and Aran Safir were awarded a patent for their concept that the iris could be used
for identification. Dr. Flom approached Dr. John Daugman to develop an algorithm to automate
identification of the human iris.
1988 – First semi-automated facial recognition system is deployed
In 1988, the Lakewood Division of the Los Angeles County Sheriff’s Department began using
composite drawings (or video images) of a suspect to conduct a database search of digitized
mugshots.
1988 – Eigenface technique is developed for face recognition
Kirby and Sirovich applied principal component analysis, a standard linear algebra technique, to the
face recognition problem. This was a milestone because it showed that less than one hundred values
were required to approximate a suitably aligned and normalized face image.
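The Kirby–Sirovich idea can be sketched in a few lines of NumPy, assuming it is available. This is an illustrative example only, not the original implementation: the random vectors below stand in for aligned, normalized face images, and the array sizes are arbitrary.

```python
# Sketch of the eigenface/PCA idea on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_pixels, k = 200, 1024, 50      # keep k << n_pixels coefficients

faces = rng.normal(size=(n_faces, n_pixels))   # stand-in for aligned images
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components ("eigenfaces") via SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:k]                        # top-k components

coeffs = centered @ eigenfaces.T           # k numbers now describe each face
approx = mean_face + coeffs @ eigenfaces   # low-dimensional reconstruction
```

On real, aligned face images the reconstruction error falls off quickly as k grows, which is why fewer than one hundred coefficients can approximate a face.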
1991 – Face detection is pioneered, making real time face recognition possible
Turk and Pentland discovered that while using the eigenfaces techniques, the residual error could be
used to detect faces in images. The result of this discovery meant that reliable real time automated
face recognition was possible. They found that this was somewhat constrained by environmental
factors, but the discovery caused a large spark of interest in face recognition development.
1992 – Biometric Consortium is established within US Government
The National Security Agency initiated the formation of the Biometric Consortium and held its first
meeting in October of 1992. The Consortium was chartered in 1995 by the Security Policy Board,
which was abolished in 2001. Participation in the Consortium was originally limited to government
agencies; members of private industry and academia were limited to attending in an observer
capacity. The Consortium soon expanded its membership to include these communities and
developed numerous working groups to initiate and/or expand efforts in testing, standards
development, interoperability, and government cooperation. With the explosion of biometric activities
in the early 2000s, the activities of these working groups were integrated into other organizations
(such as INCITS, ISO, and the NSTC Subcommittee on Biometrics) in order to expand and
accelerate their activities and impacts. The Consortium itself remains active as a key liaison and
discussion forum between government, industry, and academic communities.
1993 – FacE REcognition Technology (FERET) program is initiated
The FacE REcogntion Technology (FERET) Evaluation was sponsored from 1993-1997 by the
Defense Advanced Research Products Agency (DARPA) and the DoD Counterdrug Technology
Development Program Office in an effort to encourage the development of face recognition
algorithms and technology. This evaluation assessed the prototypes of face recognition systems and
propelled face recognition from its infancy to a market of commercial products.
1994 – First iris recognition algorithm is patented
Dr. John Daugman was awarded a patent for his iris recognition algorithms. Owned by Iridian
Technologies – the successor to IriScan, Inc. – this patent is the cornerstone of most commercial iris
recognition products to date.
1994 – Integrated Automated Fingerprint Identification System (IAFIS) competition is held
The next stage in fingerprint automation occurred at the end of the Integrated Automated Fingerprint
Identification System (IAFIS) competition. The competition identified and investigated three major
challenges: (1) digital fingerprint acquisition, (2) local ridge characteristic extraction, and (3) ridge
characteristic pattern matching. The demonstrated model systems were evaluated based on specific
performance requirements. Lockheed Martin was selected to build the FBI’s IAFIS.
1994 – Palm System is benchmarked
The first known Automated Fingerprint Identification Systems (AFIS) system built to support palm
prints is believed to have been built by a Hungarian company known as RECOWARE Ltd. In late
1994, latent experts from the United States benchmarked this palm system, RECOderm™, in
Hungary and invited RECOWARE Ltd. to the 1995 International Association for Identification (IAI)
conference in Costa Mesa, California. The palm and fingerprint identification technology embedded in
the RECOderm™ System was bought by Lockheed Martin Information Systems in 1997.
1994 – INSPASS is implemented
The Immigration and Naturalization Service Passenger Accelerated Service System (INSPASS) was
a biometrics implementation that allowed travelers to bypass immigration lines at selected airports
throughout the US until it was discontinued in late 2004. Authorized travelers received a card
encoded with their hand geometry information. Rather than being processed by an Immigration
Inspector, INSPASS travelers presented their tokens (cards) with the encoded information and their
hands to the biometric device. Upon verification of the identity claimed, the individual could proceed
to the customs gate, thus bypassing long inspection lines and speeding entry into the US.
1995 – Iris prototype becomes available as a commercial product
The joint project between the Defense Nuclear Agency and IriScan resulted in the availability of the
first commercial iris product.
1996 – Hand geometry is implemented at the Olympic Games
A major public use of hand geometry occurred at the 1996 Atlanta Olympic Games where hand
geometry systems were implemented to control and protect physical access to the Olympic Village.
This was a significant accomplishment because the systems handled the enrollment of over 65,000
people. Over 1 million transactions were processed in a period of 28 days.
1996 – NIST begins hosting annual speaker recognition evaluations
Under funding from the National Security Agency, the National Institute of Standards and Technology
(NIST) Speech Group began hosting yearly evaluations in 1996. The NIST Speaker Recognition
Evaluation Workshop aims to foster the continued advancement of the speaker recognition
community.
1997 – First commercial, generic biometric interoperability standard is published
Sponsored by NSA, the Human Authentication API (HA-API) was published as the first commercial,
generic biometric interoperability standard and focused on easing integration of and allowing for
interchangeability and vendor independence. It was a breakthrough in biometric vendors working
together to advance the industry through standardization and was the precursor to subsequent
biometric standardization activities.
1998 – FBI launches CODIS (DNA forensic database)
The FBI launched the Combined DNA Index System (CODIS) to digitally store, search, and retrieve DNA
markers for forensic law enforcement purposes. Sequencing is a laboratory process taking between
40 minutes and several hours.
1999 – Study on the compatibility of biometrics and machine readable travel documents is
launched
The International Civil Aviation Organization’s (ICAO) Technical Advisory Group on Machine
Readable Travel Documents (TAG/MRTD) initiated a study to determine the “compatibility of
currently available biometric technologies with the issuance and inspection processes relevant to
MRTDs; and quantifying these compatibilities to determine whether one or more technologies
could/should be adopted as the international standard for application in MRTDs.”
1999 – FBI’s IAFIS major components become operational
IAFIS, the FBI’s large-scale ten-fingerprint (open-set) identification system, became operational. Prior
to the development of the standards associated with this system, a fingerprint collected on one
system could not be searched against fingerprints on another system. The development of this
system addressed the issues associated with communication and information exchange between
standalone systems as well as the introduction of a national network for electronic submittal of
fingerprints to the FBI. IAFIS is used for criminal history background checks and identification of
latent prints discovered at crime scenes. This system provides automated tenprint and latent search
capabilities, electronic image storage of fingerprints and facial images, and electronic exchange of
fingerprints and search responses.
2000 – First Face Recognition Vendor Test (FRVT 2000) is held
Multiple US Government agencies sponsored the Face Recognition Vendor Test (FRVT) in 2000.
FRVT 2000 served as the first open, large-scale technology evaluation of multiple commercially
available biometric systems. Additional FRVTs have been held in 2002 and 2006, and the FRVT
model has been used to perform evaluations of fingerprint (2003) and iris recognition (2006). FRVT’s
primary purpose is to evaluate performance on large-scale databases.
2000 – West Virginia University biometrics degree program is established
West Virginia University (WVU) and the FBI, in consultation with professional associations such as
the International Association for Identification, established a bachelor’s degree program in Biometric
Systems in 2000. While many universities have long had biometrics-related courses, this is the first
biometrics-based degree program. WVU encourages program participants to obtain a dual-degree in
Computer Engineering and Biometric Systems as the biometric systems degree is not accredited.
2001 – Face recognition is used at the Super Bowl in Tampa, Florida
A face recognition system was installed at the Super Bowl in January 2001 in Tampa, Florida, in an
attempt to identify “wanted” individuals entering the stadium. The demonstration found no “wanted”
individuals but managed to misidentify as many as a dozen innocent sports fans. Subsequent media
and Congressional inquiries served to introduce both biometrics and its associated privacy concerns
into the consciousness of the general public.
2002 – ISO/IEC standards committee on biometrics is established
The International Organization for Standardization (ISO) established the ISO/IEC JTC1
Subcommittee 37 (JTC1/SC37) to support the standardization of generic biometric technologies. The
Subcommittee develops standards to promote interoperability and data interchange between
applications and systems.
2002 – M 1 Technical Committee on Biometrics is formed
The M1 Technical Committee on Biometrics is the US Technical Advisory Group (TAG) to
JTC1/SC37. This technical committee reports to the InterNational Committee for Information
Technology Standards (INCITS), an accredited organization of the American National Standards
Institute (ANSI),
which facilitates the development of standards among accredited organizations.
2002 – Palm Print Staff Paper is submitted to Identification Services Committee
In April 2002, a Staff Paper on palm print technology and Integrated Automated Fingerprint
Identification System (IAFIS) palm print capabilities was submitted to the Identification Services (IS)
Subcommittee, Criminal Justice Information Services Division (CJIS) Advisory Policy Board (APB).
The Joint Working Group called “for strong endorsement of the planning, costing, and development of
an integrated latent print capability for palms at the CJIS Division of the FBI.” As a result of this
endorsement and other changing business needs for law enforcement, the FBI announced the Next
Generation IAFIS (NGI) initiative. A major component of the NGI initiative is development of the
requirements for and deployment of an integrated National Palm Print Service.
2003 – Formal US Government coordination of biometric activities begins
The National Science & Technology Council, a US Government cabinet-level council, established a
Subcommittee on Biometrics to coordinate biometrics R&D, policy, outreach, and international
collaboration.
2003 – ICAO adopts blueprint to integrate biometrics into machine readable travel documents
On May 28, 2003, the International Civil Aviation Organization (ICAO) adopted a global, harmonized
blueprint for the integration of biometric identification information into passports and other Machine
Readable Travel Documents (MRTDs) … Facial recognition was selected as the globally
interoperable biometric for machine-assisted identity confirmation with MRTDs.
2003 – European Biometrics Forum is established
The European Biometrics Forum is an independent European organisation supported by the
European Commission whose overall vision is to establish the European Union as the World Leader
in Biometrics Excellence by addressing barriers to adoption and fragmentation in the marketplace.
The forum also acts as the driving force for coordination, support and strengthening of the national
bodies.
2004 – US-VISIT program becomes operational
The United States Visitor and Immigrant Status Indication Technology (US-VISIT) program is the
cornerstone of the DHS visa issuance and entry/exit strategy. The US-VISIT program is a
continuum of security measures that begins overseas at the Department of State’s visa issuing posts,
and continues through arrival to and departure from the US. Using biometrics, such as digital inkless
fingerprints and digital photographs, the identity of visitors requiring a visa is now matched at each
step to ensure that the person crossing the US border is the same person who received the visa. For
visa-waiver travelers, the capture of biometrics first occurs at the port of entry to the US. By checking
the biometrics of a traveler against its databases, US-VISIT verifies whether the traveler has
previously been determined inadmissible, is a known security risk (including having outstanding
wants and warrants), or has previously overstayed the terms of a visa. These entry/exit procedures
address the US critical need for tighter security and its ongoing commitment to facilitate travel for the
millions of legitimate visitors welcomed each year to conduct business, learn, see family, or tour the
country.
2004 – DOD implements ABIS
The Automated Biometric Identification System (ABIS) is a Department of Defense (DoD) system
implemented to improve the US Government’s ability to track and identify national security threats.
The associated collection systems include the ability to collect, from enemy combatants, captured
insurgents, and other persons of interest, ten rolled fingerprints, up to five mug shots from varying
angles, voice samples (utterances), iris images, and an oral swab to collect DNA.
2004 – Presidential directive calls for mandatory government-wide personal identification card
for all federal employees and contractors
In 2004, President Bush issued Homeland Security Presidential Directive 12 (HSPD-12) for a
mandatory, government-wide personal identification card that all federal government departments
and agencies will issue to their employees and contractors requiring access to Federal facilities and
systems. Subsequently, Federal Information Processing Standard (FIPS) 201, Personal Identity
Verification (PIV) for Federal Employees and Contractors, specifies the technical and operational
requirements for the PIV system and card. NIST Special Publication 800-76 (Biometric Data
Specification for Personal Identity Verification) is a companion document to FIPS 201 that describes
the acquisition, formatting, and storage of fingerprint images and templates; the collection and
formatting of facial images; and specifications for the biometric devices used to collect and read
fingerprint images. The publication specifies that two fingerprints be stored on the card as minutiae
templates.
2004 – First statewide automated palm print databases are deployed in the US
In 2004, Connecticut, Rhode Island and California established statewide palm print databases that
allow law enforcement agencies in each state to submit unidentified latent palm prints to be searched
against each other’s database of known offenders.
2004 – Face Recognition Grand Challenge begins
The Face Recognition Grand Challenge (FRGC) is a US Government-sponsored challenge problem
posed to develop algorithms to improve specific identified areas of interest in face recognition.
Participating researchers analyze the provided data, try to solve the problem, and then reconvene to
discuss various approaches and their results – an undertaking that is driving technology
improvement. Participation in this challenge demonstrates an expansive breadth of knowledge and
interest in this biometric modality.
2005 – US patent for iris recognition concept expires
The broad US patent covering the basic concept of iris recognition expired in 2005, providing
marketing opportunities for other companies that have developed their own algorithms for iris
recognition. However, the patent on the IrisCodes® implementation of iris recognition developed by
Dr. Daugman will not expire until 2011.
2005 – Iris on the Move is announced at Biometrics Consortium Conference
At the 2005 Biometrics Consortium conference, Sarnoff Corporation (now SRI International)
demonstrated Iris on the Move, a culmination of research and prototype systems sponsored by the
Intelligence Technology Innovation Center (ITIC), and previously by the Defense Advanced Research
Projects Agency (DARPA). The system enables the collection of iris images from individuals walking
through a portal.
2008 – U.S. Government begins coordinating biometric database use
Finger image and facial quality measurement algorithms and related toolset development was
finalized. An iris quality measurement algorithm was also developed.
The FBI and Department of Defense also started working on next generation databases designed to
include iris, face and palm data, in addition to fingerprint records.
The Department of Homeland Security denied an individual entry into the U.S. after cross-matched
biometric data identified the individual as a known or suspected terrorist.
2010 – U.S. national security apparatus utilizes biometrics for terrorist identification
A fingerprint from evidence collected at the believed 9/11 planning location was positively matched to
a GITMO detainee. Other fingerprints were identified from items seized at other locations associated
with 9/11.
2011 – Biometric identification used to identify body of Osama bin Laden
Along with DNA, the CIA used facial recognition technology to identify the remains of Osama bin
Laden with 95 percent certainty.
2013 – Apple integrates fingerprint scanners into consumer-targeted smartphones
Touch ID is a fingerprint recognition feature, designed and released by Apple Inc., that was made
available on the iPhone 5S, the iPhone 6 and iPhone 6 Plus, the iPad Air 2, and the iPad Mini 3.
Touch ID is heavily integrated into iOS devices, allowing users to unlock their device, as well as
make purchases in the various Apple digital media stores (iTunes Store, the App Store, iBookstore),
and to authenticate Apple Pay online or in apps. On announcing the feature, Apple made it clear that
the fingerprint information is stored locally in a secure location on the Apple A7 (APL0698; in the
iPhone 5S and iPad Mini 3), A8 (in the iPhone 6 and iPhone 6 Plus), or A8X (in the iPad Air 2) chip,
rather than remotely on Apple servers or in iCloud, making external access very difficult.

The importance of two-factor authentication


Yutaka Deguchi
This is a guest post by Yutaka Deguchi, general manager of mofiria Corporation.
In recent years, the use of “two-factor authentication” has been increasing as a personal identification
method.
The methods of identity verification are roughly divided into the following three:
1) The user’s knowledge (something you know)
2) The user’s possession (something you have)
3) Characteristics of the users themselves (something you are, i.e. biometrics)
Multi-factor authentication is a method of identity verification that uses more than one of these three
elements; two-factor authentication uses two of them.
For example, an ATM that requires a cash card, which is 2), and a PIN, which is 1), realizes two-factor
authentication.
Even when two levels of authentication are used, if passwords and answers to secret questions are
both 1), the scheme cannot be called two-factor authentication.
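The classification above can be sketched in code. This is an illustrative example only; the credential names and their factor categories are made up for the demo, not taken from any real system.

```python
# Illustrative sketch: classify credentials by factor and check whether a
# combination is truly two-factor. Names here are hypothetical examples.
FACTOR = {
    "password": "knowledge",         # something you know
    "secret_question": "knowledge",
    "pin": "knowledge",
    "cash_card": "possession",       # something you have
    "otp_token": "possession",
    "fingerprint": "inherence",      # something you are (biometrics)
    "vein_pattern": "inherence",
}

def is_two_factor(credentials):
    """True only if the credentials span at least two distinct factors."""
    return len({FACTOR[c] for c in credentials}) >= 2

print(is_two_factor(["cash_card", "pin"]))             # True: the ATM case
print(is_two_factor(["password", "secret_question"]))  # False: both knowledge
```

The second call returns False precisely because two credentials of the same type do not constitute two factors.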
From the viewpoint of preventing spoofing, 1) is very weak against leakage of knowledge, such as
the leakage of passwords stored in a system or the theft of PINs by someone observing the user;
many such incidents have actually occurred. 2) is likewise weak against the theft of a possession,
such as an IC card.
By introducing two-factor authentication, it is possible to prevent spoofing even after the theft of a
password or the theft of a card. Recently, the number of systems that realize high security using two-
factor authentication has been increasing.
Furthermore, some systems where high security is required are obliged to introduce two-factor
authentication.
For example, for systems that access “My Number”, Japan’s national identification number introduced
last year, the Ministry of Internal Affairs and Communications has made two-factor authentication
mandatory.
Among the two-factor authentication elements, 3) – biometric information – is the most convenient for
users, because it does not require them to remember anything or carry anything around.
Moreover, there is essentially no risk of theft or loss.
With these features, biometric authentication is increasingly being adopted for two-factor
authentication.
Among biometrics, vein authentication offers extremely high security against spoofing compared
with fingerprints or faces, because fingerprints and faces can be forged relatively easily from
photographs, whereas vein patterns lie beneath the skin.
That is why vein authentication is increasingly being introduced for two-factor authentication.
In the future, opportunities to see vein authentication will increase, especially in fields requiring high
security such as the educational field dealing with student’s personal information or the medical field
dealing with patient’s personal information.
DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog
are that of the author, and don’t necessarily reflect the views of BiometricUpdate.com.

Keystroke recognition
Rawlson King
Keystroke recognition has been defined by both industry and academics as the process of measuring
and assessing a typing rhythm on digital devices, including computer keyboards, mobile phones,
and touch screen panels.
A noted typing measurement, keystroke recognition, often called “keystroke dynamics”, refers to the
detailed timing information that describes exactly when each key was pressed on a digital device and
when it was released as a person types. Though biometrics tend to rely on physical traits like
fingerprint and face or behavioral characteristics, many consider keystroke dynamics a biometric.
Biometrics Research Group, Inc. defines biometrics as measurable physical and behavioral
properties that make it possible to authenticate an individual person’s identity. Biometrics are
therefore used as a collective term for the technologies used to measure a person’s unique
characteristics and thus authenticate identity.
Keystroke dynamics uses a unique biometric template to identify individuals based on typing pattern,
rhythm and speed. The raw measurements used for keystroke dynamics are known as “dwell time”
and “flight time”. Dwell time is the duration that a key is pressed, while flight time is the duration
between keystrokes. Keystroke dynamics can therefore be described as a software-based algorithm
that measures both dwell and flight time to authenticate identity.
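A minimal sketch of how dwell and flight times could be computed from raw key events. The event format and the timings below are hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch of extracting keystroke-dynamics features.
def keystroke_features(events):
    """events: list of (key, press_time, release_time) in seconds,
    ordered by press time. Returns (dwell_times, flight_times)."""
    # Dwell time: how long each key is held down.
    dwell = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# Example: typing "cat" (timings invented for the demo)
events = [("c", 0.00, 0.08), ("a", 0.15, 0.22), ("t", 0.30, 0.37)]
dwell, flight = keystroke_features(events)
# dwell  ≈ [0.08, 0.07, 0.07] – per-key hold durations
# flight ≈ [0.07, 0.08]       – release-to-next-press gaps
```

A keystroke-dynamics system would compare such feature vectors against a stored template of the user's typical timings to authenticate identity.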
In 2004, researchers at MIT looked at the idea of authentication through keystroke biometrics and
identified a few major advantages and disadvantages to the use of this biometric for authentication.
Firstly, the researchers concluded that measuring keystroke dynamics is an accessible and
unobtrusive biometric as it requires very little hardware besides a keyboard, making it easily
deployable for use in enterprises, vis-a-vis workstation log-ins and other access security points, at
relatively low cost. Secondly, as each keystroke is captured entirely by key pressed and press time,
data can be transmitted over low bandwidth connections.
Other research studies on keystroke biometrics point out further benefits, including its ability to
integrate seamlessly with existing work environments and security systems with minimal alterations
and no additional hardware, along with its non-invasive nature and scalability.
That being said, MIT researchers also identified disadvantages to the use of keystroke dynamics as
an authentication tool. Firstly, typing patterns can be erratic and inconsistent, as factors such as
cramped muscles or sweaty hands can change a person’s typing pattern significantly. Also, MIT
found that typing patterns vary based on the type of keyboard being used, which could significantly
complicate verification.
Despite these concerns, other studies note that the general preference for biometrics in multi-factor
authentication, coupled with high awareness levels, will benefit the continued growth of the keystroke
dynamics market, while its attributes and low prices are expected to drive the use of the technology in
a range of end-use applications.

Electronic ID (eID)
Stephen Mayhew
An eID card is typically a government-issued document for online and offline identification.
The typical electronic identity card has the format of a regular bank card, with printed identity
information on the surface, such as personal details and a photograph, as well as an embedded
microchip. An eID is more reliable than paper-based ID because it provides more data security with
built-in privacy features. The use of digital signatures makes forging an ID harder or even
impossible, since any alteration of the signed data invalidates the issuer’s digital signature.
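The tamper-detection property described above can be illustrated with a small sketch. Real eID cards use asymmetric signatures issued under a national PKI; here a keyed hash (HMAC) stands in for the signature purely to show that altering the signed data invalidates verification, and the key and record fields are invented for the demo.

```python
# Simplified illustration of why tampering invalidates a signed ID record.
# HMAC is a symmetric stand-in for a real asymmetric digital signature.
import hashlib
import hmac

ISSUER_KEY = b"issuing-authority-secret"   # placeholder key for the demo

def sign(record: bytes) -> bytes:
    return hmac.new(ISSUER_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(record), signature)

record = b"name=Jane Doe;dob=1980-01-01"
signature = sign(record)

print(verify(record, signature))           # True: genuine, unaltered card
forged = b"name=Jane Doe;dob=1990-01-01"   # altered birth date
print(verify(forged, signature))           # False: signature no longer matches
```

Because the signature is bound to the exact contents of the record, a forger cannot change any field without producing a verification failure.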
A citizen with an eID has the ability to use it for various services, thus making the card multi-
purpose. One of the unique aspects of the eID is its ability to authenticate the holder not only in the
real world, but also in the virtual world. An eID enables its holders to authenticate themselves
securely when using an online service, while protecting their privacy.
Apart from online authentication, eID cards can provide users with the option to sign electronic
documents with a digital signature for both government and private transactions. An eID is designed
to be a trusted authentication mechanism for citizens and businesses to identify themselves in order
to electronically access services from across government.
Convenience to both users and the authorities is therefore a major advantage of eID systems. In
theory, an eID can be the only piece of identification that a citizen requires for all interactions with
government. The cards can therefore be used for multiple purposes, including as a health insurance
card for countries with socialized medicine, a social security card, a driver’s license, and or general
identification. A further unique feature of an eID system is the ability to provide instant multi-lingual
support through online systems.
eID systems also reduce duplication in terms of the time and effort necessary to issue identification
across different government departments or varying services. They also allow authorities to
centralize the storage of data about citizens, making information footprints accessible from one
credential. This benefit will predictably cause many governments to study, approve and implement
the technology within the next decade.
Greece, New Zealand and Rwanda have been actively studying their implementation. Brazil, France,
Indonesia, Poland, Russia, Malaysia and the Philippines have been actively issuing electronic identity
cards that will replace conventional identity cards.
Supranational institutions such as the European Union have long been developing technology and
policy frameworks for eID deployment. Once an EU framework is completely standardized and
accepted by member states, over 700 million such cards could be issued.
Biometrics to Replace Passwords in 2015
Stephen Mayhew
What are biometrics? By definition, biometrics are both the science and technology of measuring and
analyzing biological data such as DNA, fingerprints, eye retinas and irises, voice patterns, facial
patterns, and hand measurements. This is done mainly for authentication purposes and drawn from
the fact that no two persons are identical in physical and biological make-up. Over the past decade,
the technology has gained wide popularity due to technical optimization, miniaturization, software
improvement and most importantly, declines in price. The most utilized type of biometrics is
fingerprinting, followed by iris and facial recognition.
The application of biometrics has evolved through the years. At first it was mainly used to ensure
public security, but now private organizations use it to secure their computer networks and physical
security. As biometric technology continues to evolve, it is expected that it will ultimately provide
security capabilities for individuals.
David Nahamoo, IBM’s chief technology officer, stated in a speech late last year that he expected that
biometrics would replace passwords by 2015.
“Over the next five years, your unique biological identity and biometric data – facial definitions, iris
scans, voice files, even your DNA – will become the key to safeguarding your personal identity and
information and replace the current user ID and password system,” stated Nahamoo. “We have been
moving from devices like desktops and laptops to smart devices such as mobile phones and tablets –
all property that is easily lost, stolen or misplaced. These devices are not yet outfitted with operating
systems and security elements that are as strong as immobile devices of the past. Biometric security
can strengthen those weaknesses.”
In 2006, Microsoft released a fingerprint scanner device to allow users to access its operating
system without a password, while Google recently released a facial recognition feature to allow users
to access their Android-based smartphones. While both of these technologies are rudimentary and
have been subjected to security breaches, they are harbingers of how we will access a
constellation of smart telecommunication and computing devices in the future, along with other
applications, such as personal banking.
According to Nahamoo: “Biometric data will allow you to walk up to an ATM and access your bank
account by simply speaking your name and looking into the camera. Yes, we’ve all seen the thriller
sci-fi movies where a person is forced by the villain to scan their eye or finger to unlock a door. But
that’s fiction. In reality, ATM cameras using facial and iris recognition may be able to detect stress,
pupil dilation, and changes in heart rate and breathing patterns to establish a confidence level that
the user is not in danger.”
IBM also predicts that biometrics will eventually integrate with a wider number of commonplace
technologies available in today’s consumer electronics to enhance security.
“We can take advantage of the advanced technology being used in the smart devices, such as
microphones, touch screens and high definition cameras to fully employ biometric security options,”
states Nahamoo. “While there is already some adoption of facial and voice recognition, combining
these and other biometric data points in the near future can eliminate the hassle of memorizing,
storing and securing account IDs and passwords and at the same time give users a greater security
confidence.”
Determining the best biometric reader for an application
This guest post was contributed by Anixter.

Biometric physical access control is the most secure means for allowing access due to the unique
physical characteristics of every individual. Biometric physical access control solutions provide
stronger authentication methods than a PIN, access card or physical keys, which can be lost or
stolen and facilitate an unauthorized entry. Biometrics is the only credential that positively
authenticates the person before he or she accesses a secure area.

There are multiple ways of using an individual’s unique characteristics as a means of access. These
could include a fingerprint, finger vein scan, iris recognition, hand geometry, 3D facial recognition,
video facial recognition or touchless fingerprint. Before deciding on which technology fits your
application, there are four things you should consider: convenience, acceptability, speed and
environment. An easy way to remember these criteria is to ask, what is the C.A.S.E. for this biometric
reader?

C – Convenience (Easy to use) Can the user easily approach and present their credential or
biometric?

A – Acceptability (User Acceptance & Security) How well do the users accept interfacing with the
technology and how well does it meet the level of security for the risk? What is the false reject rate
(FRR) and the false accept rate (FAR)?

S – Speed & Accuracy (Throughput) How quickly can the user get through the presentation
process and how accurate is the technology when the credential or biometric is presented (false
reject)?

E – Environment (Applications) Where is the reader being used – indoor, outdoor, lighting, vandal
prone, dirty conditions, hands-free or other unique requirements?

The chart below provides a quick and easy guide to making your C.A.S.E.
Fingerprint Recognition
Convenience: Most widely used biometric; intuitive and easy to use; finger placed on sensor; lit with red light.
Acceptability: There is still some resistance to enrolling a fingerprint biometric; FAR is extremely low; FRR affects a small number of users; high-quality readers reduce false rejects.
Speed: Throughput is high; reads the biometric in less than 1 second.
Environment: Indoor, outdoor and ruggedized models.

Fingerprint and Vein
Convenience: Reader aligns the finger for correct fingerprint and vein scanning.
Acceptability: Combines two biometrics into one template (dual modal) to create a high-security biometric reader.
Speed: Reads are slower; read time is 1.5 seconds.
Environment: Applications where security requirements are higher and throughput is not a major concern.

Iris Recognition
Convenience: Hands-free camera sensor is adjustable for varied heights; the correct distance is required to read the biometric; the unique pattern in the human iris is formed by 10 months of age and remains unchanged throughout one's lifetime.
Acceptability: User acceptance of the technology is high once users understand it reads the iris and not the retina; this is a very high-security biometric technology (the chance of two irises being identical is 1 in 10^78).
Speed: It takes about two seconds to align your eyes with the reader and have it authenticate the user.
Environment: Excellent choice for hands-free environments; indoor use.

Touchless Fingerprint
Convenience: Very easy to use; waving a hand across the sensor reads the biometric.
Acceptability: Being touchless, resistance to using the reader is low; security is adjustable; since the reader captures biometric data from one to four fingers, security can be adjusted for lower- and higher-security applications.
Speed: Very fast read; users do not even need to stop walking to use the reader.
Environment: Indoor environments, high-traffic entrances, high-security entrances.

3D Facial Recognition
Convenience: Hands-free; the correct stance in front of the sensor is required; a monitor shows placement; lower-height installation is required to accommodate all users.
Acceptability: Some resistance to presenting facial biometrics; 3D technology compares 40,000 data points for a very low FAR.
Speed: FRR and slow reads can occur with facial changes, glasses and incorrect positioning at the reader.
Environment: Indoor applications; best results without backlighting issues; hands-free/touchless for clean-room environments.

Hand Geometry
Convenience: The reader is intuitive on how to place your hand.
Acceptability: Well accepted as it does not take any usable biometric data; less secure than other biometrics.
Speed: 1:1 verification; a PIN code must be used to retrieve the biometric template for verification.
Environment: Indoor, or outdoor with a specialized enclosure; a good biometric for users with dirty or worn hands; often used for time and attendance applications.

Video Facial Recognition
Convenience: Looking at the camera captures the biometric template.
Acceptability: FRR and FAR are higher than other biometric technologies.
Speed: Best applications include use as dual authentication.
Environment: Indoor or outdoor; consistent lighting is required for the camera.
The right biometric system is an effective method to identify and authenticate personnel, but it is
important to take the customer's needs and requirements into account. It is not a one-
biometric-technology-fits-all world, so develop your C.A.S.E. and understand the need for
convenience, acceptability, speed and accuracy, and the environment where the system will be
deployed, to ensure that the customer's expectations are met.
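As a rough illustration, the chart's criteria can be encoded as data so that candidate readers are filtered programmatically. This is a sketch only: the entries, read times and environment sets below are simplifications loosely drawn from the chart, not vendor specifications.

```python
# Hypothetical encoding of the C.A.S.E. chart: each reader gets a read
# time (Speed), supported environments (Environment) and a hands-free
# flag (Convenience). Ratings are illustrative, not vendor data.

READERS = [
    {"name": "Fingerprint Recognition", "read_time_s": 1.0,
     "environments": {"indoor", "outdoor"}, "hands_free": False},
    {"name": "Fingerprint and Vein", "read_time_s": 1.5,
     "environments": {"indoor"}, "hands_free": False},
    {"name": "Iris Recognition", "read_time_s": 2.0,
     "environments": {"indoor"}, "hands_free": True},
    {"name": "Touchless Fingerprint", "read_time_s": 0.5,
     "environments": {"indoor"}, "hands_free": True},
]

def shortlist(environment, max_read_time_s, hands_free_required=False):
    """Return the readers that satisfy the application's constraints."""
    return [r["name"] for r in READERS
            if environment in r["environments"]
            and r["read_time_s"] <= max_read_time_s
            and (not hands_free_required or r["hands_free"])]

# A high-traffic indoor entrance needing sub-second, hands-free reads:
print(shortlist("indoor", max_read_time_s=1.0, hands_free_required=True))
```

With these illustrative ratings, only the touchless fingerprint reader survives a sub-second, hands-free, indoor requirement.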

Palm print recognition


Stephen Mayhew
Palm print recognition inherently implements many of the same matching characteristics that have
allowed fingerprint recognition to be one of the most well-known and best publicized biometrics.
Both palm and finger biometrics are represented by the information presented in a friction ridge
impression. This information combines ridge flow, ridge characteristics, and ridge structure of the
raised portion of the epidermis. The data represented by these friction ridge impressions allows a
determination that corresponding areas of friction ridge impressions either originated from the same
source or could not have been made by the same source.
Because fingerprints and palms have both uniqueness and permanence, they have been used for
over a century as a trusted form of identification. However, palm recognition has been slower in
becoming automated due to some restraints in computing capabilities and live-scan technologies.
Palm identification, just like fingerprint identification, is based on the aggregate of information
presented in a friction ridge impression. This information includes the flow of the friction ridges (Level
1 Detail), the presence or absence of features along the individual friction ridge paths and their
sequences (Level 2 Detail), and the intricate detail of a single ridge (Level 3 detail).
To understand this recognition concept, one must first understand the physiology of the ridges and
valleys of a fingerprint or palm. When recorded, a fingerprint or palm print appears as a series of dark
lines and represents the high, peaking portion of the friction ridged skin while the valley between
these ridges appears as a white space and is the low, shallow portion of the friction ridged skin.
Palm recognition technology exploits some of these palm features. Friction ridges do not always flow
continuously throughout a pattern and often result in specific characteristics such as ending ridges or
dividing ridges and dots. A palm recognition system is designed to interpret the flow of the overall
ridges to assign a classification and then extract the minutiae detail — a subset of the total amount of
information available, yet enough information to effectively search a large repository of palm prints.
Minutiae are limited to the location, direction, and orientation of the ridge endings and bifurcations
(splits) along a ridge path.
A variety of sensor types (capacitive, optical, ultrasound and thermal) can be used for collecting the
digital image of a palm surface; however, traditional live-scan methodologies have been slow to
adapt to the larger capture areas required for digitizing palm prints. Challenges for sensors
attempting to attain high-resolution palm images are still being dealt with today. One of the most
common approaches, which employs the capacitive sensor, determines each pixel value based on
the capacitance measured, made possible because an area of air (valley) has significantly less
capacitance than an area of palm (ridge). Other palm sensors capture images by employing high
frequency ultrasound or optical devices that use prisms to detect the change in light reflectance
related to the palm. Thermal scanners require a swipe of a palm across a surface to measure the
difference in temperature over time to create a digital image. Capacitive, optical, and ultrasound
sensors require only placement of a palm.
Some palm recognition systems scan the entire palm, while others require the palms to be
segmented into smaller areas to optimize performance. Reliability within either a
fingerprint or palm print system can be greatly improved by searching smaller data sets. While
fingerprint systems often partition repositories based upon finger number or pattern classification,
palm systems partition their repositories based upon the location of a friction ridge area. Latent
examiners are very skilled in recognizing the portion of the hand from which a piece of evidence or
latent lift has been acquired. Searching only this region of a palm repository rather than the entire
database maximizes the reliability of a latent palm search.
Like fingerprints, the three main categories of palm matching techniques are minutiae-based
matching, correlation-based matching, and ridge-based matching. Minutiae-based matching, the
most widely used technique, relies on the minutiae points described above, specifically the location,
direction, and orientation of each point. Correlation-based matching involves simply lining up the
palm images and subtracting them to determine if the ridges in the two palm images correspond.
Ridge-based matching uses ridge pattern landmark features such as sweat pores, spatial attributes,
and geometric characteristics of the ridges, and/or local texture analysis, all of which are alternates to
minutiae characteristic extraction. This method is a faster method of matching and overcomes some
of the difficulties associated with extracting minutiae from poor quality images.
The advantages and disadvantages of each approach vary based on the algorithm used and the
sensor implemented. Minutiae-based matching typically attains higher recognition accuracy, although
it performs poorly with low quality images and does not take advantage of textural or visual features
of the palm.
Processing using minutiae-based techniques may also be time consuming because of the time
associated with minutiae extraction. Correlation-based matching is often quicker to process but is
less tolerant to elastic, rotational, and translational variances and noise within the image. Some ridge-
based matching characteristics are unstable or require a high-resolution sensor to obtain quality
images. The distinctiveness of the ridge-based characteristics is significantly lower than the minutiae
characteristics.
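The minutiae-based approach described above can be sketched in a few lines: treat each minutia as an (x, y, angle) triple and score the fraction of probe minutiae that find a mate in the reference template within distance and angle tolerances. This is a toy sketch that assumes the prints are already aligned; the coordinates and tolerances are invented for illustration.

```python
import math

# Toy minutiae-based matcher. Each minutia is (x, y, angle_degrees).
# Two minutiae "match" when they fall within a distance and angle
# tolerance; the score is the fraction of probe minutiae with a mate.
# Real systems first align the two prints; alignment is omitted here.

def minutiae_match_score(probe, reference, dist_tol=10.0, angle_tol=15.0):
    matched = 0
    used = set()  # each reference minutia may be claimed only once
    for (px, py, pa) in probe:
        for i, (rx, ry, ra) in enumerate(reference):
            if i in used:
                continue
            dist = math.hypot(px - rx, py - ry)
            dangle = abs((pa - ra + 180) % 360 - 180)  # wrap-around angle diff
            if dist <= dist_tol and dangle <= angle_tol:
                matched += 1
                used.add(i)
                break
    return matched / max(len(probe), 1)

probe = [(10, 10, 90), (40, 42, 45), (70, 15, 180)]
reference = [(12, 9, 92), (41, 40, 50), (200, 200, 0)]
print(minutiae_match_score(probe, reference))  # 2 of 3 probe minutiae match
```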
Just as with fingerprints, standards development is an essential element in palm recognition because
of the vast variety of algorithms and sensors available on the market. Interoperability is a crucial
aspect of product implementation, meaning that images obtained by one device must be capable of
being interpreted by a computer using another device. The major standards effort for palm prints
currently underway is the revision of the ANSI/NIST ITL-2000 Type-15 record.
Many, if not all, commercial palm AFIS systems comply with the ANSI/NIST ITL-2000 Type-15 record
for storing palm print data. Several recommendations to enhance the record type are currently being
vetted through workshops facilitated by the National Institute of Standards and Technology.
Specifically, enhancements to allow the proper encoding and storage of Major Case Prints,
essentially any and all friction ridge data located on the hand, are being endorsed to support the
National Palm Print Service initiative of the FBI’s NGI.
Unlike several other biometric applications, a large-scale U.S. government-sponsored evaluation has
not been performed for palm recognition. The limited amount of data currently available for test
purposes has hindered the ability of not only the federal government but also vendors to efficiently
test and benchmark commercial palm systems. The FBI Laboratory is currently encoding its hard-copy
records into three of the most popular commercial palm recognition systems. This activity, along with
other parallel activities needed for establishing a National Palm Print Service, will address these
limitations and potentially provide benchmark data for U.S. government evaluations of palm systems.
Sources: NIST, FBI

Gesture recognition
Adam Vrankulj
Gesture recognition has been defined as the mathematical interpretation of a human motion by a
computing device. Gestures can originate from any bodily motion or state but commonly originate
from the face or hand.
Ideally, gesture recognition enables humans to communicate with machines and interact naturally
without any mechanical intermediaries. Utilizing sensors that detect body motion, gesture recognition
makes it possible to control devices such as televisions, computers and video games, primarily with
hand or finger movement. With this technology you can change television channels, adjust the
volume and interact with others through your TV.
Recognizing gestures as input allows computers to be more accessible for the physically-impaired
and makes interaction more natural in a gaming or 3D virtual world environment. Using gesture
recognition, it is even possible to point a finger at the computer screen so that the cursor will move
accordingly. This could potentially make conventional input devices such as mouse, keyboards and
even touch-screens redundant.
Gesture recognition, along with facial recognition, voice recognition, eye tracking and lip movement
recognition are components of what software and hardware designers and developers refer to as a
“perceptual user interface”.
The goal of a perceptual user interface is to enhance the efficiency and ease of use of the underlying
logical design of a stored program, a design discipline known as usability. In personal computing,
gestures are most often used for input commands. Hand and body gestures can be amplified by a
controller that contains “accelerometers” and gyroscopes to sense tilting, rotation and acceleration of
movement, or the computing device can be outfitted with a camera so that software in the device can
recognize and interpret specific gestures. A wave of the hand, for instance, might terminate the
program.
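A toy sketch of the accelerometer idea: assuming a stream of x-axis acceleration samples, a "wave" can be detected when the signal reverses sign often enough above a magnitude threshold. The thresholds and sample values here are invented for illustration; real gesture recognizers use far richer models.

```python
# Hypothetical wave-gesture detector over accelerometer x-axis samples:
# count sign reversals whose magnitude exceeds a threshold.

def is_wave(samples, threshold=0.5, min_reversals=3):
    """samples: x-axis acceleration readings (arbitrary units)."""
    reversals = 0
    last_sign = 0
    for a in samples:
        if abs(a) < threshold:
            continue  # ignore small jitter
        sign = 1 if a > 0 else -1
        if last_sign and sign != last_sign:
            reversals += 1
        last_sign = sign
    return reversals >= min_reversals

wave = [0.9, -0.8, 1.1, -1.0, 0.7]   # hand oscillating left-right
tilt = [0.9, 0.8, 1.1, 1.0, 0.9]     # steady tilt, no oscillation
print(is_wave(wave), is_wave(tilt))  # True False
```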
Arguably, one of the most famous gesture recognition applications is the “Wiimote”, which is used to
obtain input movement from users of Nintendo’s Wii gaming platform. The device is the main
controller for the Wii console. It contains an “accelerometer” in the controller which works to measure
acceleration along three axes. An extension that contains a gyroscope can be added to the controller
to improve detection of rotational motion. The controller also contains an optical sensor that allows
the system to determine where it is pointing; for this, a sensor bar featuring IR LEDs is used to track
movement.
Microsoft is also a leader in gesture recognition technology. The firm’s line of motion sensing input
devices for its Xbox 360 and Xbox One video game consoles and Windows PCs are centered around
a webcam-style, add-on peripheral. The unit allows users to control and interact with their gaming
console or computer without the need for a game controller, through a natural user interface using
gestures. The technology uses synchronous camera input derived from the user’s movement.
Systems that incorporate gesture recognition rely on algorithms. Most developers differentiate
between two algorithmic approaches in gesture recognition: 3D-model-based and appearance-based.
The most popular method makes use of 3D information from key body parts in order to obtain several
important parameters, such as palm position or joint angles. In contrast, appearance-based systems
use images or videos for direct interpretation.
In addition to the technical challenges of implementing gesture recognition, there are also social
challenges. Gestures must be simple, intuitive and universally acceptable. Further, input systems
must be able to distinguish nuances in movement.

Two-Factor Authentication (2FA)


Rawlson King
Two-factor authentication, or 2FA, is a method of accessing computing and financial resources or
physical facilities with more than just a password or personal identification number (PIN or passcode).
Using a single password or passcode to access such resources makes a user susceptible to
security threats, because it represents only a single piece of information that a malicious person
needs to acquire.
The additional security that 2FA provides thus ensures that additional information is required to sign
in to computing resources, access cash or enter a building. Two-factor authentication therefore creates
an extra level of security, and is one form of what is often referred to as "multi-factor authentication".
Using a username and password or passcode together with a factor the user possesses or is makes it
harder for potential intruders to gain access and steal that person's personal data or identity.
Multi-factor authentication is a method of multi-faceted access control which a user can pass by
successfully presenting authentication factors from at least two of the three categories:
• knowledge factors (“things only the user knows”), such as passwords or passcodes;
• possession factors (“things only the user has”), such as ATM cards or hardware tokens; and
• inherence factors (“things only the user is”), such as biometrics
Requiring more than one independent factor increases the difficulty of providing false credentials.
Two-factor authentication requires the use of two of three independent authentication factors, as
identified above. The number and the independence of factors is important, since more independent
factors imply higher probabilities that the bearer of the identity credential actually does hold that
identity.
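The category rule above (factors must span at least two of the three categories) can be expressed directly in code; the factor names and their category assignments below are illustrative.

```python
# Sketch of the multi-factor rule: authentication counts as multi-factor
# only when the presented factors span at least two distinct categories.
# Factor names and category assignments are illustrative.

FACTOR_CATEGORIES = {
    "password": "knowledge",
    "pin": "knowledge",
    "atm_card": "possession",
    "hardware_token": "possession",
    "fingerprint": "inherence",
    "finger_vein": "inherence",
}

def is_multi_factor(presented):
    categories = {FACTOR_CATEGORIES[f] for f in presented}
    return len(categories) >= 2

print(is_multi_factor(["atm_card", "pin"]))  # True: possession + knowledge
print(is_multi_factor(["password", "pin"]))  # False: both are knowledge
```

Note how two knowledge factors (a password plus a PIN) fail the test: more credentials alone do not make authentication multi-factor.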
Multi-factor authentication is sometimes confused with “strong authentication”. However, “strong
authentication” and “multi-factor authentication”, are fundamentally different processes. Soliciting
multiple answers to challenge questions can typically be considered strong authentication, but,
unless the process also retrieves “something the user has” or “something the user is”, it is not
considered multi-factor authentication.
The most typical scenario where two-factor authentication is emerging is within the banking sector.
When a bank customer uses an automated teller machine (ATM), one authentication factor is the
physical ATM card the customer uses in the machine (“something the user has”). The second factor
is the PIN or passcode the customer enters through the keypad (“something the user knows”).
Without the corroborating verification of both of these factors, authentication does not succeed. This
scenario illustrates the basic concept of most multi-factor authentication systems: the combination of
a knowledge factor and a possession factor.
The combined use of these multiple factors allows financial institutions to combat identity theft and
bank fraud by increasing overall security and reducing the potential for users to be falsely
authenticated. As many research analysts have noted, banks can augment traditional passwords or
passcodes with two-factor authentication measures that include biometric identification measures.
While a biometric identifier in theory could replace the personal identification number, a customer
should instead be asked to supply a PIN or password to supplement a biometric identifier, making it
part of a more secure two-factor authentication process. Some banks in Asia currently leverage
biometric identifiers such as finger vein and palmprint recognition, in conjunction with ATM cards to
provide a two-factor ATM authentication solution to their clientele.
With continuing challenges to secured digital environments, users can expect the increased
deployment of two-factor authentication solutions in order to mitigate risk in computing, banking and
physical environments.

Finger vein recognition


Finger vein recognition is a method of biometric authentication that uses pattern recognition
techniques based on images of human finger vein patterns beneath the skin’s surface. Finger vein
recognition is used to identify individuals and to verify their identity.
Finger vein recognition is a biometric authentication system that matches the vascular pattern in an
individual’s finger to previously obtained data. Hitachi developed and patented a finger vein
identification system in 2005. The technology is mainly used for credit card authentication,
automobile security, employee time and attendance tracking, computer and network authentication,
end point security and automated teller machines.
To obtain the pattern for the database record, an individual inserts a finger into an attester terminal
containing a near-infrared light-emitting diode (LED) light and a monochrome charge-coupled device
(CCD) camera. The hemoglobin in the blood absorbs near-infrared LED light, which makes the vein
system appear as a dark pattern of lines. The camera records the image and the raw data is digitized
and held in a database of registered images.
Blood vessel patterns are unique to each individual. Unlike some other biometric identifiers, however,
blood vessel patterns are almost impossible to counterfeit because they are located beneath the skin's
surface and can only be obtained from a living person.
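Because hemoglobin absorbs the near-infrared light, vein pixels come out darker than the surrounding tissue in the captured image. The following is a toy sketch of that principle only, assuming a grayscale image (0 = black, 255 = white) and a hand-picked threshold; production systems use far more sophisticated pattern extraction.

```python
# Toy vein-pattern extraction: threshold a grayscale NIR image so that
# dark (light-absorbing) pixels are marked 1 and bright tissue 0.
# The image values and threshold are invented for illustration.

def extract_vein_mask(image, threshold=100):
    """Return a binary mask: 1 where the pixel is dark enough to be vein."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

toy_image = [
    [200, 200,  60, 200],
    [200,  55,  70, 200],
    [200, 200,  65, 200],
]
for row in extract_vein_mask(toy_image):
    print(row)  # the dark pixels trace the vein's branching path
```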

Footprint identification
Rawlson King
Footprint identification is the measurement of footprint features for recognizing the identity of a user.
A footprint is a universal and easy way to capture a personal “identifier” which does not change much
over time.
Footprint-based measurement constitutes one of many new possibilities for realizing biometric
authentication. It is an experimental technology currently under development at a number of
universities and research institutes.
Footprint identification is projected to become a new emerging alternative to access control in
wellness domains such as spas and thermal baths. It has also been recommended as a technology
to identify new born babies at hospitals.
Since footprints are not intended to support large-scale high security applications, such as electronic
banking or building access control, the storage of footprint features does not necessarily imply
security threats. On the other hand, due to the practice of wearing shoes, it is difficult for impostors to
obtain footprints for forgery attacks. Thus, footprint-based recognition could potentially be an
alternative for high-security military applications.
Multiple variations of footprint identification are currently being developed by various research groups
working worldwide. As this technology evolves, most versions are projected to use approaches
comparable to state-of-the-art hand geometry, palm print and fingerprint techniques. Such a
technology would examine friction ridge, texture and foot shape and even foot silhouette.
Current prototypes of footprint identification technology use cameras to capture naked
footprints. The images then undergo pre-processing, followed by the extraction of two features:
shape, using gradient vector flow (GVF), and minutiae. Matching is then performed on these two
features, followed by a fusion of the two results into either a "reject" or "accept" decision. Shape
matching is typically based on cosine similarity, while texture matching is based on minutiae score
matching.
A high recognition rate by verifying raw footprints directly is difficult to obtain, because people stand
in various positions with different distances and angles between the two feet. To achieve robustness
in matching an input pair of footprints with those of registered footprints, the input pair of footprints
must be normalized in position and direction. Such normalization might remove useful information for
recognition, so geometric information of the footprint should be included to ensure the standardization
and normalization of capture image foot features.

Gait Recognition
Rawlson King
Gait recognition is a behavioral biometric modality that identifies people based on their unique
walking pattern.
In comparison with other first-generation biometric modalities that include fingerprint and iris
recognition, gait has the advantage of being unobtrusive, in that it requires no subject contact.
Gait recognition is based on the notion that each person has a distinctive and idiosyncratic way of
walking, which can easily be discerned from a biomechanical viewpoint. Human movement consists
of synchronized movements of hundreds of muscles and joints; though basic movement patterns are
similar across people, gait varies from one person to another in timing and magnitude.
As a consequence, minor variations in gait style can be used as a biometric identifier to identify
individuals.
Gait recognition groups spatial-temporal parameters, such as step length, step width, walking speed
and cycle time with kinematic parameters, such as joint rotation of the hip, knee and ankle, mean
joint angles of the hip, knee and ankle and thigh, trunk and foot angles. Also considered is the
correlation between step length and the height of an individual.
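Two of the spatial-temporal parameters named above, step length and cadence, can be computed from a sequence of footfall events. The (timestamp, heel position) event format and the sample values below are invented for illustration.

```python
# Toy extraction of two spatial-temporal gait parameters from a
# sequence of (timestamp_seconds, heel_position_metres) footfall events.

def gait_features(footfalls):
    steps = list(zip(footfalls, footfalls[1:]))  # consecutive footfall pairs
    step_lengths = [abs(p2 - p1) for (_, p1), (_, p2) in steps]
    duration = footfalls[-1][0] - footfalls[0][0]
    cadence = 60.0 * len(steps) / duration if duration else 0.0
    return {
        "mean_step_length_m": sum(step_lengths) / len(step_lengths),
        "cadence_steps_per_min": cadence,
    }

# Four footfalls, one every half second, each 0.7 m further along:
footfalls = [(0.0, 0.0), (0.5, 0.7), (1.0, 1.4), (1.5, 2.1)]
print(gait_features(footfalls))
```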
Because human ambulation is one form of human movement, gait recognition is closely related to
vision methods that detect, track, and analyze human behaviors in human motion analysis. Gait
recognition technologies are currently in their infancy, and there are two main types of gait
recognition techniques in development.
The first is gait recognition based on the automatic analysis of video imagery. This approach is the
most popular approach studied and involves analysis of video samples of a subject’s walk and the
trajectories of joints and angles. A mathematical model of the motion is created, and is subsequently
compared against other samples in order to determine identity.
The second method uses a radar system, which records the gait cycle that the various body parts of
the subject create. This data is then compared to other samples in order to perform identification.
In both models, human body analysis is employed in an unobtrusive way using technical
instrumentation that measures body movements, body mechanics and the activity of specific muscle
groups.
Such technologies are projected for use in criminal justice and national security applications.
Currently, the technology is under development at the Georgia Institute of Technology, MIT and the
Lappeenranta University of Technology.

Facial Thermography
Rawlson King
In the mid-1990s, it was demonstrated by scientist Francine J. Prokoski that facial thermograms are
unique to individuals, and that methods and systems for positive biometric identification using facial
thermograms could be developed.
Thermograms, generally, are visual displays of the amount of infrared energy emitted, transmitted,
and reflected by an object, which are then converted into a temperature, and displayed as an image
of temperature distribution.
Infrared energy, and infrared light itself, is electromagnetic radiation with longer wavelengths than
those of visible light, extending from the nominal red edge of the visible spectrum at 700 nanometres
(nm) to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430
THz down to 300 GHz, and includes most of the thermal radiation emitted by objects near room
temperature.
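The quoted wavelength-frequency correspondence follows directly from f = c / λ, using the approximate speed of light c ≈ 3.0 × 10^8 m/s:

```python
# Frequency corresponding to a wavelength, via f = c / wavelength.
C = 3.0e8  # speed of light in m/s (approximate)

def frequency_hz(wavelength_m):
    return C / wavelength_m

print(f"{frequency_hz(700e-9) / 1e12:.0f} THz")  # red edge, 700 nm: ~429 THz
print(f"{frequency_hz(1e-3) / 1e9:.0f} GHz")     # far end, 1 mm: 300 GHz
```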
Infrared light is emitted or absorbed by molecules when they change their rotational-vibrational
movements. These movements can be observed through spectroscopy, which is the study of the
interaction between matter and radiated energy.
Historically, spectroscopy originated through the study of visible light dispersed according to its
wavelength, by way of a prism. Later the concept was expanded greatly to comprise any interaction
with radiative energy as a function of its wavelength or frequency. Spectroscopic data is often
represented by a spectrum, which is a plot of the response of interest as a function of wavelength or
frequency.
Facial thermography works by detecting heat patterns created by the branching of blood vessels that
are emitted from the skin. These patterns, known as thermograms, are highly unique. As a
consequence, identical twins have different thermograms.
Thermography works very much like facial recognition, except that an infrared camera is used to
capture the images. Prokoski, however, found that facial thermogram technology is inherently more
accurate and more robust under varying lighting and environmental conditions than the use of video
images.
The technology involves the use of biosensor data for uniquely and automatically identifying
individuals. Due to the connectedness of the physiological systems of the human body, elemental
shapes can in general be derived from any biological sensor data which can be presented as an
image. The elemental shapes and their locations provide an identification capability.
Biosensors which produce very detailed localized data, such as high-resolution infrared imagers, can
result in unique identification of an individual from the determination of elemental shapes and their
distribution. The market for such devices and services includes all facilities which seek to restrict
access to physical areas, distribution systems or information files.
Thermograms as a mechanism of biometric identification are advantageous since the technology is
non-intrusive. While not used prolifically, electronic thermography is increasingly used as a
non-ionizing, noninvasive alternative for medical diagnostics. It is believed that vascular heat
emissions present on the human face can provide physiologic indicators of underlying health or
disease. General thermography has been used in attempts to diagnose breast cancers, though the
efficacy of using the technology for that purpose has been questioned as scientifically suspect.

Retinal Scan Technology


Rawlson King
Developed in the 1980s, retinal scanning is one of the most well-known biometric technologies, but it
is also one of the least deployed. Retinal scans map the unique patterns of a person’s retina. The
blood vessels within the retina absorb light more readily than the surrounding tissue and are easily
identified with appropriate lighting.
A retinal scan is performed by casting an unperceived beam of low-energy infrared light into a
person’s eye as they look through the scanner’s eyepiece. This beam of light traces a standardized
path on the retina.
Once the scanner device captures a retinal image, specialized software compiles the unique features
of the network of retinal blood vessels into a template. Retinal scan algorithms require a high-quality
image and will not let a user enroll or verify until the system is able to capture an image of sufficient
quality. The retina template generated is typically one of the smallest of any biometric technology.
Retinal scan is a highly dependable technology because it is highly accurate and difficult to spoof, in
terms of identification. The technology, however, has notable disadvantages including difficult image
acquisition and limited user applications. Enrollment in a retinal scan biometric system is often
lengthy due to the requirement of multiple image captures, which can cause user discomfort. However,
once a user is acclimated to the process, an enrolled person can be identified with a retinal scan
process in seconds.
Retinal scan technology has robust matching capabilities and is typically configured to do one-to-
many identification against a database of users. However, because quality image acquisition is so
difficult, many attempts are often required to get to the point where a match can take place.
While the algorithms themselves are robust, it can be a difficult process to provide sufficient data for
matching to take place. In many cases, a user may be falsely rejected because of an inability to
provide adequate data to generate a match template.
Because retinal blood vessels are more absorbent of low-energy infrared light than the rest of the
eye, the amount of reflection varies during the scan. The pattern of variations is converted to
computer code and stored in a database.
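As a rough sketch of this encoding step, the varying reflection intensities sampled along the scan path can be quantized into a compact byte template. The sampling scheme, template size, and median threshold below are illustrative assumptions, not a documented format:

```python
# Sketch: turning retinal reflection measurements into a compact template.
# Assumes the scanner yields reflection intensities sampled along the
# standardized circular scan path; sizes and thresholds are illustrative.

def encode_retina(intensities, n_bytes=96):
    """Quantize reflection samples into a fixed-size byte template."""
    step = len(intensities) / (n_bytes * 8)
    median = sorted(intensities)[len(intensities) // 2]
    bits = []
    for i in range(n_bytes * 8):
        sample = intensities[int(i * step)]
        # 1 where reflection dips below the median (a blood vessel absorbed
        # more of the infrared beam), 0 elsewhere.
        bits.append(1 if sample < median else 0)
    template = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        template.append(byte)
    return bytes(template)
```

The resulting template is small and fixed-size, consistent with the text's note that retina templates are among the smallest of any biometric technology.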
Retinal scans should therefore not be confused with another ocular-based technology, iris
recognition, which is described as the process of recognizing a person by analyzing the random
pattern of the iris.
The retina’s intricate network of blood vessels is a physiological characteristic that remains stable
throughout the life of a person.
As with fingerprints and iris patterns, genetic factors do not determine the exact pattern of blood
vessels in the retina. This allows retinal scan technology to differentiate between identical twins and
provide robust identification.
The retina contains at least as much individual data as a fingerprint, but, unlike a fingerprint, is an
internal organ and is less susceptible to either intentional or unintentional modification. Certain eye-
related medical conditions and diseases, such as cataracts and glaucoma, can render a person
unable to use retina-scan technology, as the blood vessels can be obscured.
Retinal scan devices are mainly used for physical access applications and are usually used in
environments requiring exceptionally high degrees of security and accountability such as high-level
government, military, and corrections applications. Retinal scanning has been utilized by several U.S.
government agencies including the Federal Bureau of Investigation (FBI), the Central Intelligence
Agency (CIA), and NASA.
Retinal scanning is also used for medical diagnostic applications. Examining the eyes using retinal
scanning can aid in diagnosing chronic health conditions such as congestive heart failure and
atherosclerosis.
Diseases such as AIDS, syphilis, malaria, chicken pox and Lyme disease, as well as hereditary
diseases, such as leukemia, lymphoma, and sickle cell anemia, also impact the eyes and can be
detected using retinal scan technology.
August 30, 2013 – Correction: Originally this item stated: “In recent years, however retinal
scan technology has been deployed commercially. Retinal scan technology is now located in
prisons, used for ATM identity verification and used by state and municipal governments to
prevent welfare fraud.” This statement was incorrect. While iris scan technology is used in
prisons, ATMs and by governments to prevent welfare fraud, retinal scan technology is still
expensive and emerging, and as thus, is not widely deployed for such applications.

Iris Recognition
Stephen Mayhew
Iris recognition is the process of recognizing a person by analyzing the random pattern of the iris. The
automated method of iris recognition is relatively young, existing in patent only since 1994.
The iris is a muscle within the eye that regulates the size of the pupil, controlling the amount of light
that enters the eye. It is the colored portion of the eye, with coloring based on the amount of melanin
pigment within the muscle.
Although the coloration and structure of the iris is genetically linked, the details of the patterns are
not. The iris develops during prenatal growth through a process of tight forming and folding of the
tissue membrane. Prior to birth, degeneration occurs, resulting in the pupil opening and the random,
unique patterns of the iris. Although genetically identical, an individual’s two irides are unique and
structurally distinct, which allows them to be used for recognition purposes.
In 1936, ophthalmologist Frank Burch proposed the concept of using iris patterns as a method to
recognize an individual. In 1985, Drs. Leonard Flom and Aran Safir, ophthalmologists, proposed the
concept that no two irides are alike, and were awarded a patent for the iris identification concept in
1987. Dr. Flom approached Dr. John Daugman to develop an algorithm to automate identification of
the human iris. In 1993, the Defense Nuclear Agency in the United States began work to test and
deliver a prototype unit, which was successfully completed by 1995 due to the combined efforts of
Drs. Flom, Safir, and Daugman. In 1994, Dr. Daugman was awarded a patent for his automated iris
recognition algorithms. In 1995, the first commercial products became available. In 2005, the broad
patent covering the basic concept of iris recognition expired, providing marketing opportunities for
other companies that have developed their own algorithms for iris recognition. The patent on the
“IrisCodes” implementation of iris recognition developed by Dr. Daugman expired in 2011.
Before recognition of the iris takes place, the iris is located using landmark features. These landmark
features and the distinct shape of the iris allow for imaging, feature isolation, and extraction.
Localization of the iris is an important step in iris recognition because, if done improperly, resultant
noise (e.g., eyelashes, reflections, pupils, and eyelids) in the image may lead to poor performance.
Iris imaging requires use of a high quality digital camera. Today’s commercial iris cameras typically
use infrared light to illuminate the iris without causing harm or discomfort to the subject. Upon
imaging an iris, a 2D Gabor wavelet filter maps the segments of the iris into phasors (vectors).
These phasors include information on the orientation and spatial frequency (the “what” of the image)
and the position of these areas (the “where” of the image). This information is used to map the
IrisCodes.
Iris patterns are described by an IrisCode using phase information collected in the phasors. The
phase is not affected by contrast, camera gain, or illumination levels. The phase characteristic of an
iris can be described using 256 bytes of data using a polar coordinate system. Also included in the
description of the iris are control bytes that are used to exclude eyelashes, reflection(s), and other
unwanted data.
To perform the recognition, two IrisCodes are compared. The amount of difference between two
IrisCodes – the Hamming distance (HD) – is used as a test of statistical independence between the two
IrisCodes. If the HD indicates that less than one-third of the bits in the IrisCodes are different, the
IrisCode fails the test of statistical independence, indicating that the IrisCodes are from the same iris.
Therefore, the key concept of iris recognition is failure of the test of statistical independence.
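The comparison step can be sketched as a fractional Hamming distance with the one-third match threshold mentioned above; real systems also apply mask bits (to exclude eyelashes and reflections) and rotation compensation, which are omitted here:

```python
# Sketch of the IrisCode comparison: fractional Hamming distance between
# two equal-length byte strings, with a match declared when fewer than
# one-third of the compared bits differ.

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two byte-string IrisCodes."""
    assert len(code_a) == len(code_b)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (len(code_a) * 8)

def same_iris(code_a, code_b, threshold=1 / 3):
    """Match = failure of the test of statistical independence."""
    return hamming_distance(code_a, code_b) < threshold
```

Two independent irides differ in roughly half their bits, so a distance well under the threshold is strong evidence the codes came from the same eye.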

Hand Geometry Recognition


Stephen Mayhew
Hand geometry recognition is the longest implemented biometric type, debuting in the market in the
late 1980s. The systems are widely implemented for their ease of use, public acceptance, and
integration capabilities. One of the shortcomings of the hand geometry characteristic is that it is not
highly unique, limiting the applications of the hand geometry system to verification tasks only.
Hand geometry systems have the longest implementation history of all biometric modalities. David
Sidlauskas developed and patented the hand geometry concept in 1985 and the first commercial
hand geometry recognition systems became available the next year. The 1996 Olympic Games
implemented hand geometry systems to control and protect physical access to the Olympic Village.
Many companies implement hand geometry systems in parallel with time clocks for time and
attendance purposes. Walt Disney World has used a similar “finger” geometry technology system for
several years to expedite and facilitate entrance to the park and to identify guests as season ticket
holders to prevent season ticket fraud.
The devices use a simple concept of measuring and recording the length, width, thickness, and
surface area of an individual’s hand while guided on a plate. Hand geometry systems use a camera
to capture a silhouette image of the hand. The hand of the subject is placed on the plate, palm down,
and guided by five pegs that sense when the hand is in place.
The resulting data is captured by a charge-coupled device (CCD) camera as a top view of the hand,
including example distance measurements.
The image captures both the top surface of the hand and a “side image” that is captured using an
angled mirror. Upon capture of the silhouette image, 31,000 points are analyzed and 90
measurements are taken; the measurements range from the length of the fingers, to the distance
between knuckles, to the height or thickness of the hand and fingers. This information is stored in
nine bytes of data; an extremely low number compared to the storage needs of other biometric
systems.
The enrollment process of a hand geometry system typically requires the capture of three sequential
images of the hand, which are evaluated and measured to create a template of the user’s
characteristics. Upon the submission of a claim, the system recalls the template associated with that
identity; the claimant places his/her hand on the plate; and the system captures an image and
creates a verification template to compare to the template developed upon enrollment. A similarity
score is produced and, based on the threshold of the system, the claim is either accepted or rejected.
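A minimal sketch of this enroll-then-verify flow, assuming the measurements are reduced to a short numeric feature vector; the averaging of enrollment captures, the distance metric, and the threshold are illustrative, not a vendor's actual scheme:

```python
# Sketch of hand geometry verification: average several enrollment
# captures into a template, then accept or reject a claim by comparing a
# new capture against that template. Values and threshold are illustrative.

def make_template(captures):
    """Average several measurement vectors captured at enrollment."""
    n = len(captures)
    return [sum(vals) / n for vals in zip(*captures)]

def verify(template, sample, threshold=2.0):
    """Accept the claim if the sample is close enough to the template."""
    distance = sum(abs(t - s) for t, s in zip(template, sample)) / len(template)
    return distance <= threshold
```

Because hand geometry is not highly unique, this kind of 1-to-1 threshold check against a claimed identity is the appropriate use; a 1-to-n search over a large database would not be reliable.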

Speaker Recognition
Stephen Mayhew
Speaker, or voice, recognition is a biometric modality that uses an individual’s voice for recognition
purposes. (It is a different technology than “speech recognition”, which recognizes words as they are
articulated, which is not a biometric.) The speaker recognition process relies on features influenced
by both the physical structure of an individual’s vocal tract and the behavioral characteristics of the
individual.
It is a popular choice for remote authentication due to the availability of devices for collecting speech
samples (e.g., telephone networks and computer microphones) and its ease of integration. Speaker
recognition is different from some other biometric methods in that speech samples are captured
dynamically, over a period of time such as a few seconds. Analysis occurs on a model in
which changes over time are monitored, which is similar to other behavioral biometrics such as
dynamic signature, gait, and keystroke recognition.
The physiological component of voice recognition is related to the physical shape of an individual’s
vocal tract, which consists of an airway and the soft tissue cavities from which vocal sounds originate.
To produce speech, these components work in combination with the physical movement of the jaw,
tongue, and larynx and resonances in the nasal passages. The acoustic patterns of speech come
from the physical characteristics of the airways.
Motion of the mouth and pronunciations are the behavioral components of this biometric. There are
two forms of speaker recognition: text dependent (constrained mode) and text independent
(unconstrained mode).
In a system using “text dependent” speech, the individual presents either a fixed or prompted phrase
that is programmed into the system, which can improve performance, especially with cooperative users.
A “text independent” system has no advance knowledge of the presenter’s phrasing and is much
more flexible in situations where the individual submitting the sample may be unaware of the
collection or unwilling to cooperate, which presents a more difficult challenge.
Speech samples are waveforms with time on the horizontal axis and loudness on the vertical axis.
The speaker recognition system analyzes the frequency content of the speech and compares
characteristics such as the quality, duration, intensity dynamics, and pitch of the signal.
In “text dependent” systems, during the collection or enrollment phase, the individual says a short
word or phrase (utterance), typically captured using a microphone that can be as simple as a
telephone. The voice sample is converted from an analog format to a digital format, the features of
the individual’s voice are extracted, and then a model is created. Most “text dependent” speaker
verification systems use Hidden Markov Models (HMMs), statistical models that provide a
representation of the sounds produced by the individual. The HMM represents the underlying
variations and temporal changes found in the speech states using the quality / duration /
intensity dynamics / pitch characteristics mentioned above.
Another method is the Gaussian Mixture Model, a state-mapping model closely related to HMM, that
is often used for unconstrained “text independent” applications. Like HMM, this method uses the
voice to create a number of vector “states” representing the various sound forms, which are
characteristic of the physiology and behavior of the individual.
These methods all compare the similarities and differences between the input voice and the stored
voice “states” to produce a recognition decision. After enrollment, during the recognition phase, the
same quality / duration / loudness / pitch features are extracted from the submitted sample and
compared to the model of the claimed or hypothesized identity and to models from other speakers.
The other-speaker (or “anti-speaker”) models contain the “states” of a variety of individuals, not
including that of the claimed or hypothesized identity. The input voice sample and enrolled models
are compared to produce a “likelihood ratio,” indicating the likelihood that the input sample came from
the claimed or hypothesized speaker. If the voice input belongs to the identity claimed or
hypothesized, the score will reflect the sample to be more similar to the claimed or hypothesized
identity’s model than to the “anti-speaker” model.
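The likelihood-ratio decision described above can be sketched as follows, assuming the claimed-speaker model and the anti-speaker (background) model each return a log-likelihood for the input features; the model internals (HMM or GMM scoring) are abstracted away, and the threshold is illustrative:

```python
# Sketch of the speaker-verification decision: compare how likely the
# sample is under the claimed speaker's model versus the anti-speaker
# model, and accept when the likelihood ratio favors the claimed identity.

import math

def likelihood_ratio(log_p_claimed, log_p_anti):
    """Ratio of sample likelihood under claimed vs. anti-speaker model."""
    return math.exp(log_p_claimed - log_p_anti)

def accept_claim(log_p_claimed, log_p_anti, threshold=1.0):
    """Accept when the sample is likelier under the claimed identity."""
    return likelihood_ratio(log_p_claimed, log_p_anti) > threshold
```

Working in log-likelihoods, as real scoring engines do, keeps the arithmetic stable even when the raw likelihoods are vanishingly small.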
The seemingly easy implementation of speaker recognition systems contributes to the process’s major
weakness: susceptibility to transmission channel and microphone variability and noise.
Systems can face problems when end users have enrolled on a clean landline phone and attempt
verification using a noisy cellular phone. The inability to control the factors affecting the input system
can significantly decrease performance. Speaker verification systems, except those using prompted
phrases, are also susceptible to spoofing attacks through the use of recorded voice. Anti-spoofing
measures that require the utterance of a specified and random word or phrase are being
implemented to combat this weakness.
For example, a system may request a randomly generated phrase, to prevent an attack from a pre-
recorded voice sample. The user cannot anticipate the random sample that will be required and
therefore cannot successfully attempt a “playback” spoofing attack on the system.
Current research in the area of “text independent” speaker recognition is mainly focused on moving
beyond the low-level spectral analysis previously discussed. Although the spectral level of
information is still the driving force behind the recognitions, fusing higher-level characteristics with the
low level spectral information is becoming a popular laboratory technique.
Speaker recognition characteristics such as rhythm, speed, modulation and intonation are based on
personality type and parental influence; and semantics, idiolects, pronunciations and idiosyncrasies
are related to birthplace, socio-economic status, and education level.
Higher-level characteristics can be combined with the underlying low-level spectral information to
improve the performance of “text independent” speaker recognition systems.

Vascular Pattern Recognition


Stephen Mayhew
Vascular pattern recognition, also commonly referred to as vein pattern authentication, uses near-
infrared light to reflect or transmit images of blood vessels. Researchers have determined that the
vascular pattern of the human body is unique to a specific individual and does not change as people
age.
Potential for the use of the technology can be traced to a research paper prepared in 1992 by Dr. K.
Shimizu, in which he discussed optical trans-body imaging and potential optical CT scanning
applications. The first paper about the use of vascular patterns for biometric recognition was
published in 2000. That paper described technology, based on the subcutaneous blood vessels in the
back of the hand, that became the first commercially available vascular pattern recognition
system. Subsequent research improved that technology and inspired the
commercialization of finger- and palm-based systems.
Typically, the technology identifies vascular patterns in either the back of the hand or the fingers. To
identify patterns in hands, near-infrared rays generated from a bank of light emitting diodes (LEDs)
penetrate the skin of the back of the hand. Due to the difference in absorbance between blood vessels
and other tissues, the reflected near-infrared rays produce an image; image processing techniques then
produce an extracted vascular pattern. From the extracted vascular pattern, feature-rich data such as
vessel branching points, vessel thickness and branching angles are extracted and stored as a
template.
For vascular patterns in fingers, near-infrared rays generated from a bank of LEDs penetrate the finger
or hand and are absorbed by the hemoglobin in the blood. The areas in which the rays are absorbed
(i.e., veins) appear as dark areas, similar to a shadow, in an image taken by a charge-coupled device
(CCD) camera. Image processing can then construct a vein pattern from the captured image. Next this
pattern is digitized and compressed so that it can be registered as a template.
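A minimal sketch of the vein-extraction step, under the assumption that the captured image is a grayscale grid in which the absorbing (vein) regions are dark; the threshold is illustrative, and real systems add filtering, thinning, and compression before a template is registered:

```python
# Sketch of vein-pattern construction: veins absorb the near-infrared
# rays and appear as dark pixels, so a simple intensity threshold
# separates them from brighter surrounding tissue.

def extract_vein_pattern(image, threshold=80):
    """Binarize a grayscale image: 1 marks dark (vein) pixels."""
    return [[1 if pixel < threshold else 0 for pixel in row] for row in image]
```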
Both technologies are touted because they are difficult to forge, are contact-less, have many and
varied uses, and are capable of both one-to-one and one-to-many matching. Vascular patterns are
difficult to recreate because they are inside the hand and, for some approaches, blood needs to flow
to register an image. Users do not touch the sensing surface, which addresses hygiene concerns and
improves user acceptance. The technology has been deployed in ATMs, hospitals and universities in
Japan. Applications include ID verification, high security physical access control, high security data
access and point-of-sale access control. The technology is also highly respected due to its dual
matching capacity, as users’ vascular patterns can be matched against personalized ID cards and
smart cards or against a database of many scanned vascular patterns.

What is Biometric Identification?


Stephen Mayhew
Biometric technologies use physical characteristics, such as voice tone or hand shape, to identify
people automatically. Behaviors, such as handwriting style, can also be used by computers in this
way. The term “identify” is used here quite loosely. There is actually nothing in your voice, hand
shape or any biometric measure to tell the computer your name, age or citizenship. External
documents (passport, birth certificate, naturalization papers) or your good word establishing these
facts must be supplied at the time you initially present yourself to the biometric system for
“enrollment”. At this initial session, your biometric characteristic, such as an eye scan, is recorded
and linked to this externally-supplied personal information. At future sessions, the computer links you
to the previously supplied information using the same physical characteristic. Even if the biometric
system works perfectly, the personal data in the computer, such as your voting eligibility, is only as
reliable as the original “source” documentation supplied. Once the computer knows your claimed
identity, it can usually recognize you whenever you present the required biometric characteristic. No
biometric identification system, however, works perfectly.
Problems are generally caused by changes in the physical characteristic. Even fingerprints change
as cuts, cracks and dryness in the skin come and go. It is far more likely that the computer will not
recognize your enrollment characteristic than link you to the characteristic of someone else, but both
types of errors do occur.
Identification: Positive and Negative
Biometric systems are of two types: “verification” and “identification”. Some professionals prefer to
use the descriptions “positive identification” and “negative identification” to emphasize the opposite
nature of the two approaches. A positive identification system requires you to identify yourself when
submitting a biometric measure. Your submitted measure is then checked against the measure given
when you enrolled in the system to affirm that they match. Biometric measures are always “fuzzy” to
some extent, changing over time and circumstance of collection.
If the submitted and stored biometric measures are “close enough”, it is assumed that you are indeed
the person enrolled under the identity you claimed. If the presented and enrolled characteristics are
not “close enough”, you will generally be allowed to try again. If multiple attempts are allowed, the
number of users “falsely rejected” can be under one percent, although there are always some people
chronically unable to use any system who must be given alternate means of identification. The
possibility that an impostor will be judged “close enough”, even given multiple attempts, is usually
less than one in ten. The threat of being caught in 9 out of 10 attempts is enough to deter most
impostors, particularly if penalties for fraud are involved.
Positive identification using biometrics can be made totally voluntary. People not wishing to use the
system can instead supply the source documents to human examiners each time they access the
system.
In “negative identification” applications, found in driver licensing and social service eligibility systems
where multiple enrollments are illegal, a user claims not to be previously enrolled. In fact, a negative
identification biometric system does not require any identity claim by the users. If a user offers an
identity, it is only for the purpose of linking to outside records to establish proof of age or citizenship.
The biometric measures themselves cannot establish name, age, or citizenship and therefore do not
prevent their misrepresentation during enrollment. These systems do, however, prevent a person
from enrolling more than once under any identity. Apart from the “honor” system, where each
person’s word is accepted, there are no alternatives to biometrics for negative identification.
During enrollment, the system must compare the presented characteristic to all characteristics in the
database to verify that no match exists. Because of the ongoing changes in everyone’s body, errors
can occur in the direction of failing to recognize an existing enrollment, perhaps at a rate of a few
percent. But again, only the most determined fraudster, unconcerned about penalties, would take on
a system weighted against him/her with these odds. False matches of a submitted biometric measure
to one connected to another person in the database are extremely rare and can always be resolved
by the people operating the system.
Negative identification applications cannot be made voluntary. Each person wishing to establish an
identity in the system must present the required biometric measure. If this were not so, fraudsters
could establish multiple enrollments simply by declining to use the biometric system. On the other
hand, negative identification can be accomplished perfectly well without linkage to any external
information, such as name or age. This information is not directly necessary to prove you are not
already known to the system, although it may be helpful if identification errors occur.
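The enrollment-time duplicate check described above can be sketched as a 1-to-n search over the database; the distance function and threshold are stand-ins for a real biometric matcher:

```python
# Sketch of negative identification at enrollment: the presented measure
# is compared against every enrolled measure, and enrollment is refused
# when any stored measure is "close enough" (a likely duplicate).

def can_enroll(presented, enrolled_db, distance, threshold):
    """Allow enrollment only if no existing record matches."""
    return all(distance(presented, stored) > threshold for stored in enrolled_db)
```

Note that no external identity information (name, age) is needed for this check, matching the text's point that negative identification can work without linkage to outside records.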

Dynamic Signature
Stephen Mayhew
Dynamic signature is a biometric modality that uses, for recognition purposes, the anatomic and
behavioral characteristics that an individual exhibits when signing his or her name (or other phrase).
Dynamic signature devices should not be confused with electronic signature capture systems that are
used to capture a graphic image of the signature and are common in locations where merchants are
capturing signatures for transaction authorizations.
Data such as the dynamically captured direction, stroke, pressure, and shape of an individual’s
signature can enable handwriting to be a reliable indicator of an individual’s identity (i.e.,
measurements of the captured data, when compared to those of matching samples, are a reliable
biometric for writer identification.)
The first signature recognition system was developed in 1965. Dynamic signature recognition
research continued in the 1970s focusing on the use of static or geometric characteristics (what the
signature looks like) rather than dynamic characteristics (how the signature was made). Interest in
dynamic characteristics surged with the availability of better acquisition systems accomplished
through the use of touch sensitive technologies.
In 1977, a patent was awarded for a “personal identification apparatus” that was able to acquire
dynamic pressure information.
Dynamic signature recognition uses multiple characteristics in the analysis of an individual’s
handwriting. These characteristics vary in use and importance from vendor to vendor and are
collected using contact sensitive technologies, such as PDAs or digitizing tablets. Most of the
features used are dynamic characteristics rather than static and geometric characteristics, although
some vendors also include these characteristics in their analyses. Common dynamic characteristics
include the velocity, acceleration, timing, pressure, and direction of the signature strokes, all analyzed
in the X, Y, and Z directions.
The X and Y position are used to show the changes in velocity in the respective directions while the Z
direction is used to indicate changes in pressure with respect to time.
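These dynamic characteristics can be derived from time-stamped samples as sketched below; the (x, y, pressure) sample format and fixed sampling interval are assumptions for illustration, not any vendor's representation:

```python
# Sketch of deriving dynamic signature features: per-interval velocity in
# the X and Y directions and the change in pressure (the "Z" direction)
# between consecutive samples from a digitizing tablet.

def dynamic_features(samples, dt=1.0):
    """Return (vx, vy, dp) lists computed between consecutive samples."""
    vx, vy, dp = [], [], []
    for (x0, y0, p0), (x1, y1, p1) in zip(samples, samples[1:]):
        vx.append((x1 - x0) / dt)
        vy.append((y1 - y0) / dt)
        dp.append((p1 - p0) / dt)
    return vx, vy, dp
```

Acceleration, a further common characteristic, would follow by differencing the velocity lists in the same way.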
Some dynamic signature recognition algorithms incorporate a learning function to account for the
natural changes or drifts that occur in an individual’s signature over time.
The characteristics used for dynamic signature recognition are almost impossible to replicate. Unlike
a graphical image of the signature, which can be replicated by a trained human forger, a computer
manipulation, or a photocopy, dynamic characteristics are complex and unique to the handwriting
style of the individual. Despite this major strength of dynamic signature recognition, the
characteristics historically have a large intra-class variability (meaning that an individual’s own
signature may vary from collection to collection), often making dynamic signature recognition difficult.
Recent research has reported that static writing samples can be successfully analyzed to overcome
this issue.

Verification vs. Identification Systems


Stephen Mayhew
Biometrics are used for different purposes, but they are generally part of either a verification system
or an identification system. The differences between these two types of systems can make a
difference in how quickly the system operates and how accurate it is as the size of a biometric
database increases.
Verification Systems
Verification systems seek to answer the question “Is this person who they say they are?” Under a
verification system, an individual presents himself or herself as a specific person. The system checks
his or her biometric against a biometric profile that already exists in the database linked to that
person’s file in order to find a match.
Verification systems are generally described as a 1-to-1 matching system because the system tries to
match the biometric presented by the individual against a specific biometric already on file.
Because verification systems only need to compare the presented biometric to a biometric reference
stored in the system, they can generate results more quickly and are more accurate than
identification systems, even when the size of the database increases.
Identification Systems
Identification systems are different from verification systems because an identification system seeks
to identify an unknown person, or unknown biometric. The system tries to answer the questions “Who
is this person?” or “Who generated this biometric?” and must check the biometric presented against
all others already in the database. Identification systems are described as a 1-to-n matching system,
where n is the total number of biometrics in the database. Forensic databases, where a government
tries to identify a latent print or DNA discarded at a crime scene, often operate as identification
systems.
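The contrast between the two system types can be sketched as follows; the dictionary database, distance function, and threshold are illustrative stand-ins for a real template store and matcher:

```python
# Sketch of verification (1-to-1) versus identification (1-to-n):
# verification compares the sample against one claimed record, while
# identification searches the entire database for the best match.

def verify(db, claimed_id, sample, distance, threshold):
    """1-to-1: does the sample match the record for claimed_id?"""
    return distance(db[claimed_id], sample) <= threshold

def identify(db, sample, distance, threshold):
    """1-to-n: return the best-matching identity, or None."""
    best_id = min(db, key=lambda uid: distance(db[uid], sample))
    return best_id if distance(db[best_id], sample) <= threshold else None
```

The single comparison in `verify` is why verification stays fast and accurate as the database grows, while `identify` must touch every record, so its cost and error exposure scale with n.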
