
Ho Chi Minh City University of Technology

Faculty of Electrical and Electronics Engineering


Department of Telecommunications

Principal Component Analysis (PCA)

Lectured by Ha Hoang Kha, Ph.D.
Ho Chi Minh City University of Technology
Email: hahoangkha@gmail.com

Face detection and recognition

[Figure: detection locates the face, PCA extracts features, and recognition outputs the identity, e.g. "Sally".]


Applications of Face Recognition


Digital photography
Surveillance


Consumer application: iPhoto 2009

http://www.apple.com/ilife/iphoto/

Starting idea of eigenfaces


1. Treat the pixels of a face image as a vector x

2. Recognize a face by nearest neighbor against the training faces y1, ..., yn:

   k* = argmin_k ||yk − x||
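The two steps above can be sketched in a few lines; the tiny gallery, labels, and noise level here are hypothetical stand-ins for real face data:

```python
import numpy as np

# Hypothetical "gallery" of training faces, each flattened to a vector.
rng = np.random.default_rng(0)
gallery = rng.random((5, 16))          # y_1, ..., y_n as rows
labels = ["Ann", "Bob", "Cara", "Dan", "Eve"]

def nearest_neighbor(x, gallery, labels):
    """Return the label of the gallery face closest to x in Euclidean distance."""
    dists = np.linalg.norm(gallery - x, axis=1)   # ||y_k - x|| for each k
    return labels[int(np.argmin(dists))]

# A probe that is a slightly perturbed copy of gallery[2] should match "Cara".
probe = gallery[2] + 0.01 * rng.standard_normal(16)
print(nearest_neighbor(probe, gallery, labels))
```

Raw pixel-space nearest neighbor like this is exactly what the eigenface representation later makes cheaper, by comparing short coefficient vectors instead of full images.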

The space of all face images


When viewed as vectors of pixel values, face images are extremely high-dimensional:
- a 100x100 image has 10,000 dimensions
- slow to process, and lots of storage

But very few 10,000-dimensional vectors are valid face images.

We want to effectively model the subspace of face images.

Efficient Image Storage


Geometrical Interpretation


Dimensionality Reduction
The set of faces is a subspace of the set of images.
Suppose it is K-dimensional.
We can find the best subspace using PCA.
This is like fitting a hyper-plane to the set of faces
- spanned by vectors u1, u2, ..., uK

Any face:

   x ≈ μ + w1 u1 + w2 u2 + ... + wK uK

Principal Component Analysis


An N x N pixel image of a face, represented as a vector, occupies a single point in N²-dimensional image space.
Since images of faces are similar in overall configuration, they will not be randomly distributed in this huge image space.
Therefore, they can be described by a low-dimensional subspace.

Main idea of PCA for faces:
- Find the vectors that best account for the variation of face images in the entire image space.
- These vectors are called eigenvectors.
- Construct a face space and project the images into this face space (eigenfaces).

Image Representation
A training set of M images of size N x N is represented by vectors of size N²:

   x1, x2, x3, ..., xM

Example: a 3x3 image is flattened, row by row, into a 9x1 vector:

   [1 2 3]
   [3 1 2]   →   [1 2 3 3 1 2 4 5 1]^T
   [4 5 1]
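The flattening in the example above is a plain row-major reshape; a minimal sketch with NumPy:

```python
import numpy as np

# The 3x3 example image from the slide, flattened row by row into a 9x1 vector.
img = np.array([[1, 2, 3],
                [3, 1, 2],
                [4, 5, 1]])
x = img.reshape(-1, 1)        # column vector of size 9 = N^2
print(x.ravel())              # [1 2 3 3 1 2 4 5 1]
```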

Principal Component Analysis (PCA)


Given: N data points x1, ..., xN in R^d

We want to find a new set of features that are linear combinations of the original ones:

   u(xi) = u^T(xi − μ)

(μ: mean of the data points)

Choose the unit vector u in R^d that captures the most data variance.

Principal Component Analysis


The direction that maximizes the variance of the projected data:

   maximize   (1/N) sum_{i=1..N} ( u^T(xi − μ) )²  =  u^T Σ u
   subject to ||u|| = 1

where u^T(xi − μ) is the projection of data point xi, and

   Σ = (1/N) sum_{i=1..N} (xi − μ)(xi − μ)^T

is the covariance matrix of the data.

The direction that maximizes the variance is the eigenvector associated with the largest eigenvalue of Σ.
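A small numerical check of this claim, using synthetic 2-D data whose dominant direction is known by construction (the data and its spread are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# 2-D Gaussian data, stretched (std 3 vs 0.5) and then rotated 45 degrees,
# so the true direction of largest variance is (1, 1)/sqrt(2).
X = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X = X @ (np.array([[1, 1], [-1, 1]]) / np.sqrt(2))

mu = X.mean(axis=0)
Sigma = (X - mu).T @ (X - mu) / len(X)         # covariance matrix of the data

eigvals, eigvecs = np.linalg.eigh(Sigma)       # eigenvalues in ascending order
u = eigvecs[:, -1]                             # eigenvector of largest eigenvalue

print(np.abs(u))                               # close to [0.707, 0.707]
```

`eigh` is used because Σ is symmetric; the sign of the eigenvector is arbitrary, which is why the check looks at absolute values.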

Eigenfaces (PCA on face images)

1. Compute the covariance matrix Σ of the face images

2. Compute the principal components (eigenfaces)
   - the K eigenvectors of Σ with the largest eigenvalues

3. Represent all face images in the dataset as linear combinations of the eigenfaces

4. Perform nearest neighbor on these coefficients
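Steps 1-3 above can be sketched as one function; the random matrix standing in for a stack of face images is an assumption, and in practice the pixel dimension d is far larger than shown here:

```python
import numpy as np

def eigenfaces(X, K):
    """Mean face, top-K eigenvectors of the covariance, and the coefficients.

    X: (M, d) matrix with one flattened face image per row.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    Sigma = Xc.T @ Xc / len(X)                 # d x d covariance matrix (step 1)
    _, eigvecs = np.linalg.eigh(Sigma)         # eigenvalues ascending
    U = eigvecs[:, -K:][:, ::-1]               # K largest-eigenvalue vectors (step 2)
    W = Xc @ U                                 # w_ik = u_k^T (x_i - mu)     (step 3)
    return mu, U, W

rng = np.random.default_rng(2)
X = rng.random((20, 64))                       # 20 fake 8x8 "faces"
mu, U, W = eigenfaces(X, K=5)
print(U.shape, W.shape)                        # (64, 5) (20, 5)
```

For real 100x100 images, Σ would be 10,000 x 10,000; implementations typically diagonalize the much smaller M x M matrix Xc Xc^T instead and map its eigenvectors back, but the result is the same subspace.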

Eigenfaces example
Training images: x1, ..., xN

Eigenfaces example
Top eigenvectors: u1, ..., uk
Mean: μ


Representation and reconstruction


Face x in face space coordinates:

   x  →  (u1^T(x − μ), ..., uk^T(x − μ))  =  (w1, ..., wk)

Reconstruction:

   x̂  =  μ + w1 u1 + w2 u2 + w3 u3 + w4 u4 + ...
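A minimal sketch of projection and reconstruction, assuming the eigenfaces have already been computed (random vectors stand in for face images here):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((30, 50))                       # 30 fake "faces" of 50 pixels
mu = X.mean(axis=0)
_, eigvecs = np.linalg.eigh((X - mu).T @ (X - mu) / len(X))
U = eigvecs[:, ::-1][:, :10]                   # top-10 eigenfaces

x = X[0]
w = U.T @ (x - mu)                             # coordinates w_1, ..., w_k
x_hat = mu + U @ w                             # x^ = mu + sum_k w_k u_k

err = np.linalg.norm(x - x_hat)                # reconstruction error
# With the full eigenvector basis the reconstruction is exact (up to round-off).
x_exact = mu + eigvecs @ (eigvecs.T @ (x - mu))
print(err, np.linalg.norm(x - x_exact))
```

Truncating to the top k components trades reconstruction error for compactness, which is the point of the "efficient storage" slides above.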

Recognition with eigenfaces


Process the labeled training images:
- Find the mean μ and covariance matrix Σ
- Find the k principal components (eigenvectors of Σ): u1, ..., uk
- Project each training image xi onto the subspace spanned by the principal components:

   pi = (wi1, ..., wik) = (u1^T(xi − μ), ..., uk^T(xi − μ))

Given a novel image x:
- Project onto the subspace:

   p = (w1, ..., wk) = (u1^T(x − μ), ..., uk^T(x − μ))

- Optional: check the reconstruction error ||x − x̂|| to determine whether the image is really a face
- Classify as the closest training face in the k-dimensional subspace
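The whole recognition pipeline fits in a short sketch; the training set, labels, and noise level are synthetic assumptions, not data from the lecture:

```python
import numpy as np

rng = np.random.default_rng(4)
train = rng.random((10, 36))                   # 10 flattened training "faces"
labels = list("ABCDEFGHIJ")

mu = train.mean(axis=0)
Sigma = (train - mu).T @ (train - mu) / len(train)
U = np.linalg.eigh(Sigma)[1][:, ::-1][:, :4]   # k = 4 principal components

def project(x):
    """p = (u_1^T(x - mu), ..., u_k^T(x - mu))."""
    return U.T @ (x - mu)

P = np.vstack([project(x) for x in train])     # p_i for each training image

# A novel image that is a near-copy of training face "G".
probe = train[6] + 0.001 * rng.standard_normal(36)
p = project(probe)
nearest = labels[int(np.argmin(np.linalg.norm(P - p, axis=1)))]
print(nearest)
```

Matching happens entirely in the k-dimensional coefficient space, so each comparison costs O(k) instead of O(N²).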

Recognition
The distance of p to each face class is defined by

   εk² = ||p − pk||²,   k = 1, ..., N

A distance threshold θc is half the largest distance between any two face images:

   θc = (1/2) max_{j,k} ||pj − pk||,   j, k = 1, ..., N

Recognition
Find the distance ε between the original image x and its reconstruction from the eigenface space, x̂:

   ε² = ||x − x̂||²

Recognition process:
IF ε ≥ θc, then the input image is not a face image;
IF ε < θc AND εk ≥ θc for all k, then the input image contains an unknown face;
IF ε < θc AND εk* = min_k {εk} < θc, then the input image contains the face of individual k*.
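The three-way rule above is just a pair of threshold tests; a minimal sketch with made-up distances:

```python
import numpy as np

def classify(eps, eps_k, theta_c):
    """Decision rule from the slide.

    eps     : distance between the image and its eigenface reconstruction
    eps_k   : array of distances from p to each known face class p_k
    theta_c : threshold (half the largest distance between training faces)
    """
    if eps >= theta_c:
        return "not a face"
    if np.min(eps_k) >= theta_c:
        return "unknown face"
    return f"individual {int(np.argmin(eps_k))}"

theta = 2.0                                          # hypothetical threshold
print(classify(5.0, np.array([1.0, 3.0]), theta))    # not a face
print(classify(0.5, np.array([4.0, 3.0]), theta))    # unknown face
print(classify(0.5, np.array([1.0, 3.0]), theta))    # individual 0
```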

PCA

A general dimensionality reduction technique:
- Preserves most of the variance with a much more compact representation
- Lower storage requirements (eigenvectors + a few numbers per face)
- Faster matching

What are the problems for face recognition?

Limitations
Global appearance method: not robust to misalignment or background variation.
