
Linear Algebra Review

Most slides are courtesy of Prof. Octavia I. Camps, Penn State University, with some modification
Why do we need Linear Algebra?

We will associate coordinates to:
  - 3D points in the scene
  - 2D points in the CCD array
  - 2D points in the image

Coordinates will be used to:
  - Perform geometrical transformations
  - Associate 3D points with 2D points

Images are matrices of numbers:
  - We will find properties of these numbers

2
2D Vector

v = (x1, x2)

Magnitude:  ||v|| = sqrt(x1^2 + x2^2)

If ||v|| = 1, v is a UNIT vector

v / ||v|| = ( x1/||v|| , x2/||v|| )  is a unit vector

Orientation:  θ = tan⁻¹( x2 / x1 )
3
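The magnitude, unit vector, and orientation above can be checked numerically. A minimal sketch, assuming NumPy (the slides name no software):

```python
import numpy as np

v = np.array([3.0, 4.0])

magnitude = np.linalg.norm(v)      # ||v|| = sqrt(x1^2 + x2^2)
unit = v / magnitude               # v / ||v|| has magnitude 1
theta = np.arctan2(v[1], v[0])     # tan^-1(x2 / x1), with the quadrant resolved
```

np.arctan2 is used instead of a plain arctangent so the orientation is correct in all four quadrants (and when x1 = 0).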
Vector Addition

v + w = ( x1 , x2 ) + ( y1 , y2 ) = ( x1 + y1 , x2 + y2 )


4
Vector Subtraction

v − w = (x1, x2) − (y1, y2) = (x1 − y1, x2 − y2)


5
Scalar Product

av = a( x1 , x2 ) = (ax1 , ax2 )


6
Linearly independent vectors

No vector is a linear combination of the others, or equivalently

α1·v1 + α2·v2 + α3·v3 + ... + αi·vi = 0  only for  α1 = α2 = ... = αi = 0

7
Basis

span(V): the span of a set of vectors V is the set of all linear combinations of the vectors vi, i.e. a vector space.

Basis of a vector space: a set of vectors that are linearly independent and that span the space.

8
Inner (dot) Product
v · w = (x1, x2) · (y1, y2) = x1·y1 + x2·y2

The inner product is a SCALAR!

v · w = (x1, x2) · (y1, y2) = ||v|| ||w|| cos α

v · w = 0  ⇔  v ⊥ w
9
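A quick numerical check of the inner product identities above; a sketch assuming NumPy, with made-up example vectors:

```python
import numpy as np

v = np.array([1.0, 1.0])
w = np.array([2.0, 0.0])

dot = np.dot(v, w)                 # x1*y1 + x2*y2: a scalar
cos_alpha = dot / (np.linalg.norm(v) * np.linalg.norm(w))   # cos of the angle

# An orthogonal pair has zero inner product
perp = np.dot(np.array([1.0, 0.0]), np.array([0.0, 3.0]))
```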
Orthonormal Basis
The basis vectors are perpendicular to each other and have unit length.

i = (1, 0),  ||i|| = 1
j = (0, 1),  ||j|| = 1
i · j = 0

v = (x1, x2)  ⇒  v = x1·i + x2·j

v · i = (x1·i + x2·j) · i = x1·1 + x2·0 = x1
v · j = (x1·i + x2·j) · j = x1·0 + x2·1 = x2

10
Vector (cross) Product
u = v × w

The cross product is a VECTOR!

Magnitude:  ||u|| = ||v × w|| = ||v|| ||w|| sin α

Orientation:  u ⊥ v :  u · v = (v × w) · v = 0
              u ⊥ w :  u · w = (v × w) · w = 0
11
Vector Product Computation
i = (1,0,0)  ||i|| = 1
j = (0,1,0)  ||j|| = 1      i · j = 0,  i · k = 0,  j · k = 0
k = (0,0,1)  ||k|| = 1

u = v × w = (x1, x2, x3) × (y1, y2, y3)

        | i   j   k  |
u = det | x1  x2  x3 |
        | y1  y2  y3 |

  = (x2·y3 − x3·y2) i + (x3·y1 − x1·y3) j + (x1·y2 − x2·y1) k
12
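The computation above can be verified with NumPy (an assumption; the slides name no software). Taking v = i and w = j should give u = k:

```python
import numpy as np

v = np.array([1.0, 0.0, 0.0])      # i
w = np.array([0.0, 1.0, 0.0])      # j

# (x2*y3 - x3*y2, x3*y1 - x1*y3, x1*y2 - x2*y1)
u = np.cross(v, w)
```

u is perpendicular to both inputs, and ||u|| = ||v|| ||w|| sin 90° = 1.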
Matrices
        | a11  a12  ...  a1m |
        | a21  a22  ...  a2m |
A n×m = | a31  a32  ...  a3m |
        |  :    :         :  |
        | an1  an2  ...  anm |

Sum:  C n×m = A n×m + B n×m ,  cij = aij + bij

A and B must have the same dimensions.

Example:
| 2 5 |   | 6 2 |   | 8 7 |
| 3 1 | + | 1 5 | = | 4 6 |

13
Matrices
Product:  C n×p = A n×m · B m×p

A and B must have compatible dimensions.

In general  A n×n · B n×n ≠ B n×n · A n×n

cij = Σ_{k=1..m} aik·bkj

Examples:
| 2 5 |   | 6 2 |   | 17 29 |        | 6 2 |   | 2 5 |   | 18 32 |
| 3 1 | · | 1 5 | = | 19 11 |        | 1 5 | · | 3 1 | = | 17 10 |

14
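The two worked products above, including the fact that they differ, can be checked with NumPy (an assumption):

```python
import numpy as np

A = np.array([[2, 5],
              [3, 1]])
B = np.array([[6, 2],
              [1, 5]])

AB = A @ B          # the slide's first example
BA = B @ A          # the second example: a different result
```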
Matrices
Transpose:  C m×n = (A n×m)^T ,  cij = aji

(A + B)^T = A^T + B^T
(A·B)^T = B^T · A^T

If A^T = A, A is symmetric.

Examples:
| 6 2 |^T   | 6 1 |        | 6 2 |^T
| 1 5 |   = | 2 5 |        | 1 5 |   = | 6 1 3 |
                           | 3 8 |     | 2 5 8 |

15
Matrices
Determinant:  A must be square.

    | a11 a12 |
det | a21 a22 | = a11·a22 − a21·a12

    | a11 a12 a13 |
det | a21 a22 a23 | = a11 · | a22 a23 | − a12 · | a21 a23 | + a13 · | a21 a22 |
    | a31 a32 a33 |         | a32 a33 |         | a31 a33 |         | a31 a32 |

              | 2 5 |
Example:  det | 3 1 | = 2 − 15 = −13

16
Matrices
Inverse:  A must be square.

A n×n · A⁻¹ n×n = A⁻¹ n×n · A n×n = I

| a11 a12 |⁻¹          1          |  a22  −a12 |
| a21 a22 |   = --------------- · | −a21   a11 |
                a11·a22 − a21·a12

          | 6 2 |⁻¹    1   |  5  −2 |
Example:  | 1 5 |   = --- · | −1   6 |
                      28

         | 6 2 |⁻¹   | 6 2 |    1   |  5  −2 |   | 6 2 |    1   | 28  0 |   | 1 0 |
Check:   | 1 5 |   · | 1 5 | = --- · | −1   6 | · | 1 5 | = --- · |  0 28 | = | 0 1 |
                               28                          28

17
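The determinant and inverse of the slide's example can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[6.0, 2.0],
              [1.0, 5.0]])

detA = np.linalg.det(A)            # 6*5 - 1*2 = 28
Ainv = np.linalg.inv(A)            # (1/28) * [[5, -2], [-1, 6]]
```

Multiplying A by Ainv should recover the identity, as in the check above.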
2D Geometrical
Transformations
2D Translation

19
2D Translation Equation

P = (x, y)
t = (tx, ty)

P' = (x + tx, y + ty) = P + t

20
2D Translation using Matrices

P = (x, y),  t = (tx, ty)

     | x + tx |   | 1 0 tx |   | x |
P' = | y + ty | = | 0 1 ty | · | y |
                              | 1 |

21
Homogeneous Coordinates
Multiply the coordinates by a non-zero
scalar and add an extra coordinate equal to
that scalar. For example,

(x, y) → (x·z, y·z, z),  z ≠ 0
(x, y, z) → (x·w, y·w, z·w, w),  w ≠ 0
NOTE: If the scalar is 1, there is no need
for the multiplication!

22
Back to Cartesian Coordinates:

Divide by the last coordinate and eliminate it. For example,

(x, y, z), z ≠ 0  →  (x/z, y/z)
(x, y, z, w), w ≠ 0  →  (x/w, y/w, z/w)

23
2D Translation using Homogeneous Coordinates

P = (x, y) → (x, y, 1)
t = (tx, ty) → (tx, ty, 1)

     | x + tx |   | 1 0 tx |   | x |
P' = | y + ty | = | 0 1 ty | · | y |
     |   1    |   | 0 0 1  |   | 1 |

P' = T · P
24
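The homogeneous translation above as a sketch in NumPy (library choice and the specific point and offset are assumptions):

```python
import numpy as np

tx, ty = 5.0, -1.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

P = np.array([2.0, 3.0, 1.0])      # (x, y) -> (x, y, 1)
P_prime = T @ P                    # (x + tx, y + ty, 1)
```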
Scaling

25
Scaling Equation
P = (x, y) → (x, y, 1)

P' = (sx·x, sy·y) → (sx·x, sy·y, 1)

     | sx·x |   | sx  0  0 |   | x |
P' = | sy·y | = |  0 sy  0 | · | y |
     |  1   |   |  0  0  1 |   | 1 |

P' = S · P
26
Scaling & Translating

P' = S·P
P'' = T·P' = T·(S·P) = (T·S)·P

27
Scaling & Translating
P'' = T·P' = T·(S·P) = (T·S)·P

              | 1 0 tx |   | sx  0  0 |   | x |
P'' = T·S·P = | 0 1 ty | · |  0 sy  0 | · | y | =
              | 0 0 1  |   |  0  0  1 |   | 1 |

   | sx  0 tx |   | x |   | sx·x + tx |
 = |  0 sy ty | · | y | = | sy·y + ty |
   |  0  0  1 |   | 1 |   |     1     |
28
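Composing the two transforms in each order shows that they differ, matching the derivation above; a sketch assuming NumPy and made-up scale/translation values:

```python
import numpy as np

sx, sy = 2.0, 3.0
tx, ty = 1.0, -1.0

S = np.diag([sx, sy, 1.0])
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

P = np.array([4.0, 5.0, 1.0])
scale_then_translate = (T @ S) @ P     # (sx*x + tx,    sy*y + ty,    1)
translate_then_scale = (S @ T) @ P     # (sx*x + sx*tx, sy*y + sy*ty, 1)
```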
Translating & Scaling ≠ Scaling & Translating

P'' = S·P' = S·(T·P) = (S·T)·P

              | sx  0  0 |   | 1 0 tx |   | x |
P'' = S·T·P = |  0 sy  0 | · | 0 1 ty | · | y | =
              |  0  0  1 |   | 0 0 1  |   | 1 |

   | sx  0 sx·tx |   | x |   | sx·x + sx·tx |
 = |  0 sy sy·ty | · | y | = | sy·y + sy·ty |
   |  0  0    1  |   | 1 |   |      1       |
29
Rotation

30
Rotation Equations
Counter-clockwise rotation by an angle θ:

| x' |   | cos θ  −sin θ |   | x |
| y' | = | sin θ   cos θ | · | y |

P' = R·P
31
Degrees of Freedom
| x' |   | cos θ  −sin θ |   | x |
| y' | = | sin θ   cos θ | · | y |

R is 2×2 → 4 elements

BUT! There is only 1 degree of freedom: θ

The 4 elements must satisfy the following constraints:

R·R^T = R^T·R = I,  i.e. R is an orthogonal matrix

32
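The orthogonality constraint and its length-preserving consequence can be checked for any angle; a sketch assuming NumPy, with an arbitrary θ:

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

orthogonal = np.allclose(R @ R.T, np.eye(2))      # the constraint R R^T = I

x = np.array([3.0, 4.0])
length_preserved = np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x))
```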
Orthogonal matrix
For a square matrix A,
A is an orthogonal matrix
  iff A·A^T = A^T·A = I
  iff the columns of A are an orthonormal basis
  iff the rows of A are an orthonormal basis

An orthogonal matrix preserves length:

||Ax|| = ||x||
33
Scaling, Translating & Rotating

Order matters!

P' = S·P
P'' = T·P' = (T·S)·P
P''' = R·P'' = R·(T·S)·P = (R·T·S)·P

R·T·S ≠ R·S·T ≠ T·S·R


34
3D Rotation of Points

Rotation around the coordinate axes, counter-clockwise:

        | 1    0       0   |
Rx(α) = | 0  cos α  −sin α |
        | 0  sin α   cos α |

        |  cos β  0  sin β |
Ry(β) = |    0    1    0   |
        | −sin β  0  cos β |

        | cos γ  −sin γ  0 |
Rz(γ) = | sin γ   cos γ  0 |
        |   0       0    1 |

35
3D Rotation (axis & angle)

n = [ n1 n3 ] , ||n||=1, angle ,
T
n2
n12 n1n2 n1n3 0 n3 n2

R = I cos + I (1 cos ) n1n2 n2 2 n2 n3 + sin n3 0 n1
n1n3 n2 n3 n32 n2 n1 0

36
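The axis-angle (Rodrigues) formula above can be sketched directly; a minimal NumPy version (the helper name is made up). Rotating by 90° about the z axis should reproduce Rz(90°) from the previous slide:

```python
import numpy as np

def axis_angle_rotation(n, theta):
    """R = I cos(theta) + (1 - cos(theta)) n n^T + sin(theta) [n]_x."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                  # the formula assumes ||n|| = 1
    K = np.array([[0.0, -n[2], n[1]],          # skew-symmetric matrix [n]_x
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * K)

Rz90 = axis_angle_rotation([0.0, 0.0, 1.0], np.pi / 2)   # 90 deg about z
```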
3D Translation of Points

Translate by a vector t = (tx, ty, tz)^T:

    | 1 0 0 tx |
T = | 0 1 0 ty |
    | 0 0 1 tz |
    | 0 0 0 1  |
37
Change of basis
Linear transform:  y = A·x  (A is square)

Express x and y in a new basis E:
x' = P·x
y' = P·y

P: change of basis matrix

The same transform represented in E:

y' = P·A·P⁻¹·x'

Matrices A and B are similar if B = P·A·P⁻¹
38
39
A·x = λ·x  ⇒  (λI − A)·x = 0, hence (λI − A) is singular.

The eigenvalues λ of A are the roots of the characteristic equation:

p(λ) = det(λI − A) = 0

If A and B are similar, i.e. B = P·A·P⁻¹, then:
(1) A and B have the same eigenvalues
(2) if x is an eigenvector of A, P·x is an eigenvector of B

40
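Both claims about similar matrices can be checked on a small example; a sketch assuming NumPy (A and P are made up):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = P @ A @ np.linalg.inv(P)               # B is similar to A

eig_A = np.sort(np.linalg.eigvals(A).real)   # eigenvalues of A: 1 and 3
eig_B = np.sort(np.linalg.eigvals(B).real)   # same eigenvalues

x = np.array([1.0, 1.0])                   # eigenvector of A for eigenvalue 3
Px = P @ x                                 # eigenvector of B for eigenvalue 3
```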
Spectral theorem:

If A is symmetric:
(1) all eigenvalues of A are real.
(2) there is an orthonormal basis consisting of eigenvectors of A:

                  | λ1            |
A = U·Λ·U^T = U · |    λ2         | · U^T
                  |       ...     |
                  |            λN |

U is orthogonal.
Columns of U are eigenvectors of A.
41
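The decomposition A = U·Λ·U^T can be verified with a symmetric eigensolver; a sketch assuming NumPy (the example matrix is made up):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])               # symmetric

lam, U = np.linalg.eigh(A)               # real eigenvalues, orthonormal U

reconstructed = U @ np.diag(lam) @ U.T   # A = U Lambda U^T
```

For this particular A the eigenvalues also come out positive, i.e. A happens to be positive definite.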
A is positive (semi)definite:
for any nonzero vector x,

x^T·A·x > (≥) 0

If A is symmetric and positive (semi)definite,
all eigenvalues of A are positive (nonnegative).

42
Rank and Nullspace
A  ·  x  =  b
(m×n) (n×1) = (m×1)

43
Least Squares
A·x = b

More equations than unknowns.
Look for the solution which minimizes ||Ax − b||² = (Ax − b)^T(Ax − b)

Solve  ∂[(Ax − b)^T(Ax − b)] / ∂xi = 0

Same as the solution to  A^T·A·x = A^T·b

LS solution:  x = (A^T·A)⁻¹·A^T·b  when A^T·A is invertible
44
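The normal-equations solution can be compared against a library least-squares solver; a sketch assuming NumPy, with a made-up overdetermined system:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # x = (A^T A)^-1 A^T b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]    # library LS solver
```

In practice np.linalg.lstsq (SVD-based) is preferred over forming A^T·A explicitly, which squares the condition number.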
45
46
Properties of SVD
For the SVD A = U·Σ·V^T:

σi² are the eigenvalues of A^T·A
Columns of U (u1, u2, u3) are eigenvectors of A·A^T
Columns of V (v1, v2, v3) are eigenvectors of A^T·A

||A||_F = sqrt( Σ_{i,j} aij² ) = sqrt( Σ_i σi² )

47
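These properties can be checked on a random matrix; a sketch assuming NumPy (the matrix itself is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

U, sigma, Vt = np.linalg.svd(A)          # singular values in descending order

# sigma_i^2 are the eigenvalues of A^T A (sorted to match)
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]

# Frobenius norm equals sqrt(sum of squared singular values)
fro = np.linalg.norm(A, 'fro')
```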
48
Solving (A^T·A)·x = A^T·b when rank(A) < n:

A⁺ = V·Σ⁺·U^T   (the pseudoinverse of A, where A = U·Σ·V^T)

(Σ⁺)ij = 1/Σii   if i = j and Σii ≠ 0
       = 0       otherwise

x = A⁺·b is the solution with minimum ||x||

49
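A sketch of the pseudoinverse solution on a rank-deficient system, assuming NumPy (np.linalg.pinv computes V·Σ⁺·U^T via the SVD):

```python
import numpy as np

# Rank-deficient A (rank 1 < n = 2): A^T A is singular
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([1.0, 2.0])

x = np.linalg.pinv(A) @ b          # minimum-norm solution among all minimizers
```

Here x = (0.2, 0.4) solves A·x = b exactly, and has smaller norm than other solutions such as (1, 0).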
Least squares solution of homogeneous equation A·x = 0

Minimize ||Ax|| subject to ||x|| = 1

A = U·D·V^T
||U·D·V^T·x|| = ||D·V^T·x||  and  ||x|| = ||V^T·x||

Let y = V^T·x:  minimize ||D·y|| subject to ||y|| = 1

With the diagonal elements of D in descending order:

y = (0, 0, ..., 0, 1)^T  ⇒  x = V·y = last column of V
50
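The "last column of V" recipe can be checked numerically; a sketch assuming NumPy and an arbitrary random system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

U, D, Vt = np.linalg.svd(A)        # singular values in D are descending
x = Vt[-1]                         # last column of V = last row of V^T
```

x is a unit vector, and ||A·x|| equals the smallest singular value, the minimum achievable over all unit vectors.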
Enforce orthonormality constraints on an estimated rotation matrix R':

R' = U·D·V^T
Replace D by I:  R = U·I·V^T = U·V^T   (I is the identity matrix)

51
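A sketch of this projection, assuming NumPy and a made-up noise level on a known rotation:

```python
import numpy as np

theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])

# Hypothetical noisy estimate of the rotation
R_est = R_true + 0.05 * np.random.default_rng(1).standard_normal((3, 3))

U, D, Vt = np.linalg.svd(R_est)
R = U @ Vt                         # replace D by I: R = U I V^T
```

R_est is not orthogonal, but R is, while staying close to the estimate.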
Gauss-Newton iteration
Minimize ||f(x)||, where
f(x) = ( f1(x), f2(x), ..., fm(x) )^T

Approximate f(x) by

f(x) ≈ f(xk) + Jk·δk,   Jk = ∂f/∂x (xk),   δk = x − xk

Minimize ||f(xk) + Jk·δk||:   Jk^T·Jk·δk = −Jk^T·f(xk)

52
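The iteration above can be sketched on a toy residual vector (the problem itself is made up; its minimizer has f(x) = 0); a minimal NumPy version:

```python
import numpy as np

def f(x):   # residuals: distance-from-unit-circle and x1 = x2
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):   # Jacobian of f
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

x = np.array([1.0, 0.5])
for _ in range(10):
    Jk, fk = J(x), f(x)
    # Gauss-Newton step: Jk^T Jk delta_k = -Jk^T f(xk)
    delta = np.linalg.solve(Jk.T @ Jk, -Jk.T @ fk)
    x = x + delta
```

The iterates converge to (1/√2, 1/√2), where both residuals vanish.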
Levenberg Marquardt iteration
Change  Jk^T·Jk·δk = −Jk^T·f(xk)
to      (Jk^T·Jk + λI)·δk = −Jk^T·f(xk)

Avoids a singular Jk^T·Jk

Controls the step size:
  When ||f(x)|| reduces rapidly, decrease λ
  Otherwise increase λ

53
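The damped update and the λ schedule can be sketched on a toy problem (residuals, starting point, and schedule factors are all assumptions):

```python
import numpy as np

def f(x):   # made-up residual vector; its minimizer has f(x) = 0
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def J(x):   # Jacobian of f
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

x = np.array([1.0, 0.5])
lam = 1e-3
for _ in range(50):
    Jk, fk = J(x), f(x)
    # Damped step: (Jk^T Jk + lambda I) delta_k = -Jk^T f(xk)
    delta = np.linalg.solve(Jk.T @ Jk + lam * np.eye(2), -Jk.T @ fk)
    if np.linalg.norm(f(x + delta)) < np.linalg.norm(fk):
        x, lam = x + delta, lam / 10.0     # residual dropped: accept, decrease lambda
    else:
        lam *= 10.0                        # residual grew: reject step, increase lambda
```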
