• Reading Assignments
- M. Petrou and P. Bosdogianni, Image Processing: The Fundamentals, John Wiley, 2000 (pp. 37-44, examples of SVD; hard copy).
- E. Trucco and A. Verri, Introductory Techniques for 3D Computer Vision, Prentice Hall (Appendix 6, hard copy).
- K. Castleman, Digital Image Processing, Prentice Hall (Appendix 3: Mathematical Background, hard copy).
- F. Ham and I. Kostanic, Principles of Neurocomputing for Science and Engineering, Prentice Hall (Appendix A: Mathematical Foundation for Neurocomputing, hard copy).
• Definition
- Any real m×n matrix A can be decomposed as

      A = UDV^T

  where U and V are orthogonal matrices and D is a diagonal matrix containing the singular values σ_1 ≥ σ_2 ≥ ... ≥ 0 of A. The singular values are uniquely determined by A (U and V, in general, are not).
• An example
1 2 1 6 10 6
A = 2 3 2 , then AAT = AT A = 10 17 10
1 2 1 6 10 6
1 28. 86
0. 14
2 =
3 0
The eigenvectors of AAT , AT A are:
The expansion of A is
2
A= Σ
i=1
T
i ui vi
  Important: note that the second eigenvalue is much smaller than the first; if we neglect the corresponding term in the above summation, we can represent A with only a relatively small error:

      A ≈ σ_1 u_1 v_1^T = [1.11 1.87 1.11]
                          [1.87 3.15 1.87]
                          [1.11 1.87 1.11]
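The decomposition and the rank-1 approximation above can be reproduced numerically; a short NumPy sketch (my own illustration, not part of the original notes):

```python
import numpy as np

# The 3x3 matrix from the example above.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 1.0]])

# NumPy returns A = U @ diag(s) @ Vt with singular values in decreasing
# order; s**2 are the eigenvalues of A A^T (28.86, 0.14, 0 here).
U, s, Vt = np.linalg.svd(A)
print(np.round(s**2, 2))

# Keep only the largest singular value: the rank-1 approximation.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print(np.round(A1, 2))   # close to the 1.11 / 1.87 / 3.15 matrix above
```

Dropping σ_2 changes no entry of A by more than about 0.16, which is why the approximation is so close.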
• Computing the Inverse

- If A is a square, nonsingular matrix (σ_i ≠ 0 for all i), its inverse follows directly from the SVD:

      A^{-1} = VD^{-1}U^T

  where D^{-1} = diag(1/σ_1, 1/σ_2, ..., 1/σ_n).
- If A is singular or nearly singular, the small singular values can be zeroed out before inverting; replace D^{-1} with D_0^{-1}, whose diagonal entries are

      (D_0^{-1})_{ii} = 1/σ_i   if σ_i > t
                        0       otherwise

  for a small threshold t.
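In NumPy this thresholded inverse can be sketched as follows; the matrix here is my own near-singular example, not one from the notes:

```python
import numpy as np

# Hypothetical near-singular matrix: the third row is almost a copy
# of the first, so one singular value is tiny.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 1.0001]])

U, s, Vt = np.linalg.svd(A)

t = 1e-3                                # threshold on the singular values
d_inv = np.where(s > t, 1.0 / s, 0.0)   # D_0^{-1}: drop the tiny sigma_i
A_pinv = Vt.T @ np.diag(d_inv) @ U.T    # V D_0^{-1} U^T
```

This is the same idea NumPy's `np.linalg.pinv` implements, with `rcond` playing the role of the threshold t.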
• Linear Systems

- Consider the system of equations Ax = b.

- The ratio given below is related to the condition of A and measures the degree of singularity of A (the larger this value is, the closer A is to being singular):

      σ_1 / σ_n

- The residual of an approximate solution x is r = ||Ax − b||.
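A quick way to see the ratio σ_1/σ_n at work, using two hypothetical matrices chosen only for illustration:

```python
import numpy as np

def cond_ratio(A):
    """Ratio sigma_1 / sigma_n of the largest to smallest singular value."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

well = np.array([[2.0, 0.0], [0.0, 1.0]])     # well-conditioned
near = np.array([[1.0, 1.0], [1.0, 1.0001]])  # rows nearly identical

print(cond_ratio(well))   # small ratio
print(cond_ratio(near))   # huge ratio: A is close to singular
```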
- The vector x* which yields the smallest possible residual is called a least-squares solution (it is an approximate solution).
- The least-squares solution x with the smallest norm ||x|| is unique and is given by

      A^T Ax = A^T b,  or  x = (A^T A)^{-1} A^T b = A^+ b
Example:

      [-1  2]  [x1]   [0]
      [ 2  3]  [x2] = [7]
      [ 2 -1]         [5]
      x = A^+ b = [-0.148  0.180  0.246] [0]   [2.492]
                  [ 0.164  0.189 -0.107] [7] = [0.787]
                                         [5]
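The same numbers can be reproduced with NumPy's built-in pseudo-inverse (a sketch; `np.linalg.lstsq` gives the identical answer for this full-rank A):

```python
import numpy as np

# The overdetermined system from the example: 3 equations, 2 unknowns.
A = np.array([[-1.0,  2.0],
              [ 2.0,  3.0],
              [ 2.0, -1.0]])
b = np.array([0.0, 7.0, 5.0])

# Least-squares solution x = (A^T A)^{-1} A^T b = A^+ b.
x = np.linalg.pinv(A) @ b
print(np.round(x, 3))   # -> [2.492 0.787]
```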
- When A is ill-conditioned, compute the least-squares solution with the thresholded pseudo-inverse:

      x = A^+ b ≈ VD_0^{-1}U^T b

  where the diagonal entries of D_0^{-1} are 1/σ_i if σ_i > t and 0 otherwise.

- The same applies to a square but nearly singular A:

      x = A^{-1} b ≈ VD_0^{-1}U^T b
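A sketch of this truncated solution for a hypothetical, nearly singular 3×3 system (matrix and right-hand side are my own, not from the notes):

```python
import numpy as np

# Nearly singular square system: rows 1 and 3 of A almost coincide.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 1.0001]])
b = np.array([1.0, 2.0, 1.0])

U, s, Vt = np.linalg.svd(A)
t = 1e-3
d_inv = np.where(s > t, 1.0 / s, 0.0)   # zero out sigma_i below threshold
x = Vt.T @ np.diag(d_inv) @ (U.T @ b)   # x ~ V D_0^{-1} U^T b

# The residual stays tiny, while the component of the solution along the
# near-null direction v_3 (which noise would amplify) is suppressed.
print(np.round(x, 3))
```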
• Homogeneous Systems
- The system Ax = 0 has nontrivial solutions only up to an arbitrary scale factor, so we impose the constraint ||x|| = 1 and seek

      min_{||x||=1} ||Ax||
- If σ_n = 0 (A has rank n − 1), the solution is x = a v_n (a is a constant).

- If σ_{n−k+1} = ... = σ_n = 0 (A has rank n − k), any unit-norm vector in span{v_{n−k+1}, ..., v_n} is a solution.
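Numerically, the minimizer is just the row of V^T belonging to the smallest singular value; a small sketch with a hypothetical rank-2 matrix:

```python
import numpy as np

# Homogeneous system Ax = 0: rows 1 and 3 of this matrix are identical,
# so rank(A) = 2 and sigma_3 = 0.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 2.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
x = Vt[-1, :]   # v_n: right singular vector of the smallest sigma

# x is a unit-norm solution of Ax = 0; any scalar multiple a*x works too.
print(np.round(A @ x, 6))
```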