
Theorems

Invertible Matrix Theorem (IMT) The following are equivalent for an m × m square matrix A:

1. A is invertible
2. A is row equivalent to Im
3. The system Ax = 0 has only the trivial solution
4. The system Ax = b has a unique solution ∀ b ∈ Rm
5. Nullity A = 0
6. Rank A = m
7. The columns of A form a basis for Rm
8. det A ≠ 0
9. 0 is not an eigenvalue of A.

Corollaries

1. A is a product of elementary matrices.
2. If A has a left-inverse or a right-inverse, A has an inverse.
3. If A is factored as a product of square matrices A1 · · · Ak, then A is invertible ⇐⇒ Ai is invertible ∀ i ∈ {1, . . . , k}.
4. A is invertible ⇐⇒ Ax = b has a unique solution ∀ b ∈ Rm.

The Rank Theorem The following hold for the matrix Am×n:

1. The row rank and the column rank of a matrix A are equal. This number is called the rank of A.
2. The rank of A is equal to the number of pivot positions in the RREF form of A.
3. rank(A) + nullity(A) = n

Corollaries

1. Am×m is invertible ⇐⇒ rank(A) = m.

Rank Theorem for Linear Transformations Suppose T : V → W is a linear transformation and V is finite-dimensional. Then Rank T + Nullity T = dim V.

Fundamental Theorem of Linear Algebra Every vector space V has a basis. More precisely, if v ∈ V is a non-zero vector, then ∃ a basis B of V such that v ∈ B.

Diagonalization Theorem

1. An n × n matrix A is diagonalizable ⇐⇒ A has n linearly independent eigenvectors.
2. In this case, A = P · D · P⁻¹, where the columns of P are n linearly independent eigenvectors of A, and the diagonal entries of D are the eigenvalues corresponding to these eigenvectors.
3. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then B1 ∪ B2 ∪ B3 ∪ · · · ∪ Bp forms an eigenvector basis for Rn.

Orthogonal Decomposition Theorem Let W be any finite-dimensional subspace of V. Then each vector v ∈ V can be written uniquely in the form v = y + z, y ∈ W, z ∈ W⊥. In fact, if {u1, u2, u3, . . . , up} is an orthogonal basis for W, then y = c1 · u1 + c2 · u2 + c3 · u3 + · · · + cp · up, with cj = ⟨v, uj⟩ / ‖uj‖² for j ∈ {1, 2, 3, . . . , p}, and z = v − y.

The Gram-Schmidt Process Given a basis {x1, x2, x3, . . . , xn} for a subspace W of V, we can generate an orthogonal basis {v1, v2, v3, . . . , vn} for W such that Span {x1, x2, x3, . . . , xk} = Span {v1, v2, v3, . . . , vk} ∀ k ∈ {1, 2, 3, . . . , n}. In fact,
vk = xk − Σ_{j=1}^{k−1} (⟨xk, vj⟩ / ‖vj‖²) · vj
(A numerical sketch of this process follows this section.)

The Cauchy-Schwarz Inequality ∀ u, v ∈ V, |⟨u, v⟩| ≤ ‖u‖ · ‖v‖

Spectral Theorem for Symmetric Matrices An n × n symmetric matrix A has the following properties:

1. The eigenspaces are mutually orthogonal.
2. A has n real eigenvalues, counting algebraic multiplicities.
3. A is orthogonally diagonalizable.
4. The geometric multiplicity of each eigenvalue λ equals its algebraic multiplicity.

Principal Axes Theorem Let A be a symmetric n × n matrix. Then there is an orthogonal change of variable x = P · y that transforms the quadratic form xᵀ · A · x into a quadratic form yᵀ · D · y, which has no cross-product terms (equivalent to D being a diagonal matrix).

SVD of a Matrix Let A be an m × n matrix with rank r. Then A can be factored as a product U · Σ · Vᵀ as follows: Σ is an m × n matrix containing an r × r diagonal matrix D with the r non-zero singular values of A in descending order along the main diagonal, placed in the upper left-hand corner. The remaining entries of Σ are 0. U is an m × m orthogonal matrix, and V is an n × n orthogonal matrix.
Addendum: V has as its columns the orthonormal basis {v1, v2, v3, . . . , vn} of eigenvectors of Aᵀ · A. To obtain U, we extend {A · v1, A · v2, A · v3, . . . , A · vr} to an orthogonal basis of Rm, and then normalize the vectors to obtain an orthonormal basis {u1, u2, u3, . . . , um}; U has the vectors ui as its columns. (A numerical sketch of this construction also follows this section.)

Cayley-Hamilton Theorem Let q denote the characteristic polynomial of the n × n square matrix A. Then q(A) = 0.
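
The Gram-Schmidt formula above is easy to run numerically. Below is a minimal NumPy sketch (the function name gram_schmidt and the test vectors are illustrative, not part of the sheet); it assumes the input columns are linearly independent and uses the standard dot product as the inner product.

```python
import numpy as np

def gram_schmidt(X):
    """Return a matrix whose columns are an orthogonal basis for Col X,
    computed as v_k = x_k - sum_{j<k} (<x_k, v_j> / ||v_j||^2) * v_j."""
    X = np.asarray(X, dtype=float)
    basis = []
    for k in range(X.shape[1]):
        v = X[:, k].copy()
        for u in basis:
            v -= (np.dot(X[:, k], u) / np.dot(u, u)) * u  # subtract projection onto v_j
        basis.append(v)
    return np.column_stack(basis)

X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
V = gram_schmidt(X)
print(np.round(V.T @ V, 12))  # off-diagonal entries are 0: the columns are orthogonal
```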
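The SVD Addendum can be checked the same way. The sketch below assumes rank(A) = n ≤ m, so the normalized vectors A · vi already supply the needed columns of U and the extension step can be skipped; the matrix A is an arbitrary illustrative example.

```python
import numpy as np

# Sketch of the construction in the Addendum, assuming rank(A) = n <= m,
# so that the vectors A*v_i already give the needed columns of U.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

# Orthonormal eigenvectors of A^T A, eigenvalues sorted in descending order.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], eigvecs[:, order]

sigma = np.sqrt(eigvals)   # singular values of A, in descending order
U = (A @ V) / sigma        # u_i = (A * v_i) / sigma_i  (normalized columns)

# Reduced SVD check: A = U * diag(sigma) * V^T
print(np.allclose(A, U @ np.diag(sigma) @ V.T))  # True
```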
Definitions

Vector Space A vector space is a non-empty set of objects V called vectors, together with an associated field F of scalars, with two operations called addition and scalar multiplication. ∀ u, v, w ∈ V and c, d ∈ F:

1. Closure: u + v ∈ V and c · u ∈ V
2. The following hold for addition:
   (a) (u + v) + w = u + (v + w)
   (b) ∃ 0 | u + 0 = 0 + u = u
   (c) ∀ u ∃ e | u + e = 0
   (d) u + v = v + u
3. Additionally:
   (a) c · (u + v) = c · u + c · v
   (b) (c + d) · u = c · u + d · u
   (c) c · (d · u) = (c · d) · u
   (d) 1 · u = u

Subspace A subspace of a vector space V over F is a non-empty vector space W over F | W ⊂ V.

Span For a finite set S = {v1, v2, v3, . . . , vp}, Span S = {c1 · v1 + c2 · v2 + c3 · v3 + · · · + cp · vp : (c1, c2, c3, . . . , cp) ∈ F^p}

Linear Dependence Let (v1, v2, v3, . . . , vp) be a finite list of vectors in a vector space V. Then the list is linearly dependent if ∃ a non-trivial list of scalars (c1, c2, c3, . . . , cp) | c1 · v1 + c2 · v2 + c3 · v3 + · · · + cp · vp = 0.

Direct sum U ⊕ W = V ⇐⇒ each v ∈ V can be uniquely expressed as v = u + w (u ∈ U, w ∈ W).

Null space Nul Am×n = {x : x ∈ Rn, Ax = 0}

Column space Col Am×n = {b : b = Ax, x ∈ Rn}

Row space Row Am×n is the set of all linear combinations of the rows of A.

Linear Transformation A map or a function or a homomorphism T : V → W, from a vector space V to a vector space W, both over the field F, is said to be linear if

1. T(u + v) = T(u) + T(v), ∀ u, v ∈ V
2. T(c · u) = c · T(u), ∀ u ∈ V, c ∈ F

Additional properties:

1. T(0) = 0
2. T preserves linear combinations

Kernel Ker T = {v ∈ V : T(v) = 0}, a subspace of V.

Range Range T = {w ∈ W : T(v) = w for some v ∈ V}

Isomorphism A linear transformation T : V → W is said to be an isomorphism if it is injective and surjective.

Rank of a Linear Transformation If T : V → W is a linear transformation, then Rank T = dim Range T.

Ordered basis An ordered basis for a finite-dimensional vector space V is a finite sequence of vectors which is linearly independent and spans V. In other words, an ordered basis is a basis with the vectors taken in a specified, fixed order.

Coordinates Given an ordered basis B = {u1, u2, u3, . . . , un}, we can express a vector uniquely in the form u = x1 · u1 + x2 · u2 + x3 · u3 + · · · + xn · un. The scalars xi are called the coordinates of u relative to the ordered basis B. A · [v]B = v =⇒ [v]B = A⁻¹ · v, where A = [u1, u2, . . . , un].

Matrix of an LT [T]B→C · [v]B = [T(v)]C

Similarity An n × n matrix B is said to be similar to an n × n matrix A if ∃ an invertible matrix P such that B = P · A · P⁻¹.

Non-singularity A linear transformation T from V to W is said to be non-singular if Nul T = {0}.

Linear Independence A set of vectors S is said to be linearly independent if every finite subset of S is linearly independent.

Span If S is a subset of V, then Span S is the smallest subspace of V that contains S. This coincides with the earlier definition for finite subsets, and holds for infinite subsets.

Basis A subset S of a space V is a basis ⇐⇒ S is linearly independent and Span S = V.

Determinant If A ∈ F^{2×2} where A = [aij], then det : F^{2×2} → F | det A = a11 · a22 − a12 · a21. If A ∈ F^{n×n}, then let Ai,j denote the (n − 1) × (n − 1) matrix obtained from A by omission of the i-th row and j-th column. Then
det A = Σ_{i=1}^{n} (−1)^{i+j} · aij · det Ai,j = Σ_{j=1}^{n} (−1)^{i+j} · aij · det Ai,j
(A recursive sketch of this cofactor expansion follows this section.)

Cofactor For an n × n matrix, we define the cofactor Cij = (−1)^{i+j} · det Ai,j.

Adjoint adj A = Cᵀ.

Eigenvector A non-zero vector x such that A · x = λ · x, for some scalar λ.

Eigenvalue A scalar λ is called an eigenvalue of A if there is a non-trivial solution of A · x = λ · x; such a vector x is called an eigenvector corresponding to λ.

Eigenspace The eigenspace of an eigenvalue λ is defined as the set {v : A · v = λ · v}.

Diagonalizable matrix An n × n matrix A is said to be diagonalizable if A is similar to a diagonal matrix D.

Algebraic multiplicity The algebraic multiplicity of λi is the power of the factor (λ − λi) in the characteristic polynomial of A.

Geometric multiplicity The geometric multiplicity of λi is the dimension of the eigenspace corresponding to λi.

Determinant of a linear operator Let T : V → V be a linear operator where V is a vector space of finite dimension n. Let α be any ordered basis for V, and let A = [T]α. Then det T = det A.

Eigenvector of a linear operator An eigenvector of a linear operator T : V → V is a non-zero vector v such that T(v) = λ · v for some scalar λ. Such a scalar is called an eigenvalue of the operator.

Inner product An inner product on a vector space V is a function that with each pair of vectors u and v in V associates a scalar ⟨u, v⟩ and satisfies the following axioms for all vectors u, v, w ∈ V and all scalars c:

1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨c · u, v⟩ = c · ⟨u, v⟩
4. ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 ⇐⇒ u = 0

A vector space with an inner product is called an inner product space.

Length and distance For any inner product ⟨u, v⟩:

1. ‖u‖ = √⟨u, u⟩
2. ‖c · u‖ = |c| · ‖u‖
3. dist(u, v) = ‖u − v‖
4. ‖u / ‖u‖‖ = 1

Orthogonality Two vectors u and v are said to be orthogonal if ⟨u, v⟩ = 0.

Orthogonal set A set {u1, u2, u3, . . . , up} is orthogonal if ⟨ui, uj⟩ = 0 for i ≠ j.

Orthogonal complement If W is a subspace of V, then a vector v is said to be orthogonal to W if v is orthogonal to every vector in W. The set of all the vectors orthogonal to W is called the orthogonal complement of W, written as W⊥.

Orthogonal basis An orthogonal basis for a subspace W is a basis which is also an orthogonal set.

Orthogonal Matrix A square matrix P is said to be orthogonal if its columns are orthonormal.

Orthogonally Diagonalizable Matrix A square matrix A is orthogonally diagonalizable if there is an orthogonal matrix P and a diagonal matrix D such that A = P · D · P⁻¹ = P · D · Pᵀ.

Spectrum The set of eigenvalues of A is called the spectrum of A.

Quadratic form A quadratic form on Rn is a function Q : Rn → R, whose value at x is given by an expression of the form Q(x) = xᵀ · A · x, where A is a symmetric n × n matrix. The matrix A is the matrix of the quadratic form.

Classification of Quadratic forms A quadratic form Q is said to be:

Positive definite if Q(x) > 0 ∀ x ≠ 0
Negative definite if Q(x) < 0 ∀ x ≠ 0
Indefinite if Q(x) assumes both positive and negative values.
Positive semidefinite if Q(x) ≥ 0 ∀ x
Negative semidefinite if Q(x) ≤ 0 ∀ x

Singular values For an m × n matrix A, the singular values of A are the square roots of the eigenvalues of Aᵀ · A, denoted by σ1, σ2, σ3, . . . , σn, arranged in descending order. Note that σi = ‖A · vi‖.

Polynomials applied to matrices If p(t) = Σ_{i=0}^{m} (ai · t^i), then for the n × n matrix A, p(A) = Σ_{i=0}^{m} (ai · A^i). pq(A) = p(A) · q(A) = q(A) · p(A) = qp(A)

Minimal polynomial Given an n × n square matrix A, the minimal polynomial of A is the (non-zero) monic polynomial of minimum degree p such that p(A) = 0.
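
The cofactor expansion in the Determinant definition translates directly into a recursive routine. The following NumPy sketch (the function name det_cofactor is illustrative) expands along the first row; it is exponential-time and meant only to mirror the formula, not to replace np.linalg.det.

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row:
    det A = sum_j (-1)^(1+j) * a_{1j} * det A_{1,j}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # A_{1,j}
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]
print(det_cofactor(A), np.linalg.det(np.array(A)))  # both ~ 22.0
```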
Propositions

1. Given any m × n matrix A, there exists an RREF matrix which is row-equivalent to A.

2. Row equivalence is an equivalence relation on Rm×n.

3. If A is a square matrix, it is row-equivalent to I ⇐⇒ Ax = 0 has only the trivial solution.

4. Consider the RREF form of the augmented matrix equivalent to a homogeneous system, say R. The system is consistent ⇐⇒ the rightmost column is not a pivot column.

5. If e is an elementary row operation and E = e(Im), then e(A) = E · A, where A is any m × n matrix.

6. Every elementary matrix is invertible.

7. Let V be a vector space over the field F. Then
   (a) The zero vector is unique.
   (b) The additive inverse of any vector u is unique and represented by −u.
   (c) 0 · u = 0, ∀ u ∈ V
   (d) c · 0 = 0, ∀ c ∈ F
   (e) −u = (−1) · u, ∀ u ∈ V

8. A subset W is a subspace of V ⇐⇒
   (a) 0 ∈ W
   (b) ∀ u, v ∈ W, u + v ∈ W
   (c) ∀ u ∈ W and c ∈ F, c · u ∈ W

9. A non-empty subset W of V is a subspace ⇐⇒ ∀ u, v ∈ W and c ∈ F, c · u + v ∈ W.

10. If S = {v1, v2, v3, . . . , vp} ⊆ V, then Span S is a subspace of V.

11. B = {v1, v2, v3, . . . , vp} is a basis of V ⇐⇒ v is uniquely representable as a linear combination of B, ∀ v ∈ V.

12. Suppose {v1, v2, v3, . . . , vm} ⊆ V is a linearly independent set. Suppose that {w1, w2, w3, . . . , wn} spans V. Then
    (a) m ≤ n
    (b) {v1, v2, v3, . . . , vm, wm+1, wm+2, wm+3, . . . , wn} spans V, for some rearrangement of the w's.

13. If V is a finite-dimensional vector space, then any two bases of V have the same number of elements.

14. If S = {v1, v2, v3, . . . , vm} ⊆ V is a linearly independent set, and ∃ a vector v ∉ Span S, then S ∪ {v} is also a linearly independent set.

15. Any linearly independent subset of a finite-dimensional vector space can be expanded to a basis.

16. Any finite spanning set in a non-zero vector space can be contracted to a basis.

17. Let V be a finite-dimensional vector space with dimension n. Then
    (a) Any subset of V which contains more than n elements is linearly dependent.
    (b) No subset of V which contains < n elements can span V.

18. If W is a proper subspace of a finite-dimensional vector space V, then 0 < dim W < dim V.

19. If U and W are subspaces of the vector space V, then V = U ⊕ W ⇐⇒ V = U + W and U ∩ W = {0}.

20. If U and W are finite-dimensional subspaces of the vector space V, then dim(U + W) = dim U + dim W − dim(U ∩ W).

21. Nul Am×n is a subspace of Rn.

22. Col Am×n is a subspace of Rm.

23. The pivot columns of a matrix A form a basis for Col A.

24. Row Am×n is a subspace of Rn.

25. Row equivalent matrices have the same row space.

26. (a) A linear transformation T : V → W is completely determined by its action on a basis of V.
    (b) Conversely, given a list of n vectors w1, w2, . . . , wn (not necessarily distinct) in the codomain space W and a basis {v1, v2, . . . , vn} of V, ∃ a unique linear transformation T : V → W | T(vi) = wi, ∀ i ∈ {1, . . . , n}.

27. Let V and W be finite-dimensional spaces.
    (a) An isomorphism T : V → W takes any arbitrary basis of V to a basis of W.
    (b) Conversely, if a linear transformation T : V → W takes some basis of V to a basis of W, then it is an isomorphism.

28. If V and W are two vector spaces over the same field F, then V ≅ W ⇐⇒ dim V = dim W.

29. Let B = {u1, u2, u3, . . . , un} and C = {v1, v2, v3, . . . , vn} be two ordered bases for a vector space V. Then ∃ a matrix P such that [x]C = P · [x]B.

30. Similarity of matrices is an equivalence relation on F^{n×n}.

31. Suppose A and B are matrices of the linear operator T with respect to the ordered bases α and β respectively. Then A and B are similar. In fact, B = P · A · P⁻¹, where P is the α → β change of basis matrix.

32. (a) The set W^V of all functions from V to W is a vector space over F.
    (b) The set L(V, W) of all linear transformations from V to W is a subspace of W^V.

33. Let V, W and Z be vector spaces over a field F. Let T be a linear transformation from V to W, and let U be a linear transformation from W to Z. Then the composed function UT from V to Z defined by (UT)(v) = U(T(v)) is a linear transformation from V to Z.

34. Let V be an n-dimensional vector space over F, and W be an m-dimensional vector space over F. Then there is an isomorphism between L(V, W) and F^{m×n}.

35. If dim V = n and dim W = m, then dim L(V, W) = m · n.

36. If T is an invertible linear transformation, then T⁻¹ is also a linear transformation.

37. Let T : V → W be a linear transformation. Then T is non-singular ⇐⇒ T carries any linearly independent subset of V into a linearly independent subset of W.

38. Let V and W be finite-dimensional vector spaces with dim V = dim W. Let T : V → W be a linear transformation. Then the following are equivalent:
    (a) T is invertible
    (b) T is non-singular
    (c) T is surjective
    (d) T carries every basis of V into a basis for W.

39. (a) For a fixed ordered basis β for a vector space V, the mapping φ : L(V, V) → F^{n×n} given by φ(T) = [T]β is an isomorphism which also preserves products.
    (b) Let α, β, and γ be fixed ordered bases of V, W, and Z respectively. Then for U ∈ L(V, W) and T ∈ L(W, Z), [UT]α→γ = [U]α→β · [T]β→γ.

40. The following hold for the determinant of a square matrix A:
    (a) If the matrix A′ is obtained from A by interchanging two rows, then det A′ = − det A.
    (b) If the matrix A′ is obtained from A by multiplying some row by λ ∈ F, then det A′ = λ · det A.
    (c) If the matrix A′ is obtained from A by adding a multiple of one row to another, then det A′ = det A.

41. If an n × n matrix A is upper-triangular, then det A = a11 · a22 · a33 · · · ann.

42. An n × n matrix A is invertible ⇐⇒ det A ≠ 0.

43. For A, B ∈ F^{n×n}, det(A · B) = (det A) · (det B).

44. ∀ A ∈ F^{n×n}, det A = det Aᵀ.

45. Let A be an invertible n × n matrix. For any vector b ∈ Rn, define Ai(b) to be the matrix obtained by replacing the i-th column of A with b. Then the unique solution to the system A · x = b is given by xi = det Ai(b) / det A, for i ∈ {1, 2, 3, . . . , n}. (A numerical sketch of this rule appears at the end of this sheet, after the Legend.)

46. Let A be an invertible n × n matrix. Then A⁻¹ = adj A / det A.

47. (a) If A is a 2 × 2 matrix, the area of the parallelogram determined by the columns of A is |det A|.
    (b) If A is a 3 × 3 matrix, the volume of the parallelepiped determined by the columns of A is |det A|.

48. (a) Let T : R2 → R2 be a linear transformation determined by the matrix A. If S is a parallelogram in R2, then area of T(S) = |det A| · area of S.
    (b) Let T : R3 → R3 be a linear transformation determined by the matrix A. If S is a parallelepiped in R3, then volume of T(S) = |det A| · volume of S.

49. The conclusions of proposition 48 hold whenever S is a region in R2 with finite area or a region in R3 with finite volume.

50. If v1, v2, v3, . . . , vp are eigenvectors corresponding to distinct eigenvalues λ1, λ2, λ3, . . . , λp, then the set {v1, v2, v3, . . . , vp} is linearly independent.

51. A scalar λ is an eigenvalue of an n × n matrix A ⇐⇒ det(A − λ · In) = 0.

52. If the n × n matrices A and B are similar, then they have the same characteristic polynomial.
53. An n × n matrix A with n distinct real eigenvalues is diagonalizable.

54. Let A be an n × n matrix with n real eigenvalues, of which only λ1, λ2, λ3, . . . , λp are distinct, p < n. Then the following hold:
    (a) For 1 ≤ k ≤ p, the geometric multiplicity of λk ≤ the algebraic multiplicity of λk.
    (b) A is diagonalizable ⇐⇒ the sum of the dimensions of the distinct eigenspaces is n ⇐⇒ the geometric multiplicity for each λk equals its algebraic multiplicity.

55. Suppose A is a 2 × 2 matrix with a complex eigenvalue λ = a − b · i, b ≠ 0, and associated eigenvector v in C2. Then A = P · B · P⁻¹, where P = [ ℜ(v) ℑ(v) ] and B = [ a −b ; b a ].
    Addendum: left multiplication by B corresponds to a rotation followed by a scaling. The rotation is through the angle φ = arctan(b/a), the argument of λ. The scaling is by the factor |λ| = √(a² + b²).

56. An orthogonal set of non-zero vectors is linearly independent.

57. (a) v belongs to W⊥ ⇐⇒ v is orthogonal to every vector in a spanning set for W.
    (b) W⊥ is a subspace of V, and W⊥ ∩ W = {0}.

58. Let {u1, u2, u3, . . . , up} be an orthogonal basis for a subspace W. Then if y = c1 · u1 + c2 · u2 + c3 · u3 + · · · + cp · up is any vector in W, we have cj = ⟨y, uj⟩ / ‖uj‖².

59. u and v are orthogonal to each other ⇐⇒ ‖u + v‖² = ‖u‖² + ‖v‖².

60. Let W be any finite-dimensional subspace of V, y be any vector in V, and ŷ be the orthogonal projection of y on W. Then ‖y − ŷ‖ < ‖y − v‖ for all v ∈ W distinct from ŷ. Corollary: ‖ŷ‖ ≤ ‖y‖.

61. ∀ u, v ∈ V, ‖u + v‖ ≤ ‖u‖ + ‖v‖.

62. If A is a symmetric matrix, then two eigenvectors from different eigenspaces are orthogonal.

63. An orthogonal matrix is necessarily invertible, and P⁻¹ = Pᵀ.

64. If an n × n matrix is orthogonally diagonalizable, then it is symmetric.

65. Let A be a symmetric n × n matrix. Then the quadratic form Q(x) = xᵀ · A · x is:
    (a) Positive definite ⇐⇒ all the eigenvalues of A are positive.
    (b) Negative definite ⇐⇒ all the eigenvalues of A are negative.
    (c) Indefinite ⇐⇒ A has both positive and negative eigenvalues.

66. Let m = min { xᵀ · A · x : ‖x‖ = 1 } and M = max { xᵀ · A · x : ‖x‖ = 1 }, where A is a symmetric matrix. Then M is the greatest eigenvalue of A, and m is the least eigenvalue of A; xᵀ · A · x ∈ [m, M] for ‖x‖ = 1. (A numerical sketch of propositions 65 and 66 appears at the end of this sheet, after the Legend.)

67. Suppose {v1, v2, v3, . . . , vn} is an orthonormal basis for Rn consisting of the eigenvectors of Aᵀ · A, with corresponding eigenvalues arranged such that λ1 ≥ λ2 ≥ λ3 ≥ · · · ≥ λn. Suppose that A has r non-zero singular values. Then {A · v1, A · v2, A · v3, . . . , A · vr} is an orthogonal basis for Col A, and Rank A = r.

68. Suppose q ∈ F[t]. Then q(A) = 0 ⇐⇒ the minimal polynomial of A divides q.

69. The roots of the minimal polynomial of A are precisely the eigenvalues of A.

70. The coefficient of (−λ)^{n−1} in the characteristic polynomial is equal to trace A, the sum of the diagonal elements of A. In case A is a matrix over the complex field, trace A = Σ_{i=1}^{n} λi.

71. The constant term in the characteristic polynomial is equal to det A. In case A is a matrix over the complex field, det A = Π_{i=1}^{n} λi.

Legend

1. ⇐⇒ if and only if
2. ∈ belongs to
3. ∀ for all
4. ∃ there exists
5. ⊂, ⊆ is a subset of (I've used the proper subset notation incorrectly in places; too lazy to fix)
6. | such that
7. ≅ is isomorphic to
8. ℜ(z) = Re(z)
9. ℑ(z) = Im(z)
10. ⟨u, v⟩ inner product of u and v
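
As referenced in proposition 45, here is a small NumPy sketch of Cramer's rule; the matrix and right-hand side are arbitrary illustrative values, and proposition 46 could be checked the same way by assembling the cofactor matrix.

```python
import numpy as np

# Cramer's rule (proposition 45): x_i = det(A_i(b)) / det(A),
# where A_i(b) replaces the i-th column of A with b.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

det_A = np.linalg.det(A)
x = np.empty(3)
for i in range(3):
    A_i = A.copy()
    A_i[:, i] = b                 # replace the i-th column with b
    x[i] = np.linalg.det(A_i) / det_A

print(np.allclose(A @ x, b))      # True: x solves A x = b
```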
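And a sketch of propositions 65 and 66, under the assumption that A is symmetric: the eigenvalue signs classify the quadratic form (the semidefinite cases are omitted for brevity), and sampled unit vectors show xᵀ · A · x staying within [m, M].

```python
import numpy as np

# Propositions 65-66: eigenvalue signs of a symmetric A classify Q(x) = x^T A x,
# and on the unit sphere Q takes values in [lambda_min, lambda_max].
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # symmetric

eigvals = np.linalg.eigvalsh(A)
if np.all(eigvals > 0):
    print("positive definite")
elif np.all(eigvals < 0):
    print("negative definite")
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    print("indefinite")

# Sample unit vectors and check m <= x^T A x <= M.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Q = np.einsum('ij,jk,ik->i', X, A, X)
print(eigvals.min() <= Q.min(), Q.max() <= eigvals.max())   # True True
```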
