9.1
Introduction
One of the most important results of this chapter is how to dissect the action of a linear transformation from Rn to Rn, given by x ↦ Ax, where A is an n × n matrix, into elements that are easily visualized. The main concepts, eigenvectors and eigenvalues, have many practical applications and also play a key role in pure mathematics.
A given transformation x ↦ Ax may transform vectors in a variety of ways, but it often happens that there are special vectors on which the action of the linear transformation is very simple.
Example: Consider the matrix A = [3 −2; 1 0] and the vectors u = (−1, 1)′ and v = (2, 1)′. Then

T(u) = Au = (−5, −1)′, which is not a multiple of u, and T(v) = Av = (4, 2)′ = 2v,

that is, in contrast to u, A dilates v by a factor of 2.
[Figure: effect of multiplication by A, showing Au and Av.]
This example illustrates the concept of an eigenvector. We say that v is an eigenvector of an n × n matrix A if it is a non-zero vector such that Av = λv for some scalar λ. Such a scalar λ is called an eigenvalue of A if there is a non-trivial solution x of Ax = λx, and such an x is called an eigenvector corresponding to λ. Note that λ can be either real or complex. It is easy to verify whether a given vector is an eigenvector of A or not: we only have to compute Av and check whether it is a multiple of v. To determine all possible eigenvectors of a given matrix A, we have to solve the following equation:

Ax = λx
Ax = λIn x
Ax − λIn x = 0
(A − λIn)x = 0.
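This membership test is easy to carry out numerically; a minimal sketch with NumPy, assuming the matrix and vectors of the example above:

```python
import numpy as np

# Assumed running example: A = [3 -2; 1 0], v = (2,1)', u = (-1,1)'.
A = np.array([[3.0, -2.0],
              [1.0, 0.0]])
v = np.array([2.0, 1.0])
u = np.array([-1.0, 1.0])

def is_eigenvector(A, x, tol=1e-10):
    """x (nonzero) is an eigenvector of A iff Ax is a multiple of x."""
    Ax = A @ x
    lam = (x @ Ax) / (x @ x)   # best candidate for the scaling factor
    return bool(np.allclose(Ax, lam * x, atol=tol))

print(is_eigenvector(A, v))  # True:  Av = 2v
print(is_eigenvector(A, u))  # False: Au is not a multiple of u
```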
Example: Consider again the matrix A = [3 −2; 1 0]. We know that λ = 2 is an eigenvalue of A and that the corresponding eigenspace is {w ∈ R² : w = α(2, 1)′, α ∈ R}. We also know that u = (−1, 1)′ (or any multiple of it) is not an eigenvector of A.
[Figure: eigenspace of A corresponding to λ1 = 2.]
Example: Let A = [4 −1 6; 2 1 6; 2 −1 8]. Verify that λ1 = 2 is an eigenvalue of A and find a basis for the corresponding eigenspace.
If λ1 = 2 is an eigenvalue, then (A − λ1 I3)x = 0, that is, the system

[2 −1 6; 2 −1 6; 2 −1 6] (x1, x2, x3)′ = (0, 0, 0)′,

must have a non-trivial solution, which implies that rank(A − 2 I3) < 3, which is obviously the case. Therefore, λ1 = 2 is an eigenvalue.
On the other hand, the solution to this system is given by 2x1 − x2 + 6x3 = 0, which yields

{x = (x1, x2, x3)′ ∈ R³ : x = ((1/2)x2 − 3x3, x2, x3)′, x2, x3 ∈ R}
= {x ∈ R³ : x = x2 (1/2, 1, 0)′ + x3 (−3, 0, 1)′, x2, x3 ∈ R}.

Thus, a basis for the eigenspace corresponding to λ1 = 2 is {(1/2, 1, 0)′, (−3, 0, 1)′}.
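Both computations in this example, the rank test and the eigenspace basis, can be checked numerically; a sketch assuming the same matrix A:

```python
import numpy as np

# Matrix from the example (A = [4 -1 6; 2 1 6; 2 -1 8]), lam = 2.
A = np.array([[4.0, -1.0, 6.0],
              [2.0, 1.0, 6.0],
              [2.0, -1.0, 8.0]])
M = A - 2.0 * np.eye(3)

rank = int(np.linalg.matrix_rank(M))   # rank < 3 => 2 is an eigenvalue

# The eigenspace is the null space of M: it is spanned by the rows of Vt
# whose singular values are (numerically) zero.
_, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-10]

print(rank)            # 1
print(basis.shape[0])  # 2 basis vectors, as found above
```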
The following section shows how to compute all the eigenvalues of a given square
matrix.
9.2
The characteristic equation
We have just seen that the eigenvalues are those scalars λ that make the homogeneous system (A − λIn)x = 0 have a non-trivial solution. Recall that this implies that the matrix A − λIn is singular and, consequently, its determinant must equal 0. The resulting equation det(A − λIn) = 0 is known as the characteristic equation of A, where det(A − λIn) is a polynomial (in λ) of degree n known as the characteristic polynomial of A. Then the roots of the characteristic equation are the eigenvalues of A, and their multiplicity as roots of this equation is called the algebraic multiplicity of the eigenvalue.
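As an illustration, NumPy's np.poly computes exactly these characteristic-polynomial coefficients (here for the 2 × 2 matrix A = [3 −2; 1 0] used earlier):

```python
import numpy as np

# Characteristic polynomial of the earlier 2x2 example A = [3 -2; 1 0].
A = np.array([[3.0, -2.0],
              [1.0, 0.0]])
coeffs = np.poly(A)            # coefficients of det(lam*I - A), leading first
eigs = sorted(np.roots(coeffs).real)

print(coeffs)  # lam^2 - 3 lam + 2
print(eigs)    # approximately [1.0, 2.0]
```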
Example: Let A = [2 3 −5; 1 4 −5; 1 3 −4]. Find its eigenvalues and eigenspaces.
The characteristic equation is

det(A − λI3) = |2−λ 3 −5; 1 4−λ −5; 1 3 −4−λ| = −λ(λ − 1)² = 0,

so the eigenvalues are λ1 = 0, with algebraic multiplicity 1, and λ2 = 1, with algebraic multiplicity 2. For λ1 = 0 we compute the nullspace of A, whose Gauss reduction is

[2 3 −5; 1 4 −5; 1 3 −4] → [1 0 −1; 0 1 −1; 0 0 0],

and the eigenspace is given by

{x = (x1, x2, x3)′ ∈ R³ : x = x3 (1, 1, 1)′, x3 ∈ R},

which is a line through the origin. Finally, to obtain the eigenspace corresponding to λ2 = 1 we compute the nullspace of the matrix

A − 1·I3 = [2−1 3 −5; 1 4−1 −5; 1 3 −4−1] = [1 3 −5; 1 3 −5; 1 3 −5],

which is

{x = (x1, x2, x3)′ ∈ R³ : x = x2 (−3, 1, 0)′ + x3 (5, 0, 1)′, x2, x3 ∈ R}.
Note that the three eigenvectors found in the previous example are linearly independent, and so form a basis for R3 .
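This independence check amounts to a determinant test; a sketch using the 3 × 3 matrix from the earlier eigenspace example (its third eigenvalue, 9, and eigenvector (1, 1, 1)′ are computed here, not taken from the text):

```python
import numpy as np

A = np.array([[4.0, -1.0, 6.0],
              [2.0, 1.0, 6.0],
              [2.0, -1.0, 8.0]])
lams = np.array([2.0, 2.0, 9.0])
V = np.array([[0.5, -3.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # eigenvectors as columns

print(np.allclose(A @ V, V * lams))   # each column satisfies A v = lam v
print(abs(np.linalg.det(V)) > 1e-12)  # nonzero det => a basis of R^3
```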
Example: Let A = [2 −1 0; 1 0 0; 0 0 3]. We compute its eigenvalues from the characteristic equation:

det(A − λI3) = |2−λ −1 0; 1 −λ 0; 0 0 3−λ| = (3 − λ)(λ − 1)² = 0.
For λ = 3, we compute the nullspace of

A − 3I3 = [−1 −1 0; 1 −3 0; 0 0 0] → (Gauss) [−1 −1 0; 0 −4 0; 0 0 0],

which is

{x = (x1, x2, x3)′ ∈ R³ : x1 = x2 = 0, x3 ∈ R},

that is, a line through the origin, with basis Bλ=3 = {e3 = (0, 0, 1)′}.
For λ = 1, we now compute the nullspace of

A − I3 = [1 −1 0; 1 −1 0; 0 0 2] → (Gauss) [1 −1 0; 0 0 2; 0 0 0],

which is

{x = (x1, x2, x3)′ ∈ R³ : x1 = x2, x3 = 0},

and has a basis given by Bλ=1 = {(1, 1, 0)′}, which is also a line through the origin.
In summary, both eigenspaces are of dimension 1, and therefore we can only find
two linearly independent eigenvectors, so that R3 does not have a basis consisting of
linearly independent eigenvectors of the matrix A.
Example: For a triangular matrix, the characteristic equation is particularly simple, since the determinant of a triangular matrix is the product of its diagonal entries:

det [1−λ 4 2; 0 2−λ 5; 0 0 3−λ] = (1 − λ)(2 − λ)(3 − λ) = 0,

so the eigenvalues are the diagonal entries 1, 2 and 3.
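The same fact is easy to confirm numerically; in the sketch below the strictly upper-triangular entries are illustrative assumptions, since they do not affect the eigenvalues:

```python
import numpy as np

# Upper-triangular matrix; the off-diagonal entries (4, 2, 5) are
# assumed for illustration -- the eigenvalues depend only on the diagonal.
A = np.array([[1.0, 4.0, 2.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 3.0]])
print(sorted(np.linalg.eigvals(A).real))  # the diagonal entries 1, 2, 3
```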
We have seen that there are cases in which the number of linearly independent eigenvectors does not coincide with the algebraic multiplicity of the corresponding eigenvalue. We will refer to the number of linearly independent eigenvectors associated with an eigenvalue as the geometric multiplicity of that eigenvalue. If the geometric multiplicity of some eigenvalue is less than its algebraic multiplicity, we say that the matrix is defective.
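Both multiplicities can be estimated numerically: the algebraic one by counting (approximately) repeated roots of the characteristic polynomial, the geometric one from the rank of A − λI. A rough sketch, using as an assumed example the defective matrix A = [2 −1 0; 1 0 0; 0 0 3] studied later in the Jordan-form example:

```python
import numpy as np

def multiplicities(A, lam, tol=1e-5):
    """(algebraic, geometric) multiplicity of lam as an eigenvalue of A."""
    n = A.shape[0]
    alg = int(np.sum(np.abs(np.linalg.eigvals(A) - lam) < tol))
    geo = n - int(np.linalg.matrix_rank(A - lam * np.eye(n)))
    return alg, geo

# Assumed example matrix (defective at lam = 1):
A = np.array([[2.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
print(multiplicities(A, 1.0))  # (2, 1): algebraic 2, geometric 1
print(multiplicities(A, 3.0))  # (1, 1)
```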
9.3
Diagonalization
Example: Consider again the matrix A = [3 −2; 1 0]. We know that one eigenvalue is λ1 = 2, with eigenvector v1 = (2, 1)′, and it can be easily verified that λ2 = 1 is the other eigenvalue, with eigenvector v2 = (1, 1)′. We can also prove easily that B = {(2, 1)′, (1, 1)′} is a basis for R². Then, every vector u ∈ R² can be written as a linear combination of v1 and v2 as

c1 v1 + c2 v2 = (v1 | v2) (c1, c2)′ = u,  c1, c2 ∈ R,

which is a linear system in the unknowns c1 and c2,
and
Au = A(c1 v1 + c2 v2) = c1 Av1 + c2 Av2 = 2c1 v1 + c2 v2 = (v1 | v2) (2c1, c2)′ = (v1 | v2) [2 0; 0 1] (c1, c2)′.
This implies that

A (v1 | v2) (c1, c2)′ = (v1 | v2) [2 0; 0 1] (c1, c2)′,  c1, c2 ∈ R,

or

( A (v1 | v2) − (v1 | v2) [2 0; 0 1] ) (c1, c2)′ = 0,  c1, c2 ∈ R.
Since this holds for every c1, c2, it must be

A (v1 | v2) − (v1 | v2) [2 0; 0 1] = 0,  that is,  A (v1 | v2) = (v1 | v2) [2 0; 0 1].

If we write P = (v1 | v2), since v1 and v2 form a basis, the columns of P are linearly independent, P has rank 2, and therefore it is invertible. Then, we can write

D = [2 0; 0 1] = P⁻¹AP.

Notice that the diagonal elements of D are the eigenvalues of A, and that the columns of P are the corresponding associated eigenvectors.
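The conclusion D = P⁻¹AP is easy to verify numerically for this example:

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [1.0, 0.0]])
P = np.array([[2.0, 1.0],    # columns: v1 = (2,1)', v2 = (1,1)'
              [1.0, 1.0]])
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag([2.0, 1.0])))  # True
```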
In general, to diagonalize an n × n matrix A we proceed as follows:
1. Find the eigenvalues of A from the characteristic equation.
2. Find the eigenvectors corresponding to these eigenvalues, and determine whether we can find n linearly independent eigenvectors. If not, then A is not diagonalizable.
3. If so, form

P = (v1 | . . . | vn),  D = diag(λ1, . . . , λn),

where λ1, . . . , λn ∈ R are the eigenvalues of A and where v1, . . . , vn are respectively their corresponding eigenvectors. Then D = P⁻¹AP.
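The three steps above can be sketched as a single routine built on np.linalg.eig; the rank test on the eigenvector matrix is a simple (not fully robust) numerical stand-in for step 2:

```python
import numpy as np

def diagonalize(A, tol=1e-8):
    """Return (P, D) with A = P D P^(-1), or None if no full eigenbasis is found."""
    lams, P = np.linalg.eig(A)               # step 1: eigenvalues and eigenvectors
    if np.linalg.matrix_rank(P, tol=tol) < A.shape[0]:
        return None                          # step 2: fewer than n independent ones
    return P, np.diag(lams)                  # step 3: assemble P and D

A = np.array([[3.0, -2.0],
              [1.0, 0.0]])
P, D = diagonalize(A)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```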
In linear algebra, we say that two square matrices A and B are similar if there exists some invertible matrix P such that A = P⁻¹BP. In particular, if A is diagonalizable, with A = PDP⁻¹, we will also say that A is similar to the diagonal matrix D.
9.3.1
Diagonalization and linear transformations
[Diagram: change of coordinates. The standard coordinates [v]Be and [T(v)]Be are related to the coordinates [v]B′ and [T(v)]B′ in the eigenvector basis through P⁻¹ and P, and in that basis T acts through the diagonal matrix D.]
Example: The matrix A = [3 −2; 1 0] can be diagonalized as follows:

A = PDP⁻¹,  where  D = [2 0; 0 1],  P = [2 1; 1 1]  and  P⁻¹ = [1 −1; −1 2].

Then if we consider for instance the vector u = (5, −3)′, the linear transformation TA associated to A, which is given by v ↦ Av, transforms u into TA(u) = Au = (21, 5)′. On the other hand, we know that the columns of P are linearly independent eigenvectors of A and they form a basis B for R². If we express u and TA(u) in terms of these vectors we obtain

[u]B = P⁻¹u = (8, −11)′  and  [TA(u)]B = D[u]B = (16, −11)′,

so that in the eigenvector basis the transformation acts by simply scaling each coordinate by the corresponding eigenvalue.
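The coordinate computations of this example can be reproduced as follows, assuming the eigenvector basis and the vector u = (5, −3)′ above:

```python
import numpy as np

# Assumed from the example: P has the eigenvectors (2,1)' and (1,1)' as
# columns, and D holds the eigenvalues 2 and 1.
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])
D = np.diag([2.0, 1.0])
u = np.array([5.0, -3.0])

u_B = np.linalg.inv(P) @ u   # coordinates of u in the eigenvector basis
Tu_B = D @ u_B               # in this basis T_A only scales coordinates
Tu = P @ Tu_B                # back to standard coordinates
print(u_B)   # coordinates (8, -11)
print(Tu_B)  # coordinates (16, -11)
print(Tu)    # the vector (21, 5)
```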
9.3.2
Orthogonal diagonalization
Example: Let A = [6 −2 −1; −2 6 −1; −1 −1 5]. Its eigenvalues are λ1 = 8, λ2 = 6 and λ3 = 3, with corresponding eigenspace bases

B1 = {(1, −1, 0)′};  B2 = {(1, 1, −2)′};  B3 = {(1, 1, 1)′}.
These three vectors form a basis for R³; in addition, it is easy to verify that they also form an orthogonal set. If, instead of using these vectors to produce the matrix P in the diagonalization process, we use the corresponding unit vectors, then P becomes a matrix with orthonormal columns, and thus it is an orthogonal matrix, whose inverse is computed simply as P⁻¹ = Pᵗ. Then the diagonalization yields
A = PDPᵗ = [1/√2 1/√6 1/√3; −1/√2 1/√6 1/√3; 0 −2/√6 1/√3] [8 0 0; 0 6 0; 0 0 3] [1/√2 −1/√2 0; 1/√6 1/√6 −2/√6; 1/√3 1/√3 1/√3].
This factorization also provides the spectral decomposition of A as a sum of rank-one projections, one for each unit eigenvector ui:

A = [6 −2 −1; −2 6 −1; −1 −1 5] = 8 u1u1′ + 6 u2u2′ + 3 u3u3′
= 8 (1/√2, −1/√2, 0)′(1/√2 −1/√2 0) + 6 (1/√6, 1/√6, −2/√6)′(1/√6 1/√6 −2/√6) + 3 (1/√3, 1/√3, 1/√3)′(1/√3 1/√3 1/√3)
= 8 [1/2 −1/2 0; −1/2 1/2 0; 0 0 0] + 6 [1/6 1/6 −2/6; 1/6 1/6 −2/6; −2/6 −2/6 4/6] + 3 [1/3 1/3 1/3; 1/3 1/3 1/3; 1/3 1/3 1/3].
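The spectral decomposition can be verified numerically by summing the three rank-one projections:

```python
import numpy as np

A = np.array([[6.0, -2.0, -1.0],
              [-2.0, 6.0, -1.0],
              [-1.0, -1.0, 5.0]])
u1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
u2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6)
u3 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

# Spectral decomposition: A = 8 u1 u1' + 6 u2 u2' + 3 u3 u3'
S = 8 * np.outer(u1, u1) + 6 * np.outer(u2, u2) + 3 * np.outer(u3, u3)
print(np.allclose(A, S))  # True
```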
9.3.3
The Jordan canonical form
We have seen that not every matrix has enough linearly independent eigenvectors to
be diagonalizable. However, by using similarity transformations, every square matrix
can be transformed to the Jordan canonical form, which is almost diagonal.
Theorem (The Jordan canonical form). Every square matrix A is similar to a block-diagonal matrix

J = [J1 0 . . . 0; 0 J2 . . . 0; . . . ; 0 0 . . . Jp],

where each block Ji is an ri × ri matrix of the form

Ji = [λi 1 0 . . . 0; 0 λi 1 . . . 0; . . . ; 0 0 . . . λi 1; 0 0 . . . 0 λi],

with λi one of the eigenvalues of A.
The blocks Ji are called Jordan blocks, and ri is the corresponding size. If ri = 1 for all i, then Ji = λi and A is similar to a diagonal matrix, and so diagonalizable.
To compute the Jordan canonical form J of a matrix A, we first write A = PJP⁻¹. Right-multiplying by P we get

AP = PJ,  that is,  A (P1 | P2 | . . .) = (P1 | P2 | . . .) [J1 0 . . . ; 0 J2 . . . ; . . . Jp],

where the columns Pi of P are grouped according to the sizes of the Jordan blocks.
Direct computations show that, for the first block (the other blocks are treated in the same way):

AP1 = λP1;
AP2 = P1 + λP2,  that is,  (A − λI)P2 = P1;

and so on, one such equation per column of the block.
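Numerically, a generalized eigenvector P2 can be obtained as any particular solution of the singular system (A − λI)P2 = P1, for instance via least squares; a sketch with values assumed to match the worked example below:

```python
import numpy as np

# Assumed values (matching the worked example): lam = 1 with
# eigenvector P1 = (1, 1, 0)'.
A = np.array([[2.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 1.0
p1 = np.array([1.0, 1.0, 0.0])

# (A - lam I) is singular, so lstsq returns one particular solution p2.
p2, *_ = np.linalg.lstsq(A - lam * np.eye(3), p1, rcond=None)
print(np.allclose((A - lam * np.eye(3)) @ p2, p1))  # True: p2 is a
                                                    # generalized eigenvector
```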
Example: Let A = [2 −1 0; 1 0 0; 0 0 3]. Its eigenvalues are λ1 = 3, with algebraic and geometric multiplicity equal to 1 and associated eigenvector (0, 0, 1)′, and λ2 = 1, with algebraic multiplicity 2 and geometric multiplicity 1, and associated eigenvector (1, 1, 0)′. Thus, A is not diagonalizable. Let us compute its Jordan canonical form.
Obviously, for λ = 3 there is only one Jordan block of size 1, J1 = 3. For λ = 1, we know that the nullspace of A − I3 has dimension 1 (= a1), and thus there is just one block associated to λ = 1. If we compute the matrix:
(A − I3)² = [1 −1 0; 1 −1 0; 0 0 2]² = [0 0 0; 0 0 0; 0 0 4],

we see that a2 = dim N((A − I3)²) = 2, and thus there are a2 − a1 = 1 Jordan blocks of size larger than 1. Since

(A − I3)³ = [0 0 0; 0 0 0; 0 0 8]

still has a nullspace of dimension 2 (a3 = a2), there are no blocks of size larger than 2, and therefore
J = [3 0 0; 0 1 1; 0 0 1].
The matrix P will be (P1 | P2 | P3), where P1 = (0, 0, 1)′, P2 = (1, 1, 0)′, and to determine P3 we solve (A − I3)P3 = P2:

[1 −1 0; 1 −1 0; 0 0 2] P3 = (1, 1, 0)′  → (Gauss)  P3 = (1 + y, y, 0)′,  y ∈ R.

Taking, for instance, y = 0 gives P3 = (1, 0, 0)′, so that

P = [0 1 1; 0 1 0; 1 0 0].

It is easy to verify that A = PJP⁻¹.
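As a check, the factorization obtained in this example can be verified numerically, assuming the matrices A, P and J reconstructed above:

```python
import numpy as np

# Verify the example's conclusion A = P J P^(-1) numerically.
A = np.array([[2.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
P = np.array([[0.0, 1.0, 1.0],   # columns P1 = (0,0,1)', P2 = (1,1,0)',
              [0.0, 1.0, 0.0],   # P3 = (1,0,0)'
              [1.0, 0.0, 0.0]])
J = np.array([[3.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
print(np.allclose(A, P @ J @ np.linalg.inv(P)))  # True
```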