
Chapter 9

Eigenvalues and Eigenvectors

9.1 Introduction

One of the most important results of this chapter is how to dissect the action of a linear
transformation from ℝⁿ to ℝⁿ, given by x ↦ Ax, where A is an n × n matrix, into
elements that are easily visualized. The main concepts, eigenvalues and eigenvectors,
have many practical applications and also play a key role in pure mathematics.
A given transformation x ↦ Ax may transform vectors in a variety of ways, but it
often happens that there are special vectors on which the action of the linear transformation is very simple.

Example: Consider the matrix

A = ( 3  −2 )
    ( 1   0 )

and the vectors u = (−1, 1)ᵀ and v = (2, 1)ᵀ. Their images are

T(u) = Au = (−5, −1)ᵀ   and   T(v) = Av = (4, 2)ᵀ = 2v,

that is, in contrast to u, A simply dilates v by a factor of 2.

[Figure: effect of multiplication by A on the vectors u and v.]

This example illustrates the concept of an eigenvector. We say that v is an eigenvector
of an n × n matrix A if it is a non-zero vector such that Av = λv for some scalar λ.
Such a scalar λ is called an eigenvalue of A if there is a non-trivial solution x of Ax = λx,
and such an x is called an eigenvector corresponding to λ. Note that λ can be either real
or complex. It is easy to verify whether a given vector is an eigenvector of A or not:
we only have to compute Av and check whether this is a multiple of v. To determine all
possible eigenvectors of a given matrix A, we have to solve the following equation:

Ax = λx  ⟺  Ax = λIₙx  ⟺  Ax − λIₙx = 0  ⟺  (A − λIₙ)x = 0.

Thus λ is an eigenvalue of A if and only if the homogeneous system (A − λIₙ)x = 0
has a non-trivial solution. The set of all solutions of this system is just the nullspace
of the matrix A − λIₙ, which is a subspace of ℝⁿ, and is called the eigenspace of A corresponding to λ. Therefore, the eigenspace consists of the zero vector and all the
eigenvectors corresponding to λ. The set of eigenvalues of a matrix A is sometimes
called the spectrum of A and is represented by σ(A).
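In a numerical setting this check is one line of matrix arithmetic. Below is a minimal sketch in Python (NumPy assumed), applied to the matrix and vectors of the example above; the helper is_eigenvector is our own illustrative name, not a library routine:

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [1.0,  0.0]])
u = np.array([-1.0, 1.0])
v = np.array([2.0, 1.0])

def is_eigenvector(A, x, tol=1e-10):
    """Check whether the non-zero vector x satisfies Ax = lam*x for some scalar lam."""
    Ax = A @ x
    i = np.argmax(np.abs(x))   # a coordinate where x is non-zero
    lam = Ax[i] / x[i]         # candidate eigenvalue
    return np.allclose(Ax, lam * x, atol=tol), lam

print(is_eigenvector(A, u))   # (False, 5.0): Au is not a multiple of u
print(is_eigenvector(A, v))   # (True, 2.0): Av = 2v
```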


Example: Consider again the matrix

A = ( 3  −2 )
    ( 1   0 ).

We already know that λ₁ = 2 is an eigenvalue of A and that the corresponding eigenspace is

{ w ∈ ℝ² : w = α (2, 1)ᵀ, α ∈ ℝ }.

We also know that u = (−1, 1)ᵀ (or any multiple of it) is not an eigenvector of A.

[Figure: eigenspace of A corresponding to λ₁ = 2.]

Example: Let

A = ( 4  −1  6 )
    ( 2   1  6 ).
    ( 2  −1  8 )

Verify that λ₁ = 2 is an eigenvalue of A and find a basis for the corresponding eigenspace.

If λ₁ = 2 is an eigenvalue, then (A − λ₁I₃)x = 0, that is, the system

( 2  −1  6 ) ( x₁ )   ( 0 )
( 2  −1  6 ) ( x₂ ) = ( 0 )
( 2  −1  6 ) ( x₃ )   ( 0 )

must have a non-trivial solution, which implies that rank(A − 2I₃) < 3, which is
obviously the case. Therefore, λ₁ = 2 is an eigenvalue.
On the other hand, the solution to this system is given by the single equation 2x₁ − x₂ + 6x₃ = 0, which

can be written as:

{ x = (x₁, x₂, x₃)ᵀ ∈ ℝ³ : x = ( (1/2)x₂ − 3x₃, x₂, x₃ )ᵀ, x₂, x₃ ∈ ℝ }

= { x ∈ ℝ³ : x = x₂ (1/2, 1, 0)ᵀ + x₃ (−3, 0, 1)ᵀ, x₂, x₃ ∈ ℝ }.

Thus, a basis for the eigenspace corresponding to λ₁ = 2 is

{ (1/2, 1, 0)ᵀ, (−3, 0, 1)ᵀ }.
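As a quick numerical cross-check of this example (NumPy assumed), one can verify that A − 2I₃ is rank-deficient and that both basis vectors are mapped to twice themselves:

```python
import numpy as np

A = np.array([[4.0, -1.0, 6.0],
              [2.0,  1.0, 6.0],
              [2.0, -1.0, 8.0]])

print(np.linalg.matrix_rank(A - 2 * np.eye(3)))  # 1 < 3, so 2 is an eigenvalue

b1 = np.array([0.5, 1.0, 0.0])
b2 = np.array([-3.0, 0.0, 1.0])
print(np.allclose(A @ b1, 2 * b1))  # True
print(np.allclose(A @ b2, 2 * b2))  # True
```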

The following section shows how to compute all the eigenvalues of a given square
matrix.

9.2 The characteristic equation

We have just seen that eigenvalues are those scalars λ that make the homogeneous
system (A − λIₙ)x = 0 have a non-trivial solution. Recall that this implies that the
matrix A − λIₙ is singular, and consequently its determinant must equal 0. The resulting equation det(A − λIₙ) = 0 is known as the characteristic equation of A, where
det(A − λIₙ) is a polynomial (in λ) of degree n and is known as the characteristic polynomial of A. The roots of the characteristic equation are then the eigenvalues of A,
and their multiplicity as roots of this equation is called the algebraic multiplicity of
the eigenvalue.
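Numerically, the characteristic polynomial and its roots can be obtained as follows (a sketch, NumPy assumed; the matrix is the one used in the next example, and np.poly returns the coefficients of the characteristic polynomial of a square matrix):

```python
import numpy as np

A = np.array([[2.0, -3.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, -3.0, 2.0]])

coeffs = np.poly(A)           # [1, -2, 1, 0], i.e. lambda^3 - 2*lambda^2 + lambda
print(np.roots(coeffs))       # roots 1, 1, 0: the eigenvalues, with multiplicity
print(np.linalg.eigvals(A))   # the same eigenvalues, computed directly
```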

Example: Let

A = ( 2  −3  1 )
    ( 1  −2  1 ).
    ( 1  −3  2 )

Find the eigenvalues and the corresponding eigenspaces.

Firstly, we compute the characteristic equation as

det(A − λI₃) = | 2−λ   −3     1  |
               |  1   −2−λ    1  |  =  −λ(λ − 1)²  =  0,
               |  1    −3    2−λ |

whose roots are

λ₁ = 0, with algebraic multiplicity 1, and
λ₂ = 1, with algebraic multiplicity 2.
The eigenspace corresponding to λ₁ = 0 coincides with the nullspace of A, so if we
compute the row echelon form of A we get

( 2  −3   1 )
( 0   1  −1 ),
( 0   0   0 )
and the eigenspace is given by

{ x = (x₁, x₂, x₃)ᵀ ∈ ℝ³ : x = x₃ (1, 1, 1)ᵀ, x₃ ∈ ℝ },

which is a line through the origin. Finally, to obtain the eigenspace corresponding to
λ₂ = 1 we compute the nullspace of the matrix

A − I₃ = ( 2−1   −3     1  )   ( 1  −3  1 )
         (  1   −2−1    1  ) = ( 1  −3  1 ),
         (  1    −3    2−1 )   ( 1  −3  1 )
which is

{ x = (x₁, x₂, x₃)ᵀ ∈ ℝ³ : x = x₂ (3, 1, 0)ᵀ + x₃ (−1, 0, 1)ᵀ, x₂, x₃ ∈ ℝ },

a plane through the origin.
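The two eigenspaces can also be obtained numerically as nullspaces; a sketch assuming SciPy (scipy.linalg.null_space returns an orthonormal basis of the nullspace):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, -3.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, -3.0, 2.0]])

print(null_space(A))               # 1 column (a line), proportional to (1, 1, 1)
print(null_space(A - np.eye(3)))   # 2 columns (a plane)
```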

Note that the three eigenvectors found in the previous example are linearly independent, and so they form a basis for ℝ³.

Example: Consider the matrix

A = ( 2  −1  0 )
    ( 1   0  0 ).
    ( 0   0  3 )

The eigenvalues are the roots of the characteristic equation:

det ( 2−λ  −1    0  )
    (  1   −λ    0  )  =  0   ⟺   (3 − λ)(λ − 1)²  =  0.
    (  0    0   3−λ )

Thus the eigenvalues are λ₁ = 3, with algebraic multiplicity 1, and λ₂ = 1, with


algebraic multiplicity 2. Let us now find the eigenspaces. For λ = 3, we compute the
nullspace of

A − 3I₃ = ( −1  −1  0 )
          (  1  −3  0 ),
          (  0   0  0 )

whose row echelon form (after Gaussian elimination) is

( −1  −1  0 )
(  0  −4  0 ),
(  0   0  0 )

so the nullspace is

{ x ∈ ℝ³ : x₁ = x₂ = 0, x₃ ∈ ℝ },

that is, a line through the origin, with basis B_{λ=3} = { e₃ = (0, 0, 1)ᵀ }.
For λ = 1, we now compute the nullspace of

A − I₃ = ( 1  −1  0 )
         ( 1  −1  0 ),
         ( 0   0  2 )

whose row echelon form is

( 1  −1  0 )
( 0   0  2 ),
( 0   0  0 )

so the nullspace is

{ x ∈ ℝ³ : x₁ = x₂, x₃ = 0 },

which has a basis given by B_{λ=1} = { (1, 1, 0)ᵀ } and is also a line through the origin.
In summary, both eigenspaces are of dimension 1, and therefore we can only find
two linearly independent eigenvectors, so that ℝ³ does not have a basis consisting of
eigenvectors of the matrix A.

Example: Consider the upper triangular matrix

A = ( 1  4  ∗ )
    ( 0  2  5 ),
    ( 0  0  3 )

where the entry marked ∗ may be arbitrary. The eigenvalues are the roots of

det(A − λI₃) = (1 − λ)(2 − λ)(3 − λ) = 0.

Thus the eigenvalues are λ₁ = 1, λ₂ = 2 and λ₃ = 3, all with algebraic multiplicity
1. It follows that the eigenvalues of the matrix A are given by the entries on the main
diagonal. In fact, this is true for all triangular matrices, since the determinant of a
triangular matrix is the product of its diagonal entries.
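A quick numerical illustration (NumPy assumed; the arbitrary ∗ entry is set to 7 here just to show that it does not affect the spectrum):

```python
import numpy as np

A = np.array([[1.0, 4.0, 7.0],   # 7.0 plays the role of the arbitrary entry
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 3.0]])

print(np.linalg.eigvals(A))  # [1. 2. 3.]
print(np.diag(A))            # [1. 2. 3.] -- the same values, read off the diagonal
```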

We have seen that there are cases in which the number of linearly independent eigenvectors does not coincide with the algebraic multiplicity of the corresponding eigenvalue. We will refer to the number of linearly independent eigenvectors associated with
an eigenvalue as the geometric multiplicity of that eigenvalue. If the geometric
multiplicity of some eigenvalue is less than its algebraic multiplicity, we say that the
matrix is defective.
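Since the geometric multiplicity of λ equals dim N(A − λIₙ) = n − rank(A − λIₙ), it is easy to compute; a sketch (NumPy assumed) using the defective matrix from the example before last:

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 3.0]])
n = A.shape[0]
lam = 1.0  # eigenvalue with algebraic multiplicity 2

geom = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geom)  # 1 < 2: geometric < algebraic multiplicity, so A is defective
```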

9.3 The diagonalization problem

Example: Consider once more the matrix

A = ( 3  −2 )
    ( 1   0 ).

We know that one eigenvalue is λ₁ = 2, with eigenvector v₁ = (2, 1)ᵀ, and it can be easily verified that λ₂ = 1 is
the other eigenvalue, with eigenvector v₂ = (1, 1)ᵀ. And we can also prove easily that
B = { (2, 1)ᵀ, (1, 1)ᵀ } is a basis for ℝ². Then, every vector u ∈ ℝ² can be written as a
linear combination of v₁ and v₂ as

c₁v₁ + c₂v₂ = (v₁|v₂) (c₁, c₂)ᵀ = u,   c₁, c₂ ∈ ℝ,

which is a linear system for (c₁, c₂)ᵀ, and

Au = A(c₁v₁ + c₂v₂) = c₁Av₁ + c₂Av₂ = 2c₁v₁ + c₂v₂ = (v₁|v₂) (2c₁, c₂)ᵀ = (v₁|v₂) ( 2  0 ) (c₁, c₂)ᵀ.
                                                                            ( 0  1 )

This implies that

A (v₁|v₂) (c₁, c₂)ᵀ = (v₁|v₂) ( 2  0 ) (c₁, c₂)ᵀ,   for all c₁, c₂ ∈ ℝ,
                              ( 0  1 )

or

[ A(v₁|v₂) − (v₁|v₂) ( 2  0 ) ] (c₁, c₂)ᵀ = 0,   for all c₁, c₂ ∈ ℝ.
                     ( 0  1 )

Therefore, it must be that

A(v₁|v₂) − (v₁|v₂) ( 2  0 ) = 0,   that is,   A(v₁|v₂) = (v₁|v₂) ( 2  0 ).
                   ( 0  1 )                                      ( 0  1 )

If we write P = (v₁|v₂), then, since v₁ and v₂ form a basis, the columns of P are linearly
independent, P has rank 2, and therefore it is invertible. Then, we can write

D = ( 2  0 ) = P⁻¹AP.
    ( 0  1 )

Notice that the diagonal elements of D are the eigenvalues of A, and that the columns
of P are the corresponding associated eigenvectors.

This example illustrates the following theorem:


Theorem: Suppose that A is an n × n matrix, with eigenvalues λ₁, . . . , λₙ ∈ ℝ (or
ℂ), not necessarily distinct, and with corresponding eigenvectors v₁, . . . , vₙ ∈ ℝⁿ (or ℂⁿ)
that are linearly independent. Then

P⁻¹AP = D,

where

P = (v₁| . . . |vₙ),   D = diag(λ₁, . . . , λₙ).

Given an n × n matrix A, we say that A is diagonalizable if there exists an
invertible matrix P such that P⁻¹AP is a diagonal matrix. It follows from the previous
theorem that an n × n matrix A is diagonalizable if its eigenvectors form a basis for
ℝⁿ. In the opposite direction, we have the following result.

Theorem: Suppose that A is an n × n matrix and that A is diagonalizable.
Then A has n linearly independent eigenvectors in ℝⁿ.

In view of these results, we can establish the following:

Theorem: Suppose that A is an n × n matrix. Suppose further that A has distinct
eigenvalues λ₁, . . . , λₙ ∈ ℝ, with corresponding eigenvectors v₁, . . . , vₙ ∈ ℝⁿ. Then
v₁, . . . , vₙ are linearly independent.

Thus, an algorithm for determining whether a matrix is diagonalizable is given by the
following procedure (a numerical sketch follows the list).

Diagonalization process:

1. Determine whether the n roots of the characteristic polynomial det(A − λIₙ) are
real.

2. If not, then A is not diagonalizable. If so, find the eigenvectors corresponding to these eigenvalues, and determine whether we can find n linearly independent eigenvectors.

3. If not, then A is not diagonalizable. If so, write

   P = (v₁| . . . |vₙ),   D = diag(λ₁, . . . , λₙ),

   where λ₁, . . . , λₙ ∈ ℝ are the eigenvalues of A and v₁, . . . , vₙ are respectively their corresponding eigenvectors. Then D = P⁻¹AP.
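A rough Python sketch of this procedure (NumPy assumed; diagonalize is our own illustrative helper, the real-root test uses a small tolerance on the imaginary parts, and the rank test for independence is a numerical stand-in for step 2):

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (P, D) with D = P^(-1) A P diagonal, or None if A is not diagonalizable."""
    eigvals, eigvecs = np.linalg.eig(A)
    # Step 1: the n roots of the characteristic polynomial must be real
    if np.any(np.abs(np.imag(eigvals)) > tol):
        return None
    # Step 2: we need n linearly independent eigenvectors
    if np.linalg.matrix_rank(eigvecs) < A.shape[0]:
        return None
    # Step 3: eigenvectors as columns of P, eigenvalues on the diagonal of D
    return np.real(eigvecs), np.diag(np.real(eigvals))

A = np.array([[3.0, -2.0],
              [1.0,  0.0]])
P, D = diagonalize(A)
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True
```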

In linear algebra, we say that two square matrices A and B are similar if there exists
some invertible matrix P such that A = P⁻¹BP. In particular, if A is diagonalizable,
with A = PDP⁻¹, we will also say that A is similar to the diagonal matrix D.

9.3.1 Eigenvectors and linear transformations

If a given linear transformation from ℝⁿ to ℝⁿ is defined by v ↦ A_T v, where A_T is
diagonalizable (that is, A_T = PDP⁻¹), then there is a basis Bₑ for ℝⁿ consisting of
eigenvectors of A_T (i.e., the columns of P). Then P⁻¹ is the transition matrix from the
standard basis B₀ to Bₑ, and D is the matrix of the transformation T when the vectors
in ℝⁿ are expressed in terms of the basis Bₑ, as shown in the following diagram:

[v]B₀  ----A_T--->  [T(v)]B₀
  |                    |
 P⁻¹                  P⁻¹
  ↓                    ↓
[v]Bₑ  ----D----->  [T(v)]Bₑ

Example: From previous examples, we know that the matrix

A = ( 3  −2 )
    ( 1   0 )

can be diagonalized as A = PDP⁻¹, where

D = ( 2  0 ),   P = ( 2  1 ),   P⁻¹ = (  1  −1 ).
    ( 0  1 )        ( 1  1 )          ( −1   2 )

Then, if we consider for instance the vector u = (5, −3)ᵀ, the linear transformation T_A
associated to A, which is given by v ↦ Av, transforms u into T_A(u) = Au = (21, 5)ᵀ. On
the other hand, we know that the columns of P are linearly independent eigenvectors
of A and that they form a basis Bₑ for ℝ². If we express u and T_A(u) in terms of these
vectors we obtain

[u]B₀ = (5, −3)ᵀ,   [u]Bₑ = (8, −11)ᵀ

and

[T_A(u)]B₀ = (21, 5)ᵀ,   [T_A(u)]Bₑ = (16, −11)ᵀ.



It is easy to verify that

[u]Bₑ = P⁻¹[u]B₀,

that

[T_A(u)]Bₑ = P⁻¹[T_A(u)]B₀,

and that

[T_A(u)]Bₑ = D[u]Bₑ.
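These three identities can be checked directly; a short sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[3.0, -2.0], [1.0, 0.0]])
P = np.array([[2.0, 1.0], [1.0, 1.0]])   # eigenvector columns
D = np.diag([2.0, 1.0])
P_inv = np.linalg.inv(P)

u = np.array([5.0, -3.0])
Tu = A @ u                                        # [21, 5] in the standard basis
print(P_inv @ u)                                  # [8, -11]  = [u] in basis Be
print(P_inv @ Tu)                                 # [16, -11] = [T_A(u)] in basis Be
print(np.allclose(P_inv @ Tu, D @ (P_inv @ u)))   # True
```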

9.3.2 Diagonalization of symmetric matrices

A symmetric matrix is a matrix A such that Aᵀ = A; such a matrix is necessarily
square. There is an extensive theory on symmetric matrices; we will focus here on the
diagonalization process for such matrices.
Example: Let us try to diagonalize the following matrix:

A = (  6  −2  −1 )
    ( −2   6  −1 ).
    ( −1  −1   5 )

The characteristic equation is given by

(λ − 8)(λ − 6)(λ − 3) = 0,

which has roots λ₁ = 8, λ₂ = 6 and λ₃ = 3. The associated eigenspaces have bases
given by

B₁ = { (−1, 1, 0)ᵀ };   B₂ = { (−1, −1, 2)ᵀ };   B₃ = { (1, 1, 1)ᵀ }.

These three vectors form a basis for ℝ³; in addition, it is easy to verify that they form
as well an orthogonal set. If, instead of using these vectors to produce the matrix P in
the diagonalization process, we use the corresponding unit vectors, then P becomes a
matrix with orthonormal columns, and thus it is an orthogonal matrix, whose inverse
is computed simply as P⁻¹ = Pᵀ. Then the diagonalization yields A = PDPᵀ, with

P = ( −1/√2  −1/√6  1/√3 )        ( 8  0  0 )
    (  1/√2  −1/√6  1/√3 ),   D = ( 0  6  0 ).
    (   0     2/√6  1/√3 )        ( 0  0  3 )
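Numerically, the orthogonal diagonalization of a symmetric matrix is what np.linalg.eigh computes; a minimal sketch (eigenvalues are returned in ascending order, and eigenvector signs may differ from the ones above):

```python
import numpy as np

A = np.array([[ 6.0, -2.0, -1.0],
              [-2.0,  6.0, -1.0],
              [-1.0, -1.0,  5.0]])

w, P = np.linalg.eigh(A)                     # w = [3, 6, 8]
print(np.allclose(P.T, np.linalg.inv(P)))    # True: P is orthogonal
print(np.allclose(A, P @ np.diag(w) @ P.T))  # True: A = P D P^T
```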


Finally, consider the following important results:

The spectral theorem: An n × n symmetric matrix A has the following properties:

1. A has n real eigenvalues, counting multiplicities.

2. If λ is an eigenvalue of A with multiplicity k, then the eigenspace for λ is
k-dimensional.

3. The eigenspaces are mutually orthogonal, in the sense that eigenvectors
corresponding to different eigenvalues are orthogonal.

4. There are an orthogonal matrix P and a diagonal matrix D such that A =
PDPᵀ.

The spectral decomposition: Suppose that A = PDP⁻¹, where the columns of P
are orthonormal eigenvectors v₁, . . . , vₙ of A and the corresponding eigenvalues
λ₁, . . . , λₙ are in the diagonal matrix D. Then

A = λ₁v₁v₁ᵀ + · · · + λₙvₙvₙᵀ.

Example: Using the spectral decomposition we can decompose:

A = (  6  −2  −1 )
    ( −2   6  −1 )
    ( −1  −1   5 )

  = 8 v₁v₁ᵀ + 6 v₂v₂ᵀ + 3 v₃v₃ᵀ

  = 8 (  1/2  −1/2  0 )     (  1/6   1/6  −2/6 )     ( 1/3  1/3  1/3 )
      ( −1/2   1/2  0 ) + 6 (  1/6   1/6  −2/6 ) + 3 ( 1/3  1/3  1/3 ),
      (   0     0   0 )     ( −2/6  −2/6   4/6 )     ( 1/3  1/3  1/3 )

where v₁ = (−1/√2, 1/√2, 0)ᵀ, v₂ = (−1/√6, −1/√6, 2/√6)ᵀ and v₃ = (1/√3, 1/√3, 1/√3)ᵀ,
which effectively yields A.
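The same rank-one expansion can be sketched in NumPy (orthonormal eigenvectors from eigh; np.outer builds each λᵢvᵢvᵢᵀ term):

```python
import numpy as np

A = np.array([[ 6.0, -2.0, -1.0],
              [-2.0,  6.0, -1.0],
              [-1.0, -1.0,  5.0]])

w, P = np.linalg.eigh(A)
A_rebuilt = sum(w[i] * np.outer(P[:, i], P[:, i]) for i in range(len(w)))
print(np.allclose(A, A_rebuilt))  # True: the projections sum back to A
```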

9.3.3 The Jordan canonical form

We have seen that not every matrix has enough linearly independent eigenvectors to
be diagonalizable. However, by using similarity transformations, every square matrix
can be transformed to the Jordan canonical form, which is almost diagonal.
Theorem. The Jordan canonical form: Every square matrix A is similar to a matrix

J = ( J₁  0   · · ·  0  )
    ( 0   J₂  · · ·  0  )
    ( ⋮    ⋮    ⋱    ⋮  ),
    ( 0   0   · · ·  Jₚ )

where every Jᵢ is an rᵢ × rᵢ matrix of the form:

Jᵢ = ( λ  1  0  · · ·  0 )
     ( 0  λ  1  · · ·  0 )
     ( ⋮   ⋮   ⋱   ⋱   ⋮ ),
     ( 0  0  · · ·  λ  1 )
     ( 0  0  · · ·  0  λ )

with λ one of the eigenvalues of A.

The blocks Jᵢ are called Jordan blocks, and rᵢ is the corresponding size. If rᵢ = 1 for
all i, then Jᵢ = λᵢ and A is similar to a diagonal matrix, and so it is diagonalizable.
To compute the Jordan canonical form J of a matrix A, we first write A = PJP⁻¹. Then
we right-multiply by P to get AP = PJ, that is,

A (P₁ | . . . | Pₙ) = (P₁ | . . . | Pₙ) ( J₁  0   · · ·  0  )
                                        ( 0   J₂  · · ·  0  )
                                        ( ⋮    ⋮    ⋱    ⋮  ).
                                        ( 0   0   · · ·  Jₚ )

Direct computations show that, for the first block (the other blocks are treated in the
same way),

AP₁ = λP₁;
AP₂ = P₁ + λP₂  ⟺  (A − λI)P₂ = P₁;
APᵢ = Pᵢ₋₁ + λPᵢ  ⟺  (A − λI)Pᵢ = Pᵢ₋₁   (i = 2, . . . , r₁).

The set {P₁, P₂, . . . , P_{r₁}} is called a Jordan chain. The vector P₁ is an eigenvector;
P₂, . . . , P_{r₁} are called generalized eigenvectors. We can determine a Jordan chain for
a given Jordan block by solving the corresponding linear systems.
In addition, if we write

aᵢ = dim(N((A − λI)ⁱ)),

it can be proven that:

- a₁ determines the number of Jordan blocks associated with the given eigenvalue λ;
- aᵢ₊₁ − aᵢ determines the number of Jordan blocks of size larger than i.
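The dimensions aᵢ are straightforward to compute numerically; a sketch (NumPy assumed, using the matrix of the example below):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 3.0]])
lam, n = 1.0, A.shape[0]

M = A - lam * np.eye(n)
for i in (1, 2, 3):
    a_i = n - np.linalg.matrix_rank(np.linalg.matrix_power(M, i))
    print(f"a_{i} = {a_i}")   # a_1 = 1, a_2 = 2, a_3 = 2
```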
This allows us to construct the Jordan canonical form of a given matrix, as the
following example illustrates.

Example: Consider again the matrix

A = ( 2  −1  0 )
    ( 1   0  0 ).
    ( 0   0  3 )

We already know that its
eigenvalues are λ₁ = 3, with algebraic and geometric multiplicity equal to 1, and associated eigenvector (0, 0, 1)ᵀ, and λ₂ = 1, with algebraic multiplicity 2 and geometric
multiplicity 1, and associated eigenvector (1, 1, 0)ᵀ. Thus, A is not diagonalizable. Let
us compute its Jordan canonical form.
Obviously, for λ = 3 there is only one Jordan block of size 1, J₁ = (3). For λ = 1, we also
know that the nullspace of A − I₃ has dimension 1 (= a₁), and thus there is just one
block associated with λ = 1. If we compute the matrix

(A − I₃)² = ( 0  0  0 )
            ( 0  0  0 ),
            ( 0  0  4 )

we see that a₂ = dim(N((A − I₃)²)) = 2, and thus there are a₂ − a₁ = 1 Jordan blocks
of size larger than 1. Since

(A − I₃)³ = ( 0  0  0 )
            ( 0  0  0 ),
            ( 0  0  8 )

we conclude that there are a₃ − a₂ = 2 − 2 = 0 Jordan blocks of size larger than 2.


Then, the Jordan canonical form of A is given by:

3 0

J=
0 1

0 0

1
.

The matrix P will be (P₁|P₂|P₃), where P₁ = (0, 0, 1)ᵀ, P₂ = (1, 1, 0)ᵀ, and to determine
the generalized eigenvector P₃ we have to solve the linear system

(A − I₃)P₃ = P₂ :   ( 1  −1  0 ) ( x₁ )   ( 1 )
                    ( 1  −1  0 ) ( x₂ ) = ( 1 ),
                    ( 0   0  2 ) ( x₃ )   ( 0 )

which after Gaussian elimination gives x₁ − x₂ = 1 and x₃ = 0, so that

P₃ = (1 + y, y, 0)ᵀ,   y ∈ ℝ.

If we take, for instance, y = 0, we have P₃ = (1, 0, 0)ᵀ, and thus

P = ( 0  1  1 )
    ( 0  1  0 ).
    ( 1  0  0 )

It is easy to verify that A = PJP⁻¹.
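The whole computation can be cross-checked symbolically; a sketch using SymPy, whose jordan_form method returns matrices P and J with A = PJP⁻¹ (the generalized eigenvectors it chooses, and the order of the blocks, may differ from ours):

```python
from sympy import Matrix

A = Matrix([[2, -1, 0],
            [1,  0, 0],
            [0,  0, 3]])

P, J = A.jordan_form()
print(J)                      # one 1x1 block for 3 and one 2x2 block for 1
print(A == P * J * P.inv())   # True
```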

