
THE CHINESE UNIVERSITY OF HONG KONG

Department of Mathematics
MATH3270
Ordinary Differential Equations
Contents

1 Ordinary differential equations of first-order
  1.1 Introduction
  1.2 First-order linear ODE
  1.3 Separable equations
  1.4 Exact equations
  1.5 Homogeneous equations
  1.6 Bernoulli's equations
  1.7 Substitution
2 Second Order Linear Equations
  2.1 Solution space and Wronskian
  2.2 Reduction of order
  2.3 Homogeneous equations with constant coefficients
  2.4 Method of Undetermined Coefficients
  2.5 Variation of Parameters
  2.6 Mechanical and electrical vibrations
3 Higher order Linear Equations
  3.1 Solution space and Wronskian
  3.2 Homogeneous equations with constant coefficients
  3.3 Method of undetermined coefficients
  3.4 Method of variation of parameters
4 Systems of First Order Linear Equations
  4.1 Basic properties of systems of first order linear equations
  4.2 Homogeneous linear systems with constant coefficients
  4.3 Matrix exponential
  4.4 Fundamental matrices
  4.5 Repeated eigenvalues
  4.6 Jordan normal forms
  4.7 Nonhomogeneous linear systems
5 Nonlinear Differential Equations and Stability
  5.1 Phase plane of linear systems
  5.2 Autonomous systems and stability
  5.3 Almost linear systems
  5.4 Competing species
  5.5 Predator-Prey equations
  5.6 Liapunov's second method
  5.7 Periodic solutions and limit cycles
  5.8 Chaos and strange attractors: The Lorenz equations
6 Answers to Exercises
1 Ordinary differential equations of first-order

1.1 Introduction

An equation of the form
$$F(t, y, y', \ldots, y^{(n)}) = 0, \quad t \in (a, b),$$
where $y = y(t)$, $y' = \frac{dy}{dt}$, \ldots, $y^{(n)} = \frac{d^n y}{dt^n}$, is called an Ordinary Differential Equation (ODE) of the function $y$.
Examples:

1. $y' - ty = 0$,

2. $y'' - 3y' + 2 = 0$,

3. $y \sin\left(\frac{dy}{dt}\right) + \frac{d^2 y}{dt^2} = 0$.
The order of the ODE is defined to be the order of the highest derivative in the equation. In solving ODEs, we are interested in the following problems:

Initial Value Problem (IVP): to find solutions $y(t)$ which satisfy given initial value conditions, e.g. $y(a) = 0$, $y'(a) = 1$.

Boundary Value Problem (BVP): to find solutions $y(t)$ which satisfy given boundary value conditions, e.g. $y(a) = y(b) = 0$.
An ODE is linear if it can be written in the form
$$a_0(t)y^{(n)} + a_1(t)y^{(n-1)} + \cdots + a_{n-1}(t)y' + a_n(t)y = g(t), \quad a_0(t) \neq 0.$$
The linear ODE is called homogeneous if $g(t) \equiv 0$, and inhomogeneous otherwise. It is called a linear ODE with constant coefficients when $a_0 \neq 0, a_1, \ldots, a_n$ are constants. If an ODE is not of the above form, we call it a non-linear ODE.
1.2 First-order linear ODE

The general form of a first-order linear ODE is
$$y' + p(x)y = g(x). \tag{1.2.1}$$
The basic principle for solving (1.2.1) is to make the left hand side the derivative of an expression by multiplying both sides by a suitable factor, called an integrating factor. To find an integrating factor, multiply both sides of (1.2.1) by $e^{f(x)}$, where $f(x)$ is to be determined:
$$e^{f(x)}y' + e^{f(x)}p(x)y = g(x)e^{f(x)}.$$
Now, if we choose $f(x)$ so that $f'(x) = p(x)$, then the left hand side becomes
$$e^{f(x)}y' + e^{f(x)}f'(x)y = \frac{d}{dx}\left(e^{f(x)}y\right).$$
Thus we may take
$$f(x) = \int p(x)\,dx$$
and (1.2.1) can be solved easily as follows.
\begin{align*}
y' + p(x)y &= g(x)\\
e^{\int p(x)dx}\frac{dy}{dx} + e^{\int p(x)dx}p(x)y &= g(x)e^{\int p(x)dx}\\
\frac{d}{dx}\left(e^{\int p(x)dx}y\right) &= g(x)e^{\int p(x)dx}\\
e^{\int p(x)dx}y &= \int g(x)e^{\int p(x)dx}\,dx\\
y &= e^{-\int p(x)dx}\int g(x)e^{\int p(x)dx}\,dx
\end{align*}
Note: The function $f(x) = \int p(x)\,dx$ is called an integrating factor. The integrating factor is not unique: one may choose an arbitrary integration constant for $\int p(x)\,dx$. Any primitive function of $p(x)$ may give an integrating factor.
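As a sanity check on the recipe above, the closed-form solution $y = e^{-\int p}\int g e^{\int p}dx$ can be mimicked numerically. The sketch below is not part of the original notes: the test equation $y' + 2y = e^x$, $y(0) = 1$ (exact solution $y = e^x/3 + \frac{2}{3}e^{-2x}$) and the helper name `solve_linear` are our own choices, and both integrals are approximated by the trapezoidal rule.

```python
import math

# Apply y = e^{-F(x)} (∫_0^x g e^F dx + C), F(x) = ∫_0^x p, numerically.
# Test problem (our own choice): y' + 2y = e^x, y(0) = 1.
p = lambda x: 2.0
g = lambda x: math.exp(x)

def solve_linear(p, g, x_end, y0, n=20000):
    h = x_end / n
    F = 0.0   # running value of ∫_0^x p
    I = 0.0   # running value of ∫_0^x g e^F
    x = 0.0
    for _ in range(n):
        # trapezoidal update for both integrals
        F_next = F + h * (p(x) + p(x + h)) / 2
        I += h * (g(x) * math.exp(F) + g(x + h) * math.exp(F_next)) / 2
        F, x = F_next, x + h
    return math.exp(-F) * (I + y0)   # C = y(0) since F(0) = I(0) = 0

y1 = solve_linear(p, g, 1.0, 1.0)
exact = math.exp(1.0) / 3 + (2 / 3) * math.exp(-2.0)
assert abs(y1 - exact) < 1e-6
```

The running integral $F$ plays exactly the role of the integrating factor exponent in the derivation above.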
Example 1.2.1. Find the general solution of $y' + 2xy = 0$.

Solution: Multiplying both sides by $e^{x^2}$, we have
\begin{align*}
e^{x^2}\frac{dy}{dx} + e^{x^2}\cdot 2xy &= 0\\
\frac{d}{dx}\left(e^{x^2}y\right) &= 0\\
e^{x^2}y &= C\\
y &= Ce^{-x^2}
\end{align*}
Example 1.2.2. Solve $(x^2 - 1)y' + xy = 2x$, $x > 1$.

Solution: Dividing both sides by $x^2 - 1$, the equation becomes
$$\frac{dy}{dx} + \frac{x}{x^2 - 1}y = \frac{2x}{x^2 - 1}.$$
Now
$$\int \frac{x}{x^2 - 1}\,dx = \frac{1}{2}\log(x^2 - 1) + C.$$
Thus we multiply both sides of the equation by
$$\exp\left(\frac{1}{2}\log(x^2 - 1)\right) = (x^2 - 1)^{\frac{1}{2}}$$
and get
\begin{align*}
(x^2 - 1)^{\frac{1}{2}}\frac{dy}{dx} + \frac{x}{(x^2 - 1)^{\frac{1}{2}}}y &= \frac{2x}{(x^2 - 1)^{\frac{1}{2}}}\\
\frac{d}{dx}\left((x^2 - 1)^{\frac{1}{2}}y\right) &= \frac{2x}{(x^2 - 1)^{\frac{1}{2}}}\\
(x^2 - 1)^{\frac{1}{2}}y &= \int \frac{2x}{(x^2 - 1)^{\frac{1}{2}}}\,dx\\
y &= (x^2 - 1)^{-\frac{1}{2}}\left(2(x^2 - 1)^{\frac{1}{2}} + C\right)\\
y &= 2 + C(x^2 - 1)^{-\frac{1}{2}}
\end{align*}
Example 1.2.3. Solve $y' - y\tan x = 4\sin x$, $x \in (-\pi/2, \pi/2)$.

Solution: An integrating factor is
$$\exp\left(-\int \tan x\,dx\right) = \exp(\log(\cos x)) = \cos x.$$
Multiplying both sides by $\cos x$, we have
\begin{align*}
\cos x\frac{dy}{dx} - y\sin x &= 4\sin x\cos x\\
\frac{d}{dx}(y\cos x) &= 2\sin 2x\\
y\cos x &= \int 2\sin 2x\,dx\\
y\cos x &= -\cos 2x + C\\
y &= \frac{C - \cos 2x}{\cos x}
\end{align*}
Example 1.2.4. A tank contains 1 L of a solution consisting of 100 g of salt dissolved in water. A salt solution of concentration 20 g L$^{-1}$ is pumped into the tank at the rate of 0.02 L s$^{-1}$, and the mixture, kept uniform by stirring, is pumped out at the same rate. How long will it be until only 60 g of salt remains in the tank?

Solution: Suppose there are $x$ g of salt in the solution at time $t$ s. Then $x$ satisfies the differential equation
$$\frac{dx}{dt} = 0.02(20 - x).$$
Multiplying the equation by $e^{0.02t}$, we have
\begin{align*}
\frac{dx}{dt} + 0.02x &= 0.4\\
e^{0.02t}\frac{dx}{dt} + 0.02e^{0.02t}x &= 0.4e^{0.02t}\\
\frac{d}{dt}\left(e^{0.02t}x\right) &= \int 0.4e^{0.02t}\,dt \;\Big/\; e^{0.02t}x = 20e^{0.02t} + C\\
x &= 20 + Ce^{-0.02t}
\end{align*}
Since $x(0) = 100$, $C = 80$. Thus the required time satisfies
\begin{align*}
60 &= 20 + 80e^{-0.02t}\\
e^{0.02t} &= 2\\
t &= 50\log 2
\end{align*}
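A quick numerical cross-check, not part of the notes: Euler-step the mixing equation and record when the salt content first drops to 60 g; the crossing time should agree with $t = 50\log 2 \approx 34.66$ s.

```python
import math

# Simulate dx/dt = 0.02*(20 - x), x(0) = 100, with small Euler steps.
x, t, dt = 100.0, 0.0, 1e-4
while x > 60.0:
    x += dt * 0.02 * (20.0 - x)
    t += dt
# First crossing of x = 60 should be near t = 50 ln 2.
assert abs(t - 50 * math.log(2)) < 1e-2
```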
Example 1.2.5. David would like to buy an apartment. He has examined his budget and determined that he can afford monthly payments of \$20,000. If the annual interest rate is 6%, and the term of the loan is 20 years, what amount can he afford to borrow?

Solution: Let $\$y$ be the remaining loan amount after $t$ months. Then
\begin{align*}
\frac{dy}{dt} &= \frac{0.06}{12}y - 20{,}000\\
\frac{dy}{dt} - 0.005y &= -20{,}000\\
e^{-0.005t}\frac{dy}{dt} - 0.005ye^{-0.005t} &= -20{,}000e^{-0.005t}\\
\frac{d}{dt}\left(e^{-0.005t}y\right) &= -20{,}000e^{-0.005t}\\
e^{-0.005t}y &= \frac{20{,}000e^{-0.005t}}{0.005} + C\\
y &= 4{,}000{,}000 + Ce^{0.005t}
\end{align*}
Since the term of the loan is 20 years, $y(240) = 0$ and thus
\begin{align*}
4{,}000{,}000 + Ce^{0.005 \times 240} &= 0\\
C &= -\frac{4{,}000{,}000}{e^{1.2}} = -1{,}204{,}776.85
\end{align*}
Therefore the amount that David can afford to borrow is
$$y(0) = 4{,}000{,}000 - 1{,}204{,}776.85\,e^{0.005(0)} = 2{,}795{,}223.15.$$

Note: The total amount that David pays is $240 \times 20{,}000 = \$4{,}800{,}000$.
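The closed form can be checked by direct arithmetic. This snippet is not part of the notes; it just recomputes the two dollar figures from $y(t) = 4{,}000{,}000 + Ce^{0.005t}$ with $y(240) = 0$.

```python
import math

# y(240) = 0 forces C = -4,000,000 / e^{1.2};
# the borrowable amount is y(0) = 4,000,000 + C.
C = -4_000_000 / math.exp(1.2)
y0 = 4_000_000 + C
assert abs(C + 1_204_776.85) < 1.0     # C is about -1,204,776.85
assert abs(y0 - 2_795_223.15) < 1.0    # about $2,795,223.15 borrowable
```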
1.3 Separable equations

A separable equation is an equation of the form
$$\frac{dy}{dx} = f(x)g(y).$$
It can be solved as follows.
\begin{align*}
\frac{dy}{g(y)} &= f(x)dx\\
\int \frac{dy}{g(y)} &= \int f(x)dx
\end{align*}
Example 1.3.1. Find the general solution of $y' = 3x^2y$.

Solution:
\begin{align*}
\frac{dy}{y} &= 3x^2dx\\
\int \frac{dy}{y} &= \int 3x^2dx\\
\log y &= x^3 + C'\\
y &= Ce^{x^3}
\end{align*}
where $C = e^{C'}$.
Example 1.3.2. Solve $2\sqrt{x}\,\frac{dy}{dx} = y^2 + 1$, $x > 0$.

Solution:
\begin{align*}
\frac{dy}{y^2 + 1} &= \frac{dx}{2\sqrt{x}}\\
\int \frac{dy}{y^2 + 1} &= \int \frac{dx}{2\sqrt{x}}\\
\tan^{-1}y &= \sqrt{x} + C\\
y &= \tan(\sqrt{x} + C)
\end{align*}
Example 1.3.3. Solve the initial value problem
$$\frac{dy}{dx} = \frac{x}{y + x^2y}, \quad y(0) = 1.$$

Solution:
\begin{align*}
\frac{dy}{dx} &= \frac{x}{y(1 + x^2)}\\
y\,dy &= \frac{x}{1 + x^2}dx\\
\int y\,dy &= \int \frac{x}{1 + x^2}dx\\
\frac{y^2}{2} &= \frac{1}{2}\int \frac{1}{1 + x^2}\,d(1 + x^2)\\
y^2 &= \log(1 + x^2) + C
\end{align*}
Since $y(0) = 1$, $C = 1$. Thus
\begin{align*}
y^2 &= 1 + \log(1 + x^2)\\
y &= \sqrt{1 + \log(1 + x^2)}
\end{align*}
Example 1.3.4. (Logistic equation) Solve the initial value problem for the logistic equation
$$\frac{dy}{dt} = ry(1 - y/K), \quad y(0) = y_0$$
where $r$ and $K$ are constants.

Solution:
\begin{align*}
\frac{dy}{y(1 - y/K)} &= r\,dt\\
\int \frac{dy}{y(1 - y/K)} &= \int r\,dt\\
\int \left(\frac{1}{y} + \frac{1/K}{1 - y/K}\right)dy &= rt\\
\log y - \log(1 - y/K) &= rt + C\\
\frac{y}{1 - y/K} &= e^{rt+C}\\
y &= \frac{Ke^{rt+C}}{K + e^{rt+C}}
\end{align*}
To satisfy the initial condition, we set
$$e^C = \frac{y_0}{1 - y_0/K}$$
and obtain
$$y = \frac{y_0K}{y_0 + (K - y_0)e^{-rt}}.$$

Note: As $t \to \infty$,
$$\lim_{t\to\infty} y(t) = K.$$
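The closed form for the logistic equation can be validated against a direct simulation. This is not part of the notes; the parameter values $r = 0.8$, $K = 10$, $y_0 = 0.5$ are arbitrary choices for the check.

```python
import math

# Euler-integrate dy/dt = r*y*(1 - y/K) and compare with
# y(t) = y0*K / (y0 + (K - y0)*exp(-r*t)).
r, K, y0 = 0.8, 10.0, 0.5

def exact(t):
    return y0 * K / (y0 + (K - y0) * math.exp(-r * t))

y, dt = y0, 1e-4
for _ in range(int(5.0 / dt)):     # integrate up to t = 5
    y += dt * r * y * (1 - y / K)

assert abs(y - exact(5.0)) < 1e-2  # simulation matches closed form
assert exact(50.0) > 0.999 * K     # y(t) approaches K as t grows
```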
1.4 Exact equations

We say that the equation
$$M(x, y)dx + N(x, y)dy = 0 \tag{1.4.1}$$
is exact if
$$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}.$$
In this case, there exists a function $f(x, y)$ such that
$$\frac{\partial f}{\partial x} = M \quad\text{and}\quad \frac{\partial f}{\partial y} = N.$$
Then the differential equation can be written as
\begin{align*}
\frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy &= 0\\
df(x, y) &= 0
\end{align*}
Therefore the general solution of the differential equation is
$$f(x, y) = C.$$
To find $f(x, y)$, first note that
$$\frac{\partial f}{\partial x} = M.$$
Hence
$$f(x, y) = \int M(x, y)dx + g(y).$$
Differentiating both sides with respect to $y$, we have
$$N(x, y) = \frac{\partial}{\partial y}\int M(x, y)dx + g'(y)$$
since $\frac{\partial f}{\partial y} = N$. Now
$$N(x, y) - \frac{\partial}{\partial y}\int M(x, y)dx$$
is independent of $x$ (why?). Therefore
$$g(y) = \int \left(N(x, y) - \frac{\partial}{\partial y}\int M(x, y)dx\right)dy$$
and we obtain
$$f(x, y) = \int M(x, y)dx + \int \left(N(x, y) - \frac{\partial}{\partial y}\int M(x, y)dx\right)dy.$$

Remark: Equation (1.4.1) is exact when $\mathbf{F} = (M(x, y), N(x, y))$ defines a conservative vector field. Then $f(x, y)$ is just a potential function for $\mathbf{F}$.
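The exactness test $M_y = N_x$ and the potential-function property can both be checked numerically. The sketch below is not part of the notes: it uses $M = 4x + y$, $N = x - 2y$ from Example 1.4.1 below, and the helper names `d_dx`, `d_dy` are our own.

```python
# Central-difference approximations of the partial derivatives.
def M(x, y): return 4 * x + y
def N(x, y): return x - 2 * y

def d_dy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)
def d_dx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)

pts = [(0.3, -1.2), (1.0, 2.0), (-2.0, 0.5)]
# Exactness: M_y = N_x at every sample point.
assert all(abs(d_dy(M, x, y) - d_dx(N, x, y)) < 1e-6 for x, y in pts)

# The potential f = 2x^2 + xy - y^2 has f_x = M and f_y = N.
def f(x, y): return 2 * x ** 2 + x * y - y ** 2
assert all(abs(d_dx(f, x, y) - M(x, y)) < 1e-4 and
           abs(d_dy(f, x, y) - N(x, y)) < 1e-4 for x, y in pts)
```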
Example 1.4.1. Solve $(4x + y)dx + (x - 2y)dy = 0$.

Solution: Since
$$\frac{\partial}{\partial y}(4x + y) = 1 = \frac{\partial}{\partial x}(x - 2y),$$
the equation is exact. We need to find $F(x, y)$ such that
$$\frac{\partial F}{\partial x} = M \quad\text{and}\quad \frac{\partial F}{\partial y} = N.$$
Now
\begin{align*}
F(x, y) &= \int (4x + y)dx\\
&= 2x^2 + xy + g(y)
\end{align*}
To determine $g(y)$, what we want is
\begin{align*}
\frac{\partial F}{\partial y} &= x - 2y\\
x + g'(y) &= x - 2y\\
g'(y) &= -2y
\end{align*}
Therefore we may choose $g(y) = -y^2$ and the solution is
$$F(x, y) = 2x^2 + xy - y^2 = C.$$
Example 1.4.2. Solve
$$\frac{dy}{dx} = \frac{e^y + x}{e^{2y} - xe^y}.$$

Solution: Rewrite the equation as
$$(e^y + x)dx + (xe^y - e^{2y})dy = 0.$$
Since
$$\frac{\partial}{\partial y}(e^y + x) = e^y = \frac{\partial}{\partial x}(xe^y - e^{2y}),$$
the equation is exact. Set
\begin{align*}
F(x, y) &= \int (e^y + x)dx\\
&= xe^y + \frac{1}{2}x^2 + g(y)
\end{align*}
We want
\begin{align*}
\frac{\partial F}{\partial y} &= xe^y - e^{2y}\\
xe^y + g'(y) &= xe^y - e^{2y}\\
g'(y) &= -e^{2y}
\end{align*}
Therefore we may choose $g(y) = -\frac{1}{2}e^{2y}$ and the solution is
$$xe^y + \frac{1}{2}x^2 - \frac{1}{2}e^{2y} = C.$$
When the equation is not exact, it is sometimes possible to convert it to an exact equation by multiplying it by a suitable integrating factor. Unfortunately, there is no systematic way of finding integrating factors.

Example 1.4.3. Show that $\mu(x, y) = x$ is an integrating factor of
$$(3xy + y^2)dx + (x^2 + xy)dy = 0$$
and then solve the equation.

Solution: Multiplying the equation by $x$ gives
$$(3x^2y + xy^2)dx + (x^3 + x^2y)dy = 0.$$
Now
\begin{align*}
\frac{\partial}{\partial y}(3x^2y + xy^2) &= 3x^2 + 2xy\\
\frac{\partial}{\partial x}(x^3 + x^2y) &= 3x^2 + 2xy
\end{align*}
Thus the above equation is exact and $x$ is an integrating factor. To solve the equation, set
\begin{align*}
F(x, y) &= \int (3x^2y + xy^2)dx\\
&= x^3y + \frac{1}{2}x^2y^2 + g(y)
\end{align*}
Now we want
\begin{align*}
\frac{\partial F}{\partial y} &= x^3 + x^2y\\
x^3 + x^2y + g'(y) &= x^3 + x^2y\\
g'(y) &= 0\\
g(y) &= C
\end{align*}
Therefore the solution is
$$x^3y + \frac{1}{2}x^2y^2 = C.$$

Note: The equation in Example 1.4.3 is also a homogeneous equation, which will be discussed in Section 1.5.
Example 1.4.4. Show that $\mu(x, y) = y$ is an integrating factor of
$$ydx + (2x - e^y)dy = 0$$
and then solve the equation.

Solution: Multiplying the equation by $y$ gives
$$y^2dx + (2xy - ye^y)dy = 0.$$
Now
\begin{align*}
\frac{\partial}{\partial y}y^2 &= 2y\\
\frac{\partial}{\partial x}(2xy - ye^y) &= 2y
\end{align*}
Thus the above equation is exact and $y$ is an integrating factor. To solve the equation, set
\begin{align*}
F(x, y) &= \int y^2dx\\
&= xy^2 + g(y)
\end{align*}
Now we want
\begin{align*}
\frac{\partial F}{\partial y} &= 2xy - ye^y\\
2xy + g'(y) &= 2xy - ye^y\\
g'(y) &= -ye^y\\
g(y) &= -\int ye^y\,dy\\
&= -\int y\,de^y\\
&= -ye^y + \int e^y\,dy\\
&= -ye^y + e^y + C
\end{align*}
Therefore the solution is
$$xy^2 - ye^y + e^y = C.$$
1.5 Homogeneous equations


A rst order equation is homogeneous if it can be written as
dy
dx
= f
_
y
x
_
.
The above equation can be solved by the substitution u = y/x. Then y = xu and
dy
dx
= u +x
du
dx
.
Therefore the equation reads
u +x
du
dx
= f(u)
du
f(u) u
=
dx
x
which becomes a separable equation.
Example 1.5.1. Solve
$$\frac{dy}{dx} = \frac{x^2 + y^2}{2xy}.$$

Solution: Rewrite the equation as
$$\frac{dy}{dx} = \frac{1 + (y/x)^2}{2y/x}$$
which is a homogeneous equation. Using the substitution $y = xu$, we have
\begin{align*}
u + x\frac{du}{dx} = \frac{dy}{dx} &= \frac{1 + u^2}{2u}\\
x\frac{du}{dx} &= \frac{1 + u^2 - 2u^2}{2u}\\
\frac{2u\,du}{1 - u^2} &= \frac{dx}{x}\\
\int \frac{2u\,du}{1 - u^2} &= \int \frac{dx}{x}\\
-\log(1 - u^2) &= \log x + C'\\
(1 - u^2)x &= e^{-C'}\\
x^2 - y^2 - Cx &= 0
\end{align*}
where $C = e^{-C'}$.
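On the branch $y = \sqrt{x^2 - Cx}$ (valid for $x > C$), the implicit solution of Example 1.5.1 can be verified numerically. This check is not part of the notes; $C = 1$ and the sample points are our own choices.

```python
import math

# y = sqrt(x^2 - C x) should satisfy dy/dx = (x^2 + y^2)/(2 x y).
C = 1.0
y = lambda x: math.sqrt(x ** 2 - C * x)          # branch for x > C

def rhs(x):
    return (x ** 2 + y(x) ** 2) / (2 * x * y(x))

def dydx(x, h=1e-6):                              # central difference
    return (y(x + h) - y(x - h)) / (2 * h)

assert all(abs(dydx(x) - rhs(x)) < 1e-6 for x in [1.5, 2.0, 3.0])
```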
Example 1.5.2. Solve $(y + 2xe^{y/x})dx - x\,dy = 0$.

Solution: Rewrite the equation as
$$\frac{dy}{dx} = \frac{y + 2xe^{y/x}}{x} = \frac{y}{x} + 2e^{y/x}.$$
Letting $u = y/x$, we have
\begin{align*}
u + x\frac{du}{dx} = \frac{dy}{dx} &= u + 2e^u\\
x\frac{du}{dx} &= 2e^u\\
e^{-u}\,du &= 2\frac{dx}{x}\\
\int e^{-u}\,du &= \int 2\frac{dx}{x}\\
-e^{-u} &= 2\log x + C\\
-e^{-y/x} - 2\log x &= C
\end{align*}
1.6 Bernoullis equations


An equation of the form
y

+p(x)y = q(x)y
n
, n = 0, 1,
is called a Bernoullis dierential equation. It is a non-linear equation and y(x) = 0 is
always a solution when n > 0. To nd a non-trivial, we use the substitution
u = y
1n
.
Then
du
dx
= (1 n)y
n
dy
dx
= (1 n)y
n
(p(x)y +q(x)y
n
)
du
dx
+ (1 n)p(x)y
1n
= (1 n)q(x)
du
dx
+ (1 n)p(x)u = (1 n)q(x)
which is a linear dierential equation of u.
Note: Dont forget that y(x) = 0 is always a solution to the Bernoullis equation when n > 0.
Example 1.6.1. Solve
$$\frac{dy}{dx} - y = e^{-x}y^2.$$

Solution: Let $u = y^{1-2} = y^{-1}$. Then
\begin{align*}
\frac{du}{dx} &= -y^{-2}\frac{dy}{dx}\\
&= -y^{-2}\left(y + e^{-x}y^2\right)\\
\frac{du}{dx} + y^{-1} &= -e^{-x}\\
\frac{du}{dx} + u &= -e^{-x}
\end{align*}
which is a linear equation for $u$. To solve it, multiply both sides by $e^x$:
\begin{align*}
e^x\frac{du}{dx} + e^xu &= -1\\
\frac{d}{dx}(e^xu) &= -1\\
e^xu &= -x + C\\
u &= (C - x)e^{-x}\\
y^{-1} &= (C - x)e^{-x}
\end{align*}
Therefore the general solution is
$$y = \frac{e^x}{C - x} \quad\text{or}\quad y = 0$$
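The non-trivial solution can be plugged back into the Bernoulli equation numerically. This check is not part of the notes; $C = 5$ and the sample points are arbitrary.

```python
import math

# y = e^x/(C - x) should satisfy y' - y = e^{-x} y^2 (for x != C).
C = 5.0
y = lambda x: math.exp(x) / (C - x)

def residual(x, h=1e-6):
    dy = (y(x + h) - y(x - h)) / (2 * h)   # central-difference y'
    return dy - y(x) - math.exp(-x) * y(x) ** 2

assert all(abs(residual(x)) < 1e-5 for x in [-1.0, 0.0, 1.0, 2.0])
```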
Example 1.6.2. Solve $x\frac{dy}{dx} + y = xy^3$.

Solution: Let $u = y^{1-3} = y^{-2}$. Then
\begin{align*}
\frac{du}{dx} &= -2y^{-3}\frac{dy}{dx}\\
\frac{du}{dx} &= -\frac{2y^{-3}}{x}\left(-y + xy^3\right)\\
\frac{du}{dx} - \frac{2y^{-2}}{x} &= -2\\
\frac{du}{dx} - \frac{2u}{x} &= -2
\end{align*}
which is a linear equation for $u$. To solve it, multiply both sides by $\exp(-\int 2x^{-1}dx) = x^{-2}$. We have
\begin{align*}
x^{-2}\frac{du}{dx} - 2x^{-3}u &= -2x^{-2}\\
\frac{d}{dx}\left(x^{-2}u\right) &= -2x^{-2}\\
x^{-2}u &= 2x^{-1} + C\\
u &= 2x + Cx^2\\
y^{-2} &= 2x + Cx^2
\end{align*}
Therefore the general solution is
$$y^2 = \frac{1}{2x + Cx^2} \quad\text{or}\quad y = 0$$
1.7 Substitution

In this section, we give some examples of differential equations that can be transformed to one of the forms in the previous sections by a suitable substitution.
Example 1.7.1. Use the substitution $u = \log y$ to solve $xy' - 4x^2y + 2y\log y = 0$.

Solution:
\begin{align*}
\frac{du}{dx} &= y'/y\\
x\frac{du}{dx} &= 4x^2 - 2\log y\\
x^2\frac{du}{dx} + 2xu &= 4x^3\\
\frac{d}{dx}\left(x^2u\right) &= 4x^3\\
x^2u &= \int 4x^3\,dx\\
x^2u &= x^4 + C\\
u &= x^2 + \frac{C}{x^2}\\
y &= \exp\left(x^2 + \frac{C}{x^2}\right)
\end{align*}
Example 1.7.2. Use the substitution $u = e^{2y}$ to solve $2xe^{2y}y' = 3x^4 + e^{2y}$.

Solution:
\begin{align*}
\frac{du}{dx} &= 2e^{2y}\frac{dy}{dx}\\
x\frac{du}{dx} &= 2xe^{2y}\frac{dy}{dx}\\
x\frac{du}{dx} &= 3x^4 + e^{2y}\\
\frac{1}{x}\frac{du}{dx} - \frac{1}{x^2}u &= 3x^2\\
\frac{d}{dx}\left(\frac{u}{x}\right) &= 3x^2\\
\frac{u}{x} &= \int 3x^2\,dx\\
u &= x^4 + Cx\\
e^{2y} &= x^4 + Cx\\
y &= \frac{1}{2}\log(x^4 + Cx)
\end{align*}
Example 1.7.3. Use the substitution $y = x + \frac{1}{u}$ to solve $y' - \frac{1}{x}y = 1 - \frac{1}{x^2}y^2$.

Note: An equation of the form
$$y' + p_1(x)y + p_2(x)y^2 = q(x)$$
is called a Riccati's equation. If we know that $y(x) = y_1(x)$ is a particular solution, then the general solution can be found by the substitution
$$y = y_1 + \frac{1}{u}.$$
Solution:
\begin{align*}
\frac{dy}{dx} &= 1 - \frac{1}{u^2}\frac{du}{dx}\\
\frac{1}{x}y + 1 - \frac{1}{x^2}y^2 &= 1 - \frac{1}{u^2}\frac{du}{dx}\\
\frac{1}{u^2}\frac{du}{dx} &= \frac{1}{x^2}\left(x + \frac{1}{u}\right)^2 - \frac{1}{x}\left(x + \frac{1}{u}\right)\\
\frac{1}{u^2}\frac{du}{dx} &= \frac{1}{xu} + \frac{1}{x^2u^2}\\
\frac{du}{dx} - \frac{1}{x}u &= \frac{1}{x^2}
\end{align*}
which is a linear equation for $u$. An integrating factor is
$$\exp\left(-\int \frac{1}{x}dx\right) = \exp(-\log x) = x^{-1}.$$
Thus
\begin{align*}
x^{-1}\frac{du}{dx} - x^{-2}u &= x^{-3}\\
\frac{d}{dx}\left(x^{-1}u\right) &= x^{-3}\\
x^{-1}u &= -\frac{1}{2x^2} + C\\
u &= -\frac{1}{2x} + Cx\\
u &= \frac{2Cx^2 - 1}{2x}
\end{align*}
Therefore the general solution is
$$y = x + \frac{2x}{2Cx^2 - 1} \quad\text{or}\quad y = x.$$
Example 1.7.4. Solve $y' = 1 + t^2 - 2ty + y^2$ given that $y = t$ is a particular solution.

Solution: This is a Riccati's equation and $y = t$ is a solution. Thus we use the substitution
$$y = t + \frac{1}{u}.$$
Then
\begin{align*}
\frac{dy}{dt} &= 1 - \frac{1}{u^2}\frac{du}{dt}\\
1 + t^2 - 2ty + y^2 &= 1 - \frac{1}{u^2}\frac{du}{dt}\\
1 + t^2 - 2t\left(t + \frac{1}{u}\right) + \left(t + \frac{1}{u}\right)^2 &= 1 - \frac{1}{u^2}\frac{du}{dt}\\
\frac{du}{dt} &= -1\\
u &= C - t
\end{align*}
Therefore the general solution is
$$y = t + \frac{1}{C - t} \quad\text{or}\quad y = t.$$
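The Riccati solution can be checked by direct substitution. This snippet is not part of the notes; $C = 4$ and the sample points are arbitrary.

```python
# y = t + 1/(C - t) should satisfy y' = 1 + t^2 - 2*t*y + y^2 (for t != C).
C = 4.0
y = lambda t: t + 1.0 / (C - t)

def residual(t, h=1e-6):
    dy = (y(t + h) - y(t - h)) / (2 * h)   # central-difference y'
    return dy - (1 + t ** 2 - 2 * t * y(t) + y(t) ** 2)

assert all(abs(residual(t)) < 1e-5 for t in [0.0, 1.0, 2.0, 3.0])
```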
2 Second Order Linear Equations

In this chapter, we consider second order linear ordinary differential equations, i.e., differential equations of the form
$$\frac{d^2y}{dt^2} + p(t)\frac{dy}{dt} + q(t)y = g(t).$$
We let
$$L[y] = y'' + p(t)y' + q(t)y$$
and write the equation in the form
$$L[y] = g(t). \tag{2.1}$$
We say that the equation is homogeneous if $g(t) = 0$, and
$$L[y] = 0$$
is called the homogeneous equation associated to (2.1).
2.1 Solution space and Wronskian

First we state two fundamental results on second order linear ordinary differential equations.

Theorem 2.1.1. The initial value problem
$$\begin{cases} y'' + p(t)y' + q(t)y = g(t), & t \in I\\ y(t_0) = y_0, \quad y'(t_0) = y'_0, \end{cases}$$
where $p$, $q$ and $g$ are continuous on an interval $I$ that contains the point $t_0$, has a unique solution on $I$.
Theorem 2.1.2 (Principle of superposition). If $y_1$ and $y_2$ are two solutions of the homogeneous equation
$$L[y] = 0,$$
then $c_1y_1 + c_2y_2$ is also a solution for any constants $c_1$ and $c_2$.
Let $y_1$ and $y_2$ be two solutions of a second order linear ordinary differential equation. Then we define the Wronskian (or Wronskian determinant) to be
$$W(t) = W(y_1, y_2)(t) = \begin{vmatrix} y_1(t) & y_2(t)\\ y'_1(t) & y'_2(t) \end{vmatrix} = y_1(t)y'_2(t) - y'_1(t)y_2(t).$$
Suppose $y_1$ and $y_2$ are two solutions with $W(t_0) \neq 0$; then the solution of the initial value problem can be expressed in terms of $y_1$ and $y_2$.
Theorem 2.1.3. Let $t_0 \in I$. Suppose $y_1$ and $y_2$ are two solutions of the homogeneous equation
$$L[y] = 0, \quad t \in I,$$
and
$$W(t_0) \neq 0.$$
Then the initial value problem for the homogeneous equation
$$\begin{cases} L[y] = 0,\\ y(t_0) = y_0, \quad y'(t_0) = y'_0, \end{cases} \tag{2.1.1}$$
has a unique solution of the form
$$y(t) = c_1y_1(t) + c_2y_2(t)$$
for some constants $c_1$ and $c_2$.

Proof. Since $W(t_0) \neq 0$, there exist $c_1, c_2$ such that
$$\begin{pmatrix} y_1(t_0) & y_2(t_0)\\ y'_1(t_0) & y'_2(t_0) \end{pmatrix}\begin{pmatrix} c_1\\ c_2 \end{pmatrix} = \begin{pmatrix} y_0\\ y'_0 \end{pmatrix}.$$
Then $y = c_1y_1 + c_2y_2$ is a solution to the initial value problem (2.1.1).
Theorem 2.1.4. Suppose $y_1$ and $y_2$ are two solutions of the homogeneous equation $L[y] = 0$ such that $W(t_0) \neq 0$ for some $t_0$. Then the general solution of the equation is
$$y(t) = c_1y_1(t) + c_2y_2(t),$$
i.e., every solution of the equation can be written in this form.

Proof. Suppose $u(t)$ is a solution to the homogeneous equation $L[y] = 0$. Let
$$y_0 = u(t_0) \quad\text{and}\quad y'_0 = u'(t_0).$$
By Theorem 2.1.3, the function $u(t)$, as a solution to the initial value problem (2.1.1), must be of the form $u(t) = c_1y_1(t) + c_2y_2(t)$.
Suppose $y_1$ and $y_2$ are two solutions on $I$ such that $W(t_0) \neq 0$ for some $t_0 \in I$; then $y_1, y_2$ are said to form a fundamental set of solutions of the equation.

Two functions $u(t)$ and $v(t)$ are said to be linearly dependent if there exist constants $k_1$ and $k_2$, not both zero, such that $k_1u(t) + k_2v(t) = 0$ for all $t \in I$. They are said to be linearly independent if they are not linearly dependent.

Theorem 2.1.5. Let $u(t)$ and $v(t)$ be two continuous functions on an open interval $I$. If $W(u, v)(t_0) \neq 0$ for some $t_0 \in I$, then $u$ and $v$ are linearly independent.

Note: The converse of the above theorem is false, e.g. $u(t) = t^3$, $v(t) = |t|^3$.
Example 2.1.6. $y_1(t) = e^t$ and $y_2(t) = e^{-2t}$ form a fundamental set of solutions of
$$y'' + y' - 2y = 0$$
since $W(y_1, y_2) = e^t(-2e^{-2t}) - e^t(e^{-2t}) = -3e^{-t}$ is not identically zero.

Example 2.1.7. $y_1(t) = e^t$ and $y_2(t) = te^t$ form a fundamental set of solutions of
$$y'' - 2y' + y = 0$$
since $W(y_1, y_2) = e^t(te^t + e^t) - e^t(te^t) = e^{2t}$ is not identically zero.
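The Wronskian in Example 2.1.6 can be evaluated directly from its definition $W = y_1y_2' - y_1'y_2$. This check is not part of the notes; the sample points are arbitrary.

```python
import math

# y1 = e^t, y2 = e^{-2t}: W(t) = y1*y2' - y1'*y2 should equal -3*e^{-t}.
def W(t):
    y1, dy1 = math.exp(t), math.exp(t)
    y2, dy2 = math.exp(-2 * t), -2 * math.exp(-2 * t)
    return y1 * dy2 - dy1 * y2

assert all(abs(W(t) - (-3 * math.exp(-t))) < 1e-12 for t in [-1.0, 0.0, 2.0])
```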
Example 2.1.8. The functions $y_1(t) = 3$, $y_2(t) = \cos^2 t$ and $y_3(t) = 2\sin^2 t$ are linearly dependent since
$$2(3) + (-6)\cos^2 t + (-3)(2\sin^2 t) = 0.$$
One may justify that the Wronskian
$$\begin{vmatrix} y_1 & y_2 & y_3\\ y'_1 & y'_2 & y'_3\\ y''_1 & y''_2 & y''_3 \end{vmatrix} = 0.$$
Example 2.1.9. Show that $y_1(t) = t^{1/2}$ and $y_2(t) = t^{-1}$ form a fundamental set of solutions of
$$2t^2y'' + 3ty' - y = 0, \quad t > 0.$$

Solution: It is easy to check that $y_1$ and $y_2$ are solutions of the equation. Now
$$W(y_1, y_2)(t) = \begin{vmatrix} t^{1/2} & t^{-1}\\ \frac{1}{2}t^{-1/2} & -t^{-2} \end{vmatrix} = -\frac{3}{2}t^{-3/2}$$
is not identically zero. We conclude that $y_1$ and $y_2$ form a fundamental set of solutions of the equation.
Theorem 2.1.10 (Abel's Theorem). If $y_1$ and $y_2$ are solutions of the equation
$$L[y] = y'' + p(t)y' + q(t)y = 0,$$
where $p$ and $q$ are continuous on an open interval $I$, then
$$W(y_1, y_2)(t) = c\exp\left(-\int p(t)dt\right),$$
where $c$ is a constant. Further, $y_1$ and $y_2$ form a fundamental set of solutions if and only if $W(y_1, y_2)(t)$ is never zero in $I$.

Proof. Since $y_1$ and $y_2$ are solutions, we have
$$\begin{cases} y''_1 + p(t)y'_1 + q(t)y_1 = 0,\\ y''_2 + p(t)y'_2 + q(t)y_2 = 0. \end{cases}$$
If we multiply the first equation by $-y_2$, multiply the second equation by $y_1$ and add the resulting equations, we get
\begin{align*}
(y_1y''_2 - y''_1y_2) + p(t)(y_1y'_2 - y'_1y_2) &= 0\\
W' + p(t)W &= 0
\end{align*}
which is a first-order linear and separable differential equation. The solution can be obtained easily as
$$W(t) = c\exp\left(-\int p(t)dt\right),$$
where $c$ is a constant. Since the exponential function is never zero, $W(y_1, y_2)$ is never zero on $I$ if and only if $c \neq 0$, which is also equivalent to $y_1$ and $y_2$ forming a fundamental set of solutions.
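Abel's formula can be illustrated on Example 2.1.9. Writing $2t^2y'' + 3ty' - y = 0$ in standard form gives $p(t) = 3/(2t)$, so $W = c\exp(-\int p\,dt) = ct^{-3/2}$; the Wronskian computed there was $-\frac{3}{2}t^{-3/2}$, i.e. $c = -3/2$. The snippet below (not part of the notes) confirms this numerically.

```python
# Wronskian of y1 = t^{1/2}, y2 = t^{-1}, compared with Abel's prediction
# c * t^{-3/2} with c = -3/2, for t > 0.
def W(t):
    y1, dy1 = t ** 0.5, 0.5 * t ** -0.5
    y2, dy2 = t ** -1.0, -t ** -2.0
    return y1 * dy2 - dy1 * y2

assert all(abs(W(t) - (-1.5) * t ** -1.5) < 1e-12 for t in [0.5, 1.0, 4.0])
```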
Theorem 2.1.11. Let $y_1$ and $y_2$ be two solutions of
$$L[y] = y'' + p(t)y' + q(t)y = 0,$$
where $p$ and $q$ are continuous on an open interval $I$. Then the following statements are equivalent.

1. $y_1$ and $y_2$ form a fundamental set of solutions, i.e., $W(y_1, y_2)(t_0) \neq 0$ for some $t_0 \in I$.

2. $W(y_1, y_2)(t) \neq 0$ for all $t \in I$.

3. Every solution of the equation is of the form $c_1y_1 + c_2y_2$ for some constants $c_1, c_2$.

4. $y_1$ and $y_2$ are linearly independent.

Proof. (1)⇒(2): Follows from Abel's Theorem.

(2)⇒(3): By Theorem 2.1.4.

(3)⇒(4): Take any $t_0 \in I$. By Theorem 2.1.1, there exist solutions $u(t), v(t)$ of the equation such that
$$\begin{cases} u(t_0) = 1\\ u'(t_0) = 0 \end{cases} \quad\text{and}\quad \begin{cases} v(t_0) = 0\\ v'(t_0) = 1. \end{cases}$$
Suppose (3) is true; then there exist constants $a_1, a_2, b_1, b_2$ such that
$$\begin{cases} u(t) = a_1y_1(t) + a_2y_2(t)\\ v(t) = b_1y_1(t) + b_2y_2(t). \end{cases}$$
Thus
$$\begin{pmatrix} y_1(t_0) & y_2(t_0)\\ y'_1(t_0) & y'_2(t_0) \end{pmatrix}\begin{pmatrix} a_1 & b_1\\ a_2 & b_2 \end{pmatrix} = \begin{pmatrix} u(t_0) & v(t_0)\\ u'(t_0) & v'(t_0) \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}.$$
Hence the matrix
$$\begin{pmatrix} y_1(t_0) & y_2(t_0)\\ y'_1(t_0) & y'_2(t_0) \end{pmatrix}$$
is non-singular. To prove (4), suppose $c_1y_1(t) + c_2y_2(t) = 0$ for all $t \in I$; then
$$\begin{pmatrix} y_1(t_0) & y_2(t_0)\\ y'_1(t_0) & y'_2(t_0) \end{pmatrix}\begin{pmatrix} c_1\\ c_2 \end{pmatrix} = \begin{pmatrix} 0\\ 0 \end{pmatrix}.$$
This implies that $c_1 = c_2 = 0$. Therefore $y_1$ and $y_2$ are linearly independent.

(4)⇒(1): Suppose (1) is not true, i.e., $W(y_1, y_2)(t)$ is identically zero. Let $t_0 \in I$ be any point; we have $W(y_1, y_2)(t_0) = 0$. Consequently, there exist constants $c_1$ and $c_2$, not both zero, such that
$$\begin{cases} c_1y_1(t_0) + c_2y_2(t_0) = 0,\\ c_1y'_1(t_0) + c_2y'_2(t_0) = 0. \end{cases}$$
Let $u(t) = c_1y_1(t) + c_2y_2(t)$. Then $u(t)$ is also a solution of the equation, with initial conditions
$$u(t_0) = u'(t_0) = 0.$$
Therefore $u(t) = 0$ for all $t \in I$ by Theorem 2.1.1. Hence $y_1$ and $y_2$ are linearly dependent.
2.2 Reduction of order

Suppose we know one solution $y_1(t)$, not everywhere zero, of
$$y'' + p(t)y' + q(t)y = 0. \tag{2.2.1}$$
To find a second solution, let
$$y(t) = v(t)y_1(t).$$
We have
$$y' = v'y_1 + vy'_1 \quad\text{and}\quad y'' = v''y_1 + 2v'y'_1 + vy''_1.$$
Then (2.2.1) becomes
$$y_1v'' + (2y'_1 + py_1)v' + (y''_1 + py'_1 + qy_1)v = 0. \tag{2.2.2}$$
Since $y_1$ is a solution of (2.2.1), the coefficient of $v$ in (2.2.2) is zero. So the equation reads
$$y_1v'' + (2y'_1 + py_1)v' = 0,$$
which is actually a first order equation for $v'$.
Example 2.2.1. Given that $y_1(t) = e^{-2t}$ is a solution of
$$y'' + 4y' + 4y = 0,$$
find the general solution of the equation.

Solution: We set $y = e^{-2t}v$; then
$$y' = e^{-2t}v' - 2e^{-2t}v, \quad\text{and}\quad y'' = e^{-2t}v'' - 4e^{-2t}v' + 4e^{-2t}v.$$
Thus the equation becomes
\begin{align*}
e^{-2t}v'' - 4e^{-2t}v' + 4e^{-2t}v + 4(e^{-2t}v' - 2e^{-2t}v) + 4e^{-2t}v &= 0\\
e^{-2t}v'' &= 0\\
v'' &= 0\\
v' &= C_1\\
v &= C_1t + C_2
\end{align*}
Therefore the general solution is
$$y = e^{-2t}(C_1t + C_2) = C_1te^{-2t} + C_2e^{-2t}.$$
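Both pieces of the general solution can be tested against the equation numerically. This check is not part of the notes; the finite-difference tolerances are chosen conservatively.

```python
import math

# Verify that e^{-2t} and t*e^{-2t} both satisfy y'' + 4y' + 4y = 0,
# approximating y' and y'' by central differences.
def residual(y, t, h=1e-5):
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return d2 + 4 * d1 + 4 * y(t)

sols = [lambda t: math.exp(-2 * t), lambda t: t * math.exp(-2 * t)]
assert all(abs(residual(y, t)) < 1e-4 for y in sols for t in [0.0, 0.5, 1.0])
```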
Example 2.2.2. Given that $y_1(t) = t^{-1}$ is a solution of
$$2t^2y'' + 3ty' - y = 0, \quad t > 0,$$
find the general solution of the equation.

Solution: We set $y = vt^{-1}$; then
$$y' = v't^{-1} - vt^{-2}, \quad\text{and}\quad y'' = v''t^{-1} - 2v't^{-2} + 2vt^{-3}.$$
Thus the equation becomes
\begin{align*}
2t^2(v''t^{-1} - 2v't^{-2} + 2vt^{-3}) + 3t(v't^{-1} - vt^{-2}) - vt^{-1} &= 0\\
2tv'' - v' &= 0\\
t^{-\frac{1}{2}}v'' - \frac{1}{2}t^{-\frac{3}{2}}v' &= 0\\
\frac{d}{dt}\left(t^{-\frac{1}{2}}v'\right) &= 0\\
t^{-\frac{1}{2}}v' &= C\\
v' &= Ct^{\frac{1}{2}}\\
v &= C_1t^{\frac{3}{2}} + C_2
\end{align*}
Therefore the general solution is
$$y = \left(C_1t^{\frac{3}{2}} + C_2\right)t^{-1} = C_1t^{\frac{1}{2}} + C_2t^{-1}.$$
2.3 Homogeneous equations with constant coefficients

We consider the homogeneous equation with constant coefficients
$$a\frac{d^2y}{dt^2} + b\frac{dy}{dt} + cy = 0, \quad t \in \mathbb{R}.$$
The equation
$$ar^2 + br + c = 0$$
is called the characteristic equation of the differential equation. Let $r_1, r_2$ be the two roots (which can be equal) of the characteristic equation.

Case 1. Both $r_1, r_2$ are real, and $r_1 \neq r_2$ ($b^2 - 4ac > 0$). Then the general solution is given by
$$y = C_1e^{r_1t} + C_2e^{r_2t},$$
where $C_1, C_2$ are arbitrary constants.

Case 2. $r_1 = r_2$ is real ($b^2 - 4ac = 0$). Then the general solution is given by
$$y = C_1e^{r_1t} + C_2te^{r_1t},$$
where $C_1, C_2$ are arbitrary constants.

Case 3. Both $r_1, r_2$ are not real ($b^2 - 4ac < 0$). Write
$$r_1, r_2 = \lambda \pm i\mu \quad (\mu > 0).$$
Then the general solution is given by
$$y = e^{\lambda t}\left(C_1\cos(\mu t) + C_2\sin(\mu t)\right),$$
where $C_1, C_2$ are arbitrary constants.
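The three cases can be sorted mechanically from the discriminant. The helper below is our own sketch (the name `char_roots` is not from the notes); its assertions use the characteristic equations of Examples 2.3.1–2.3.3 that follow.

```python
import cmath

# Classify a*y'' + b*y' + c*y = 0 by the discriminant b^2 - 4ac
# and return the characteristic roots.
def char_roots(a, b, c):
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        case = "distinct real"
    elif disc == 0:
        case = "repeated real"
    else:
        case = "complex"
    return case, r1, r2

case, r1, r2 = char_roots(1, 5, 6)       # y'' + 5y' + 6y = 0
assert case == "distinct real" and {r1, r2} == {-2, -3}
case, r1, r2 = char_roots(4, -4, 1)      # 4y'' - 4y' + y = 0
assert case == "repeated real" and r1 == r2 == 0.5
case, r1, r2 = char_roots(16, -8, 145)   # 16y'' - 8y' + 145y = 0
assert case == "complex" and abs(r1 - (0.25 + 3j)) < 1e-12
```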
Example 2.3.1. Solve
$$y'' + 5y' + 6y = 0.$$

Solution: The general solution is
$$y = C_1e^{-2t} + C_2e^{-3t}.$$
Example 2.3.2. Solve the initial value problem
$$4y'' - 4y' + y = 0, \quad y(0) = 2, \quad y'(0) = -4.$$

Solution: The general solution is
$$y = C_1e^{t/2} + C_2te^{t/2}.$$
Now
$$\begin{cases} y(0) = C_1 = 2\\ y'(0) = C_1/2 + C_2 = -4 \end{cases} \implies \begin{cases} C_1 = 2\\ C_2 = -5 \end{cases}$$
Thus
$$y = 2e^{t/2} - 5te^{t/2}.$$
Example 2.3.3. Solve the initial value problem
$$16y'' - 8y' + 145y = 0, \quad y(0) = -2, \quad y'(0) = 1.$$

Solution: The roots of the characteristic equation are
$$r_1, r_2 = \frac{1}{4} \pm 3i.$$
Thus the general solution is
$$y = C_1e^{t/4}\cos 3t + C_2e^{t/4}\sin 3t.$$
Now
$$\begin{cases} y(0) = C_1 = -2\\ y'(0) = C_1/4 + 3C_2 = 1 \end{cases} \implies \begin{cases} C_1 = -2\\ C_2 = 1/2 \end{cases}$$
$$y = -2e^{t/4}\cos 3t + \frac{1}{2}e^{t/4}\sin 3t.$$

Note: This is a case of growing oscillation.

[Figure: graph of $y(t)$ for $0 \le t \le 10$, a growing oscillation.]
Example 2.3.4. Solve
$$y'' + y' + y = 0.$$

Solution: The roots of the characteristic equation are
$$r_1, r_2 = -\frac{1}{2} \pm \frac{\sqrt{3}}{2}i.$$
Thus the general solution is
$$y = C_1e^{-t/2}\cos(\sqrt{3}t/2) + C_2e^{-t/2}\sin(\sqrt{3}t/2),$$
where $C_1, C_2$ are arbitrary constants. This is a case of decaying oscillation.

[Figure: graph of $y(t)$ for $0 \le t \le 16$, a decaying oscillation.]
Example 2.3.5. Solve
$$y'' + 9y = 0.$$

Solution: The general solution is
$$y = C_1\cos 3t + C_2\sin 3t,$$
which oscillates steadily.
Note: In the second case, i.e. $r_1 = r_2$, the solution $y = te^{r_1t}$ can be obtained by the method of reduction of order explained in Section 2.2 (here we take $a = 1$). Determine a function $v(t)$ such that $y = v(t)e^{r_1t}$ becomes a solution of the differential equation. Since
\begin{align*}
y' &= v(t)r_1e^{r_1t} + v'(t)e^{r_1t},\\
y'' &= v(t)r_1^2e^{r_1t} + 2r_1v'(t)e^{r_1t} + v''(t)e^{r_1t},
\end{align*}
and
$$y'' + by' + cy = (b + 2r_1)v'(t)e^{r_1t} + v''(t)e^{r_1t} = v''e^{r_1t}$$
(note $b + 2r_1 = 0$ since the repeated root is $r_1 = -b/2$), we have $v'' = 0$, i.e. $v(t) = k_1t + k_2$ for some constants $k_1, k_2$. Hence $y = te^{r_1t}$ is a solution.
2.4 Method of Undetermined Coefficients

Throughout this section, we assume that the coefficient of $y''$ is 1 and consider
$$L[y] = y'' + by' + cy.$$
To solve the nonhomogeneous equation
$$L[y] = g \quad\text{on } I, \tag{2.4.1}$$
it suffices to obtain the general solution $y = C_1y_1 + C_2y_2$ (as described above) of the associated homogeneous equation
$$L[y] = 0, \tag{2.4.2}$$
and find a (particular) solution $Y$ of (2.4.1). For, any solution $u$ of (2.4.1) is then given by
$$u = C_1y_1 + C_2y_2 + Y$$
for some (suitable) constants $C_1$ and $C_2$. This holds because $u - Y$ is a solution of (2.4.2); hence, by the preceding section, it is given by $C_1y_1 + C_2y_2$ for some particular constants $C_1$ and $C_2$.

When $g(t) = a_1g_1(t) + a_2g_2(t) + \cdots + a_kg_k(t)$, where $a_1, a_2, \ldots, a_k$ are real numbers and each $g_i(t)$ is of the form $e^{\alpha t}$, $\cos\beta t$, $\sin\beta t$, $e^{\alpha t}\cos\beta t$, $e^{\alpha t}\sin\beta t$, a polynomial in $t$, or a product of a polynomial and one of the above functions, then a particular solution $Y_i(t)$ is of the form listed in the following table.
The particular solution of $ay'' + by' + cy = g_i(t)$:

$g_i(t) = P_n(t) = a_nt^n + \cdots + a_1t + a_0$:
  $Y_i(t) = t^s(A_nt^n + \cdots + A_1t + A_0)$

$g_i(t) = P_n(t)e^{\alpha t}$:
  $Y_i(t) = t^s(A_nt^n + \cdots + A_1t + A_0)e^{\alpha t}$

$g_i(t) = P_n(t)\cos\beta t$ or $P_n(t)\sin\beta t$:
  $Y_i(t) = t^s\left[(A_nt^n + \cdots + A_1t + A_0)\cos\beta t + (B_nt^n + \cdots + B_1t + B_0)\sin\beta t\right]$

$g_i(t) = P_n(t)e^{\alpha t}\cos\beta t$ or $P_n(t)e^{\alpha t}\sin\beta t$:
  $Y_i(t) = t^s\left[(A_nt^n + \cdots + A_1t + A_0)e^{\alpha t}\cos\beta t + (B_nt^n + \cdots + B_1t + B_0)e^{\alpha t}\sin\beta t\right]$

Note: Here $s = 0, 1$ or $2$ is the smallest nonnegative integer that will ensure that no term in $Y_i(t)$ is a solution of the corresponding homogeneous equation.
Example 2.4.3. Find a particular solution of $y'' + y' - 2y = 8e^{2t}$.

Solution: A particular solution is of the form $Y = Ae^{2t}$, where $A$ is a constant to be determined. Now
$$Y'' + Y' - 2Y = 4Ae^{2t}.$$
So we set $A = 2$ and a particular solution is given by
$$Y = 2e^{2t}.$$
Example 2.4.4. $y'' + y' - 2y = 9e^t$.

Solution: (This time $Y = Ae^t$ does not work.) Now $1$ is a (simple) root of the characteristic equation $r^2 + r - 2 = 0$. We find a particular solution by trying
$$Y = Ate^t.$$
Then
$$Y'' + Y' - 2Y = (At + 2A)e^t + (At + A)e^t - 2Ate^t = 3Ae^t.$$
So we set $A = 3$ and a particular solution is given by
$$Y = 3te^t.$$
Example 2.4.5. $y'' + y' - 2y = -10\sin t$.

Solution: We should set $Y = A\cos t + B\sin t$. Then
$$\begin{cases} Y' = B\cos t - A\sin t,\\ Y'' = -A\cos t - B\sin t. \end{cases}$$
Now
$$Y'' + Y' - 2Y = (-3A + B)\cos t + (-A - 3B)\sin t.$$
So we set
$$\begin{cases} -3A + B = 0,\\ -A - 3B = -10, \end{cases}$$
which gives $A = 1$, $B = 3$. Therefore a particular solution is given by
$$Y(t) = \cos t + 3\sin t.$$
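The undetermined-coefficients answer can be confirmed by substituting the exact derivatives back into the operator. This check is not part of the notes.

```python
import math

# Y = cos t + 3 sin t: compute Y'' + Y' - 2Y in closed form and
# compare with -10 sin t.
def lhs(t):
    Y = math.cos(t) + 3 * math.sin(t)
    dY = -math.sin(t) + 3 * math.cos(t)
    d2Y = -math.cos(t) - 3 * math.sin(t)
    return d2Y + dY - 2 * Y

assert all(abs(lhs(t) - (-10 * math.sin(t))) < 1e-12 for t in [0.0, 0.7, 2.0])
```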
Example 2.4.6. $y'' + y' - 2y = 26e^t\cos 2t$.

Solution: A particular solution is of the form $Y = Ae^t\cos 2t + Be^t\sin 2t$. Then
\begin{align*}
Y' &= (A + 2B)e^t\cos 2t + (-2A + B)e^t\sin 2t,\\
Y'' &= (-3A + 4B)e^t\cos 2t + (-4A - 3B)e^t\sin 2t.
\end{align*}
Now
$$Y'' + Y' - 2Y = (-4A + 6B)e^t\cos 2t + (-6A - 4B)e^t\sin 2t.$$
Put
$$\begin{cases} -4A + 6B = 26,\\ -6A - 4B = 0. \end{cases}$$
We have $A = -2$ and $B = 3$. Therefore a particular solution is given by
$$Y = -2e^t\cos 2t + 3e^t\sin 2t.$$
Example 2.4.7. Find the general solution of
$$y'' + y' - 2y = 9e^t - 10\sin t + 26e^t\cos 2t.$$

Solution: The roots of the characteristic equation are $r = 1, -2$. Thus the general solution of the associated homogeneous equation is
$$y_c = c_1e^t + c_2e^{-2t}.$$
The particular solutions corresponding to each term on the right hand side are
\begin{align*}
y'' + y' - 2y &= 9e^t & Y_1 &= 3te^t,\\
y'' + y' - 2y &= -10\sin t & Y_2 &= \cos t + 3\sin t,\\
y'' + y' - 2y &= 26e^t\cos 2t & Y_3 &= -2e^t\cos 2t + 3e^t\sin 2t.
\end{align*}
Therefore the general solution is
$$y = c_1e^t + c_2e^{-2t} + 3te^t + \cos t + 3\sin t - 2e^t\cos 2t + 3e^t\sin 2t.$$
Example 2.4.8. Write down the form of a particular solution for the equation
$$y'' - 4y' + 5y = t^2e^{2t} + 3t\sin t - 2e^{2t}\cos t.$$

Solution: The roots of the characteristic equation are $r = 2 \pm i$. Thus a particular solution is of the form
$$Y(t) = (A_2t^2 + A_1t + A_0)e^{2t} + (B_1t + B_0)\cos t + (C_1t + C_0)\sin t + D_1te^{2t}\cos t + D_2te^{2t}\sin t.$$
Example 2.4.9. Write down the form of a particular solution for the equation
$$y'' - 4y' + 4y = te^{2t} + t^2e^{3t}.$$

Solution: The characteristic equation has a double root $r = 2$. Thus a particular solution is of the form
$$Y(t) = (A_1t^3 + A_0t^2)e^{2t} + (B_2t^2 + B_1t + B_0)e^{3t}.$$
2.5 Variation of Parameters

Theorem 2.5.1. Let
$$W(y_1, y_2) = \begin{vmatrix} y_1 & y_2\\ y'_1 & y'_2 \end{vmatrix},$$
where $y_1, y_2$ form a fundamental set of solutions of
$$y'' + p(t)y' + q(t)y = 0 \quad\text{on } I.$$
Let $u_1, u_2$ be such that
$$u'_1 = \frac{-gy_2}{W(y_1, y_2)} \quad\text{and}\quad u'_2 = \frac{gy_1}{W(y_1, y_2)},$$
where $g(t)$ is a given continuous function on $I$. Then $Y = u_1y_1 + u_2y_2$ is a particular solution of
$$y'' + p(t)y' + q(t)y = g(t) \quad\text{on } I.$$
Second Order Linear Equations 29
Proof. Note that u_1, u_2 satisfy

[ y_1   y_2  ] [ u_1' ]   [ 0 ]
[ y_1'  y_2' ] [ u_2' ] = [ g ].

Let Y = u_1 y_1 + u_2 y_2. Then

Y' = u_1 y_1' + u_1' y_1 + u_2 y_2' + u_2' y_2 = u_1 y_1' + u_2 y_2'

since u_1' y_1 + u_2' y_2 = 0, and

Y'' = u_1 y_1'' + u_2 y_2'' + u_1' y_1' + u_2' y_2' = u_1 y_1'' + u_2 y_2'' + g

since u_1' y_1' + u_2' y_2' = g. Therefore

Y'' + pY' + qY = (u_1 y_1'' + u_2 y_2'' + g) + p(u_1 y_1' + u_2 y_2') + q(u_1 y_1 + u_2 y_2)
             = u_1 (y_1'' + p y_1' + q y_1) + u_2 (y_2'' + p y_2' + q y_2) + g
             = g,

since y_1, y_2 are solutions to the homogeneous equation y'' + py' + qy = 0.
Example 2.5.2. Solve

y'' + 4y = 3/sin t.

Solution: Solving the corresponding homogeneous equation, we let

y_1 = cos 2t,  y_2 = sin 2t.

We have

W(y_1, y_2)(t) = | cos 2t     sin 2t   |
                 | -2 sin 2t  2 cos 2t | = 2.

So

u_1' = -g y_2 / W(t) = -3 sin 2t / (2 sin t) = -3 cos t,
u_2' = g y_1 / W(t) = 3 cos 2t / (2 sin t) = 3/(2 sin t) - 3 sin t.

Hence

u_1 = -3 sin t + c_1,
u_2 = (3/2) ln|csc t - cot t| + 3 cos t + c_2,

and the general solution is

y = c_1 cos 2t + c_2 sin 2t - 3 sin t cos 2t + (3/2) sin 2t ln|csc t - cot t| + 3 cos t sin 2t
  = c_1 cos 2t + c_2 sin 2t + (3/2) sin 2t ln|csc t - cot t| + 3 sin t,

where c_1, c_2 are constants.
Example 2.5.3. Solve

y'' - 3y' + 2y = e^{3t}/(e^t + 1).

Solution: Solving the corresponding homogeneous equation, we let

y_1 = e^t,  y_2 = e^{2t}.

We have

W(y_1, y_2)(t) = | e^t  e^{2t}  |
                 | e^t  2e^{2t} | = e^{3t}.

So

u_1' = -g y_2 / W(t) = -( e^{3t}/(e^t + 1) ) e^{2t} / e^{3t} = -e^{2t}/(e^t + 1),
u_2' = g y_1 / W(t) = ( e^{3t}/(e^t + 1) ) e^t / e^{3t} = e^t/(e^t + 1).

Thus

u_1 = -∫ e^{2t}/(e^t + 1) dt = -∫ e^t/(e^t + 1) de^t = -∫ ( 1 - 1/(e^t + 1) ) d(e^t + 1)
    = log(e^t + 1) - (e^t + 1) + c_1'

and

u_2 = ∫ e^t/(e^t + 1) dt = ∫ 1/(e^t + 1) d(e^t + 1) = log(e^t + 1) + c_2'.

Therefore the general solution is

y = u_1 y_1 + u_2 y_2
  = ( log(e^t + 1) - (e^t + 1) + c_1' )e^t + ( log(e^t + 1) + c_2' )e^{2t}
  = c_1 e^t + c_2 e^{2t} + (e^t + e^{2t}) log(e^t + 1).
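The closed form just obtained can be verified without redoing the integrals. The following sketch (plain Python; the helper names are ours) checks by finite differences that the particular part (e^t + e^{2t}) log(e^t + 1) satisfies the nonhomogeneous equation of Example 2.5.3:

```python
import math

def y_part(t):
    # Particular part of the solution in Example 2.5.3 (constants dropped).
    return (math.exp(t) + math.exp(2*t)) * math.log(math.exp(t) + 1)

def residual(t, h=1e-4):
    """Finite-difference check of y'' - 3y' + 2y = e^{3t}/(e^t + 1)."""
    d1 = (y_part(t + h) - y_part(t - h)) / (2*h)
    d2 = (y_part(t + h) - 2*y_part(t) + y_part(t - h)) / h**2
    return d2 - 3*d1 + 2*y_part(t) - math.exp(3*t)/(math.exp(t) + 1)

# Should print a number close to zero.
print(max(abs(residual(t/10)) for t in range(-10, 11)))
```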
2.6 Mechanical and electrical vibrations

One of the reasons why second order linear equations with constant coefficients are worth studying is that they serve as mathematical models of simple vibrations.

Mechanical vibrations

Consider a mass m hanging on the end of a vertical spring of original length l. Let u(t), measured positive downward, denote the displacement of the mass from its equilibrium position at time t. Then u(t) is related to the forces acting on the mass through Newton's law of motion

mu''(t) + ku(t) = f(t),   (2.6.1)

where k is the spring constant and f(t) is the net force (excluding gravity and the force from the spring) acting on the mass.

Undamped free vibrations

If there is no external force, then f(t) = 0 and equation (2.6.1) reduces to

mu''(t) + ku(t) = 0.

The general solution is

u = C_1 cos ω_0 t + C_2 sin ω_0 t,

where

ω_0 = √(k/m)

is the natural frequency of the system. The period of the vibration is given by

T = 2π/ω_0 = 2π √(m/k).

We can also write the solution in the form

u(t) = A cos(ω_0 t - δ).

Then A is the amplitude of the vibration. Moreover, u satisfies the initial conditions

u(0) = u_0 = A cos δ,
u'(0) = u_0' = A ω_0 sin δ.

Thus we have

A = √( u_0^2 + u_0'^2/ω_0^2 ),
δ = tan^{-1}( u_0' / (u_0 ω_0) ).
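The conversion from the (C_1, C_2) form to the amplitude-phase form can be sketched in a few lines of Python (the function name is ours; `atan2` is used instead of `tan^{-1}` so the quadrant of δ is handled automatically):

```python
import math

def amplitude_phase(C1, C2):
    """Convert u = C1 cos(w0 t) + C2 sin(w0 t) into A cos(w0 t - delta).

    A = sqrt(C1^2 + C2^2), delta = atan2(C2, C1), so that
    A cos(w0 t - delta) = A cos(delta) cos(w0 t) + A sin(delta) sin(w0 t).
    """
    return math.hypot(C1, C2), math.atan2(C2, C1)

A, delta = amplitude_phase(3.0, 4.0)
print(A)  # amplitude 5.0
# Spot-check the identity at a few times with w0 = 2:
w0 = 2.0
for t in (0.0, 0.3, 1.7):
    lhs = 3.0*math.cos(w0*t) + 4.0*math.sin(w0*t)
    assert abs(lhs - A*math.cos(w0*t - delta)) < 1e-12
```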
Damped free vibrations

If we include the effect of damping, the differential equation governing the motion of the mass is

mu'' + γu' + ku = 0,

where γ > 0 is the damping coefficient. The roots of the corresponding characteristic equation are

r_1, r_2 = ( -γ ± √(γ^2 - 4km) ) / (2m).

The solution of the equation depends on the sign of γ^2 - 4km, as listed in the following table.

Solution of mu'' + γu' + ku = 0

Case          Solution                               Damping
γ^2/4km < 1   e^{-γt/2m}(C_1 cos μt + C_2 sin μt)    Small damping
γ^2/4km = 1   (C_1 t + C_2)e^{-γt/2m}                Critical damping
γ^2/4km > 1   C_1 e^{r_1 t} + C_2 e^{r_2 t}          Overdamped

Here

μ = √( k/m - γ^2/4m^2 ) = ω_0 √( 1 - γ^2/4km )

is called the quasi frequency. As γ^2/4km increases from 0 to 1, the quasi frequency decreases from ω_0 = √(k/m) to 0 and the quasi period 2π/μ increases from 2π√(m/k) to infinity.
Forced vibrations with damping

Suppose that an external force F_0 cos ωt is applied to a damped (γ > 0) spring-mass system. Then the equation of motion is

mu'' + γu' + ku = F_0 cos ωt.

The general solution of the equation must be of the form

u = c_1 u_1(t) + c_2 u_2(t) + A cos(ωt - δ) = u_c(t) + U(t).

Since m, γ, k are all positive, the real parts of the roots of the characteristic equation are always negative. Thus u_c → 0 as t → ∞ and it is called the transient solution. The remaining term U(t) is called the steady-state solution or the forced response. Straightforward, but somewhat lengthy, computations show that

A = F_0/Δ,  cos δ = m(ω_0^2 - ω^2)/Δ,  sin δ = γω/Δ,

where

Δ = √( m^2 (ω_0^2 - ω^2)^2 + γ^2 ω^2 )   and   ω_0 = √(k/m).

If γ^2 < 2mk, resonance occurs, i.e. the maximum amplitude

A_max = F_0 / ( γω_0 √(1 - γ^2/4mk) )

is attained when

ω = ω_max = ω_0 √(1 - γ^2/2mk).

We list in the following table how the amplitude A and phase angle δ of the steady-state oscillation depend on the frequency ω of the external force.

Amplitude and phase of forced vibration

Frequency ω                       Amplitude                            Phase difference δ
ω → 0                             A → F_0/k                            δ → 0
ω = ω_max = ω_0 √(1 - γ^2/2mk)    A_max = F_0/(γω_0 √(1 - γ^2/4mk))    δ = cos^{-1}( 1/√(4mk/γ^2 - 1) )
ω = ω_0                           A = F_0/(γω_0)                       δ = π/2
ω → ∞                             A → 0                                δ → π
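The resonance formulas can be checked numerically: evaluating A(ω) = F_0/Δ directly and comparing with the closed-form maximum. A minimal sketch (plain Python; parameter values are ours, chosen only for illustration):

```python
import math

def amplitude(m, gamma, k, F0, w):
    """Steady-state amplitude A = F0/Delta for m u'' + gamma u' + k u = F0 cos(wt)."""
    delta = math.sqrt(m**2*(k/m - w**2)**2 + gamma**2*w**2)
    return F0/delta

m, gamma, k, F0 = 1.0, 0.5, 4.0, 1.0
w0 = math.sqrt(k/m)
w_max = w0*math.sqrt(1 - gamma**2/(2*m*k))
A_max = F0/(gamma*w0*math.sqrt(1 - gamma**2/(4*m*k)))

# A(w_max) should match the closed form, and nearby frequencies give less.
print(abs(amplitude(m, gamma, k, F0, w_max) - A_max))
```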
Forced vibrations without damping

The equation of motion of an undamped forced oscillator is

mu'' + ku = F_0 cos ωt.

The general solution of the equation is

u = c_1 cos ω_0 t + c_2 sin ω_0 t + F_0 cos ωt / ( m(ω_0^2 - ω^2) ),   if ω ≠ ω_0,
u = c_1 cos ω_0 t + c_2 sin ω_0 t + F_0 t sin ω_0 t / (2mω_0),         if ω = ω_0.

Suppose ω ≠ ω_0. If we assume that the mass is initially at rest so that the initial conditions are u(0) = u'(0) = 0, then the solution is

u = F_0 / ( m(ω_0^2 - ω^2) ) (cos ωt - cos ω_0 t)
  = 2F_0 / ( m(ω_0^2 - ω^2) ) sin( (ω_0 - ω)t/2 ) sin( (ω_0 + ω)t/2 ).

If |ω_0 - ω| is small, then ω_0 + ω is much greater than |ω_0 - ω|. The motion is a rapid oscillation with frequency (ω_0 + ω)/2 but with a slowly varying sinusoidal amplitude

( 2F_0 / ( m|ω_0^2 - ω^2| ) ) | sin( (ω_0 - ω)t/2 ) |.

This type of motion is called a beat and |ω_0 - ω|/2 is the beat frequency.
Electric circuits

Second order linear differential equations with constant coefficients can also be used to study electric circuits. By Kirchhoff's law of electric circuits, the total charge Q on the capacitor in a simple series LCR circuit satisfies the differential equation

L d^2Q/dt^2 + R dQ/dt + Q/C = E(t),

where L is the inductance, R is the resistance, C is the capacitance and E(t) is the impressed voltage. Since the flow of current in the circuit is I = dQ/dt, differentiating the equation with respect to t gives

LI'' + RI' + C^{-1}I = E'(t).

Therefore the results for mechanical vibrations in the preceding paragraphs can be used to study the LCR circuit.
3 Higher order Linear Equations

3.1 Solution space and Wronskian

The theoretical structure and methods of solution developed in the preceding chapter for second order linear equations extend directly to linear equations of third and higher order. An nth order linear differential equation is an equation of the form

L[y] = d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ··· + p_1(t) dy/dt + p_0(t) y = g(t).

Theorem 3.1.1. If the functions p_0(t), p_1(t), ..., p_{n-1}(t), and g(t) are continuous on the open interval I, then there exists exactly one solution to the initial value problem

L[y] = g(t), t ∈ I,
y(t_0) = y_0, y'(t_0) = y_0', ..., y^{(n-1)}(t_0) = y_0^{(n-1)}.

Let y_1, y_2, ..., y_n be solutions to the homogeneous equation

L[y] = 0.

Similar to the second order equation, we define the Wronskian as

W(y_1, y_2, ..., y_n) = | y_1          y_2          ···  y_n          |
                        | y_1'         y_2'         ···  y_n'         |
                        |  ...          ...               ...         |
                        | y_1^{(n-1)}  y_2^{(n-1)}  ···  y_n^{(n-1)}  |.
Theorem 3.1.2. Suppose y_1, y_2, ..., y_n are solutions to the homogeneous equation L[y] = 0 such that W(t_0) ≠ 0 for some t_0 ∈ I. Then every solution to the equation can be written in the form

y(t) = c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t).

Theorem 3.1.3 (Abel's Theorem). Suppose y_1, y_2, ..., y_n are solutions of a linear differential equation

L[y] = d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ··· + p_1(t) dy/dt + p_0(t) y = 0, on I.

Then the Wronskian W(y_1, y_2, ..., y_n) satisfies

W(y_1, y_2, ..., y_n)(t) = c exp( -∫ p_{n-1}(t) dt )

for some constant c.
Proof. It suffices to prove that W(t) satisfies the first order differential equation

W'(t) + p_{n-1}(t) W(t) = 0.

To this end write the row vector

y = (y_1, y_2, ..., y_n),

so that W is the determinant with rows y, y', ..., y^{(n-1)}. Differentiating the determinant row by row gives a sum of n determinants, in the kth of which the kth row has been differentiated. Each of the first n - 1 of these determinants has two identical rows and hence vanishes, so

W'(t) = det( y; y'; ...; y^{(n-2)}; y^{(n)} ).

Since every y_i solves the equation,

y^{(n)} = -( p_{n-1} y^{(n-1)} + p_{n-2} y^{(n-2)} + ··· + p_1 y' + p_0 y ).

Substituting this into the last row and discarding the multiples of the other rows (which do not change the determinant), we obtain

W'(t) = -p_{n-1} det( y; y'; ...; y^{(n-1)} ) = -p_{n-1} W,

as required.

A set of solutions satisfying the condition in the theorem is called a fundamental set of solutions.

A set of functions f_1, f_2, ..., f_n is said to be linearly independent if there do not exist constants k_1, k_2, ..., k_n, not all zero, such that

k_1 f_1(t) + k_2 f_2(t) + ··· + k_n f_n(t) = 0

for all t ∈ I. A set of solutions y_1, y_2, ..., y_n of the homogeneous equation L[y] = 0 forms a fundamental set if and only if they are linearly independent.

Let Y(t) be a particular solution of the nonhomogeneous equation

L[y] = g.

Then every solution of the equation can be written in the form

y(t) = c_1 y_1(t) + c_2 y_2(t) + ··· + c_n y_n(t) + Y(t),

where y_1, y_2, ..., y_n is a fundamental set of solutions of the homogeneous equation L[y] = 0.
3.2 Homogeneous equations with constant coefficients

Consider the nth order linear homogeneous equation

L[y] = a_n d^n y/dt^n + ··· + a_1 dy/dt + a_0 y = 0,

where a_0, a_1, ..., a_n are real constants. The equation

Z(r) = a_n r^n + ··· + a_1 r + a_0 = 0

is called the characteristic equation of the differential equation.

If λ is a real root of the characteristic equation with multiplicity m, then

e^{λt}, te^{λt}, ..., t^{m-1}e^{λt}

are solutions to the equation.

If ±iμ is a purely imaginary pair of roots of the characteristic equation with multiplicity m, then

cos μt, t cos μt, ..., t^{m-1} cos μt,   and   sin μt, t sin μt, ..., t^{m-1} sin μt

are solutions to the equation.

If λ ± iμ is a complex pair of roots of the characteristic equation with multiplicity m, then

e^{λt} cos μt, te^{λt} cos μt, ..., t^{m-1}e^{λt} cos μt,

and

e^{λt} sin μt, te^{λt} sin μt, ..., t^{m-1}e^{λt} sin μt

are solutions to the equation.

Solutions of L[y] = 0

Root with multiplicity m     Solutions
Real number λ                e^{λt}, te^{λt}, ..., t^{m-1}e^{λt}
Purely imaginary pair ±iμ    cos μt, t cos μt, ..., t^{m-1} cos μt,
                             sin μt, t sin μt, ..., t^{m-1} sin μt
Complex pair λ ± iμ          e^{λt} cos μt, te^{λt} cos μt, ..., t^{m-1}e^{λt} cos μt,
                             e^{λt} sin μt, te^{λt} sin μt, ..., t^{m-1}e^{λt} sin μt

Note that by the fundamental theorem of algebra, there are exactly n functions of the above forms. It can be proved that the Wronskian of these functions is not identically zero. Thus they form a fundamental set of solutions of the homogeneous equation L[y] = 0.
Example 3.2.1. Find the general solution of

y^(4) + y''' - 7y'' - y' + 6y = 0.

Solution: The roots of the characteristic equation

r^4 + r^3 - 7r^2 - r + 6 = 0

are

r = -3, -1, 1, 2.

Therefore the general solution is

y(t) = c_1 e^{-3t} + c_2 e^{-t} + c_3 e^t + c_4 e^{2t}.
Example 3.2.2. Solve the initial value problem

y^(4) - y = 0,
y(0) = 7/2, y'(0) = -4, y''(0) = 5/2, y'''(0) = -2.

Solution: The roots of the characteristic equation

r^4 - 1 = 0

are

r = ±1, ±i.

Therefore the general solution is

y(t) = c_1 e^t + c_2 e^{-t} + c_3 cos t + c_4 sin t.

The initial conditions give

[ 1  1  1  0 ] [ c_1 ]   [ 7/2 ]
[ 1 -1  0  1 ] [ c_2 ]   [ -4  ]
[ 1  1 -1  0 ] [ c_3 ] = [ 5/2 ]
[ 1 -1  0 -1 ] [ c_4 ]   [ -2  ].

Thus

(c_1, c_2, c_3, c_4) = (0, 3, 1/2, -1)

and the solution is

y(t) = 3e^{-t} + (1/2) cos t - sin t.
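The 4x4 linear system for the constants can be solved mechanically. The sketch below implements a tiny Gauss-Jordan elimination in plain Python (the helper `solve` is ours, written only to check this example; a library routine would normally be used):

```python
def solve(A, b):
    """Solve the square system A c = b by Gauss-Jordan elimination
    with partial pivoting (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f*y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[1,  1, 1,  0],
     [1, -1, 0,  1],
     [1,  1, -1, 0],
     [1, -1, 0, -1]]
b = [7/2, -4, 5/2, -2]
print(solve(A, b))  # approximately [0, 3, 1/2, -1], as in Example 3.2.2
```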
Example 3.2.3. Find the general solution of

y^(4) + 2y'' + y = 0.

Solution: The characteristic equation is

r^4 + 2r^2 + 1 = 0

and its roots are

r = i, i, -i, -i.

Thus the general solution is

y(t) = c_1 cos t + c_2 sin t + c_3 t cos t + c_4 t sin t.
Example 3.2.4. Find the general solution of

y^(4) + y = 0.

Solution: The characteristic equation is

r^4 + 1 = 0

and its roots are

r = e^{(2k+1)πi/4},  k = 0, 1, 2, 3
  = cos( (2k+1)π/4 ) + i sin( (2k+1)π/4 ),  k = 0, 1, 2, 3
  = ±√2/2 ± (√2/2)i.

Thus the general solution is

y(t) = c_1 e^{√2 t/2} cos(√2 t/2) + c_2 e^{√2 t/2} sin(√2 t/2) + c_3 e^{-√2 t/2} cos(√2 t/2) + c_4 e^{-√2 t/2} sin(√2 t/2).
3.3 Method of undetermined coefficients

The main difference in using the method of undetermined coefficients for higher order equations stems from the fact that roots of the characteristic equation may have multiplicity greater than 2. Consequently, terms proposed for the nonhomogeneous part of the solution may need to be multiplied by higher powers of t to make them different from terms in the solution of the corresponding homogeneous equation.

Example 3.3.1. Find the general solution of

y''' - 3y'' + 3y' - y = 2te^t - e^t.

Solution: The characteristic equation

r^3 - 3r^2 + 3r - 1 = 0

has a triple root r = 1. So the general solution of the corresponding homogeneous equation is

y_c(t) = c_1 e^t + c_2 te^t + c_3 t^2 e^t.

Since r = 1 is a root of multiplicity 3, a particular solution of the differential equation is of the form

Y(t) = At^4 e^t + Bt^3 e^t.

Substituting Y(t) into the equation, we have

24Ate^t + 6Be^t = 2te^t - e^t.

Thus

A = 1/12,  B = -1/6.

Therefore the general solution is

y(t) = c_1 e^t + c_2 te^t + c_3 t^2 e^t - (1/6) t^3 e^t + (1/12) t^4 e^t.
Example 3.3.2. Find a particular solution of the equation

y^(4) + 2y'' + y = 4 cos t - sin t.

Solution: The general solution of the corresponding homogeneous equation is

y_c(t) = c_1 cos t + c_2 sin t + c_3 t cos t + c_4 t sin t.

A particular solution is of the form

Y(t) = At^2 cos t + Bt^2 sin t.

Substituting it into the equation, we have

-8A cos t - 8B sin t = 4 cos t - sin t.

Thus

A = -1/2,  B = 1/8.

Therefore the general solution of the equation is

y(t) = c_1 cos t + c_2 sin t + c_3 t cos t + c_4 t sin t - (1/2) t^2 cos t + (1/8) t^2 sin t.
Example 3.3.3. Find a particular solution of

y''' - 9y' = t^2 + 3 sin t + e^{3t}.

Solution: The roots of the characteristic equation are r = 0, ±3. A particular solution is of the form

Y(t) = A_1 t^3 + A_2 t^2 + A_3 t + B_1 cos t + B_2 sin t + Cte^{3t}.

Substituting into the equation, we have

6A_1 - 9A_3 - 18A_2 t - 27A_1 t^2 - 10B_2 cos t + 10B_1 sin t + 18Ce^{3t} = t^2 + 3 sin t + e^{3t}.

Thus

A_1 = -1/27,  A_2 = 0,  A_3 = -2/81,  B_1 = 3/10,  B_2 = 0,  C = 1/18.

A particular solution is

Y(t) = -(1/27) t^3 - (2/81) t + (3/10) cos t + (1/18) te^{3t}.
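The finite-difference check used earlier extends to third-order equations; a fourth difference formula estimates y'''. A sketch for Example 3.3.3 (plain Python; names are ours):

```python
import math

def Y(t):
    # Particular solution found in Example 3.3.3.
    return -t**3/27 - 2*t/81 + 3*math.cos(t)/10 + t*math.exp(3*t)/18

def residual(t, h=1e-3):
    """Finite-difference estimate of y''' - 9y' minus the forcing term."""
    d1 = (Y(t + h) - Y(t - h)) / (2*h)
    d3 = (Y(t + 2*h) - 2*Y(t + h) + 2*Y(t - h) - Y(t - 2*h)) / (2*h**3)
    return d3 - 9*d1 - (t**2 + 3*math.sin(t) + math.exp(3*t))

# Should print a small number (third differences are noisier than second).
print(max(abs(residual(t/10)) for t in range(-10, 11)))
```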
Example 3.3.4. Write down the form of a particular solution of the equation

y^(3) + y'' + 4y' + 4y = 4t + t^2 e^{3t} - 3te^{-t} + cos 2t.

Solution: The roots of the characteristic equation

r^3 + r^2 + 4r + 4 = (r^2 + 4)(r + 1) = 0

are ±2i, -1. Thus a particular solution is of the form

Y(t) = A_1 t + A_0 + (B_2 t^2 + B_1 t + B_0)e^{3t} + (C_1 t + C_0)te^{-t} + D_1 t cos 2t + E_1 t sin 2t.
Example 3.3.5. Write down the form of a particular solution of the equation

y^(5) + 3y^(4) + 3y^(3) + y'' = 6t + 7t^3 e^{2t} - 5t^2 e^{-t} + 3e^{-t} + 8e^{3t} cos t.

Solution: The characteristic equation is

r^5 + 3r^4 + 3r^3 + r^2 = r^2 (r + 1)^3 = 0

and its roots are 0 and -1 with multiplicities 2 and 3 respectively. Thus a particular solution is of the form

Y(t) = A_0 t^2 + A_1 t^3 + (B_0 + B_1 t + B_2 t^2 + B_3 t^3)e^{2t} + (C_0 t^3 + C_1 t^4 + C_2 t^5)e^{-t} + e^{3t}(D_0 cos t + D_1 sin t).
3.4 Method of variation of parameters

Theorem 3.4.1. Suppose y_1, y_2, ..., y_n are solutions of the homogeneous linear differential equation

L[y] = d^n y/dt^n + p_{n-1}(t) d^{n-1}y/dt^{n-1} + ··· + p_1(t) dy/dt + p_0(t) y = 0, on I.

Let W(t) = W(y_1, y_2, ..., y_n)(t) be the Wronskian and W_k(t) be the determinant obtained from W(t) by replacing the kth column by the column (0, ..., 0, 1)^T. Then

Y(t) = Σ_{k=1}^n y_k(t) ∫_{t_0}^t g(s) W_k(s)/W(s) ds

is a particular solution to the nonhomogeneous equation

L[y](t) = g(t).
Proof. Let

Y(t) = v_1(t) y_1(t) + v_2(t) y_2(t) + ··· + v_n(t) y_n(t),

where

v_k(t) = ∫_{t_0}^t g(s) W_k(s)/W(s) ds,  k = 1, 2, ..., n.

Then

v_k'(t) = g(t) W_k(t)/W(t)

and the functions v_1', v_2', ..., v_n' satisfy the system of equations

v_1' y_1         + v_2' y_2         + ··· + v_n' y_n         = 0,
v_1' y_1'        + v_2' y_2'        + ··· + v_n' y_n'        = 0,
  ...
v_1' y_1^{(n-2)} + v_2' y_2^{(n-2)} + ··· + v_n' y_n^{(n-2)} = 0,
v_1' y_1^{(n-1)} + v_2' y_2^{(n-1)} + ··· + v_n' y_n^{(n-1)} = g(t).

Then one may prove by induction on m that

Y^{(m)} = v_1 y_1^{(m)} + v_2 y_2^{(m)} + ··· + v_n y_n^{(m)},  for m = 0, 1, 2, ..., n - 1,

and

Y^{(n)} = ( v_1 y_1^{(n-1)} + v_2 y_2^{(n-1)} + ··· + v_n y_n^{(n-1)} )'
        = v_1 y_1^{(n)} + ··· + v_n y_n^{(n)} + v_1' y_1^{(n-1)} + ··· + v_n' y_n^{(n-1)}
        = v_1 y_1^{(n)} + v_2 y_2^{(n)} + ··· + v_n y_n^{(n)} + g.

Hence

L[Y] = Y^{(n)} + p_{n-1} Y^{(n-1)} + ··· + p_1 Y' + p_0 Y
     = g + Σ_{k=1}^n v_k y_k^{(n)} + p_{n-1} Σ_{k=1}^n v_k y_k^{(n-1)} + ··· + p_1 Σ_{k=1}^n v_k y_k' + p_0 Σ_{k=1}^n v_k y_k
     = g + Σ_{k=1}^n v_k ( y_k^{(n)} + p_{n-1} y_k^{(n-1)} + ··· + p_1 y_k' + p_0 y_k )
     = g + Σ_{k=1}^n v_k L[y_k]
     = g.

The last equality holds since each y_k is a solution to the homogeneous equation L[y] = 0.
Example 3.4.2. Find a particular solution of

y''' - 2y'' + y' - 2y = 5t.

Solution: The general solution of the corresponding homogeneous equation is

y_c(t) = c_1 e^{2t} + c_2 cos t + c_3 sin t.

Using variation of parameters, let

Y(t) = v_1 e^{2t} + v_2 cos t + v_3 sin t,

and the conditions are

 v_1' e^{2t} + v_2' cos t + v_3' sin t = 0,
2v_1' e^{2t} - v_2' sin t + v_3' cos t = 0,
4v_1' e^{2t} - v_2' cos t - v_3' sin t = 5t.

Solving this system we get

v_1' = te^{-2t},
v_2' = t(2 sin t - cos t),
v_3' = -t(2 cos t + sin t).

Integrating the equations gives

v_1 = -((2t + 1)/4) e^{-2t},
v_2 = -t(2 cos t + sin t) - cos t + 2 sin t,
v_3 = t(cos t - 2 sin t) - 2 cos t - sin t.

Therefore

Y(t) = v_1 y_1 + v_2 y_2 + v_3 y_3
     = ( -((2t + 1)/4) e^{-2t} ) e^{2t} + ( -t(2 cos t + sin t) - cos t + 2 sin t ) cos t
       + ( t(cos t - 2 sin t) - 2 cos t - sin t ) sin t
     = -(5/4)(2t + 1)

is a particular solution.
Example 3.4.3. Find a particular solution of

y''' + y' = sec^2 t,  t ∈ (-π/2, π/2).

Solution: The general solution of the corresponding homogeneous equation is

y_c(t) = c_1 + c_2 cos t + c_3 sin t.

Using variation of parameters, let

Y(t) = v_1 + v_2 cos t + v_3 sin t,

and the conditions are

v_1' + v_2' cos t + v_3' sin t = 0,
     - v_2' sin t + v_3' cos t = 0,
     - v_2' cos t - v_3' sin t = sec^2 t.

Solving this system we get

v_1' = sec^2 t,
v_2' = -sec t,
v_3' = -sec t tan t.

Integrating the equations gives

v_1 = tan t,
v_2 = -ln|sec t + tan t|,
v_3 = -sec t.

Therefore

Y(t) = v_1 y_1 + v_2 y_2 + v_3 y_3
     = tan t - cos t ln|sec t + tan t| - sec t sin t
     = -cos t ln|sec t + tan t|

is a particular solution.
4 Systems of First Order Linear Equations

4.1 Basic properties of systems of first order linear equations

In this chapter, we study systems of first order linear equations

x_1' = p_11(t) x_1 + p_12(t) x_2 + ··· + p_1n(t) x_n + g_1(t),
x_2' = p_21(t) x_1 + p_22(t) x_2 + ··· + p_2n(t) x_n + g_2(t),
  ...
x_n' = p_n1(t) x_1 + p_n2(t) x_2 + ··· + p_nn(t) x_n + g_n(t).

We can also write the system in matrix form

x' = P(t)x + g(t),  t ∈ I,

where

x = (x_1, x_2, ..., x_n)^T,  g(t) = (g_1(t), g_2(t), ..., g_n(t))^T,

and

P(t) = [ p_11(t)  p_12(t)  ···  p_1n(t) ]
       [ p_21(t)  p_22(t)  ···  p_2n(t) ]
       [   ...      ...           ...   ]
       [ p_n1(t)  p_n2(t)  ···  p_nn(t) ].
Example 4.1.1. We can use the substitution x_1(t) = y(t) and x_2(t) = y'(t) to transform the second order differential equation

y'' + p(t)y' + q(t)y = g(t)

to a system of linear equations

x_1' = x_2,
x_2' = -q(t) x_1 - p(t) x_2 + g(t).

Theorem 4.1.2 (Existence and uniqueness theorem). If all the functions {p_ij} and {g_i} are continuous on an open interval I, then for any t_0 ∈ I and x_0 ∈ R^n, there exists a unique solution to the initial value problem

x' = P(t)x + g(t),  t ∈ I,
x(t_0) = x_0.
Definition 4.1.3. Let x^(1), x^(2), ..., x^(n) be solutions to the system x' = P(t)x and let

X(t) = [ x^(1)  x^(2)  ···  x^(n) ]

be the matrix whose columns are x^(1), x^(2), ..., x^(n). Then the Wronskian is defined as

W[x^(1), x^(2), ..., x^(n)](t) = det(X(t)).

The solutions x^(1), x^(2), ..., x^(n) are linearly independent at a point t_0 ∈ I if and only if W[x^(1), x^(2), ..., x^(n)](t_0) ≠ 0. If W[x^(1), x^(2), ..., x^(n)](t_0) ≠ 0 for some t_0 ∈ I, then we say that x^(1), x^(2), ..., x^(n) form a fundamental set of solutions. The following theorem then implies that x^(1), x^(2), ..., x^(n) are linearly independent for any t ∈ I.

Theorem 4.1.4. Let x^(1), x^(2), ..., x^(n) be solutions to the system

x' = P(t)x,  t ∈ I.

Then W[x^(1), x^(2), ..., x^(n)](t) is either identically zero on I or else never zero on I.
Proof. We are going to prove that the Wronskian W(t) satisfies the equation

W'(t) = tr(P)(t) W(t),

where tr(P)(t) = p_11(t) + p_22(t) + ··· + p_nn(t) is the trace of P(t). First we assume for the time being that x^(1), x^(2), ..., x^(n) are linearly independent. Then there exist functions η_ij(t), 1 ≤ i, j ≤ n, such that

P x^(i) = Σ_{j=1}^n η_ij x^(j),  for i = 1, 2, ..., n.

We have

W' = d/dt det[ x^(1) x^(2) ··· x^(n) ]
   = Σ_{i=1}^n det[ x^(1) x^(2) ··· dx^(i)/dt ··· x^(n) ]
   = Σ_{i=1}^n det[ x^(1) x^(2) ··· P x^(i) ··· x^(n) ]
   = Σ_{i=1}^n det[ x^(1) x^(2) ··· Σ_{j=1}^n η_ij x^(j) ··· x^(n) ]
   = Σ_{i=1}^n det[ x^(1) x^(2) ··· η_ii x^(i) ··· x^(n) ]
   = ( Σ_{i=1}^n η_ii ) det[ x^(1) x^(2) ··· x^(n) ]
   = ( Σ_{i=1}^n η_ii ) W(t).

Here we have used the fact that if two columns of a matrix are identical, then its determinant is zero. Now it is a standard fact in linear algebra that

Σ_{i=1}^n η_ii = tr(P).

By taking limits, it can be seen easily that the equation holds without the assumption of linear independence. Consequently, we have

W(t) = c exp( ∫ tr(P)(t) dt )

for some constant c and the result follows readily.

Note: One may also verify directly that

Σ_{i=1}^n det[ x^(1) x^(2) ··· P x^(i) ··· x^(n) ] = tr(P) det[ x^(1) x^(2) ··· x^(n) ].

Theorem 4.1.5. Let x^(1), x^(2), ..., x^(n) be linearly independent solutions to the system

x' = P(t)x,  t ∈ I.

Then each solution of the system can be expressed as a linear combination

x = c_1 x^(1) + c_2 x^(2) + ··· + c_n x^(n)

in exactly one way.
4.2 Homogeneous linear systems with constant coefficients

From now on we will consider homogeneous linear systems with constant coefficients

x' = Ax,

where A is a constant n × n matrix. Suppose the system has a solution of the form

x = ξ e^{λt},

where ξ is not the zero vector. Then

x' = λξ e^{λt}.

Putting it into the system, we have

(A - λI)ξ = 0.

Since ξ ≠ 0, λ is an eigenvalue and ξ is an eigenvector of the matrix A. Conversely if λ is an eigenvalue of A and ξ is an eigenvector associated with λ, then x = ξ e^{λt} gives a solution to the system.
Example 4.2.1. Solve

x' = [ 1 1 ]
     [ 4 1 ] x.

Solution: Solving the characteristic equation

| 1-λ  1   |
| 4    1-λ | = 0
(1 - λ)^2 - 4 = 0
1 - λ = ±2
λ = 3, -1,

we find that the eigenvalues of the coefficient matrix are λ_1 = 3 and λ_2 = -1 and the associated eigenvectors are

ξ^(1) = (1, 2)^T,  ξ^(2) = (1, -2)^T

respectively. Therefore the general solution is

x = c_1 e^{3t} (1, 2)^T + c_2 e^{-t} (1, -2)^T.
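For 2x2 systems the eigenvalue computation is a quadratic, so the whole method fits in a few lines. A sketch in plain Python (the helper names are ours; this handles only real, distinct eigenvalues, as in the example above):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic quadratic
    lambda^2 - (a + d) lambda + (ad - bc) = 0 (real distinct case only)."""
    tr, det = a + d, a*d - b*c
    s = math.sqrt(tr*tr - 4*det)
    return (tr + s)/2, (tr - s)/2

def eigvec2(a, b, c, d, lam):
    """An eigenvector for lam: a nonzero vector annihilated by the first
    row of A - lam*I, i.e. (a - lam) x + b y = 0."""
    return (b, lam - a) if b != 0 else (lam - d, c)

print(eig2(1, 1, 4, 1))            # eigenvalues 3 and -1, as in Example 4.2.1
print(eigvec2(1, 1, 4, 1, 3.0))    # proportional to (1, 2)
```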
Example 4.2.2. Solve

x' = [ -3  √2 ]
     [ √2  -2 ] x.

Solution: Solving the characteristic equation

| -3-λ  √2   |
| √2    -2-λ | = 0
(3 + λ)(2 + λ) - 2 = 0
λ^2 + 5λ + 4 = 0
λ = -4, -1,

we find that the eigenvalues of the coefficient matrix are λ_1 = -4 and λ_2 = -1 and the associated eigenvectors are

ξ^(1) = (-√2, 1)^T,  ξ^(2) = (1, √2)^T

respectively. Therefore the general solution is

x = c_1 e^{-4t} (-√2, 1)^T + c_2 e^{-t} (1, √2)^T.

When the characteristic equation has a repeated root, the above method can still be used if there are n linearly independent eigenvectors, in other words when the coefficient matrix A is diagonalizable.
Example 4.2.3. Solve

x' = [ 0 1 1 ]
     [ 1 0 1 ]
     [ 1 1 0 ] x.

Solution: Solving the characteristic equation, we find that the eigenvalues of the coefficient matrix are λ_1 = 2 and λ_2 = λ_3 = -1.

For λ_1 = 2, the associated eigenvector is

ξ^(1) = (1, 1, 1)^T.

For the repeated root λ_2 = λ_3 = -1, there are two linearly independent eigenvectors

ξ^(2) = (1, 0, -1)^T,  ξ^(3) = (0, 1, -1)^T.

Therefore the general solution is

x = c_1 e^{2t} (1, 1, 1)^T + c_2 e^{-t} (1, 0, -1)^T + c_3 e^{-t} (0, 1, -1)^T.
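Each claimed eigenpair can be confirmed by a single matrix-vector product, A ξ = λ ξ. A minimal check in plain Python (the helper `matvec` is ours):

```python
def matvec(A, v):
    """Product of a matrix (list of rows) with a vector."""
    return [sum(a*x for a, x in zip(row, v)) for row in A]

A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

# Eigenpairs claimed in Example 4.2.3.
pairs = [(2, [1, 1, 1]), (-1, [1, 0, -1]), (-1, [0, 1, -1])]
for lam, xi in pairs:
    print(matvec(A, xi), [lam*x for x in xi])  # the two lists should agree
```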
If λ = α + iβ, λ̄ = α - iβ, β > 0, are complex eigenvalues of A and a + bi, a - bi are the associated eigenvectors respectively, then the real part and imaginary part of

e^{(α+iβ)t}(a + bi) = e^{αt}(cos βt + i sin βt)(a + bi)
                    = e^{αt}(a cos βt - b sin βt) + e^{αt}(b cos βt + a sin βt) i

give two linearly independent solutions to the system. We have

Theorem 4.2.4. Suppose λ = α + iβ, λ̄ = α - iβ, β > 0, are complex eigenvalues of A and a + bi, a - bi are the associated eigenvectors respectively. Then

x^(1) = e^{αt}(a cos βt - b sin βt),
x^(2) = e^{αt}(b cos βt + a sin βt)

are two linearly independent solutions to x' = Ax.
Example 4.2.5. Solve

x' = [ 3 -4 ]
     [ 2 -1 ] x.

Solution: Solving the characteristic equation, we find that the eigenvalues of the coefficient matrix are λ_1 = 1 + 2i, λ_2 = 1 - 2i, and the associated eigenvectors are

ξ^(1) = (1 + i, 1)^T = (1, 1)^T + (1, 0)^T i,
ξ^(2) = (1 - i, 1)^T = (1, 1)^T - (1, 0)^T i,

respectively. Therefore

x^(1) = e^t ( (1, 1)^T cos 2t - (1, 0)^T sin 2t ) = e^t (cos 2t - sin 2t, cos 2t)^T,
x^(2) = e^t ( (1, 0)^T cos 2t + (1, 1)^T sin 2t ) = e^t (cos 2t + sin 2t, sin 2t)^T

are two linearly independent solutions and the general solution is

x = c_1 e^t (cos 2t - sin 2t, cos 2t)^T + c_2 e^t (cos 2t + sin 2t, sin 2t)^T
  = e^t ( (c_1 + c_2) cos 2t + (c_2 - c_1) sin 2t, c_1 cos 2t + c_2 sin 2t )^T.
Example 4.2.6. Two masses m_1 and m_2 are attached to each other and to outside walls by three springs with spring constants k_1, k_2 and k_3 in the straight-line horizontal fashion. Suppose that m_1 = 2, m_2 = 9/4, k_1 = 1, k_2 = 3 and k_3 = 15/4. Find the displacements of the masses x_1 and x_2 after time t with the initial conditions x_1(0) = 6, x_1'(0) = -6, x_2(0) = 4 and x_2'(0) = 8.

Solution: The equation of motion of the system is

m_1 x_1'' = -(k_1 + k_2) x_1 + k_2 x_2,
m_2 x_2'' = k_2 x_1 - (k_2 + k_3) x_2.

Let y_1 = x_1, y_2 = x_2, y_3 = x_1' and y_4 = x_2'. Then the equation is transformed to

y' = [  0    0   1  0 ]
     [  0    0   0  1 ]
     [ -2   3/2  0  0 ]
     [ 4/3  -3   0  0 ] y = Ay.

The characteristic equation is

| -λ    0    1   0 |
|  0   -λ    0   1 |
| -2   3/2  -λ   0 |
| 4/3  -3    0  -λ | = 0

(λ^2 + 2)(λ^2 + 3) - 2 = 0
λ^4 + 5λ^2 + 4 = 0
(λ^2 + 1)(λ^2 + 4) = 0.

The four eigenvalues are λ_1 = i, λ_2 = -i, λ_3 = 2i and λ_4 = -2i. The associated eigenvectors are

ξ^(1) = (3, 2, 3i, 2i)^T,   ξ^(2) = (3, 2, -3i, -2i)^T,
ξ^(3) = (3, -4, 6i, -8i)^T,  ξ^(4) = (3, -4, -6i, 8i)^T.

From the real and imaginary parts of

e^{λ_1 t} ξ^(1) = (3, 2, 3i, 2i)^T (cos t + i sin t)
               = (3 cos t, 2 cos t, -3 sin t, -2 sin t)^T + (3 sin t, 2 sin t, 3 cos t, 2 cos t)^T i

and

e^{λ_3 t} ξ^(3) = (3, -4, 6i, -8i)^T (cos 2t + i sin 2t)
               = (3 cos 2t, -4 cos 2t, -6 sin 2t, 8 sin 2t)^T + (3 sin 2t, -4 sin 2t, 6 cos 2t, -8 cos 2t)^T i,

the general solution to the system is

y = c_1 (3 cos t, 2 cos t, -3 sin t, -2 sin t)^T + c_2 (3 sin t, 2 sin t, 3 cos t, 2 cos t)^T
  + c_3 (3 cos 2t, -4 cos 2t, -6 sin 2t, 8 sin 2t)^T + c_4 (3 sin 2t, -4 sin 2t, 6 cos 2t, -8 cos 2t)^T.

From the initial conditions, we have

[ 3  0   3   0 ] [ c_1 ]   [  6 ]
[ 2  0  -4   0 ] [ c_2 ]   [  4 ]
[ 0  3   0   6 ] [ c_3 ] = [ -6 ]
[ 0  2   0  -8 ] [ c_4 ]   [  8 ],

so

(c_1, c_2, c_3, c_4) = (2, 0, 0, -1).

Therefore

x_1 = 6 cos t - 3 sin 2t,
x_2 = 4 cos t + 4 sin 2t.
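The final answer of Example 4.2.6 can be checked against the original second-order system, since the second derivatives of the trigonometric answer are available in closed form (a sketch; the function names are ours):

```python
import math

def x1(t): return 6*math.cos(t) - 3*math.sin(2*t)
def x2(t): return 4*math.cos(t) + 4*math.sin(2*t)

def x1pp(t): return -6*math.cos(t) + 12*math.sin(2*t)   # x1''
def x2pp(t): return -4*math.cos(t) - 16*math.sin(2*t)   # x2''

# Check x1'' = -2 x1 + (3/2) x2 and x2'' = (4/3) x1 - 3 x2,
# i.e. the system from Example 4.2.6 after dividing by the masses.
worst = 0.0
for k in range(20):
    t = k/5
    worst = max(worst,
                abs(x1pp(t) - (-2*x1(t) + 1.5*x2(t))),
                abs(x2pp(t) - (4/3*x1(t) - 3*x2(t))))
print(worst)  # should be at round-off level
```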
Exercise 4.2.7. Solve x' = Ax for the following matrices A.

1. A = [ 1 2 ]
       [ 3 0 ]

2. A = [ 1 1 ]
       [ 5 1 ]

3. A = [ 2 3 3 ]
       [ 4 5 3 ]
       [ 4 4 2 ]

4. A = [ 4 1 1 ]
       [ 1 2 1 ]
       [ 1 1 2 ]
4.3 Matrix exponential

Definition 4.3.1. Let A be an n × n matrix. The matrix exponential of A is defined as

exp(A) := Σ_{k=0}^∞ A^k/k! = I + A + (1/2!)A^2 + (1/3!)A^3 + ···.

Theorem 4.3.2 (Properties of matrix exponential). Let A and B be two n × n matrices and a, b be any scalars. Then

1. exp(0) = I;
2. exp(-A) = exp(A)^{-1};
3. exp((a + b)A) = exp(aA) exp(bA);
4. If AB = BA, then exp(A + B) = exp(A) exp(B);
5. For any non-singular matrix P, we have exp(P^{-1}AP) = P^{-1} exp(A) P;
6. det(exp(A)) = e^{tr(A)}. (tr(A) = a_11 + a_22 + ··· + a_nn is the trace of A.)
Theorem 4.3.3. If

D = diag(λ_1, λ_2, ..., λ_n)

is a diagonal matrix, then

exp(Dt) = diag( e^{λ_1 t}, e^{λ_2 t}, ..., e^{λ_n t} ).

Moreover, for any n × n matrix A, if there exists a non-singular matrix P such that P^{-1}AP = D is a diagonal matrix, then

exp(At) = P exp(Dt) P^{-1}.
Example 4.3.4. Find exp(At) where

A = [ 4  2 ]
    [ 3 -1 ].

Solution: Diagonalizing A, we have

[ 2  1 ]^{-1} [ 4  2 ] [ 2  1 ]   [ 5  0 ]
[ 1 -3 ]      [ 3 -1 ] [ 1 -3 ] = [ 0 -2 ].

Therefore

exp(At) = [ 2  1 ] [ e^{5t}  0       ] [ 2  1 ]^{-1}
          [ 1 -3 ] [ 0       e^{-2t} ] [ 1 -3 ]
        = [ 2  1 ] [ e^{5t}  0       ] (1/7) [ 3  1 ]
          [ 1 -3 ] [ 0       e^{-2t} ]       [ 1 -2 ]
        = (1/7) [ e^{-2t} + 6e^{5t}    -2e^{-2t} + 2e^{5t} ]
                [ -3e^{-2t} + 3e^{5t}   6e^{-2t} + e^{5t}  ].
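The closed form above can be compared with the defining power series directly. The sketch below sums the series Σ (At)^k/k! in plain Python (the helper `mat_exp` is ours; a truncated series is adequate for small matrices and moderate |t|, though real libraries use more robust algorithms):

```python
import math

def mat_exp(A, t, terms=60):
    """Truncated power series for exp(At)."""
    n = len(A)
    At = [[t*A[i][j] for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]          # (At)^0 = I
    fact = 1.0
    for k in range(1, terms):
        power = [[sum(power[i][m]*At[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
        fact *= k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j]/fact
    return result

t = 0.5
E = mat_exp([[4, 2], [3, -1]], t)
closed = [[(math.exp(-2*t) + 6*math.exp(5*t))/7, (-2*math.exp(-2*t) + 2*math.exp(5*t))/7],
          [(-3*math.exp(-2*t) + 3*math.exp(5*t))/7, (6*math.exp(-2*t) + math.exp(5*t))/7]]
print(max(abs(E[i][j] - closed[i][j]) for i in range(2) for j in range(2)))
```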
Theorem 4.3.5. Suppose there exists a positive integer k such that A^k = 0. Then

exp(At) = I + tA + t^2 A^2/2! + t^3 A^3/3! + ··· + t^{k-1} A^{k-1}/(k-1)!.

Proof. It follows easily from the fact that A^l = 0 for all l ≥ k.
Example 4.3.6. Find exp(At) where

A = [ 0 1 3 ]
    [ 0 0 2 ]
    [ 0 0 0 ].

Solution: First compute

A^2 = [ 0 0 2 ]         A^3 = [ 0 0 0 ]
      [ 0 0 0 ]   and         [ 0 0 0 ]
      [ 0 0 0 ]               [ 0 0 0 ].

Therefore

exp(At) = I + At + (1/2)A^2 t^2
        = [ 1  t  3t + t^2 ]
          [ 0  1  2t       ]
          [ 0  0  1        ].
Example 4.3.7. Find exp(At) where

A = [ 4 1 ]
    [ 0 4 ].

Solution:

exp(At) = exp [ 4t  t  ]
              [ 0   4t ]
        = exp( [ 4t 0 ; 0 4t ] + [ 0 t ; 0 0 ] )
        = exp[ 4t 0 ; 0 4t ] exp[ 0 t ; 0 0 ]
          (since [ 4t 0 ; 0 4t ][ 0 t ; 0 0 ] = [ 0 t ; 0 0 ][ 4t 0 ; 0 4t ])
        = [ e^{4t}  0      ] ( I + [ 0 t ; 0 0 ] )
          [ 0       e^{4t} ]
        = [ e^{4t}  te^{4t} ]
          [ 0       e^{4t}  ].
Example 4.3.8. Find exp(At) where

A = [ 5  1 ]
    [ -1 3 ].

Solution: Solving the characteristic equation

| 5-λ  1   |
| -1   3-λ | = 0
(5 - λ)(3 - λ) + 1 = 0
λ^2 - 8λ + 16 = 0
(λ - 4)^2 = 0
λ = 4, 4,

we see that A has only one eigenvalue λ = 4. Considering

(A - 4I)ξ = 0,   A - 4I = [ 1  1 ]
                          [ -1 -1 ],

we can find only one linearly independent eigenvector ξ = (1, -1)^T. Thus A is not diagonalizable. To find exp(At), we may use the so called generalized eigenvector. A vector η is called a generalized eigenvector of rank 2 associated with eigenvalue λ if it satisfies

(A - λI)η ≠ 0,
(A - λI)^2 η = 0.

Now if we take

η = (1, 0)^T,
ξ_1 = (A - 4I)η = [ 1  1 ] (1, 0)^T = (1, -1)^T,
                  [ -1 -1 ]

then

(A - 4I)η = (1, -1)^T ≠ 0,
(A - 4I)^2 η = (A - 4I)ξ_1 = [ 1  1 ] (1, -1)^T = 0.
                             [ -1 -1 ]

So η is a generalized eigenvector of rank 2 associated with eigenvalue λ = 4. We may let

Q = [ ξ_1  η ] = [ 1  1 ]
                 [ -1 0 ],

and

J = Q^{-1} A Q
  = [ 1  1 ]^{-1} [ 5  1 ] [ 1  1 ]
    [ -1 0 ]      [ -1 3 ] [ -1 0 ]
  = [ 0 -1 ] [ 5  1 ] [ 1  1 ]
    [ 1  1 ] [ -1 3 ] [ -1 0 ]
  = [ 4 1 ]
    [ 0 4 ].

(J is called the Jordan normal form of A.) Then

exp(At) = exp(QJQ^{-1} t)
        = Q exp(Jt) Q^{-1}
        = [ 1  1 ] [ e^{4t}  te^{4t} ] [ 0 -1 ]   (by Example 4.3.7)
          [ -1 0 ] [ 0       e^{4t}  ] [ 1  1 ]
        = [ e^{4t} + te^{4t}   te^{4t}          ]
          [ -te^{4t}           e^{4t} - te^{4t} ].
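A quick way to validate a computed exp(At) is to check that Φ(t) = exp(At) satisfies Φ' = AΦ and Φ(0) = I, using finite differences for Φ'. A sketch for the matrix of Example 4.3.8 (names are ours):

```python
import math

def Phi(t):
    # exp(At) obtained above via the Jordan form.
    e = math.exp(4*t)
    return [[e + t*e, t*e], [-t*e, e - t*e]]

A = [[5, 1], [-1, 3]]

def residual(t, h=1e-6):
    """Largest entry of Phi'(t) - A Phi(t), with Phi' by central differences."""
    P, Pp, Pm = Phi(t), Phi(t + h), Phi(t - h)
    r = 0.0
    for i in range(2):
        for j in range(2):
            d = (Pp[i][j] - Pm[i][j]) / (2*h)
            r = max(r, abs(d - sum(A[i][m]*P[m][j] for m in range(2))))
    return r

print(residual(0.3))  # near zero if exp(At) was computed correctly
```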
Exercise 4.3.9. Find exp(At) for each of the following matrices A.

1. [ 1 3 ]
   [ 4 2 ]

2. [ 4 1 ]
   [ 1 2 ]

3. [ 0 4 1 ]
   [ 0 0 3 ]
   [ 0 0 0 ]

4. [ 3 1 0 ]
   [ 0 3 1 ]
   [ 0 0 3 ]
4.4 Fundamental matrices

Definition 4.4.1. A matrix function Ψ(t) is called a fundamental matrix for the system x' = Ax if the column vectors of Ψ(t) form a fundamental set of solutions for the system.

Theorem 4.4.2. A matrix function Ψ(t) is a fundamental matrix for the system x' = Ax if and only if Ψ(t_0) is non-singular for some t_0 and

dΨ/dt = AΨ.

Proof. For any vector valued functions x^(1)(t), x^(2)(t), ..., x^(n)(t), consider the matrix

Ψ(t) = [ x^(1)  x^(2)  ···  x^(n) ].

We have

dΨ/dt = [ dx^(1)/dt  dx^(2)/dt  ···  dx^(n)/dt ]

and

AΨ = [ Ax^(1)  Ax^(2)  ···  Ax^(n) ].

Thus Ψ satisfies the equation

dΨ/dt = AΨ

if and only if x^(i) is a solution to the system x' = Ax for each i = 1, 2, ..., n. From this the theorem follows readily.
Corollary 4.4.3. Let Ψ be a fundamental matrix for the system x' = Ax and P be a non-singular constant matrix. Then Ψ(t)P is also a fundamental matrix for the system. In particular,

Φ(t) = Ψ(t) Ψ^{-1}(t_0)

is a fundamental matrix for the system satisfying the initial condition

Φ(t_0) = I.

Furthermore, the solution to the system with initial condition x(t_0) = x_0 is

x = Φ(t) x_0 = Ψ(t) Ψ^{-1}(t_0) x_0.

Caution: In general PΨ(t) is not a fundamental matrix.

Note that a fundamental matrix for a system is not unique, but there exists a unique fundamental matrix Φ such that Φ(0) = I. We will use Ψ to denote a general fundamental matrix and use Φ to denote the fundamental matrix which satisfies Φ(0) = I. Later on we will see that Φ(t) = exp(At).
Example 4.4.4. Find the fundamental matrix $\Phi(t)$ for the system
\[
\mathbf{x}' = \begin{pmatrix} 5 & -4 \\ 8 & -7 \end{pmatrix}\mathbf{x},
\]
with initial condition $\Phi(0) = I$. Then solve the system with initial condition
\[
\mathbf{x}(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.
\]
Solution: Solving the characteristic equation
\[
\begin{vmatrix} 5-\lambda & -4 \\ 8 & -7-\lambda \end{vmatrix} = 0
\;\Rightarrow\; \lambda^2 + 2\lambda - 3 = 0
\;\Rightarrow\; \lambda = 1, -3.
\]
For $\lambda_1 = 1$, an associated eigenvector is
\[
\xi_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\quad\text{which gives a solution}\quad
\mathbf{x}^{(1)} = \begin{pmatrix} e^{t} \\ e^{t} \end{pmatrix}.
\]
For $\lambda_2 = -3$, an associated eigenvector is
\[
\xi_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}
\quad\text{which gives another solution}\quad
\mathbf{x}^{(2)} = \begin{pmatrix} e^{-3t} \\ 2e^{-3t} \end{pmatrix}.
\]
The functions $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}$ form a fundamental set of solutions and we get a fundamental matrix
\[
\Psi(t) = [\,\mathbf{x}^{(1)}\ \mathbf{x}^{(2)}\,] = \begin{pmatrix} e^{t} & e^{-3t} \\ e^{t} & 2e^{-3t} \end{pmatrix}.
\]
The fundamental matrix with the given initial condition is
\[
\Phi(t) = \Psi(t)\Psi(0)^{-1}
= \begin{pmatrix} e^{t} & e^{-3t} \\ e^{t} & 2e^{-3t} \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}^{-1}
= \begin{pmatrix} e^{t} & e^{-3t} \\ e^{t} & 2e^{-3t} \end{pmatrix}
\begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}
= \begin{pmatrix} 2e^{t} - e^{-3t} & -e^{t} + e^{-3t} \\ 2e^{t} - 2e^{-3t} & -e^{t} + 2e^{-3t} \end{pmatrix}.
\]
The solution to the initial value problem is
\[
\mathbf{x} = \Phi(t)\mathbf{x}(0)
= \begin{pmatrix} 2e^{t} - e^{-3t} & -e^{t} + e^{-3t} \\ 2e^{t} - 2e^{-3t} & -e^{t} + 2e^{-3t} \end{pmatrix}
\begin{pmatrix} 2 \\ 1 \end{pmatrix}
= \begin{pmatrix} 3e^{t} - e^{-3t} \\ 3e^{t} - 2e^{-3t} \end{pmatrix}. \qquad \Box
\]
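The construction $\Phi(t) = \Psi(t)\Psi(0)^{-1}$ in this example can be verified numerically: $\Phi(0)$ should be the identity, and $\mathbf{x}(t) = \Phi(t)\mathbf{x}_0$ should satisfy $\mathbf{x}' = A\mathbf{x}$. A sketch (not part of the notes; the centered difference gives only an approximate check of the derivative):

```python
import math

def psi(t):
    """Fundamental matrix built from the eigenpairs (1, (1,1)) and (-3, (1,2))."""
    return [[math.exp(t), math.exp(-3*t)], [math.exp(t), 2*math.exp(-3*t)]]

def inv2(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

def mul2(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def phi(t):
    """Phi(t) = Psi(t) Psi(0)^{-1}, so Phi(0) = I."""
    return mul2(psi(t), inv2(psi(0)))

# Phi(0) should be the identity.
P0 = phi(0.0)
print(abs(P0[0][0] - 1) < 1e-12 and abs(P0[0][1]) < 1e-12)

# x(t) = Phi(t) x0 should satisfy x' = Ax; check with a centered difference.
A = [[5.0, -4.0], [8.0, -7.0]]
x0 = [2.0, 1.0]

def x(t):
    P = phi(t)
    return [P[0][0]*x0[0] + P[0][1]*x0[1], P[1][0]*x0[0] + P[1][1]*x0[1]]

t, h = 0.7, 1e-6
xt = x(t)
dx = [(x(t+h)[i] - x(t-h)[i]) / (2*h) for i in range(2)]
Ax = [A[0][0]*xt[0] + A[0][1]*xt[1], A[1][0]*xt[0] + A[1][1]*xt[1]]
print(all(abs(dx[i] - Ax[i]) < 1e-4 for i in range(2)))
```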
Theorem 4.4.5. Let $A$ be an $n \times n$ constant matrix. Then
\[
\Phi(t) = \exp(At)
\]
is the fundamental matrix for the system $\mathbf{x}' = A\mathbf{x}$ which satisfies $\Phi(0) = I$. Moreover, the solution to the initial value problem
\[
\begin{cases} \mathbf{x}' = A\mathbf{x}, \\ \mathbf{x}(0) = \mathbf{x}_0 \end{cases}
\]
is $\mathbf{x} = \exp(At)\mathbf{x}_0$.
Proof. Let $\Phi(t) = \exp(At)$; then
\[
\frac{d\Phi}{dt} = \frac{d}{dt}\exp(At)
= \sum_{k=0}^{\infty} \frac{d}{dt}\left(\frac{A^k t^k}{k!}\right)
= \sum_{k=1}^{\infty} \frac{A^k k t^{k-1}}{k!}
= \sum_{k=1}^{\infty} \frac{A^k t^{k-1}}{(k-1)!}
= A \sum_{l=0}^{\infty} \frac{A^l t^l}{l!}
= A\exp(At) = A\Phi,
\]
and $\Phi(0) = \exp(0) = I$. Thus $\Phi(t) = \exp(At)$ is the required fundamental matrix. Since $\Phi$ is a fundamental matrix, the function $\mathbf{x} = \Phi\mathbf{x}_0$ is a solution to the system. Further, it satisfies the initial condition
\[
\mathbf{x}(0) = \exp(0)\mathbf{x}_0 = I\mathbf{x}_0 = \mathbf{x}_0. \qquad \Box
\]
Theorem 4.4.6. Suppose $P$ diagonalizes $A$, that is, $P^{-1}AP = D$ is a diagonal matrix. Then $\Psi = P\exp(Dt)$ is a fundamental matrix for the system $\mathbf{x}' = A\mathbf{x}$.
Proof. There are two ways to prove the theorem. First, write
\[
P = [\,\xi_1\ \xi_2\ \cdots\ \xi_n\,]
\quad\text{and}\quad
D = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}.
\]
Then $P^{-1}AP = D$ means that $\lambda_1, \lambda_2, \dots, \lambda_n$ are eigenvalues of $A$ and $\xi_1, \xi_2, \dots, \xi_n$ are associated linearly independent eigenvectors respectively. Thus the column vectors of
\[
P\exp(Dt) = [\,e^{\lambda_1 t}\xi_1\ \ e^{\lambda_2 t}\xi_2\ \ \cdots\ \ e^{\lambda_n t}\xi_n\,]
\]
are linearly independent solutions to the system $\mathbf{x}' = A\mathbf{x}$ and the proof is complete.
The theorem can be proved alternatively as follows. Since $\Phi = \exp(At) = P\exp(Dt)P^{-1}$ is a fundamental matrix and $P$ is a non-singular matrix, $\Psi = P\exp(Dt) = \exp(At)P$ is also a fundamental matrix for the system by Corollary 4.4.3. $\Box$
Example 4.4.7. Solve the initial value problem
\[
\begin{cases} \mathbf{x}' = A\mathbf{x}, \\ \mathbf{x}(0) = \mathbf{x}_0 \end{cases}
\quad\text{where}\quad
A = \begin{pmatrix} 4 & 2 \\ 3 & -1 \end{pmatrix}
\quad\text{and}\quad
\mathbf{x}_0 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
\]
Solution: Diagonalizing $A$, we have
\[
\begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}^{-1}
\begin{pmatrix} 4 & 2 \\ 3 & -1 \end{pmatrix}
\begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}
= \begin{pmatrix} 5 & 0 \\ 0 & -2 \end{pmatrix}.
\]
Therefore
\[
\exp(At)
= \begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}
\begin{pmatrix} e^{5t} & 0 \\ 0 & e^{-2t} \end{pmatrix}
\begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}^{-1}
= \frac{1}{7}\begin{pmatrix} 6e^{5t} + e^{-2t} & 2e^{5t} - 2e^{-2t} \\ 3e^{5t} - 3e^{-2t} & e^{5t} + 6e^{-2t} \end{pmatrix}.
\]
Hence the solution is
\[
\mathbf{x} = \exp(At)\mathbf{x}_0
= \frac{1}{7}\begin{pmatrix} 6e^{5t} + e^{-2t} & 2e^{5t} - 2e^{-2t} \\ 3e^{5t} - 3e^{-2t} & e^{5t} + 6e^{-2t} \end{pmatrix}
\begin{pmatrix} 1 \\ -1 \end{pmatrix}
= \frac{1}{7}\begin{pmatrix} 4e^{5t} + 3e^{-2t} \\ 2e^{5t} - 9e^{-2t} \end{pmatrix}. \qquad \Box
\]
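The diagonalization used above rests entirely on the eigenvalue relation $A\xi = \lambda\xi$: each column $e^{\lambda_i t}\xi_i$ of $P\exp(Dt)$ is a solution precisely because differentiating it multiplies by $\lambda_i$, which agrees with applying $A$. A minimal sketch (not from the notes; exact integer arithmetic) checking the eigenpairs of the matrix in Example 4.4.7:

```python
def mat_vec(A, v):
    """Apply a 2x2 integer matrix to a 2-vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

A = [[4, 2], [3, -1]]
pairs = [(5, [2, 1]), (-2, [1, -3])]   # (eigenvalue, eigenvector) pairs

# A xi = lambda xi makes each column e^{lambda t} xi of P exp(Dt) a solution.
checks = [mat_vec(A, v) == [lam*v[0], lam*v[1]] for lam, v in pairs]
print(checks)   # [True, True]
```

Since the check is exact integer arithmetic, no floating point tolerance is needed.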
Example 4.4.8. Find a fundamental matrix for the system
\[
\mathbf{x}' = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}\mathbf{x}.
\]
Solution: Solving the characteristic equation
\[
\begin{vmatrix} 2-\lambda & 2 \\ 3 & 1-\lambda \end{vmatrix} = 0
\;\Rightarrow\; \lambda^2 - 3\lambda - 4 = 0
\;\Rightarrow\; \lambda = 4, -1.
\]
For $\lambda_1 = 4$, an associated eigenvector is
\[
\xi_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
For $\lambda_2 = -1$, an associated eigenvector is
\[
\xi_2 = \begin{pmatrix} 2 \\ -3 \end{pmatrix}.
\]
Hence the matrix
\[
P = \begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}
\]
diagonalizes $A$ and we have
\[
P^{-1}AP = D = \begin{pmatrix} 4 & 0 \\ 0 & -1 \end{pmatrix}.
\]
Therefore
\[
\Psi(t) = P\exp(Dt)
= \begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}
\begin{pmatrix} e^{4t} & 0 \\ 0 & e^{-t} \end{pmatrix}
= \begin{pmatrix} e^{4t} & 2e^{-t} \\ e^{4t} & -3e^{-t} \end{pmatrix}
\]
is a fundamental matrix for the system.
Example 4.4.9. Find the fundamental matrix $\Phi(t)$ for the system
\[
\mathbf{x}' = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}\mathbf{x}
\]
which satisfies $\Phi(0) = I$. Then solve the system with initial value $\mathbf{x}(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$.
Solution: By Example 4.4.8,
\[
\begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}^{-1} A \begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix} = \begin{pmatrix} 4 & 0 \\ 0 & -1 \end{pmatrix}.
\]
Therefore
\[
\Phi = \exp(At) = P\exp(Dt)P^{-1}
= \begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}
\begin{pmatrix} e^{4t} & 0 \\ 0 & e^{-t} \end{pmatrix}
\begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}^{-1}
= \begin{pmatrix} 1 & 2 \\ 1 & -3 \end{pmatrix}
\begin{pmatrix} e^{4t} & 0 \\ 0 & e^{-t} \end{pmatrix}
\cdot \frac{1}{5}\begin{pmatrix} 3 & 2 \\ 1 & -1 \end{pmatrix}
= \frac{1}{5}\begin{pmatrix} 3e^{4t} + 2e^{-t} & 2e^{4t} - 2e^{-t} \\ 3e^{4t} - 3e^{-t} & 2e^{4t} + 3e^{-t} \end{pmatrix}
\]
is the required fundamental matrix. The solution to the initial value problem is
\[
\mathbf{x} = \exp(At)\mathbf{x}(0)
= \frac{1}{5}\begin{pmatrix} 3e^{4t} + 2e^{-t} & 2e^{4t} - 2e^{-t} \\ 3e^{4t} - 3e^{-t} & 2e^{4t} + 3e^{-t} \end{pmatrix}
\begin{pmatrix} 2 \\ -1 \end{pmatrix}
= \frac{1}{5}\begin{pmatrix} 4e^{4t} + 6e^{-t} \\ 4e^{4t} - 9e^{-t} \end{pmatrix}. \qquad \Box
\]

Exercise 4.4.10. Find a fundamental matrix for the system x

= Ax for each of the following


matrices A.
Systems of First Order Linear Equations 59
1. A =
_
3 4
1 2
_
2. A =
_
_
3 0 0
4 7 4
2 2 1
_
_
Exercise 4.4.11. Find the fundamental matrix which satises (0) = I for the system
x

= Ax for each of the matrices A in Exercise 4.4.10.


Exercise 4.4.12. Let A be a square matrix and be a fundamental matrix for the system
x

= Ax.
1. Prove that for any non-singular constant matrix Q, we have Q is a fundamental matrix
for the system if and only if QA = AQ.
2. Prove that (
T
)
1
is a fundamental matrix for the system x

= A
T
x.
Exercise 4.4.13. Prove that if
1
and
2
are two fundamental matrices for a system, then

2
=
1
P for some non-singular matrix P.
4.5 Repeated eigenvalues

Consider the system
\[
\mathbf{x}' = A\mathbf{x}, \quad\text{where}\quad A = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix}.
\]
The characteristic equation of $A$ is
\[
\begin{vmatrix} 1-\lambda & 1 \\ -1 & 3-\lambda \end{vmatrix} = 0
\;\Rightarrow\; \lambda^2 - 4\lambda + 4 = 0.
\]
It has a root $\lambda = 2$ of multiplicity 2. (The multiplicity of a root of the characteristic equation is called the algebraic multiplicity.) However, the set of all eigenvectors associated with $\lambda = 2$ is spanned by one vector
\[
\xi = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
In other words, the dimension of the eigenspace associated with the eigenvalue $\lambda = 2$ is one. (The set of eigenvectors associated with a given eigenvalue, together with the zero vector, forms a vector subspace called the eigenspace associated with the eigenvalue. The dimension of this eigenspace is called the geometric multiplicity of the eigenvalue. In general, the geometric multiplicity of an eigenvalue is always less than or equal to its algebraic multiplicity.) We know that
\[
\mathbf{x}^{(1)} = e^{2t}\xi = \begin{pmatrix} e^{2t} \\ e^{2t} \end{pmatrix}
\]
is a solution to the system. How do we find another solution to form a fundamental set of solutions?
Based on the procedure used for higher order linear equations, it may be natural to attempt to find a second solution of the form
\[
\mathbf{x} = \xi t e^{2t}.
\]
Substituting this into the equation gives
\[
\frac{d}{dt}(\xi t e^{2t}) = A\xi t e^{2t}
\;\Rightarrow\; 2\xi t e^{2t} + \xi e^{2t} = A\xi\, t e^{2t}
\;\Rightarrow\; \xi e^{2t} = (A - 2I)\xi\, t e^{2t}.
\]
Since $e^{2t}$ and $te^{2t}$ are linearly independent functions, this forces $\xi = 0$, so there is no non-zero solution for the eigenvector $\xi$. To overcome this problem, we try another substitution
\[
\mathbf{x} = \xi t e^{2t} + \eta e^{2t},
\]
where $\eta$ is to be determined. Then the equation reads
\[
2\xi t e^{2t} + e^{2t}(\xi + 2\eta) = A(\xi t e^{2t} + \eta e^{2t})
\;\Rightarrow\; \xi e^{2t} = e^{2t}(A - 2I)\eta
\;\Rightarrow\; \xi = (A - 2I)\eta.
\]
Therefore if we take $\eta$ such that $\xi = (A - 2I)\eta$ is an eigenvector, then $\mathbf{x} = \xi t e^{2t} + \eta e^{2t}$ is another solution to the system. Note that $\eta$ satisfies
\[
\begin{cases} (A - \lambda I)\eta \neq 0, \\ (A - \lambda I)^2\eta = 0. \end{cases}
\]
A vector satisfying these two equations is called a generalized eigenvector of rank 2 associated with the eigenvalue $\lambda$. Back to our example,
\[
(A - 2I)\eta = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}\eta = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
We may take
\[
\eta = \begin{pmatrix} -1 \\ 0 \end{pmatrix}.
\]
Then
\[
\mathbf{x}^{(2)} = \xi t e^{2t} + \eta e^{2t}
= te^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + e^{2t}\begin{pmatrix} -1 \\ 0 \end{pmatrix}
\]
is a solution to the system and the general solution is
\[
\mathbf{x} = c_1\mathbf{x}^{(1)} + c_2\mathbf{x}^{(2)}
= c_1 e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
+ c_2\left(te^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + e^{2t}\begin{pmatrix} -1 \\ 0 \end{pmatrix}\right)
= e^{2t}\begin{pmatrix} c_1 - c_2 + c_2 t \\ c_1 + c_2 t \end{pmatrix}.
\]
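The two conditions that make $\mathbf{x}^{(2)}$ a solution — that $\xi$ is an eigenvector and that $(A - 2I)\eta = \xi$ — can be checked mechanically. A small sketch (illustrative, not part of the notes) using exact integer arithmetic for the matrix of this section:

```python
def mat_vec(A, v):
    """Apply a 2x2 integer matrix to a 2-vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

A = [[1, 1], [-1, 3]]
B = [[A[0][0] - 2, A[0][1]], [A[1][0], A[1][1] - 2]]   # A - 2I
xi, eta = [1, 1], [-1, 0]

print(mat_vec(B, xi))   # [0, 0]  -- xi is an eigenvector for lambda = 2
print(mat_vec(B, eta))  # [1, 1]  -- (A-2I)eta = xi, so eta has rank 2
```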
Definition 4.5.1 (Generalized eigenvector). Let $A$ be a square matrix, $\lambda$ be an eigenvalue of $A$ and $k$ be a positive integer. A non-zero vector $\eta$ is called a generalized eigenvector of rank $k$ associated with the eigenvalue $\lambda$ if
\[
\begin{cases} (A - \lambda I)^{k-1}\eta \neq 0, \\ (A - \lambda I)^{k}\eta = 0. \end{cases}
\]
Note that a vector is a generalized eigenvector of rank 1 if and only if it is an ordinary eigenvector.
Theorem 4.5.2. Let $A$ be a square matrix and $\eta$ be a generalized eigenvector of rank $k$ associated with the eigenvalue $\lambda$. Let
\[
\begin{cases}
\eta_0 = \eta, \\
\eta_1 = (A - \lambda I)\eta, \\
\eta_2 = (A - \lambda I)^2\eta, \\
\quad\vdots \\
\eta_{k-1} = (A - \lambda I)^{k-1}\eta \neq 0, \\
\eta_k = (A - \lambda I)^{k}\eta = 0.
\end{cases}
\]
Then
1. For $0 \leq l \leq k-1$, the vector $\eta_l$ is a generalized eigenvector of rank $k - l$ associated with the eigenvalue $\lambda$.
2. The vectors $\eta, \eta_1, \eta_2, \dots, \eta_{k-1}$ are linearly independent.
Proof. It is easy to see that $\eta_l = (A - \lambda I)^{l}\eta$ satisfies
\[
\begin{cases}
(A - \lambda I)^{k-l-1}\eta_l = (A - \lambda I)^{k-1}\eta = \eta_{k-1} \neq 0, \\
(A - \lambda I)^{k-l}\eta_l = (A - \lambda I)^{k}\eta = \eta_k = 0,
\end{cases}
\]
and the first statement follows. We prove the second statement by induction on $k$. The theorem is obvious when $k = 1$ since $\eta$ is non-zero. Assume that the theorem is true for generalized eigenvectors of rank smaller than $k$. Suppose $\eta$ is a generalized eigenvector of rank $k$ associated with the eigenvalue $\lambda$ and $c_0, c_1, \dots, c_{k-1}$ are scalars such that
\[
c_0\eta + c_1\eta_1 + \cdots + c_{k-2}\eta_{k-2} + c_{k-1}\eta_{k-1} = 0.
\]
Multiplying both sides from the left by $A - \lambda I$, we have
\[
c_0\eta_1 + c_1\eta_2 + \cdots + c_{k-2}\eta_{k-1} = 0.
\]
Here we used $\eta_k = (A - \lambda I)\eta_{k-1} = 0$. Now $\eta_1$ is a generalized eigenvector of rank $k-1$ by the first statement. Thus by the induction hypothesis, $\eta_1, \eta_2, \dots, \eta_{k-1}$ are linearly independent and hence
\[
c_0 = c_1 = \cdots = c_{k-2} = 0.
\]
Combining this with the first equality gives $c_{k-1}\eta_{k-1} = 0$, which implies $c_{k-1} = 0$ since $\eta_{k-1}$ is non-zero. We conclude that $\eta, \eta_1, \eta_2, \dots, \eta_{k-1}$ are linearly independent. $\Box$
Theorem 4.5.3. Suppose $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$ and $\eta$ is a generalized eigenvector of rank $k$ associated with $\lambda$. Let $\eta, \eta_1, \eta_2, \dots, \eta_{k-1}$ be the vectors defined in Theorem 4.5.2. Then
\[
\begin{cases}
\mathbf{x}^{(1)} = e^{\lambda t}\,\eta_{k-1}, \\
\mathbf{x}^{(2)} = e^{\lambda t}\,(\eta_{k-2} + t\eta_{k-1}), \\
\mathbf{x}^{(3)} = e^{\lambda t}\,(\eta_{k-3} + t\eta_{k-2} + \tfrac{t^2}{2}\eta_{k-1}), \\
\quad\vdots \\
\mathbf{x}^{(k)} = e^{\lambda t}\,(\eta + t\eta_1 + \tfrac{t^2}{2}\eta_2 + \cdots + \tfrac{t^{k-2}}{(k-2)!}\eta_{k-2} + \tfrac{t^{k-1}}{(k-1)!}\eta_{k-1})
\end{cases}
\]
are linearly independent solutions to the system $\mathbf{x}' = A\mathbf{x}$.
Proof. It is left for the reader to check that the solutions are linearly independent. It suffices to prove that $\mathbf{x}^{(k)}$ is a solution to the system. Observe that for any $0 \leq i \leq k-1$,
\[
A\eta_i = \lambda\eta_i + (A - \lambda I)\eta_i = \lambda\eta_i + \eta_{i+1}.
\]
Thus
\[
\begin{aligned}
\frac{d\mathbf{x}^{(k)}}{dt}
&= \frac{d}{dt}\left[e^{\lambda t}\left(\eta + t\eta_1 + \frac{t^2}{2}\eta_2 + \cdots + \frac{t^{k-1}}{(k-1)!}\eta_{k-1}\right)\right] \\
&= e^{\lambda t}\left[\lambda\eta + (1 + \lambda t)\eta_1 + \left(t + \frac{\lambda t^2}{2}\right)\eta_2 + \cdots + \left(\frac{t^{k-2}}{(k-2)!} + \frac{\lambda t^{k-1}}{(k-1)!}\right)\eta_{k-1}\right] \\
&= e^{\lambda t}\left[(\lambda\eta + \eta_1) + t(\lambda\eta_1 + \eta_2) + \frac{t^2}{2}(\lambda\eta_2 + \eta_3) + \cdots + \frac{t^{k-2}}{(k-2)!}(\lambda\eta_{k-2} + \eta_{k-1}) + \frac{t^{k-1}}{(k-1)!}\lambda\eta_{k-1}\right] \\
&= e^{\lambda t}\left[A\eta + tA\eta_1 + \frac{t^2}{2}A\eta_2 + \cdots + \frac{t^{k-2}}{(k-2)!}A\eta_{k-2} + \frac{t^{k-1}}{(k-1)!}A\eta_{k-1}\right] \\
&= Ae^{\lambda t}\left[\eta + t\eta_1 + \frac{t^2}{2}\eta_2 + \cdots + \frac{t^{k-2}}{(k-2)!}\eta_{k-2} + \frac{t^{k-1}}{(k-1)!}\eta_{k-1}\right] \\
&= A\mathbf{x}^{(k)}. \qquad \Box
\end{aligned}
\]
Example 4.5.4. Solve
\[
\mathbf{x}' = \begin{pmatrix} 1 & 3 \\ -3 & 7 \end{pmatrix}\mathbf{x}.
\]
Solution: The characteristic equation of the coefficient matrix $A$ is
\[
\begin{vmatrix} 1-\lambda & 3 \\ -3 & 7-\lambda \end{vmatrix} = 0
\;\Rightarrow\; (\lambda - 4)^2 = 0.
\]
We find that $\lambda = 4$ is a double root and the eigenspace associated with $\lambda = 4$ is of dimension 1, spanned by $(1, 1)^T$. Thus
\[
\mathbf{x}^{(1)} = e^{4t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
is a solution. To find another solution which is not a multiple of $\mathbf{x}^{(1)}$, we need to find a generalized eigenvector of rank 2. First we calculate
\[
A - 4I = \begin{pmatrix} -3 & 3 \\ -3 & 3 \end{pmatrix}.
\]
Now if we take
\[
\eta = \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\]
then $\eta$ satisfies
\[
\begin{cases}
\eta_1 = (A - 4I)\eta = \begin{pmatrix} -3 \\ -3 \end{pmatrix} \neq 0, \\
\eta_2 = (A - 4I)^2\eta = 0.
\end{cases}
\]
Thus $\eta$ is a generalized eigenvector of rank 2. Hence
\[
\mathbf{x}^{(2)} = e^{\lambda t}(\eta + t\eta_1)
= e^{4t}\left(\begin{pmatrix} 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} -3 \\ -3 \end{pmatrix}\right)
= e^{4t}\begin{pmatrix} 1 - 3t \\ -3t \end{pmatrix}
\]
is another solution to the system. Therefore the general solution is
\[
\mathbf{x} = c_1 e^{4t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{4t}\begin{pmatrix} 1 - 3t \\ -3t \end{pmatrix}
= e^{4t}\begin{pmatrix} c_1 + c_2 - 3c_2 t \\ c_1 - 3c_2 t \end{pmatrix}. \qquad \Box
\]
Example 4.5.5. Solve
\[
\mathbf{x}' = \begin{pmatrix} 7 & 1 \\ -4 & 3 \end{pmatrix}\mathbf{x}.
\]
Solution: The characteristic equation of the coefficient matrix $A$ is
\[
\begin{vmatrix} 7-\lambda & 1 \\ -4 & 3-\lambda \end{vmatrix} = 0
\;\Rightarrow\; (\lambda - 5)^2 = 0.
\]
We find that $\lambda = 5$ is a double root and the eigenspace associated with $\lambda = 5$ is of dimension 1, spanned by $(1, -2)^T$. Thus
\[
\mathbf{x}^{(1)} = e^{5t}\begin{pmatrix} 1 \\ -2 \end{pmatrix}
\]
is a solution. To find a second solution, we calculate
\[
A - 5I = \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix}.
\]
Now if we take
\[
\eta = \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\]
then $\eta$ satisfies
\[
\begin{cases}
\eta_1 = (A - 5I)\eta = \begin{pmatrix} 2 \\ -4 \end{pmatrix} \neq 0, \\
\eta_2 = (A - 5I)^2\eta = (A - 5I)\begin{pmatrix} 2 \\ -4 \end{pmatrix} = 0.
\end{cases}
\]
Thus $\eta$ is a generalized eigenvector of rank 2. Hence
\[
\mathbf{x}^{(2)} = e^{\lambda t}(\eta + t\eta_1)
= e^{5t}\left(\begin{pmatrix} 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 2 \\ -4 \end{pmatrix}\right)
= e^{5t}\begin{pmatrix} 1 + 2t \\ -4t \end{pmatrix}
\]
is another solution to the system. Therefore the general solution is
\[
\mathbf{x} = c_1 e^{5t}\begin{pmatrix} 1 \\ -2 \end{pmatrix} + c_2 e^{5t}\begin{pmatrix} 1 + 2t \\ -4t \end{pmatrix}
= e^{5t}\begin{pmatrix} c_1 + c_2 + 2c_2 t \\ -2c_1 - 4c_2 t \end{pmatrix}. \qquad \Box
\]
Example 4.5.6. Solve
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 2 \\ -5 & -3 & -7 \\ 1 & 0 & 0 \end{pmatrix}\mathbf{x}.
\]
Solution: The characteristic equation of the coefficient matrix $A$ is
\[
\begin{vmatrix} -\lambda & 1 & 2 \\ -5 & -3-\lambda & -7 \\ 1 & 0 & -\lambda \end{vmatrix} = 0
\;\Rightarrow\; (\lambda + 1)^3 = 0.
\]
Thus $A$ has an eigenvalue $\lambda = -1$ of multiplicity 3. By considering
\[
A + I = \begin{pmatrix} 1 & 1 & 2 \\ -5 & -2 & -7 \\ 1 & 0 & 1 \end{pmatrix}
\]
we see that the associated eigenspace is of dimension 1 and is spanned by $(1, 1, -1)^T$. We need to find a generalized eigenvector of rank 3, that is, a vector $\eta$ such that
\[
\begin{cases} (A + I)^2\eta \neq 0, \\ (A + I)^3\eta = 0. \end{cases}
\]
Note that by the Cayley-Hamilton Theorem, we have $(A + I)^3 = 0$. Thus the condition $(A + I)^3\eta = 0$ is automatic, and we only need to find $\eta$ which satisfies the first condition. Now we take $\eta = (1, 0, 0)^T$; then
\[
\begin{cases}
\eta_1 = (A + I)\eta = \begin{pmatrix} 1 \\ -5 \\ 1 \end{pmatrix} \neq 0, \\
\eta_2 = (A + I)^2\eta = \begin{pmatrix} -2 \\ -2 \\ 2 \end{pmatrix} \neq 0.
\end{cases}
\]
(One may verify that $(A + I)^3\eta = 0$ though it is automatic.) Therefore $\eta$ is a generalized eigenvector of rank 3 associated with $\lambda = -1$. Hence
\[
\begin{cases}
\mathbf{x}^{(1)} = e^{-t}\eta_2 = e^{-t}\begin{pmatrix} -2 \\ -2 \\ 2 \end{pmatrix}, \\[1ex]
\mathbf{x}^{(2)} = e^{-t}(\eta_1 + t\eta_2) = e^{-t}\begin{pmatrix} 1 - 2t \\ -5 - 2t \\ 1 + 2t \end{pmatrix}, \\[1ex]
\mathbf{x}^{(3)} = e^{-t}(\eta + t\eta_1 + \tfrac{t^2}{2}\eta_2) = e^{-t}\begin{pmatrix} 1 + t - t^2 \\ -5t - t^2 \\ t + t^2 \end{pmatrix}
\end{cases}
\]
form a fundamental set of solutions to the system.
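The chain $\eta \to \eta_1 \to \eta_2 \to 0$ in this example is just repeated multiplication by $A + I$, which is easy to verify in exact integer arithmetic. A sketch (not from the notes; helper names are illustrative):

```python
def mat_vec(M, v):
    """Apply a 3x3 integer matrix to a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

A = [[0, 1, 2], [-5, -3, -7], [1, 0, 0]]
B = [[A[i][j] + (1 if i == j else 0) for j in range(3)] for i in range(3)]  # A + I

eta  = [1, 0, 0]
eta1 = mat_vec(B, eta)    # (A+I) eta
eta2 = mat_vec(B, eta1)   # (A+I)^2 eta
eta3 = mat_vec(B, eta2)   # (A+I)^3 eta, zero by Cayley-Hamilton

print(eta1)  # [1, -5, 1]
print(eta2)  # [-2, -2, 2]
print(eta3)  # [0, 0, 0]
```

Since $\eta_2 \neq 0$ while $(A+I)^3\eta = 0$, the vector $\eta$ is indeed a generalized eigenvector of rank 3.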
Example 4.5.7. Solve
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & -2 \\ 8 & -1 & 6 \\ 7 & -3 & 8 \end{pmatrix}\mathbf{x}.
\]
Solution: The characteristic equation of the coefficient matrix $A$,
\[
\begin{vmatrix} -\lambda & 1 & -2 \\ 8 & -1-\lambda & 6 \\ 7 & -3 & 8-\lambda \end{vmatrix} = 0
\;\Rightarrow\; (\lambda - 3)(\lambda - 2)^2 = 0,
\]
has a simple root $\lambda_1 = 3$ and a double root $\lambda_2 = \lambda_3 = 2$. The eigenspace associated with the simple root $\lambda_1 = 3$ is of dimension 1 and is spanned by $(-1, 1, 2)^T$. The eigenspace associated with the double root $\lambda_2 = \lambda_3 = 2$ is of dimension 1 and is spanned by $(0, 2, 1)^T$. We obtain two linearly independent solutions
\[
\mathbf{x}^{(1)} = e^{3t}\begin{pmatrix} -1 \\ 1 \\ 2 \end{pmatrix},
\qquad
\mathbf{x}^{(2)} = e^{2t}\begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix}.
\]
To find a third solution, we need to find a generalized eigenvector of rank 2 associated with the double root $\lambda_2 = \lambda_3 = 2$. The null space of
\[
(A - 2I)^2 = \begin{pmatrix} -2 & 1 & -2 \\ 8 & -3 & 6 \\ 7 & -3 & 6 \end{pmatrix}^2
= \begin{pmatrix} -2 & 1 & -2 \\ 2 & -1 & 2 \\ 4 & -2 & 4 \end{pmatrix}
\]
is spanned by $(0, 2, 1)^T$ and $(1, 0, -1)^T$. We compute
\[
(A - 2I)\begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix} = 0,
\qquad
(A - 2I)\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix} \neq 0.
\]
Thus $(1, 0, -1)^T$ is a generalized eigenvector of rank 2 associated with $\lambda = 2$ while $(0, 2, 1)^T$ is not. (Note that a non-zero vector in the null space of $(A - \lambda I)^2$ is a generalized eigenvector of rank 2 if and only if it is not an eigenvector.) Let
\[
\eta = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},
\qquad
\eta_1 = (A - 2I)\eta = \begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix}.
\]
We obtain the third solution
\[
\mathbf{x}^{(3)} = e^{\lambda t}(\eta + t\eta_1)
= e^{2t}\left(\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + t\begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix}\right)
= e^{2t}\begin{pmatrix} 1 \\ 2t \\ -1 + t \end{pmatrix}.
\]
Therefore the general solution is
\[
\mathbf{x} = c_1 e^{3t}\begin{pmatrix} -1 \\ 1 \\ 2 \end{pmatrix}
+ c_2 e^{2t}\begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix}
+ c_3 e^{2t}\begin{pmatrix} 1 \\ 2t \\ -1 + t \end{pmatrix}. \qquad \Box
\]
Exercise 4.5.8. Solve $\mathbf{x}' = A\mathbf{x}$ for each of the following matrices $A$.
\[
1.\ A = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}
\qquad
2.\ A = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 1 \end{pmatrix}
\qquad
3.\ A = \begin{pmatrix} 3 & 1 & 3 \\ 2 & 2 & 2 \\ 1 & 0 & 1 \end{pmatrix}
\]
4.6 Jordan normal forms

When a matrix is not diagonalizable, its matrix exponential can be calculated using the Jordan normal form.

Definition 4.6.1. An $n \times n$ matrix $J$ is called a Jordan matrix if it is of the form
\[
J = \begin{pmatrix} B_1 & & & 0 \\ & B_2 & & \\ & & \ddots & \\ 0 & & & B_m \end{pmatrix},
\]
where each block $B_i$ is of the form either
\[
\lambda_i I = \begin{pmatrix} \lambda_i & & 0 \\ & \ddots & \\ 0 & & \lambda_i \end{pmatrix}
\qquad\text{or}\qquad
\begin{pmatrix} \lambda_i & 1 & & 0 \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_i \end{pmatrix}.
\]
Note that a diagonal matrix is a Jordan matrix, so a Jordan matrix is a generalization of a diagonal matrix. Similar to a diagonal matrix, the matrix exponential of a Jordan matrix is easy to calculate.

Theorem 4.6.2. Let
\[
J = \begin{pmatrix} B_1 & & & 0 \\ & B_2 & & \\ & & \ddots & \\ 0 & & & B_m \end{pmatrix}
\]
be a Jordan matrix. Then
\[
\exp(Jt) = \begin{pmatrix} \exp(B_1 t) & & & 0 \\ & \exp(B_2 t) & & \\ & & \ddots & \\ 0 & & & \exp(B_m t) \end{pmatrix}
\]
where, for a $k \times k$ block $B_i$,
\[
\exp(B_i t) = e^{\lambda_i t}\begin{pmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{pmatrix} = e^{\lambda_i t} I
\qquad\text{if}\quad
B_i = \begin{pmatrix} \lambda_i & & 0 \\ & \ddots & \\ 0 & & \lambda_i \end{pmatrix},
\]
and
\[
\exp(B_i t) = e^{\lambda_i t}
\begin{pmatrix}
1 & t & \frac{t^2}{2} & \cdots & \frac{t^{k-1}}{(k-1)!} \\
 & 1 & t & \cdots & \frac{t^{k-2}}{(k-2)!} \\
 & & \ddots & \ddots & \vdots \\
 & & & \ddots & t \\
0 & & & & 1
\end{pmatrix}
\qquad\text{if}\quad
B_i = \begin{pmatrix} \lambda_i & 1 & & 0 \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_i \end{pmatrix}.
\]
Proof. Using the property that $\exp(A + B) = \exp(A)\exp(B)$ whenever $AB = BA$, it suffices to prove the formula for $\exp(B_i t)$. When $B_i = \lambda_i I$, it is obvious that $\exp(B_i t) = e^{\lambda_i t} I$. Finally, for a $k \times k$ Jordan block we have
\[
\exp\left(\begin{pmatrix} \lambda_i & 1 & & 0 \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_i \end{pmatrix} t\right)
= \exp\left(\begin{pmatrix} \lambda_i t & & 0 \\ & \ddots & \\ 0 & & \lambda_i t \end{pmatrix}
+ \begin{pmatrix} 0 & t & & \\ & 0 & \ddots & \\ & & \ddots & t \\ & & & 0 \end{pmatrix}\right)
\]
\[
= \exp\begin{pmatrix} \lambda_i t & & 0 \\ & \ddots & \\ 0 & & \lambda_i t \end{pmatrix}
\exp\begin{pmatrix} 0 & t & & \\ & 0 & \ddots & \\ & & \ddots & t \\ & & & 0 \end{pmatrix}
= e^{\lambda_i t}
\begin{pmatrix}
1 & t & \frac{t^2}{2} & \cdots & \frac{t^{k-1}}{(k-1)!} \\
 & 1 & t & \cdots & \frac{t^{k-2}}{(k-2)!} \\
 & & \ddots & \ddots & \vdots \\
 & & & \ddots & t \\
0 & & & & 1
\end{pmatrix}.
\]
The second equality again used the property $\exp(A + B) = \exp(A)\exp(B)$ when $AB = BA$, and the third equality used the fact that for the $k \times k$ matrix
\[
N = \begin{pmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix}
\]
we have $N^{k} = 0$ and hence
\[
\exp(Nt) = I + tN + \frac{t^2 N^2}{2} + \cdots + \frac{t^{k-1} N^{k-1}}{(k-1)!}. \qquad \Box
\]
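The closed formula for $\exp(B_i t)$ can be compared against the defining power series. The sketch below (illustrative, not part of the notes; a $3 \times 3$ block with $\lambda_i = -1$ is chosen arbitrarily) does this numerically:

```python
import math

def mat_mul(X, Y):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm_series(M, terms=60):
    """Truncated power series exp(M) = sum_k M^k / k! for a 3x3 matrix."""
    result = [[float(i == j) for j in range(3)] for i in range(3)]
    power = [[float(i == j) for j in range(3)] for i in range(3)]
    for k in range(1, terms):
        power = mat_mul(power, M)
        for i in range(3):
            for j in range(3):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

lam, t = -1.0, 0.4
Bt = [[lam*t, t, 0.0], [0.0, lam*t, t], [0.0, 0.0, lam*t]]  # (Jordan block) * t
S = expm_series(Bt)

# Closed form from Theorem 4.6.2: e^{lam t} * [[1, t, t^2/2], [0, 1, t], [0, 0, 1]]
e = math.exp(lam * t)
closed = [[e, t*e, t*t/2*e], [0.0, e, t*e], [0.0, 0.0, e]]

err = max(abs(S[i][j] - closed[i][j]) for i in range(3) for j in range(3))
print(err < 1e-10)
```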
Theorem 4.6.3. Let $A$ be an $n \times n$ matrix. Then there exists a non-singular matrix
\[
Q = [\,\mathbf{v}_n\ \mathbf{v}_{n-1}\ \cdots\ \mathbf{v}_2\ \mathbf{v}_1\,],
\]
where $\mathbf{v}_i$, $i = 1, 2, \dots, n$, are the column vectors of $Q$, such that for any $i$,
1. the column vector $\mathbf{v}_i$ is a generalized eigenvector of $A$, and
2. if $\mathbf{v}_i$ is a generalized eigenvector of rank $k > 1$ associated with the eigenvalue $\lambda_i$, then $(A - \lambda_i I)\mathbf{v}_i = \mathbf{v}_{i+1}$.
Furthermore, if $Q$ is a non-singular matrix which satisfies the above condition, then $Q^{-1}AQ = J$ is a Jordan matrix. The matrix $J$ is called a Jordan normal form of $A$.
Note that the Jordan normal form of a matrix is unique up to a permutation of the Jordan blocks.
Let's discuss the cases of non-diagonalizable $2 \times 2$ and $3 \times 3$ matrices.
Example 4.6.4. Let $A$ be a non-diagonalizable $2 \times 2$ matrix. Then $A$ has only one eigenvalue $\lambda_1$ and the associated eigenspace is of dimension 1. There exists a generalized eigenvector $\eta$ of rank 2. Let $\eta_1 = (A - \lambda_1 I)\eta$ and $Q = [\eta_1\ \eta]$; we have
\[
Q^{-1}AQ = J = \begin{pmatrix} \lambda_1 & 1 \\ 0 & \lambda_1 \end{pmatrix}.
\]
The minimal polynomial of $A$ is $(x - \lambda_1)^2$. The matrix exponential is
\[
\exp(At) = e^{\lambda_1 t}\, Q \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} Q^{-1}.
\]
Example 4.6.5. Let $A$ be a non-diagonalizable $3 \times 3$ matrix. There are 3 possible cases.
1. There is one triple eigenvalue $\lambda_1$ and the associated eigenspace is of dimension 1. Then there exists a generalized eigenvector $\eta$ of rank 3. Let $\eta_1 = (A - \lambda_1 I)\eta$, $\eta_2 = (A - \lambda_1 I)^2\eta$ and $Q = [\eta_2\ \eta_1\ \eta]$; we have
\[
Q^{-1}AQ = J = \begin{pmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 1 \\ 0 & 0 & \lambda_1 \end{pmatrix}.
\]
The minimal polynomial of $A$ is $(x - \lambda_1)^3$. The matrix exponential is
\[
\exp(At) = e^{\lambda_1 t}\, Q \begin{pmatrix} 1 & t & \frac{t^2}{2} \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} Q^{-1}.
\]
2. There is one triple eigenvalue $\lambda_1$ and the associated eigenspace is of dimension 2. Then there exist a generalized eigenvector $\eta$ of rank 2 and an eigenvector $\xi$ such that $\xi$, $\eta$ and $\eta_1 = (A - \lambda_1 I)\eta$ are linearly independent. Let $Q = [\xi\ \eta_1\ \eta]$; we have
\[
Q^{-1}AQ = J = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_1 & 1 \\ 0 & 0 & \lambda_1 \end{pmatrix}.
\]
The minimal polynomial of $A$ is $(x - \lambda_1)^2$. The matrix exponential is
\[
\exp(At) = e^{\lambda_1 t}\, Q \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} Q^{-1}.
\]
3. There is one simple eigenvalue $\lambda_1$ and one double eigenvalue $\lambda_2$, and both of the associated eigenspaces are of dimension 1. Then there exist an eigenvector $\xi$ associated with $\lambda_1$ and a generalized eigenvector $\eta$ of rank 2 associated with $\lambda_2$. Let $\eta_1 = (A - \lambda_2 I)\eta$ (note that $\xi, \eta, \eta_1$ must be linearly independent) and $Q = [\xi\ \eta_1\ \eta]$; we have
\[
Q^{-1}AQ = J = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 1 \\ 0 & 0 & \lambda_2 \end{pmatrix}.
\]
The minimal polynomial of $A$ is $(x - \lambda_1)(x - \lambda_2)^2$. The matrix exponential is
\[
\exp(At) = Q \begin{pmatrix} e^{\lambda_1 t} & 0 & 0 \\ 0 & e^{\lambda_2 t} & te^{\lambda_2 t} \\ 0 & 0 & e^{\lambda_2 t} \end{pmatrix} Q^{-1}.
\]
We have the following generalization of Theorem 4.4.6, which allows us to use the Jordan normal form to find a fundamental matrix for a linear system.

Theorem 4.6.6. Suppose $Q$ is a non-singular matrix such that $Q^{-1}AQ = J$ is a Jordan matrix. Then $Q\exp(Jt)$ is a fundamental matrix for the system $\mathbf{x}' = A\mathbf{x}$.
Let's redo Example 4.5.4 and Example 4.5.6 using the matrix exponential of Jordan matrices.

Example 4.6.7. Find the fundamental matrix $\Phi(t)$ with initial condition $\Phi(0) = I$ for the system $\mathbf{x}' = A\mathbf{x}$ where
\[
A = \begin{pmatrix} 1 & 3 \\ -3 & 7 \end{pmatrix}.
\]
Solution: We have seen in Example 4.5.4 that $\lambda = 4$ is an eigenvalue and $\eta = (1, 0)^T$ is a generalized eigenvector of rank 2 of $A$. Let
\[
\eta_1 = (A - 4I)\eta = \begin{pmatrix} -3 \\ -3 \end{pmatrix}.
\]
Then if we take
\[
Q = [\eta_1\ \eta] = \begin{pmatrix} -3 & 1 \\ -3 & 0 \end{pmatrix},
\]
we have
\[
Q^{-1}AQ = \begin{pmatrix} 4 & 1 \\ 0 & 4 \end{pmatrix} = J,
\]
the Jordan normal form of $A$. We have
\[
\exp(Jt) = \exp\begin{pmatrix} 4t & t \\ 0 & 4t \end{pmatrix}
= \exp\left(\begin{pmatrix} 4t & 0 \\ 0 & 4t \end{pmatrix} + \begin{pmatrix} 0 & t \\ 0 & 0 \end{pmatrix}\right)
= \exp\begin{pmatrix} 4t & 0 \\ 0 & 4t \end{pmatrix}\exp\begin{pmatrix} 0 & t \\ 0 & 0 \end{pmatrix}
= e^{4t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
\]
(One should explain why the third equality holds in the above calculation.) The required fundamental matrix is
\[
\Phi(t) = \exp(At) = Q\exp(Jt)Q^{-1}
= \begin{pmatrix} -3 & 1 \\ -3 & 0 \end{pmatrix} e^{4t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}\begin{pmatrix} -3 & 1 \\ -3 & 0 \end{pmatrix}^{-1}
= e^{4t}\begin{pmatrix} -3 & -3t + 1 \\ -3 & -3t \end{pmatrix}\begin{pmatrix} 0 & -\frac{1}{3} \\ 1 & -1 \end{pmatrix}
= e^{4t}\begin{pmatrix} 1 - 3t & 3t \\ -3t & 1 + 3t \end{pmatrix}.
\]
(Note that the two columns are themselves solutions of the system; they correspond to the choices $c_1 = 0$, $c_2 = 1$ and $c_1 = 1$, $c_2 = -1$ in Example 4.5.4.)
Example 4.6.8. Find a fundamental matrix for the system $\mathbf{x}' = A\mathbf{x}$ where
\[
A = \begin{pmatrix} 0 & 1 & 2 \\ -5 & -3 & -7 \\ 1 & 0 & 0 \end{pmatrix}.
\]
Solution: We have seen in Example 4.5.6 that $\lambda = -1$ is an eigenvalue and $\eta = (1, 0, 0)^T$ is a generalized eigenvector of rank 3 of $A$. Let
\[
\eta_1 = (A + I)\eta = \begin{pmatrix} 1 \\ -5 \\ 1 \end{pmatrix},
\qquad
\eta_2 = (A + I)^2\eta = \begin{pmatrix} -2 \\ -2 \\ 2 \end{pmatrix}.
\]
Then if we take
\[
Q = [\eta_2\ \eta_1\ \eta] = \begin{pmatrix} -2 & 1 & 1 \\ -2 & -5 & 0 \\ 2 & 1 & 0 \end{pmatrix},
\]
then
\[
Q^{-1}AQ = \begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \end{pmatrix} = J
\]
is the Jordan normal form of $A$. Now
\[
\exp(Jt) = \exp\begin{pmatrix} -t & t & 0 \\ 0 & -t & t \\ 0 & 0 & -t \end{pmatrix}
= \exp\left(\begin{pmatrix} -t & 0 & 0 \\ 0 & -t & 0 \\ 0 & 0 & -t \end{pmatrix} + \begin{pmatrix} 0 & t & 0 \\ 0 & 0 & t \\ 0 & 0 & 0 \end{pmatrix}\right)
= \exp\begin{pmatrix} -t & 0 & 0 \\ 0 & -t & 0 \\ 0 & 0 & -t \end{pmatrix}\exp\begin{pmatrix} 0 & t & 0 \\ 0 & 0 & t \\ 0 & 0 & 0 \end{pmatrix}
\quad\text{(state the reason)}
\]
\[
= e^{-t}\begin{pmatrix} 1 & t & \frac{1}{2}t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix}.
\]
Therefore
\[
\Psi = Q\exp(Jt)
= e^{-t}\begin{pmatrix} -2 & 1 & 1 \\ -2 & -5 & 0 \\ 2 & 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & t & \frac{1}{2}t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix}
= e^{-t}\begin{pmatrix} -2 & 1 - 2t & 1 + t - t^2 \\ -2 & -5 - 2t & -5t - t^2 \\ 2 & 1 + 2t & t + t^2 \end{pmatrix}
\]
is a fundamental matrix for the system.
Exercise 4.6.9. For each of the following matrices $A$, find the Jordan normal form of $A$ and a fundamental matrix for the system $\mathbf{x}' = A\mathbf{x}$.
\[
1.\ A = \begin{pmatrix} 1 & 3 \\ 3 & 5 \end{pmatrix}
\qquad
2.\ A = \begin{pmatrix} 5 & 1 & 1 \\ 1 & 3 & 0 \\ 3 & 2 & 1 \end{pmatrix}
\qquad
3.\ A = \begin{pmatrix} 3 & 1 & 1 \\ 2 & 0 & 1 \\ 1 & 1 & 2 \end{pmatrix}
\]
4.7 Nonhomogeneous linear systems

We now turn to the nonhomogeneous system
\[
\mathbf{x}' = A\mathbf{x} + \mathbf{g}(t).
\]
The general solution of the system can be expressed as
\[
\mathbf{x} = c_1\mathbf{x}^{(1)} + \cdots + c_n\mathbf{x}^{(n)} + \mathbf{v}(t),
\]
where $\mathbf{x}^{(1)}, \dots, \mathbf{x}^{(n)}$ form a fundamental set of solutions to the associated homogeneous system and $\mathbf{v}$ is a particular solution. We will briefly describe two methods for finding a particular solution.

Variation of parameters

The following theorem applies even to linear systems with variable coefficients.
Theorem 4.7.1. Let $\Psi(t)$ be a fundamental matrix of the system $\mathbf{x}'(t) = P(t)\mathbf{x}(t)$ and $\mathbf{g}(t)$ be a continuous vector valued function. Then a particular solution to the nonhomogeneous system
\[
\mathbf{x}'(t) = P(t)\mathbf{x}(t) + \mathbf{g}(t)
\]
is given by
\[
\mathbf{v}(t) = \Psi(t)\int \Psi^{-1}(t)\mathbf{g}(t)\,dt.
\]
Moreover, the solution to the initial value problem
\[
\begin{cases} \mathbf{x}'(t) = P(t)\mathbf{x}(t) + \mathbf{g}(t), \\ \mathbf{x}(t_0) = \mathbf{x}_0 \end{cases}
\]
is given by
\[
\mathbf{v}(t) = \Psi(t)\left(\Psi^{-1}(t_0)\mathbf{x}_0 + \int_{t_0}^{t} \Psi^{-1}(s)\mathbf{g}(s)\,ds\right).
\]
In particular, if $\Phi(t)$ is a fundamental matrix such that $\Phi(t_0) = I$, then the solution is
\[
\mathbf{v}(t) = \Phi(t)\left(\mathbf{x}_0 + \int_{t_0}^{t} \Phi^{-1}(s)\mathbf{g}(s)\,ds\right).
\]
Proof. We check that the expressions satisfy the differential equation:
\[
\mathbf{v}' = \Psi'\int \Psi^{-1}\mathbf{g}\,dt + \Psi\frac{d}{dt}\left(\int \Psi^{-1}\mathbf{g}\,dt\right)
= P\Psi\int \Psi^{-1}\mathbf{g}\,dt + \Psi\Psi^{-1}\mathbf{g}
= P\mathbf{v} + \mathbf{g}.
\]
To check that the second expression satisfies the initial condition, we have
\[
\mathbf{v}(t_0) = \Psi(t_0)\left(\Psi^{-1}(t_0)\mathbf{x}_0 + \int_{t_0}^{t_0} \Psi^{-1}(s)\mathbf{g}(s)\,ds\right)
= \Psi(t_0)\Psi^{-1}(t_0)\mathbf{x}_0 = \mathbf{x}_0. \qquad \Box
\]
Example 4.7.2. Use the method of variation of parameters to find a particular solution for the system $\mathbf{x}' = A\mathbf{x} + \mathbf{g}(t)$ where
\[
A = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}
\quad\text{and}\quad
\mathbf{g}(t) = \begin{pmatrix} 4e^{-t} \\ 18t \end{pmatrix}.
\]
Solution: The eigenvalues of $A$ are $\lambda_1 = -3$ and $\lambda_2 = -1$ with eigenvectors $(1, -1)^T$ and $(1, 1)^T$ respectively. Thus a fundamental matrix of the system is
\[
\Psi = Q\exp(Dt)
= \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}
\begin{pmatrix} e^{-3t} & 0 \\ 0 & e^{-t} \end{pmatrix}
= \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}.
\]
Now
\[
\Psi^{-1}\mathbf{g}
= \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}^{-1}
\begin{pmatrix} 4e^{-t} \\ 18t \end{pmatrix}
= \frac{1}{2}\begin{pmatrix} e^{3t} & -e^{3t} \\ e^{t} & e^{t} \end{pmatrix}
\begin{pmatrix} 4e^{-t} \\ 18t \end{pmatrix}
= \begin{pmatrix} 2e^{2t} - 9te^{3t} \\ 2 + 9te^{t} \end{pmatrix}.
\]
Thus
\[
\int \Psi^{-1}\mathbf{g}\,dt
= \int \begin{pmatrix} 2e^{2t} - 9te^{3t} \\ 2 + 9te^{t} \end{pmatrix} dt
= \begin{pmatrix} e^{2t} - 3te^{3t} + e^{3t} + c_1 \\ 2t + 9te^{t} - 9e^{t} + c_2 \end{pmatrix}.
\]
Therefore a particular solution is
\[
\mathbf{v} = \begin{pmatrix} e^{-3t} & e^{-t} \\ -e^{-3t} & e^{-t} \end{pmatrix}
\begin{pmatrix} e^{2t} - 3te^{3t} + e^{3t} \\ 2t + 9te^{t} - 9e^{t} \end{pmatrix}
= \begin{pmatrix} e^{-3t}(e^{2t} - 3te^{3t} + e^{3t}) + e^{-t}(2t + 9te^{t} - 9e^{t}) \\ -e^{-3t}(e^{2t} - 3te^{3t} + e^{3t}) + e^{-t}(2t + 9te^{t} - 9e^{t}) \end{pmatrix}
= \begin{pmatrix} 2te^{-t} + e^{-t} + 6t - 8 \\ 2te^{-t} - e^{-t} + 12t - 10 \end{pmatrix}. \qquad \Box
\]
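A particular solution produced by variation of parameters can always be checked by substituting it back into $\mathbf{v}' = A\mathbf{v} + \mathbf{g}$. A sketch (not part of the notes) doing this for the solution just found, with the derivative computed by hand:

```python
import math

A = [[-2.0, 1.0], [1.0, -2.0]]

def v(t):
    """The particular solution found by variation of parameters."""
    e = math.exp(-t)
    return [2*t*e + e + 6*t - 8, 2*t*e - e + 12*t - 10]

def v_prime(t):
    # Each component differentiated by hand: d/dt (2t e^{-t}) = (2 - 2t) e^{-t}, etc.
    e = math.exp(-t)
    return [(2 - 2*t)*e - e + 6, (2 - 2*t)*e + e + 12]

def g(t):
    return [4*math.exp(-t), 18*t]

ok = True
for t in [0.0, 0.5, 1.3, 2.7]:
    lhs = v_prime(t)
    vt, gt = v(t), g(t)
    rhs = [A[0][0]*vt[0] + A[0][1]*vt[1] + gt[0],
           A[1][0]*vt[0] + A[1][1]*vt[1] + gt[1]]
    ok = ok and all(abs(lhs[i] - rhs[i]) < 1e-9 for i in range(2))
print(ok)
```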
Example 4.7.3. Solve the initial value problem $\mathbf{x}' = A\mathbf{x} + \mathbf{g}(t)$ with $\mathbf{x}(0) = (1, -1)^T$, where
\[
A = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}
\quad\text{and}\quad
\mathbf{g}(t) = \begin{pmatrix} 2e^{t} \\ 0 \end{pmatrix}.
\]
Solution: A fundamental matrix of the system is
\[
\Phi = \exp(At) = e^{-t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
\]
Now
\[
\Phi^{-1}\mathbf{g}
= \left(e^{-t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}\right)^{-1}
\begin{pmatrix} 2e^{t} \\ 0 \end{pmatrix}
= e^{t}\begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 2e^{t} \\ 0 \end{pmatrix}
= \begin{pmatrix} 2e^{2t} \\ 0 \end{pmatrix}.
\]
We have
\[
\int_0^t \Phi^{-1}(s)\mathbf{g}(s)\,ds
= \int_0^t \begin{pmatrix} 2e^{2s} \\ 0 \end{pmatrix} ds
= \begin{pmatrix} e^{2t} - 1 \\ 0 \end{pmatrix}.
\]
Therefore the solution to the initial value problem is
\[
\mathbf{v} = \Phi\left(\mathbf{x}_0 + \int_0^t \Phi^{-1}(s)\mathbf{g}(s)\,ds\right)
= e^{-t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}
\left(\begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} e^{2t} - 1 \\ 0 \end{pmatrix}\right)
= e^{-t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} e^{2t} \\ -1 \end{pmatrix}
= \begin{pmatrix} e^{t} - te^{-t} \\ -e^{-t} \end{pmatrix}. \qquad \Box
\]
Exercise 4.7.4. Use the method of variation of parameters to find a particular solution for each of the following nonhomogeneous systems.
\[
1.\ \mathbf{x}' = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 0 \\ 9e^{t} \end{pmatrix}
\qquad
2.\ \mathbf{x}' = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 6e^{5t} \\ 6e^{5t} \end{pmatrix}
\]
Undetermined coefficients

Example 4.7.5. Use the method of undetermined coefficients to find a particular solution for the system $\mathbf{x}' = A\mathbf{x} + \mathbf{g}(t)$ where
\[
A = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}
\quad\text{and}\quad
\mathbf{g}(t) = \begin{pmatrix} 4e^{-t} \\ 18t \end{pmatrix}.
\]
Solution: Let
\[
\mathbf{v}(t) = te^{-t}\mathbf{a} + e^{-t}\mathbf{b} + t\mathbf{c} + \mathbf{d}
\]
be a particular solution. (Remark: It is not surprising that the term $te^{-t}\mathbf{a}$ appears since $\lambda = -1$ is an eigenvalue of $A$. But one should note that we also need the term $e^{-t}\mathbf{b}$: $\mathbf{b}$ may not be an eigenvector and thus $e^{-t}\mathbf{b}$ may not be a solution to the associated homogeneous system. One cannot ignore its contribution.) Then
\[
\mathbf{v}' = A\mathbf{v} + \mathbf{g}(t)
\;\Rightarrow\;
-te^{-t}\mathbf{a} + e^{-t}(\mathbf{a} - \mathbf{b}) + \mathbf{c}
= te^{-t}A\mathbf{a} + e^{-t}A\mathbf{b} + tA\mathbf{c} + A\mathbf{d} + \begin{pmatrix} 4e^{-t} \\ 18t \end{pmatrix}.
\]
We need to choose $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}$ such that
\[
\begin{cases}
(A + I)\mathbf{a} = 0, \\
(A + I)\mathbf{b} = \mathbf{a} - \begin{pmatrix} 4 \\ 0 \end{pmatrix}, \\
A\mathbf{c} = -\begin{pmatrix} 0 \\ 18 \end{pmatrix}, \\
A\mathbf{d} = \mathbf{c}.
\end{cases}
\]
One of the solutions of the above system of equations is
\[
\mathbf{a} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}, \quad
\mathbf{b} = \begin{pmatrix} 0 \\ -2 \end{pmatrix}, \quad
\mathbf{c} = \begin{pmatrix} 6 \\ 12 \end{pmatrix}, \quad
\mathbf{d} = \begin{pmatrix} -8 \\ -10 \end{pmatrix}.
\]
Therefore
\[
\mathbf{v} = te^{-t}\begin{pmatrix} 2 \\ 2 \end{pmatrix} + e^{-t}\begin{pmatrix} 0 \\ -2 \end{pmatrix} + t\begin{pmatrix} 6 \\ 12 \end{pmatrix} + \begin{pmatrix} -8 \\ -10 \end{pmatrix}
= \begin{pmatrix} 2te^{-t} + 6t - 8 \\ 2te^{-t} - 2e^{-t} + 12t - 10 \end{pmatrix}
\]
is a particular solution to the system. (Note: This particular solution is not the same as the one given in Example 4.7.2. They differ by $(e^{-t}, e^{-t})^T$, which is a solution to the associated homogeneous system.)
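The four vector equations obtained by matching coefficients can be verified directly. A sketch (illustrative, not part of the notes; exact integer arithmetic):

```python
def mat_vec(M, v):
    """Apply a 2x2 integer matrix to a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

A  = [[-2, 1], [1, -2]]
AI = [[-1, 1], [1, -1]]                       # A + I
a, b, c, d = [2, 2], [0, -2], [6, 12], [-8, -10]

print(mat_vec(AI, a) == [0, 0])               # (A+I)a = 0
print(mat_vec(AI, b) == [a[0] - 4, a[1]])     # (A+I)b = a - (4,0)^T
print(mat_vec(A, c) == [0, -18])              # Ac = -(0,18)^T
print(mat_vec(A, d) == c)                     # Ad = c
```

All four checks print True, confirming the matched-coefficient system is satisfied.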
Exercise 4.7.6. Redo Exercise 4.7.4 using the method of undetermined coefficients.
5 Nonlinear Differential Equations and Stability

5.1 Phase plane of linear systems

Consider the system of equations
\[
\mathbf{x}' = A\mathbf{x},
\]
where $A$ is a $2 \times 2$ matrix. We will assume that $A$ is nonsingular. Recall that a nonzero vector valued function
\[
\mathbf{x} = \xi e^{rt}
\]
is a solution to the system if and only if $r$ is an eigenvalue of $A$ and $\xi$ is an eigenvector corresponding to $r$. Let $\mathbf{x}_0$ be a constant vector. If $\mathbf{x} = \mathbf{x}_0$ is a solution to the system, then $\mathbf{x}_0$ is called an equilibrium point (or critical point) of the system. It is obvious that the zero vector $\mathbf{0}$ is the only critical point of the system $\mathbf{x}' = A\mathbf{x}$ if $A$ is nonsingular.
A solution to the system can be viewed as a parametric representation of a curve in the $x_1 x_2$-plane. It is regarded as a path, or trajectory, traversed by a moving particle whose velocity $\mathbf{x}' = A\mathbf{x}$ is specified by the differential equation. The $x_1 x_2$-plane itself is called the phase plane, and a representative set of trajectories is referred to as a phase portrait.
Consider the system
\[
\mathbf{x}' = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}\mathbf{x}.
\]
We can think of a particle moving in the phase plane with velocity $\mathbf{x}' = A\mathbf{x}$, which depends on its position. The system defines a (velocity) vector field on the plane, which is sketched in Fig 5.1.1.
Fig 5.1.1
The solutions of the system are integral curves of the vector field. We can consider them as the trajectories of moving particles, which are sketched in Fig 5.1.2.
Fig 5.1.2
In this case it looks like most of the solutions start away from the equilibrium solution; as $t$ increases they move in towards the equilibrium solution and then eventually start moving away from it again.
There seem to be four solutions that have slightly different behaviors. Two of the solutions start at (or at least near) the equilibrium solution and then move straight away from it, while two other solutions start away from the equilibrium solution and then move straight in towards it. In these kinds of cases we call the equilibrium point a saddle point, and we call the equilibrium point in this case unstable, since all but two of the solutions move away from it as $t$ increases.
Here are a few more phase portraits so you can see some more possible examples.
Unstable node Asymptotically stable node
Unstable improper node Asymptotically stable improper node
Saddle point Center
Unstable spiral point Asymptotically stable spiral point
Example 5.1.1. Consider the system
\[
\mathbf{x}' = \begin{pmatrix} -5 & 1 \\ 4 & -2 \end{pmatrix}\mathbf{x}.
\]
The phase portrait of the above system is sketched in Fig 5.1.3.
Fig 5.1.3
The eigenvalues of the coefficient matrix $A$ are $r_1 = -1$ and $r_2 = -6$ and the corresponding eigenvectors are
\[
\xi^{(1)} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}
\quad\text{and}\quad
\xi^{(2)} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}.
\]
We call the equilibrium solution $(0, 0)$ a node. The node in this example is asymptotically stable. The general solution of the system is
\[
\mathbf{x} = c_1 e^{-t}\begin{pmatrix} 1 \\ 4 \end{pmatrix} + c_2 e^{-6t}\begin{pmatrix} -1 \\ 1 \end{pmatrix}
= \begin{pmatrix} c_1 e^{-t} - c_2 e^{-6t} \\ 4c_1 e^{-t} + c_2 e^{-6t} \end{pmatrix}.
\]
Example 5.1.2. Consider the system
\[
\mathbf{x}' = \begin{pmatrix} 2 & -5 \\ 4 & -2 \end{pmatrix}\mathbf{x}.
\]
The eigenvalues of the coefficient matrix $A$ are $r_1 = 4i$ and $r_2 = -4i$ and the corresponding eigenvectors are
\[
\xi^{(1)} = \begin{pmatrix} 1 + 2i \\ 2 \end{pmatrix}
\quad\text{and}\quad
\xi^{(2)} = \begin{pmatrix} 1 - 2i \\ 2 \end{pmatrix}.
\]
The general solution of the system is
\[
\mathbf{x} = c_1\begin{pmatrix} \cos 4t - 2\sin 4t \\ 2\cos 4t \end{pmatrix}
+ c_2\begin{pmatrix} \sin 4t + 2\cos 4t \\ 2\sin 4t \end{pmatrix}
= \begin{pmatrix} (c_1 + 2c_2)\cos 4t + (-2c_1 + c_2)\sin 4t \\ 2c_1\cos 4t + 2c_2\sin 4t \end{pmatrix}.
\]
Since the solutions of the system are periodic functions, the trajectories of the solutions are closed curves. In this example, the trajectories are ellipses centered at the origin. At the points $(1, 0)^T$ and $(0, 1)^T$, the values of $\mathbf{x}'$ are
\[
\begin{pmatrix} 2 & -5 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 2 & -5 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -5 \\ -2 \end{pmatrix}.
\]
The phase portrait of the above system is sketched in Fig 5.1.4.
Fig 5.1.4
This type of critical point is called a center.
This type of critical point is called a center.
Example 5.1.3. Consider the system
x

=
_
3 13
5 1
_
x.
The eigenvalues of the coecients matrix A are r
1
= 2+8i and r
2
= 28i and the corresponding
eigenvectors are

(1)
=
_
1 + 8i
5
_
and
(2)
=
_
1 8i
5
_
.
The general solution of the system is
x = c
1
e
2t
_
cos 8t 8 sin 8t
5 cos 8t
_
+c
2
e
2t
_
sin 8t + cos 8t+
5 sin 4t
_
=
_
e
2t
((c
1
+ 8c
2
) cos 8t + (8c
1
+c
2
) sin 8t)
e
2t
(5c
1
cos 8t + 5c
2
sin 8t)
_
.
The phase portrait of the above system is sketched in Fig 5.1.5.
Nonlinear Dierential Equations and Stability 80
Fig 5.1.5
We call the critical point a spiral point and in this example it is an unstable spiral point since
the trajectories move away from the origin.
Example 5.1.4. Consider the system
\[
\mathbf{x}' = \begin{pmatrix} 7 & 1 \\ -4 & 3 \end{pmatrix}\mathbf{x}.
\]
The characteristic equation has a repeated root $r_1 = r_2 = 5$ and there is only one linearly independent eigenvector
\[
\xi = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
\]
We can choose a generalized eigenvector
\[
\eta = \begin{pmatrix} 0 \\ 1 \end{pmatrix},
\]
and we have
\[
\eta_1 = (A - 5I)\eta = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
\]
Thus the general solution of the system is
\[
\mathbf{x} = c_1 e^{5t}\begin{pmatrix} 1 \\ -2 \end{pmatrix}
+ c_2 e^{5t}\left(t\begin{pmatrix} 1 \\ -2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right)
= \begin{pmatrix} e^{5t}(c_2 t + c_1) \\ e^{5t}\bigl(-2c_2 t + (-2c_1 + c_2)\bigr) \end{pmatrix}.
\]
The phase portrait of the above system is sketched in Fig 5.1.6.
Fig 5.1.6
We call the critical point an improper node or degenerate node, and in this example it is an unstable improper node.
Example 5.1.5. Consider the system
\[
\mathbf{x}' = \begin{pmatrix} -6 & -9 \\ 1 & -12 \end{pmatrix}\mathbf{x}.
\]
The characteristic equation has a repeated root $r_1 = r_2 = -9$ and there is only one linearly independent eigenvector
\[
\xi = \begin{pmatrix} 3 \\ 1 \end{pmatrix}.
\]
We can choose a generalized eigenvector
\[
\eta = \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\]
and we have
\[
\eta_1 = (A + 9I)\eta = \begin{pmatrix} 3 \\ 1 \end{pmatrix}.
\]
Thus the general solution of the system is
\[
\mathbf{x} = c_1 e^{-9t}\begin{pmatrix} 3 \\ 1 \end{pmatrix}
+ c_2 e^{-9t}\left(t\begin{pmatrix} 3 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix}\right)
= \begin{pmatrix} e^{-9t}(3c_2 t + 3c_1 + c_2) \\ e^{-9t}(c_2 t + c_1) \end{pmatrix}.
\]
The phase portrait of the above system is sketched in Fig 5.1.7.
Fig 5.1.7
In this example the critical point is an asymptotically stable improper node.
We can see from the above examples that the type of critical point depends on the eigenvalues of the system. We summarize the properties of the linear system in the following table.

Stability properties of the linear system $\mathbf{x}' = A\mathbf{x}$ with $\det(A) \neq 0$.

Eigenvalues                                          Type of critical point       Stability
$r_1 > r_2 > 0$                                      Node                         Unstable
$r_1 < r_2 < 0$                                      Node                         Asymptotically stable
$r_2 < 0 < r_1$                                      Saddle point                 Unstable
$r_1 = r_2 > 0$                                      Proper or improper node      Unstable
$r_1 = r_2 < 0$                                      Proper or improper node      Asymptotically stable
$r_1, r_2 = \lambda \pm i\mu$, $\lambda > 0$         Spiral point                 Unstable
$r_1, r_2 = \lambda \pm i\mu$, $\lambda < 0$         Spiral point                 Asymptotically stable
$r_1 = i\mu$, $r_2 = -i\mu$                          Center                       Stable

In fact, the critical point is unstable if the real part of one of the eigenvalues is positive, and is asymptotically stable if the real parts of both eigenvalues are negative. If the eigenvalues are purely imaginary, then the critical point is stable but not asymptotically stable.
We may also let
\[
p = \operatorname{tr}(A) \quad\text{and}\quad q = \det(A).
\]
The stability of the system is shown in the following trace-determinant plane.
5.2 Autonomous systems and stability
In this section, we are concerned with systems of two simultaneous equations

    dx/dt = F(x, y),
    dy/dt = G(x, y),

where F and G are continuous functions. We can write the system in the form

    x' = f(x).
If x0 is a point such that f(x0) = 0, then x0 is called a critical point of the autonomous
system. It is easy to see that the constant function x = x0 is a solution to the system when x0
is a critical point.
Definition 5.2.1. Let x0 be a critical point of the system x' = f(x).
1. x0 is said to be stable if for any ε > 0, there exists a δ > 0 such that for every solution
x = φ(t) which satisfies

    ‖φ(0) − x0‖ < δ,

we have

    ‖φ(t) − x0‖ < ε,

for any positive t.
2. x0 is said to be unstable if it is not stable.
3. x0 is said to be asymptotically stable if it is stable and there exists δ0 > 0 such that for
every solution x = φ(t) which satisfies

    ‖φ(0) − x0‖ < δ0,

we have

    lim_{t→∞} φ(t) = x0.
Theorem 5.2.2. Let A be a 2 × 2 nonsingular matrix and r1, r2 be the eigenvalues of A. Consider
the linear system x' = Ax.
1. The critical point x = 0 is asymptotically stable if Re(r1), Re(r2) are both negative.
2. The critical point x = 0 is stable but not asymptotically stable if r1, r2 are purely imaginary.
3. The critical point x = 0 is unstable if one of Re(r1), Re(r2) is positive.
Example 5.2.3. The motion of a simple pendulum is modeled by the second order equation

    d²θ/dt² + γ dθ/dt + ω² sin θ = 0,

where γ = c/mL and ω² = g/L. If we let x = θ and y = dθ/dt, then the equation is converted
to the system

    dx/dt = y,
    dy/dt = −ω² sin x − γy.

By solving the equations

    y = 0,
    −ω² sin x − γy = 0,

we find the critical points of the system are

    x = nπ, n ∈ ℤ,  and  y = 0.

The points corresponding to even n are stable critical points, which represent the mass directly
below the point of support. If γ > 0, these critical points are also asymptotically stable. The
points corresponding to odd n are unstable critical points, which represent the mass directly above
the point of support.
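The stability claims can be checked by linearizing at (nπ, 0): the Jacobian of (y, −ω² sin x − γy) there is [[0, 1], [∓ω², −γ]], with the upper sign for even n. A numeric sketch (the values γ = 0.5 and ω² = 1 are illustrative, not fixed by the notes):

```python
import math

def pendulum_jacobian_eigs(n, gamma, omega2):
    """Eigenvalues of the linearized pendulum at the critical point (n*pi, 0).
    The Jacobian of (y, -omega2*sin(x) - gamma*y) there is
    [[0, 1], [-omega2*cos(n*pi), -gamma]]."""
    cosx = 1.0 if n % 2 == 0 else -1.0
    tr = -gamma
    det = omega2 * cosx
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        s = math.sqrt(disc)
        return ((tr - s) / 2.0, (tr + s) / 2.0)
    s = math.sqrt(-disc)
    return (complex(tr / 2.0, -s / 2.0), complex(tr / 2.0, s / 2.0))

print(pendulum_jacobian_eigs(0, 0.5, 1.0))   # damped, even n: spiral sink
print(pendulum_jacobian_eigs(1, 0.5, 1.0))   # odd n: saddle
```

Even n with damping gives a complex pair with negative real part (asymptotically stable); odd n gives real eigenvalues of opposite sign (a saddle, unstable); even n with γ = 0 gives a purely imaginary pair (a center).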
The trajectories of a two dimensional autonomous system can be obtained by solving the first
order differential equation

    dy/dx = (dy/dt)/(dx/dt) = G(x, y)/F(x, y).
Example 5.2.4. Find the trajectories of the system

    dx/dt = y,
    dy/dt = x.

Solution: Solving the first order equation

    dy/dx = x/y
    y dy = x dx
    y² = x² + c.

Thus the trajectories are given by the equation

    y² − x² = c,

where c is arbitrary.
Another way to obtain the solution is to solve the system directly and get the solution

    x = c1 e^{t} + c2 e^{−t},  and  y = c1 e^{t} − c2 e^{−t}.

Eliminating t, we get

    x² − y² = 4c1c2.

The system has an unstable critical point at the origin.
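One can verify numerically that y² − x² stays constant along solutions. The Runge-Kutta stepper below is a generic sketch, not code from the notes:

```python
def rk4_step(f, state, h):
    """One classical Runge-Kutta step for an autonomous system."""
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h * (a + 2 * b + 2 * c + d) / 6.0
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

f = lambda s: [s[1], s[0]]           # dx/dt = y, dy/dt = x
state = [1.0, 2.0]
c0 = state[1] ** 2 - state[0] ** 2   # y^2 - x^2 at t = 0
for _ in range(1000):                # integrate to t = 1
    state = rk4_step(f, state, 0.001)
c1 = state[1] ** 2 - state[0] ** 2
print(c0, c1)
```

The two printed values agree to roughly machine precision, confirming that the hyperbolas y² − x² = c are the trajectories.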
Example 5.2.5. Find the trajectories of the system

    dx/dt = 4 − 2y,
    dy/dt = 12 − 3x².

Solution: Solving the first order equation

    dy/dx = (12 − 3x²)/(4 − 2y)
    (4 − 2y) dy = (12 − 3x²) dx
    4y − y² + x³ − 12x = c.

The system has two critical points (x, y) = (2, 2) and (−2, 2). The point (2, 2) is unstable while
the point (−2, 2) is stable but not asymptotically stable.
Example 5.2.6. Consider the Duffing equation

    x″ − x + x³ = 0.

Letting y = x', we obtain a nonlinear system

    dx/dt = y,
    dy/dt = x − x³.

There are three critical points (0, 0), (1, 0) and (−1, 0). The origin is a saddle point and thus is
unstable. The critical points (1, 0) and (−1, 0) are stable. The phase portrait is shown below.
5.3 Almost linear systems
Consider a nonlinear two dimensional autonomous system

    x' = f(x).

By a linear change of coordinates, we may assume that x = 0 is a critical point.
Definition 5.3.1. We say that a two dimensional autonomous system is an almost linear
system if it can be written in the form

    x' = Ax + g(x),

where A is a 2 × 2 matrix and

    lim_{‖x‖→0} ‖g(x)‖/‖x‖ = 0.

We will further assume that A is nonsingular so that x = 0 is an isolated critical point.
Example 5.3.2. The following system is almost linear:

    x' = ( x² − 2xy + x,  xy + y² − 3y )
       = [ 1   0
           0  −3 ] x + ( x² − 2xy,  xy + y² ).
Example 5.3.3. The motion of a damped pendulum is described by

    θ″ + γθ' + ω² sin θ = 0,

where γ, ω are positive constants. Using the substitutions

    x = θ  and  y = θ',

the second order differential equation is transformed to an almost linear system

    x' = ( y,  −ω² sin x − γy )
       = [ 0    1
          −ω²  −γ ] x + ω² ( 0,  x − sin x ).

The phase portrait of the above system is shown below.
Theorem 5.3.4. The stability of an almost linear system is the same as that of the associated
linear system unless the eigenvalues are purely imaginary. When the eigenvalues are purely
imaginary, the almost linear system can be stable, asymptotically stable or unstable.
5.4 Competing species
In this section, we explore the application of phase plane analysis to a problem in population
dynamics. Suppose that in some closed environment there are two similar species competing for
a limited food supply. Let x and y be the populations of the two species at time t. We assume
that the population of each of the species is governed by

    dx/dt = x(ε1 − σ1 x − α1 y),
    dy/dt = y(ε2 − σ2 y − α2 x),

where ε1, σ1, α1, ε2, σ2, α2 are positive constants. One should compare this with the logistic
equation which models the population of a single species in chapter one.
Example 5.4.1. Consider the system

    dx/dt = x(1 − 0.2x − 0.1y),
    dy/dt = y(1 − 0.2y − 0.1x).

There are four critical points (0, 0), (5, 0), (0, 5) and (10/3, 10/3). The system is almost linear
in the neighborhood of each critical point. The first three of these points are unstable and involve
the extinction of one or both species; only the last corresponds to the long-term survival of both
species. The phase portrait of the system is shown below.
5.5 Predator-Prey equations
Consider two species that exist together and interact. One is a predator which depends in an
essential way on the other, the prey, for its food supply. We denote by x and y the populations of
the prey and predator respectively. We assume that x and y satisfy the following Lotka-Volterra
equations

    dx/dt = x(a − αy),
    dy/dt = y(−c + γx),                                   (5.5.1)

where a, c, α and γ are positive constants.
Example 5.5.1. Consider the system

    dx/dt = x(1 − 0.5y),
    dy/dt = y(−0.75 + 0.25x).

There are two critical points (0, 0) and (3, 2). Near (0, 0), the corresponding approximating linear
system is

    x' = [ 1    0
           0  −0.75 ] x.

The eigenvalues are 1 and −0.75. So (0, 0) is a saddle point and thus is unstable.
We now consider the critical point (3, 2). We can use the substitution x = 3 + u and y = 2 + v
to transform the critical point to the origin, and the corresponding approximating linear system is

    (u, v)' = [ 0   −1.5
                0.5   0 ] (u, v).

The eigenvalues and eigenvectors of this system are

    r1 = (√3/2) i,  ξ(1) = (1, −i/√3);    r2 = −(√3/2) i,  ξ(2) = (1, i/√3).

Since the eigenvalues are purely imaginary, we cannot determine the stability of the original
system (5.5.1) by its linear approximation. However, we can find the trajectories of the system
by solving it directly. Dividing the second equation of (5.5.1) by the first, we have

    dy/dx = y(−0.75 + 0.25x) / ( x(1 − 0.5y) )
    ((1 − 0.5y)/y) dy = ((−0.75 + 0.25x)/x) dx,

which is a separable equation. It follows that the solutions are

    0.75 ln x + ln y − 0.5y − 0.25x = c,

where c is a constant. The graph of the equation for a fixed value of c is a closed curve surround-
ing the critical point. Thus the critical point is also a center of the nonlinear system (5.5.1),
and the predator and prey populations exhibit a cyclic variation. The phase portrait of the system
around (3, 2) is shown in the following figure.
The variations of the prey and predator populations with time are shown below.
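The separable-equation computation says that H(x, y) = 0.75 ln x + ln y − 0.5y − 0.25x is constant on every trajectory, which is what makes the orbits closed curves. A short numerical check (a generic RK4 stepper; the starting point (4, 2) is an arbitrary choice near the center):

```python
import math

def step(state, h):
    """One RK4 step for dx/dt = x(1 - 0.5 y), dy/dt = y(-0.75 + 0.25 x)."""
    f = lambda s: (s[0] * (1.0 - 0.5 * s[1]), s[1] * (-0.75 + 0.25 * s[0]))
    x, y = state
    k1 = f(state)
    k2 = f((x + 0.5 * h * k1[0], y + 0.5 * h * k1[1]))
    k3 = f((x + 0.5 * h * k2[0], y + 0.5 * h * k2[1]))
    k4 = f((x + h * k3[0], y + h * k3[1]))
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def H(x, y):
    """The conserved quantity found by separating variables."""
    return 0.75 * math.log(x) + math.log(y) - 0.5 * y - 0.25 * x

state = (4.0, 2.0)
h0 = H(*state)
for _ in range(4000):                # integrate to t = 4
    state = step(state, 0.001)
print(h0, H(*state))
```

H changes only by the (tiny) integration error, and both populations stay positive, consistent with the cyclic variation described above.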
5.6 Liapunov's second method
You have seen that in most cases, the stability of an almost linear system can be determined
by its associated linear system. But this method fails when the eigenvalues of an almost linear
system are purely imaginary. In this section, we discuss Liapunov's second method (also
called the direct method) to determine the stability of such systems. Basically, Liapunov's
second method is a generalization of the following two physical principles for conservative systems:
1. A rest position is stable if the potential energy is a local minimum.
2. The total energy is a constant during any motion.
To illustrate these concepts, let us look at the undamped pendulum, which is governed by the
system

    dx/dt = y,
    dy/dt = −(g/L) sin x,

where x = θ and y = dθ/dt. The potential energy of the pendulum (assuming zero when θ = 0)
is

    U(x, y) = mgL(1 − cos x).

The total energy of the system is

    V(x, y) = mgL(1 − cos x) + (1/2) mL² y².

On a trajectory corresponding to a solution x = φ(t), y = ψ(t), V(x, y) = V(φ(t), ψ(t)) can be
considered as a function of t. The rate of change of V following the trajectory is given by

    dV/dt = Vx dx/dt + Vy dy/dt
          = (mgL sin x) dx/dt + mL² y dy/dt
          = (mgL sin x) y + mL² y (−(g/L) sin x)
          = 0.

(We will also write dV/dt as V̇.) Hence V is constant along any trajectory of the system. If the
initial state, say, (x0, y0) of the pendulum is sufficiently close to (0, 0), then the energy along
the whole trajectory will remain V0 = V(x0, y0). Thus the equation of the trajectory is

    V(x, y) = mgL(1 − cos x) + (1/2) mL² y² = V0.
We can also see that the whole trajectory keeps close to (0, 0), so 1 − cos x ≈ x²/2 and the
equation of the trajectory is approximately

    (1/2) mgL x² + (1/2) mL² y² = V0,   i.e.,   x²/(2V0/mgL) + y²/(2V0/mL²) = 1.
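The cancellation dV/dt = 0 holds at every point of the phase plane, not just on special trajectories, and can be spot-checked directly (the values of m, g, L below are illustrative, not from the notes):

```python
import math

m, g, L = 1.0, 9.8, 2.0              # illustrative values

def vdot(x, y):
    """dV/dt = V_x * (dx/dt) + V_y * (dy/dt) along the undamped pendulum flow."""
    Vx = m * g * L * math.sin(x)     # derivative of mgL(1 - cos x)
    Vy = m * L * L * y               # derivative of (1/2) m L^2 y^2
    return Vx * y + Vy * (-(g / L) * math.sin(x))

samples = [(0.3, 1.2), (-1.0, 0.5), (2.0, -0.7)]
print([vdot(x, y) for x, y in samples])   # all (numerically) zero
```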
If damping is present, the motion is governed by the following system

    dx/dt = y,
    dy/dt = −γy − (g/L) sin x.

In this case,

    dV/dt = Vx dx/dt + Vy dy/dt
          = (mgL sin x) y + mL² y (−γy − (g/L) sin x)
          = −mL² γ y² ≤ 0.

Thus energy is decreasing along any trajectory except on the line y = 0. Since the line y = 0 is
not a trajectory, each trajectory must approach a point of minimum energy, which must be (0, 0).
Here the energy function V(x, y) is useful for studying the stability of the system because it has
a local minimum at (0, 0) and dV/dt ≤ 0 along any trajectory.
Definition 5.6.1. Let V be a function defined on a domain containing (0, 0).
1. V is said to be positive definite (negative definite) at (0, 0) if V(x, y) ≥ 0 (V(x, y) ≤ 0)
on a neighborhood of (0, 0) and V(x, y) = 0 only when (x, y) = (0, 0).
2. V is said to be positive semidefinite (negative semidefinite) at (0, 0) if V(x, y) ≥ 0
(V(x, y) ≤ 0) on a neighborhood of (0, 0) and V(0, 0) = 0.
Theorem 5.6.2. [Liapunov's second method] Suppose that an almost linear autonomous system
has an isolated critical point at (0, 0).
1. If there exists a function V that is continuous and has continuous partial derivatives such
that V(0, 0) = 0 and
(a) V is positive definite, and
(b) dV/dt ≤ 0 (dV/dt < 0),
then (0, 0) is a stable (asymptotically stable) point.
2. If there exists a function V that is continuous and has continuous partial derivatives such
that V(0, 0) = 0 and
(a) in every neighborhood of (0, 0) there is at least one point at which V is positive (negative), and
(b) dV/dt is positive definite (negative definite),
then (0, 0) is an unstable point.
The function V in the above theorem is called a Liapunov function.
Example 5.6.3. Consider the undamped pendulum equations

    dx/dt = y,
    dy/dt = −(g/L) sin x.

Show that
1. (0, 0) is a stable critical point, and
2. (π, 0) is an unstable critical point,
for the system.
Solution:
1. Let

    V(x, y) = mgL(1 − cos x) + (1/2) mL² y².

Then it is easy to see that V is positive definite at (0, 0). We have already seen that
dV/dt(x, y) = 0. Therefore (0, 0) is a stable critical point.
2. The function given in (1) cannot be used in this case since V̇ is neither positive nor negative
definite at (π, 0). To analyze the point (π, 0), it is convenient to move this point to the
origin by the change of variables (x, y) = (π + u, v). Then the differential equations become

    du/dt = v,
    dv/dt = (g/L) sin u.

Consider the function

    V(u, v) = v sin u.

Then

    V̇ = (v cos u)v + (sin u)(g/L) sin u = v² cos u + (g/L) sin² u

is positive definite at (0, 0). Now in every neighborhood of (0, 0), we can find (u, v) with
v > 0 and 0 < u < π/2 so that V(u, v) > 0. Therefore the system is unstable at
(u, v) = (0, 0), i.e., at (x, y) = (π, 0). □
There is no general method for the construction of Liapunov functions, but the following theorem
helps us find a quadratic one.
Theorem 5.6.4. The quadratic function

    V(x, y) = ax² + bxy + cy²

is positive (negative) definite if and only if

    a > 0 (a < 0)  and  b² − 4ac < 0.
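The criterion of Theorem 5.6.4 is mechanical to apply; a direct transcription (the function name is ours):

```python
def definiteness(a, b, c):
    """Classify V(x, y) = a x^2 + b x y + c y^2 via Theorem 5.6.4."""
    disc = b * b - 4.0 * a * c
    if disc < 0 and a > 0:
        return "positive definite"
    if disc < 0 and a < 0:
        return "negative definite"
    return "indefinite or semidefinite"

print(definiteness(1.0, 0.0, 1.0))
print(definiteness(-1.0, -1.5, -1.0))   # the quadratic part appearing in Example 5.6.7
print(definiteness(1.0, 3.0, 1.0))
```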
Example 5.6.5. Consider the system

    dx/dt = −y + ax³,
    dy/dt = x + ay³,

where a is a constant. The function

    V(x, y) = (1/2)(x² + y²)

is positive definite. Now

    dV/dt = x(−y + ax³) + y(x + ay³) = a(x⁴ + y⁴).

When a < 0, V̇ is negative definite; thus the origin is asymptotically stable by Theorem 5.6.2. When a = 0,
the system is linear and the origin is a stable center. When a > 0, V̇ is positive definite; thus
the origin is unstable by Theorem 5.6.2.
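The one-line computation of V̇ above is easy to sanity-check: the cross terms −xy and +xy cancel, leaving a(x⁴ + y⁴). A numeric spot check:

```python
def vdot(a, x, y):
    """dV/dt for V = (x^2 + y^2)/2 along dx/dt = -y + a x^3, dy/dt = x + a y^3."""
    return x * (-y + a * x ** 3) + y * (x + a * y ** 3)

# Compare against the closed form a (x^4 + y^4) at an arbitrary point.
print(vdot(-0.5, 0.7, -1.3), -0.5 * (0.7 ** 4 + (-1.3) ** 4))
```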
Example 5.6.6. Show that the critical point (0, 0) of the system

    dx/dt = −x − xy²,
    dy/dt = −y − x²y

is asymptotically stable.
Solution: Let

    V(x, y) = ax² + bxy + cy².

Then

    V̇ = Vx x' + Vy y'
       = (2ax + by)(−x − xy²) + (bx + 2cy)(−y − x²y)
       = −[ 2a(x² + x²y²) + b(2xy + xy³ + x³y) + 2c(y² + x²y²) ].

If we choose b = 0 and let a and c be any positive numbers, then V̇ is negative definite and V is
positive definite. Thus (0, 0) is an asymptotically stable critical point.
Example 5.6.7. Show that the critical point (0.5, 0.5) of the system

    dx/dt = x(1 − x − y),
    dy/dt = y(0.75 − y − 0.5x)

is asymptotically stable.
Solution: First make the change of variables (x, y) = (0.5 + u, 0.5 + v); then the system becomes

    du/dt = −0.5u − 0.5v − u² − uv,
    dv/dt = −0.25u − 0.5v − 0.5uv − v².

Let

    V(u, v) = u² + v².

Then V is positive definite and

    V̇ = Vu u' + Vv v'
       = 2u(−0.5u − 0.5v − u² − uv) + 2v(−0.25u − 0.5v − 0.5uv − v²)
       = −[ (u² + 1.5uv + v²) + (2u³ + 2u²v + uv² + 2v³) ]

is negative definite by Theorem 5.6.4. Therefore (x, y) = (0.5, 0.5) is an asymptotically stable
critical point by Theorem 5.6.2.
5.7 Periodic solutions and limit cycles
Definition 5.7.1. If a closed trajectory in the phase plane has the property that at least one
other trajectory spirals into it either as time approaches infinity or as time approaches minus
infinity, then it is called a limit cycle.
1. If all trajectories that start near a limit cycle (both inside and outside) spiral toward it as
t → ∞, then the limit cycle is said to be asymptotically stable.
2. If the trajectories on one side of a limit cycle spiral toward it, while those on the other
side spiral away as t → ∞, then the limit cycle is said to be semistable.
3. If the trajectories on both sides of a limit cycle spiral away as t → ∞, then the limit cycle
is said to be unstable.
Example 5.7.2. Consider the system

    dx/dt = x + y − x(x² + y²),
    dy/dt = −x + y − y(x² + y²).

The corresponding linear system

    x' = [ 1  1
          −1  1 ] x

has eigenvalues 1 ± i. Thus the system has an unstable critical point at the origin. It is easy to
see that

    x = cos t,  y = −sin t

is a solution to the system. Consequently, the unit circle x² + y² = 1 is a limit cycle. It is
convenient to introduce polar coordinates

    x = r cos θ,  y = r sin θ,  r ≥ 0.

Then we have

    y dx/dt − x dy/dt = x² + y²
    −r² dθ/dt = r²
    dθ/dt = −1
    θ = −t + θ0.

On the other hand,

    x dx/dt + y dy/dt = (x² + y²) − (x² + y²)²
    r dr/dt = r²(1 − r²)
    dr / ( r(1 − r²) ) = dt
    log( r/√|1 − r²| ) = t + C
    r²/(1 − r²) = ce^{2t}
    r = 1/√(1 + ce^{−2t}).

We can see that r → 1 as t → ∞. Therefore r = 1 is an asymptotically stable limit cycle.
Several trajectories of the system are shown in the following figure.
Stable limit cycle
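The closed-form radial solution can be checked against the radial equation dr/dt = r(1 − r²) with a central difference, and its convergence to the cycle r = 1 is visible directly:

```python
import math

def r(t, c):
    """Radial solution r(t) = 1 / sqrt(1 + c e^{-2t}) of dr/dt = r (1 - r^2)."""
    return 1.0 / math.sqrt(1.0 + c * math.exp(-2.0 * t))

c = 3.0                              # gives r(0) = 0.5, a start inside the cycle
h = 1e-6
numeric = (r(0.5 + h, c) - r(0.5 - h, c)) / (2.0 * h)   # dr/dt at t = 0.5
rhs = r(0.5, c) * (1.0 - r(0.5, c) ** 2)
print(numeric - rhs)                 # essentially zero
print(r(20.0, c))                    # already at the limit cycle r = 1
```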
Theorem 5.7.3. Consider the system

    dx/dt = F(x, y),
    dy/dt = G(x, y),

on a simply connected domain D, where F and G are functions which have continuous first
partial derivatives.
1. A limit cycle must necessarily enclose at least one critical point. If it encloses only one
critical point, the critical point cannot be a saddle point.
2. If Fx + Gy has the same sign on D, then there is no limit cycle lying entirely in D.
Theorem 5.7.4. [Poincare-Bendixson Theorem] Suppose F and G have continuous first partial
derivatives in a domain D. Let D1 be a bounded subdomain of D and let R be the union of D1 and its
boundary. Suppose that R contains no critical point of the system. If there exists a constant t0
such that x = φ(t) is a solution of the system that stays in R for all t ≥ t0, then either x = φ(t)
is a periodic solution, or it spirals toward a limit cycle as t → ∞. In particular, the system has
a periodic solution in R.
Example 5.7.5. The van der Pol equation

    u″ − μ(1 − u²)u' + u = 0,

where μ is a nonnegative constant, describes the current u in a triode oscillator. When μ = 0,
the equation reduces to u″ + u = 0, whose solutions are sine or cosine waves of period 2π. When
μ > 0, by letting x = u and y = u', we write the equation as a system

    dx/dt = y,
    dy/dt = −x + μ(1 − x²)y.

There is only one critical point at the origin and the eigenvalues of the corresponding linear
system are (μ ± √(μ² − 4))/2. Thus the origin is an unstable critical point. By Theorem 5.7.3,
if there exist limit cycles, they must enclose the origin. Now we calculate

    Fx + Gy = μ(1 − x²).

Then it follows from Theorem 5.7.3 that limit cycles, if there are any, are not contained in the
strip |x| < 1, where Fx + Gy > 0.
It can be shown that the van der Pol equation does have a unique limit cycle. The phase portraits
for various values of μ are shown below.
μ = 0    μ = 1    μ = 2
5.8 Chaos and strange attractors: The Lorenz equations
In this section we study the nonlinear autonomous third order system

    dx/dt = σ(−x + y),
    dy/dt = rx − y − xz,
    dz/dt = −bz + xy,

where σ, r and b are positive real numbers. These equations are commonly referred to as the
Lorenz equations, which are used to study air flow and temperature in the atmosphere. The
variable x is related to the intensity of the fluid motion, while the variables y and z are related
to the temperature variations in the horizontal and vertical directions.
When r < 1 the system has a unique critical point at the origin. When r > 1 the system has
three critical points,

    (0, 0, 0),   (√(b(r−1)), √(b(r−1)), r−1),   and   (−√(b(r−1)), −√(b(r−1)), r−1).

We will denote the origin by P1 and the latter two critical points by P2 and P3 respectively.
Note that all critical points coincide when r = 1. As r increases through the value 1, the critical
point P1 at the origin bifurcates, and the critical points P2 and P3 come into existence.
For the earth's atmosphere, the reasonable values of the parameters are σ = 10 and b = 8/3.
Near (0, 0, 0), the approximating linear system is

    (x, y, z)' = [ −10  10    0
                     r  −1    0
                     0   0  −8/3 ] (x, y, z).

The eigenvalues are

    λ1 = −8/3,   λ2 = (−11 − √(81 + 40r))/2,   λ3 = (−11 + √(81 + 40r))/2.

When r < 1, all three eigenvalues are negative and so the origin is asymptotically stable. When
r > 1, λ3 is positive and therefore the origin becomes unstable.
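Because the z-equation decouples in the linearization at the origin, the eigenvalue formula above comes from a simple 2 × 2 block and is easy to evaluate:

```python
import math

def origin_eigs(r, b=8.0 / 3.0):
    """Eigenvalues of the Lorenz linearization at the origin for sigma = 10.
    The z-equation decouples (eigenvalue -b); the 2x2 block [[-10, 10], [r, -1]]
    gives lambda^2 + 11 lambda + 10(1 - r) = 0."""
    disc = math.sqrt(81.0 + 40.0 * r)
    return (-b, (-11.0 - disc) / 2.0, (-11.0 + disc) / 2.0)

print(origin_eigs(0.5))    # r < 1: all negative, asymptotically stable
print(origin_eigs(28.0))   # r > 1: one positive eigenvalue, unstable
```

At r = 1 the largest eigenvalue passes through zero, which is exactly where the origin bifurcates.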
Next let us consider the neighborhood of P2 = (√(8(r−1)/3), √(8(r−1)/3), r−1) for r > 1. By a
suitable change of coordinates, the approximating linear system is

    (u, v, w)' = [ −10           10            0
                     1           −1     −√(8(r−1)/3)
               √(8(r−1)/3)  √(8(r−1)/3)      −8/3      ] (u, v, w).

The eigenvalues are determined from the characteristic equation

    3λ³ + 41λ² + 8(r + 10)λ + 160(r − 1) = 0.

The nature of the eigenvalues and the stability of the critical points P2 and P3 are shown in the
following table.
Stability of P2 and P3

    r                        Eigenvalues                                         Stability
    1 < r < r1 ≈ 1.3456      3 negative eigenvalues                              Asymptotically stable
    r1 < r < r2 ≈ 24.737     1 negative eigenvalue and 2 complex                 Asymptotically stable
                             eigenvalues with negative real part
    r2 < r                   1 negative eigenvalue and 2 complex                 Unstable
                             eigenvalues with positive real part

All three critical points are unstable when r > r2. Most solutions near P2 and P3 spiral away
from the critical point. Since none of the critical points is stable, one might expect that most
trajectories would approach infinity for large t. However, it can be shown that all solutions
remain bounded as t goes to infinity. Here are some properties of solutions of the Lorenz system.
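The threshold r2 ≈ 24.737 can be recovered from the characteristic equation 3λ³ + 41λ² + 8(r + 10)λ + 160(r − 1) = 0 by a standard Routh-Hurwitz computation: a cubic as³ + bs² + cs + d with positive coefficients has a pair of purely imaginary roots exactly on the boundary bc = ad.

```python
def hurwitz_gap(r):
    """For the cubic 3 s^3 + 41 s^2 + 8(r+10) s + 160(r-1) (coefficients are
    positive for r > 1), the quantity b*c - a*d is positive on the stable side
    and vanishes where a purely imaginary pair of roots appears."""
    a, b, c, d = 3.0, 41.0, 8.0 * (r + 10.0), 160.0 * (r - 1.0)
    return b * c - a * d

# b*c = a*d is linear in r: 328(r + 10) = 480(r - 1), i.e. r = 3760/152 = 470/19.
r2 = 3760.0 / 152.0
print(r2, hurwitz_gap(r2))
```

The value 470/19 = 24.7368… matches the r2 quoted in the table.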
1. All solutions are bounded.
2. Almost every solution rotates around P2 or P3 and then moves over to a neighborhood of
the other. This pattern repeats infinitely often.
3. Almost all solutions ultimately approach a certain limiting set of points that has zero
volume, called a strange attractor. The strange attractor is bounded but is not a closed
trajectory. It has a shape which looks like a butterfly and has a fractal (Hausdorff)
dimension of about 2.06.
4. The solutions are extremely sensitive to perturbations in the initial conditions. The fol-
lowing figure shows two solutions that remain close initially; after a certain time, they are
quite different and seem to have no relation to each other. The term chaotic is used
to describe this property, and it is this chaotic property that particularly attracted the
attention of Lorenz in his original study of these equations. It caused him to conclude
that detailed long-range weather predictions are probably not possible.
6 Answers to Exercises
Exercise 4.2.7:
1. x = c1 e^{3t} (1, 1) + c2 e^{2t} (2, 3)
2. x = c1 (cos 2t, cos 2t + 2 sin 2t) + c2 (sin 2t, 2 cos 2t + sin 2t)
3. x = c1 e^{2t} (1, 1, 1) + c2 e^{t} (1, 1, 0) + c3 e^{2t} (0, 1, 1)
4. x = c1 e^{3t} (1, 0, 1) + c2 e^{3t} (1, 1, 0) + c3 e^{2t} (1, 1, 1)
Exercise 4.3.9:
1. (1/7) [ 3e^{5t} + 4e^{−2t}   3e^{5t} − 3e^{−2t} ; 4e^{5t} − 4e^{−2t}   4e^{5t} + 3e^{−2t} ]
2. e^{3t} [ 1 + t   t ; −t   1 − t ]
3. [ 1   t   t − 6t² ; 0   1   3t ; 0   0   1 ]
4. e^{3t} [ 1   t   (1/2)t² ; 0   1   t ; 0   0   1 ]
Exercise 4.4.10:
1. [ 4e^{2t}   e^{t} ; −e^{2t}   −e^{t} ]
2. [ 0   e^{3t}   0 ; 2e^{5t}   0   e^{3t} ; e^{5t}   −e^{3t}   e^{3t} ]
Exercise 4.4.11:
1. Φ = exp(At) = [ (4/3)e^{2t} − (1/3)e^{t}   (4/3)e^{2t} − (4/3)e^{t} ; −(1/3)e^{2t} + (1/3)e^{t}   −(1/3)e^{2t} + (4/3)e^{t} ]
2. Φ = exp(At) = [ e^{3t}   0   0 ; −2e^{5t} + 2e^{3t}   2e^{5t} − e^{3t}   −2e^{5t} + 2e^{3t} ; −e^{5t} + e^{3t}   e^{5t} − e^{3t}   −e^{5t} + 2e^{3t} ]
Exercise 4.4.12:
1. QΦ is non-singular since both Q and Φ are non-singular. Now QΦ is a fundamental
matrix for the system if and only if

    d(QΦ)/dt = A(QΦ)
    ⟺ Q dΦ/dt = AQΦ
    ⟺ QAΦ = AQΦ
    ⟺ QA = AQ.

2. By differentiating ΦΦ^{−1} = I, we have

    d/dt (ΦΦ^{−1}) = 0
    (dΦ/dt) Φ^{−1} + Φ (dΦ^{−1}/dt) = 0
    Φ (dΦ^{−1}/dt) = −(dΦ/dt) Φ^{−1}
    dΦ^{−1}/dt = −Φ^{−1} (dΦ/dt) Φ^{−1}.

Now

    d/dt (Φ^T)^{−1} = (dΦ^{−1}/dt)^T = (−Φ^{−1} (dΦ/dt) Φ^{−1})^T = (−Φ^{−1} A Φ Φ^{−1})^T
                    = (−Φ^{−1} A)^T = −A^T (Φ^{−1})^T = −A^T (Φ^T)^{−1},

and (Φ^T)^{−1} is non-singular. Therefore (Φ^T)^{−1} is a fundamental matrix for the system
x' = −A^T x.
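The identity in part 2 can be spot-checked on a concrete matrix. Below A = [[3, 1], [0, 3]] is our illustrative choice (not from the exercise), for which exp(At) = e^{3t} [[1, t], [0, 1]] and hence (Φ^T)^{−1} = e^{−3t} [[1, 0], [−t, 1]]; a central difference confirms that it satisfies X' = −A^T X:

```python
import math

def psi(t):
    """(Phi^T)^{-1} for Phi(t) = exp(At) = e^{3t} [[1, t], [0, 1]],
    where A = [[3, 1], [0, 3]] (an illustrative choice)."""
    e = math.exp(-3.0 * t)
    return [[e, 0.0], [-t * e, e]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

neg_At = [[-3.0, 0.0], [-1.0, -3.0]]           # -A^T
t, h = 0.7, 1e-6
deriv = [[(psi(t + h)[i][j] - psi(t - h)[i][j]) / (2.0 * h) for j in range(2)]
         for i in range(2)]
rhs = matmul(neg_At, psi(t))
err = max(abs(deriv[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)                                      # essentially zero
```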
Exercise 4.4.13: Write

    Φ1 = [ x^{(1)}  x^{(2)}  ⋯  x^{(n)} ]  and  Φ2 = [ y^{(1)}  y^{(2)}  ⋯  y^{(n)} ];

then {x^{(1)}, x^{(2)}, …, x^{(n)}} and {y^{(1)}, y^{(2)}, …, y^{(n)}} constitute two fundamental sets of solutions
to the system. In particular, for any i = 1, 2, …, n,

    y^{(i)} = p_{1i} x^{(1)} + p_{2i} x^{(2)} + ⋯ + p_{ni} x^{(n)},  for some constants p_{1i}, p_{2i}, …, p_{ni}.

Now let

    P = [ p_{11}  p_{12}  ⋯  p_{1n}
          p_{21}  p_{22}  ⋯  p_{2n}
            ⋮       ⋮           ⋮
          p_{n1}  p_{n2}  ⋯  p_{nn} ];

then Φ2 = Φ1 P. The matrix P must be non-singular, otherwise Φ2 cannot be non-singular.
Alternative solution: We have

    d(Φ1^{−1} Φ2)/dt = (dΦ1^{−1}/dt) Φ2 + Φ1^{−1} (dΦ2/dt)
                     = −Φ1^{−1} (dΦ1/dt) Φ1^{−1} Φ2 + Φ1^{−1} A Φ2   (see the solution of Exercise 4.4.12)
                     = −Φ1^{−1} A Φ1 Φ1^{−1} Φ2 + Φ1^{−1} A Φ2
                     = −Φ1^{−1} A Φ2 + Φ1^{−1} A Φ2
                     = 0.

Therefore Φ1^{−1} Φ2 = P is a non-singular constant matrix and the result follows.
Exercise 4.5.8:
1. x = e^{t} [ c1 (1, 1) + c2 (1 + 2t, 2t) ]
2. x = e^{2t} [ c1 (1, 0, 1) + c2 (1 − t, 1, 1 − t) + c3 (1 + t − (1/2)t², t, t − (1/2)t²) ]
3. x = e^{2t} [ c1 (1, 2, −1) + c2 (1 + t, 2t, −t) + c3 (t + (1/2)t², 1 + t², −(1/2)t²) ]
Exercise 4.6.9:
1. J = [ 2  1 ; 0  2 ],  Φ = e^{2t} [ 3   3t + 1 ; 3   3t ]
2. J = [ 3  1  0 ; 0  3  1 ; 0  0  3 ],  Φ = e^{3t} [ 0   2   2t + 1 ; 2   2t + 1   t² + t ; 2   2t − 3   t² − 3t ]
3. J = [ 1  0  0 ; 0  2  1 ; 0  0  2 ],  Φ = [ 0   e^{2t}   te^{2t} ; e^{t}   e^{2t}   te^{2t} ; e^{t}   0   e^{2t} ]
Exercise 4.7.4: (The answers are not unique.)
1. x = e^{t} (1 − 3t, 4 − 3t)
2. x = e^{5t} (1, 1)
Exercise 4.7.6: Same as Exercise 4.7.4.