
KIX1002: ENGINEERING

MATHEMATICS 2
WEEK 8: LAPLACE TRANSFORM

WEEK 9: LAPLACE TRANSFORM SOLUTIONS FOR DIFFERENTIAL EQUATIONS

WEEK 10: FROBENIUS METHOD

WEEK 11: FOURIER SERIES

WEEK 12: PARTIAL DIFFERENTIAL EQUATIONS

WEEK 13: HEAT & WAVE EQUATIONS

WEEK 14: LAPLACE’S EQUATION

2017/2018
CONTENTS

WEEK 8: LAPLACE TRANSFORM
8.1 Introduction
8.2 Definition
8.3 Existence and Uniqueness of Laplace Transform
8.3.1 Existence Theorem
8.3.2 Uniqueness
8.4 Linearity
8.4.1 Laplace Transform Pairs
8.5 Inverse Transform
8.6 Partial Fraction
8.7 Laplace Transform of Derivatives
8.7.1 Laplace Transform of First Derivative
8.7.2 Laplace Transform of Second Derivative
8.7.3 Laplace Transform of the Derivative f^(n) of Any Order
8.8 Laplace Transform of Integral
8.9 First Shift Theorem: s-Shifting
8.10 Unit Step Function (Heaviside Function)
8.11 Second Shift Theorem: Time Shifting (t-Shifting)
WEEK 9: LAPLACE TRANSFORM SOLUTIONS FOR DIFFERENTIAL EQUATIONS
9.1 Differentiation of Transforms
9.2 Integration of Transforms
9.3 Dirac Delta Function
9.4 Convolution
9.4.1 Properties of Convolution
9.4.2 Integral Equations
9.5 System of ODEs
WEEK 10: FROBENIUS METHOD
10.1 Solutions About Singular Points
10.2 Frobenius Method
WEEK 11: FOURIER SERIES
11.1 Introduction
11.2 Periodic Functions
11.3 Trigonometric Series
11.4 Fourier Series
11.5 Derivation of the Euler Formula (6)
11.6 Convergence and Sum of a Fourier Series
11.7 Arbitrary Period (From Period 2π to Any Period p = 2L)
11.8 Even and Odd Functions
11.9 Half-Range Expansions
WEEK 12: PARTIAL DIFFERENTIAL EQUATIONS
12.1 Basic Concepts of PDE
12.2 Solution by Direct Integration
12.3 Separable Partial Differential Equations
Linear Partial Differential Equation
Solution of a PDE
Separation of Variables
Superposition Principle
Classification of Equations
12.4 Classical PDEs and Boundary Value Problems
Classical Equations
Boundary Value Problems
WEEK 13: HEAT & WAVE EQUATIONS
13.1 Heat Equation
13.2 Wave Equation
WEEK 14: LAPLACE’S EQUATION

LAPLACE TRANSFORM
WEEK 8: LAPLACE TRANSFORM
8.1 INTRODUCTION

Laplace transforms are an invaluable part of any engineer’s mathematical toolbox, as they make solving linear ODEs and related initial value problems, as well as systems of linear ODEs, much easier. The key motivation for learning about Laplace transforms is that solving an ODE is reduced to an algebraic problem, together with a transform and an inverse transform. Applications abound: electrical networks, springs, mixing problems, signal processing, and other areas of engineering and physics.

8.2 DEFINITION

The Laplace transform of a function f(t) is defined as

F(s) = ℒ{f(t)} = ∫₀^∞ f(t) e^{−st} dt

The Laplace transform is called an integral transform because it transforms (changes) a function in one space to a function in another space by a process of integration that involves a kernel, k(s, t):

F(s) = ∫₀^∞ k(s, t) f(t) dt

Here the kernel is k(s, t) = e^{−st}.

Examples:

1. Let f(t) = 1 for t ≥ 0. Find F(s).

Solution:

F(s) = ℒ(1) = ∫₀^∞ 1 · e^{−st} dt

= −(1/s) e^{−st} |₀^∞

= 1/s

2. Let f(t) = e^{at} for t ≥ 0, where a is a constant. Find ℒ{f(t)}.

Solution:

ℒ(e^{at}) = ∫₀^∞ e^{at} · e^{−st} dt

= −(1/(s − a)) e^{−(s−a)t} |₀^∞

= 1/(s − a)
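These two basic transforms can be cross-checked with a computer algebra system. The sketch below assumes Python with the third-party sympy library (not part of the original notes):

```python
import sympy as sp

# Declaring the symbols positive keeps sympy's answers free of
# convergence conditions and Heaviside factors.
t, s, a = sp.symbols('t s a', positive=True)

# Example 1: L{1} = 1/s
F1 = sp.laplace_transform(sp.Integer(1), t, s, noconds=True)

# Example 2: L{e^{at}} = 1/(s - a), valid for s > a
F2 = sp.laplace_transform(sp.exp(a*t), t, s, noconds=True)

print(F1, F2)
```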

8.3 EXISTENCE AND UNIQUENESS OF LAPLACE TRANSFORM

8.3.1 EXISTENCE THEOREM

If f(t) is defined and piecewise continuous on every finite interval on the semi-axis t ≥ 0 and satisfies the “growth restriction”

|f(t)| ≤ M e^{kt}

for all t ≥ 0 and some constants M and k, then the Laplace transform ℒ(f) exists for all s > k.

(Figures: a piecewise continuous function; a function f of exponential order c.)

8.3.2 UNIQUENESS

If the Laplace transform of a given function exists, it is uniquely determined. In particular, if two
continuous functions have the same transform, they are completely identical.

8.4 LINEARITY

For any functions 𝑓(𝑡) and 𝑔(𝑡) whose transforms exist and any constants 𝑎 and 𝑏, the transform of
𝑎𝑓(𝑡) + 𝑏𝑔(𝑡) is given by

ℒ{𝑎𝑓(𝑡) + 𝑏𝑔(𝑡)} = 𝑎ℒ{𝑓(𝑡)} + 𝑏ℒ{𝑔(𝑡)}


Proof:

ℒ{af(t) + bg(t)} = ∫₀^∞ e^{−st} (af(t) + bg(t)) dt

= a ∫₀^∞ e^{−st} f(t) dt + b ∫₀^∞ e^{−st} g(t) dt

= a ℒ{f(t)} + b ℒ{g(t)}

Examples:

1. Find the transforms of cosh(at) and sinh(at).

Solution:

ℒ{cosh(at)} = ℒ((e^{at} + e^{−at})/2)

= ½ ℒ(e^{at}) + ½ ℒ(e^{−at})

= ½ (1/(s − a) + 1/(s + a))

= s/(s² − a²)

ℒ{sinh(at)} = ℒ((e^{at} − e^{−at})/2)

= ½ ℒ(e^{at}) − ½ ℒ(e^{−at})

= ½ (1/(s − a) − 1/(s + a))

= a/(s² − a²)
2. Calculate the Laplace transforms of cos(ωt) and sin(ωt).

Solution:

(a) ℒ{cos(ωt)} = ∫₀^∞ e^{−st} cos(ωt) dt

= −(e^{−st}/s) cos(ωt) |₀^∞ − (ω/s) ∫₀^∞ e^{−st} sin(ωt) dt

= 1/s − (ω/s) (−(e^{−st}/s) sin(ωt) |₀^∞ + (ω/s) ∫₀^∞ e^{−st} cos(ωt) dt)

= 1/s − (ω²/s²) ∫₀^∞ e^{−st} cos(ωt) dt

= 1/s − (ω²/s²) ℒ{cos(ωt)}

(1 + ω²/s²) ℒ{cos(ωt)} = 1/s

ℒ{cos(ωt)} = s/(s² + ω²)


(b) ℒ{sin(ωt)} = ∫₀^∞ e^{−st} sin(ωt) dt

= −(e^{−st}/s) sin(ωt) |₀^∞ + (ω/s) ∫₀^∞ e^{−st} cos(ωt) dt

= (ω/s) ℒ{cos(ωt)}

= ω/(s² + ω²)

3. Find ℒ(t^{n+1}).

Solution:

ℒ(t^{n+1}) = ∫₀^∞ e^{−st} t^{n+1} dt

= −(e^{−st}/s) t^{n+1} |₀^∞ + ((n+1)/s) ∫₀^∞ e^{−st} tⁿ dt

= ((n+1)/s) ℒ(tⁿ)

= ((n+1)/s)(n/s) ℒ(t^{n−1})

= ((n+1)/s)(n/s) ⋯ (2/s)(1/s) ℒ(t⁰)

= ((n+1)/s)(n/s) ⋯ (2/s)(1/s)(1/s)

= (n+1)!/s^{n+2}

8.4.1 LAPLACE TRANSFORM PAIRS

A table of common Laplace transform pairs is given at the end of the Week 9 notes (Table of Laplace Transforms).

8.5 INVERSE TRANSFORM

If 𝐹(𝑠) represents the Laplace transform of a function 𝑓(𝑡), that is, ℒ{𝑓(𝑡)} = 𝐹(𝑠), we say 𝑓(𝑡) is
the inverse Laplace transform of 𝐹(𝑠).

The inverse Laplace transform is denoted as

𝑓(𝑡) = ℒ −1 {𝐹(𝑠)}

Note: ℒ −1 {ℒ(𝑓(𝑡))} = 𝑓(𝑡) and ℒ{ℒ −1 (𝐹(𝑠))} = 𝐹(𝑠)

In determining the inverse Laplace transform, some manipulations must be done to get 𝐹(𝑠) into a
form suitable for the direct use of the Laplace transform table.

Example:

Evaluate (a) ℒ⁻¹(1/s⁵) (b) ℒ⁻¹(1/(s² + 7)) (c) ℒ⁻¹((−2s + 6)/(s² + 4)).

Solution:

(a) ℒ⁻¹(1/s⁵) = (1/4!) ℒ⁻¹(4!/s⁵) = t⁴/24

(b) ℒ⁻¹(1/(s² + 7)) = (1/√7) ℒ⁻¹(√7/(s² + 7)) = (1/√7) sin √7t

(c) ℒ⁻¹((−2s + 6)/(s² + 4)) = ℒ⁻¹(−2s/(s² + 4) + 6/(s² + 4)) = −2 ℒ⁻¹(s/(s² + 4)) + 3 ℒ⁻¹(2/(s² + 4)) = −2 cos 2t + 3 sin 2t
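The three inverses can be checked the same way (again assuming sympy; with t declared positive, the Heaviside factors sympy normally attaches disappear):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def invL(F):
    # Inverse Laplace transform back to the t-domain
    return sp.inverse_laplace_transform(F, s, t)

f_a = invL(1/s**5)
f_b = invL(1/(s**2 + 7))
f_c = invL((-2*s + 6)/(s**2 + 4))

assert sp.simplify(f_a - t**4/24) == 0
assert sp.simplify(f_b - sp.sin(sp.sqrt(7)*t)/sp.sqrt(7)) == 0
assert sp.simplify(f_c - (-2*sp.cos(2*t) + 3*sp.sin(2*t))) == 0
```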

8.6 PARTIAL FRACTION

The transformed solution F(s) usually comes out in the general form

F(s) = P(s)/Q(s) = S(s) + R(s)/Q(s) = (aₙsⁿ + aₙ₋₁s^{n−1} + ⋯ + a₁s + a₀)/(bₘsᵐ + bₘ₋₁s^{m−1} + ⋯ + b₁s + b₀)

where P(s) and R(s) are numerators and Q(s) is the denominator.

A proper rational function F(s) should be expanded as a sum of partial fractions before its inverse Laplace transform can be found.

Depending on the roots of Q(s), the partial fraction expansion takes one of the following forms:

1. Distinct real roots: Q(s) = (a₁s + b₁)(a₂s + b₂)⋯(aₖs + bₖ)
   Example F(s): 96s/(s(s + 8)(s + 6))
   Expansion: A/s + B/(s + 8) + C/(s + 6)

2. Repeated real roots: Q(s) = (a₁s + b₁)^r
   Example F(s): (s + 30)/(s + 3)²
   Expansion: A/(s + 3)² + B/(s + 3)

3. Distinct irreducible quadratic factors: Q(s) = as² + bs + c, where b² − 4ac < 0
   Example F(s): (s + 3)/(s² + 6s + 25)
   Expansion: (As + B)/(s² + 6s + 25)

4. Repeated irreducible quadratic factors: Q(s) = (as² + bs + c)^r, where b² − 4ac < 0
   Example F(s): (s + 6)/(s² + 6s + 25)²
   Expansion: (As + B)/(s² + 6s + 25) + (Cs + D)/(s² + 6s + 25)²

where A, B, C and D are constants.

Example:

Find the inverse Laplace transform of F(s) = (10s² + 4)/(s(s + 1)(s + 2)²).

Solution:

F(s) = (10s² + 4)/(s(s + 1)(s + 2)²) = A/s + B/(s + 1) + C/(s + 2)² + D/(s + 2)

10s² + 4 = A(s + 1)(s + 2)² + Bs(s + 2)² + Cs(s + 1) + Ds(s + 1)(s + 2)

Setting s = 0, s = −1, s = −2 and s = 1, we obtain A = 1, B = −14, C = 22, D = 13 respectively.

Therefore,

ℒ⁻¹{(10s² + 4)/(s(s + 1)(s + 2)²)} = ℒ⁻¹(1/s − 14/(s + 1) + 22/(s + 2)² + 13/(s + 2)) = 1 − 14e^{−t} + 22te^{−2t} + 13e^{−2t}
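sympy's `apart` reproduces the expansion constants, and the inverse transform reproduces the final answer. A cross-check, assuming sympy is available:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = (10*s**2 + 4)/(s*(s + 1)*(s + 2)**2)

# Partial-fraction expansion; should match A = 1, B = -14, C = 22, D = 13
expanded = sp.apart(F, s)
assert sp.simplify(expanded - (1/s - 14/(s + 1) + 22/(s + 2)**2 + 13/(s + 2))) == 0

# Inverse transform matches the worked answer
f = sp.inverse_laplace_transform(F, s, t)
assert sp.simplify(f - (1 - 14*sp.exp(-t) + 22*t*sp.exp(-2*t) + 13*sp.exp(-2*t))) == 0
```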

8.7 LAPLACE TRANSFORM OF DERIVATIVES

8.7.1 LAPLACE TRANSFORM OF FIRST DERIVATIVE

If 𝑓(𝑡) is continuous for all 𝑡 ≥ 0, satisfies the growth condition and 𝑓 ′ (𝑡) is piecewise continuous
on every finite interval on the semi-axis 𝑡 ≥ 0, then
ℒ{𝑓 ′ (𝑡)} = 𝑠ℒ{𝑓(𝑡)} − 𝑓(0)
Proof:

ℒ{f′(t)} = ∫₀^∞ e^{−st} f′(t) dt

= e^{−st} f(t) |₀^∞ + s ∫₀^∞ e^{−st} f(t) dt

= −f(0) + s ℒ{f(t)}

8.7.2 LAPLACE TRANSFORM OF SECOND DERIVATIVE

If f(t) and f′(t) are continuous for all t ≥ 0, satisfy the growth condition and f″(t) is piecewise continuous on every finite interval on the semi-axis t ≥ 0, then
ℒ{𝑓 ′′ (𝑡)} = 𝑠 2 ℒ{𝑓(𝑡)} − 𝑠𝑓(0) − 𝑓′(0)
Proof:
ℒ{𝑓 ′′ (𝑡)} = 𝑠ℒ{𝑓′(𝑡) } − 𝑓 ′ (0)
= 𝑠(𝑠ℒ{𝑓(𝑡)} − 𝑓(0)) − 𝑓 ′ (0)
= 𝑠 2 ℒ{𝑓(𝑡) } − 𝑠𝑓(0) − 𝑓 ′ (0)

8.7.3 LAPLACE TRANSFORM OF THE DERIVATIVE 𝑓 (𝑛) OF ANY ORDER

If f, f′, …, f^{(n−1)} are continuous for all t ≥ 0, satisfy the growth condition and f^{(n)} is piecewise continuous on every finite interval on the semi-axis t ≥ 0, then

ℒ{f^{(n)}(t)} = sⁿ ℒ{f(t)} − s^{n−1} f(0) − s^{n−2} f′(0) − ⋯ − f^{(n−1)}(0)

8.8 LAPLACE TRANSFORM OF INTEGRAL

If f(t) is piecewise continuous for all t ≥ 0,

ℒ{∫₀^t f(τ) dτ} = (1/s) F(s)

Proof:

ℒ{(∫₀^t f(τ) dτ)′} = s ℒ{∫₀^t f(τ) dτ} − ∫₀^0 f(τ) dτ

ℒ{f(t)} = s ℒ{∫₀^t f(τ) dτ}

ℒ{∫₀^t f(τ) dτ} = (1/s) ℒ{f(t)}

Example:

Find the inverses of 1/(s(s² + ω²)) and 1/(s²(s² + ω²)).

Solution:

We know that ℒ⁻¹(1/(s² + ω²)) = (sin ωt)/ω = f(t)

ℒ⁻¹{1/(s(s² + ω²))} = ℒ⁻¹{(1/s) ℒ(f(t))}

= ∫₀^t (sin ωτ)/ω dτ

= (1/ω²)(1 − cos ωt)

ℒ⁻¹{1/(s²(s² + ω²))} = (1/ω²) ∫₀^t (1 − cos ωτ) dτ

= t/ω² − (sin ωt)/ω³
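Both inverses can be confirmed directly (sketch assuming sympy):

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# Each extra factor 1/s in the transform integrates once in the t-domain
f1 = sp.inverse_laplace_transform(1/(s*(s**2 + w**2)), s, t)
f2 = sp.inverse_laplace_transform(1/(s**2*(s**2 + w**2)), s, t)

assert sp.simplify(f1 - (1 - sp.cos(w*t))/w**2) == 0
assert sp.simplify(f2 - (t/w**2 - sp.sin(w*t)/w**3)) == 0
```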

8.9 FIRST SHIFT THEOREM: s-SHIFTING

If 𝑓(𝑡) has the transform 𝐹(𝑠) (where 𝑠 > 𝑘 for some 𝑘), then 𝑒 𝑎𝑡 𝑓(𝑡) has the transform
ℒ{𝑒 𝑎𝑡 𝑓(𝑡)} = 𝐹(𝑠 − 𝑎)
where (𝑠 – 𝑎) > 𝑘.

Proof:

F(s − a) = ∫₀^∞ e^{−(s−a)t} f(t) dt

= ∫₀^∞ (e^{at} f(t)) e^{−st} dt

= ℒ{e^{at} f(t)}
Examples:

1. From the previous example, we know that ℒ{cos(ωt)} = s/(s² + ω²) and ℒ{sin(ωt)} = ω/(s² + ω²); then

ℒ{e^{at} cos(ωt)} = (s − a)/((s − a)² + ω²)

ℒ{e^{at} sin(ωt)} = ω/((s − a)² + ω²)

2. Using the first shift theorem, find the inverse of the transform ℒ{f(t)} = (3s − 137)/(s² + 2s + 401).

Solution:

ℒ⁻¹((3s − 137)/(s² + 2s + 401)) = ℒ⁻¹((3(s + 1) − 140)/((s + 1)² + 400))

= 3 ℒ⁻¹((s + 1)/((s + 1)² + 400)) − 7 ℒ⁻¹(20/((s + 1)² + 400))

= 3e^{−t} cos 20t − 7e^{−t} sin 20t
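A quick machine check of Example 2 (assuming sympy):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Inverse of (3s - 137)/(s^2 + 2s + 401); complete the square: (s+1)^2 + 400
f = sp.inverse_laplace_transform((3*s - 137)/(s**2 + 2*s + 401), s, t)
expected = 3*sp.exp(-t)*sp.cos(20*t) - 7*sp.exp(-t)*sp.sin(20*t)
assert sp.simplify(f - expected) == 0
```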

8.10 UNIT STEP FUNCTION (HEAVISIDE FUNCTION)

The mathematical definition of the unit step function is

u(t − a) = 0 if t < a, 1 if t > a

The unit step function has a discontinuity, or jump, at the origin for u(t), or at the position a for u(t − a), where a is an arbitrary positive constant.

The transform of u(t − a) follows directly from the defining integral:

ℒ{u(t − a)} = ∫₀^∞ e^{−st} u(t − a) dt

= ∫ₐ^∞ e^{−st} · 1 dt

= −(1/s) e^{−st} |ₐ^∞

= e^{−as}/s

8.11 SECOND SHIFT THEOREM: TIME SHIFTING (t-SHIFTING)

If f(t) has the transform F(s), then the “shifted function” f(t − a)u(t − a) has the transform

ℒ{f(t − a)u(t − a)} = e^{−as} F(s)

Equivalently, ℒ{f(t)u(t − a)} = e^{−as} ℒ{f(t + a)}

Proof:

Let τ = t − a. Then

ℒ{f(t − a)u(t − a)} = ∫₀^∞ e^{−st} f(t − a)u(t − a) dt

= ∫₋ₐ^∞ e^{−s(τ+a)} f(τ)u(τ) dτ

= e^{−as} ∫₀^∞ e^{−sτ} f(τ) dτ

= e^{−as} F(s)

Examples:

1. Write the following function using unit step functions and find its transform.

f(t) = 2 if 0 < t < 1;  ½t² if 1 < t < ½π;  cos t if t > ½π

Solution:

f(t) = 2(u(t) − u(t − 1)) + ½t²(u(t − 1) − u(t − ½π)) + (cos t) u(t − ½π)

with parts (a) 2(u(t) − u(t − 1)), (b) ½t² u(t − 1), (c) ½t² u(t − ½π) and (d) (cos t) u(t − ½π).

Part (a): ℒ{2(u(t) − u(t − 1))} = 2(1/s − e^{−s}/s)

Part (b): ℒ{½t² u(t − 1)} = ½ ℒ{(t − 1)² u(t − 1) + 2(t − 1) u(t − 1) + u(t − 1)}

= ½ e^{−s}(2/s³ + 2/s² + 1/s)

= e^{−s}(1/s³ + 1/s² + 1/(2s))

Part (c): ℒ{½t² u(t − ½π)} = ½ ℒ{(t − ½π)² u(t − ½π) + π(t − ½π) u(t − ½π) + ¼π² u(t − ½π)}

= ½ e^{−πs/2}(2/s³ + π/s² + π²/(4s))

= e^{−πs/2}(1/s³ + π/(2s²) + π²/(8s))

Part (d): ℒ{(cos t) u(t − ½π)} = ℒ{−sin(t − ½π) u(t − ½π)}

= −e^{−πs/2}/(s² + 1)

Combining all the terms (parts (a) + (b) − (c) + (d)):

ℒ{f(t)} = 2/s − 2e^{−s}/s + e^{−s}(1/s³ + 1/s² + 1/(2s)) − e^{−πs/2}(1/s³ + π/(2s²) + π²/(8s)) − e^{−πs/2}/(s² + 1)
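The same answer can be assembled mechanically from the identity ℒ{f(t)u(t − a)} = e^{−as} ℒ{f(t + a)}, so that only transforms of plain polynomials and cosines are needed (sketch assuming sympy):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def L(f):
    return sp.laplace_transform(f, t, s, noconds=True)

def shifted(f, a):
    # Second shift theorem: L{ f(t) u(t - a) } = exp(-a*s) * L{ f(t + a) }
    return sp.exp(-a*s)*L(f.subs(t, t + a))

F = (L(sp.Integer(2))                # 2 on [0, oo)
     - shifted(sp.Integer(2), 1)     # switch 2 off at t = 1
     + shifted(t**2/2, 1)            # switch t^2/2 on at t = 1
     - shifted(t**2/2, sp.pi/2)      # ... and off at t = pi/2
     + shifted(sp.cos(t), sp.pi/2))  # cos t from t = pi/2 on

expected = (2/s - 2*sp.exp(-s)/s
            + sp.exp(-s)*(1/s**3 + 1/s**2 + 1/(2*s))
            - sp.exp(-sp.pi*s/2)*(1/s**3 + sp.pi/(2*s**2) + sp.pi**2/(8*s))
            - sp.exp(-sp.pi*s/2)/(s**2 + 1))
assert sp.simplify(F - expected) == 0
```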

2. Find the inverse transform f(t) of F(s) = e^{−s}/(s² + π²) + e^{−2s}/(s² + π²) + e^{−3s}/(s + 2)².

Solution:

First, consider the transforms without the exponential factors in the numerator:

ℒ⁻¹(1/(s² + π²)) = (sin πt)/π,  ℒ⁻¹(1/(s + 2)²) = te^{−2t}

By the second shift theorem,

f(t) = (1/π) sin(π(t − 1)) u(t − 1) + (1/π) sin(π(t − 2)) u(t − 2) + (t − 3)e^{−2(t−3)} u(t − 3)

3. Response of an RC-circuit to a single rectangular wave

Find the current in the RC-circuit if a single rectangular wave with voltage V₀ is applied. The circuit is assumed to be quiescent before the wave is applied.

Solution:

The input is V₀(u(t − a) − u(t − b)), so the circuit equation is

R i(t) + (1/C) ∫₀^t i(τ) dτ = V₀(u(t − a) − u(t − b))

Laplace transform:

R I(s) + I(s)/(sC) = (V₀/s)(e^{−as} − e^{−bs})

I(s)(R + 1/(sC)) = (V₀/s)(e^{−as} − e^{−bs})

I(s) = F(s)(e^{−as} − e^{−bs}), where F(s) = (V₀/R) · 1/(s + 1/(RC))

i(t) = ℒ⁻¹{I(s)} = ℒ⁻¹{F(s)e^{−as}} − ℒ⁻¹{F(s)e^{−bs}}

= (V₀/R)(e^{−(t−a)/(RC)} u(t − a) − e^{−(t−b)/(RC)} u(t − b))

Therefore,

i(t) = 0 if t < a;  K₁e^{−t/(RC)} if a < t < b;  (K₁ − K₂)e^{−t/(RC)} if t > b

where K₁ = V₀e^{a/(RC)}/R and K₂ = V₀e^{b/(RC)}/R. (Note the decaying exponential e^{−t/(RC)}: the current dies out after the wave has passed.)

LAPLACE TRANSFORM SOLUTIONS
FOR DIFFERENTIAL EQUATIONS
WEEK 9: LAPLACE TRANSFORM SOLUTIONS FOR DIFFERENTIAL EQUATIONS
9.1 DIFFERENTIATION OF TRANSFORMS

If F(s) = ℒ{f(t)} = ∫₀^∞ f(t)e^{−st} dt, then

F′(s) = −∫₀^∞ t f(t) e^{−st} dt = ℒ{−t f(t)}

Proof:

F(s) = ∫₀^∞ f(t)e^{−st} dt

F′(s) = (d/ds) ∫₀^∞ f(t)e^{−st} dt

= ∫₀^∞ (d/ds)(e^{−st}) f(t) dt

= ∫₀^∞ −t e^{−st} f(t) dt

= ℒ{−t f(t)}

In other words, if F(s) = ℒ{f(t)} and n = 1, 2, 3, …, then

ℒ{tⁿ f(t)} = (−1)ⁿ (dⁿ/dsⁿ) F(s)

Examples:

1. Given ℒ{sin(ωt)} = ω/(s² + ω²) = F(s), then

ℒ{−t sin(ωt)} = F′(s) = −2ωs/(s² + ω²)²

2. Find the inverse transform of ln(1 + ω²/s²).

Solution:

Let F(s) = ln(1 + ω²/s²) = ln((s² + ω²)/s²) = ln(s² + ω²) − ln(s²)

F′(s) = (d/ds)(ln(s² + ω²) − ln(s²)) = 2s/(s² + ω²) − 2/s = ℒ{−t f(t)}

Taking the inverse transform,

ℒ⁻¹{F′(s)} = −t f(t) = ℒ⁻¹(2s/(s² + ω²) − 2/s)

−t f(t) = 2 cos(ωt) − 2

ℒ⁻¹{ln(1 + ω²/s²)} = f(t) = (2/t)(1 − cos(ωt))
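Both examples reduce to identities between ordinary transforms and s-derivatives, which can be checked mechanically (sketch assuming sympy):

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# Example 1: L{t*sin(wt)} should equal -F'(s) with F(s) = w/(s^2 + w^2)
F = sp.laplace_transform(sp.sin(w*t), t, s, noconds=True)
lhs = sp.laplace_transform(t*sp.sin(w*t), t, s, noconds=True)
assert sp.simplify(lhs + sp.diff(F, s)) == 0

# Example 2 consistency: with G(s) = ln(1 + w^2/s^2) and f(t) = (2/t)(1 - cos wt),
# the relation G'(s) = L{-t f(t)} reduces to L{2(1 - cos wt)} = -G'(s)
G = sp.log(1 + w**2/s**2)
rhs = sp.laplace_transform(2*(1 - sp.cos(w*t)), t, s, noconds=True)
assert sp.simplify(rhs + sp.diff(G, s)) == 0
```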

9.2 INTEGRATION OF TRANSFORMS

If f(t) satisfies the assumptions of the existence theorem and the limit of 𝑓(𝑡)/𝑡 exists when t
approaches 0 from the right, then

ℒ⁻¹(∫ₛ^∞ F(s̃) ds̃) = f(t)/t, equivalently ℒ{f(t)/t} = ∫ₛ^∞ F(s̃) ds̃

Proof:

∫ₛ^∞ F(s̃) ds̃ = ∫ₛ^∞ (∫₀^∞ e^{−s̃t} f(t) dt) ds̃

= ∫₀^∞ (∫ₛ^∞ e^{−s̃t} ds̃) f(t) dt

= ∫₀^∞ (−(1/t) e^{−s̃t} |ₛ^∞) f(t) dt

= ∫₀^∞ (1/t) e^{−st} f(t) dt

= ℒ{f(t)/t}
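The theorem can be exercised on the classic pair f(t) = sin t, F(s) = 1/(s² + 1), for which ∫ₛ^∞ F(s̃) ds̃ = π/2 − arctan s = ℒ{(sin t)/t}. The elementary integral and two consistency checks are computed below (sketch assuming sympy):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
st = sp.symbols('stilde', positive=True)

# Integrate F(s~) = 1/(s~^2 + 1), the transform of sin t, from s to infinity
G = sp.integrate(1/(st**2 + 1), (st, s, sp.oo))   # pi/2 - atan(s)

# Consistency: G'(s) = -F(s) since G(s) is an integral from s to oo,
# and G must vanish as s -> oo, as every Laplace transform does
assert sp.simplify(sp.diff(G, s) + 1/(s**2 + 1)) == 0
assert sp.limit(G, s, sp.oo) == 0
```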

9.3 DIRAC DELTA FUNCTION

Mechanical systems are often acted on by an external force (or electromotive force in an electrical
circuit) of large magnitude that acts only for a very short period of time. We can model such
phenomena and problems by “Dirac delta function,” and solve them very effectively by the Laplace
transform.

To model situations of this type, we consider the function

f_k(t − a) = 1/k if a ≤ t ≤ a + k, and 0 otherwise

The limit of f_k(t − a) as k → 0 is denoted by δ(t − a), that is,

δ(t − a) = lim_{k→0} f_k(t − a)

δ(t − a) is called the Dirac delta function or the unit impulse function.

The Laplace transform of the Dirac delta function is given by

ℒ{𝛿(𝑡 − 𝑎)} = 𝑒 −𝑎𝑠 for 𝑎 > 0

Example:

Solve y″ + y = 4δ(t − 2π) subject to y(0) = 1, y′(0) = 0.

Solution:

The Laplace transform of the differential equation is

s²Y(s) − sy(0) − y′(0) + Y(s) = 4e^{−2πs}

(s² + 1)Y(s) − s = 4e^{−2πs}

Y(s) = (4e^{−2πs} + s)/(s² + 1)

y(t) = 4 sin(t − 2π) u(t − 2π) + cos t

that is, y(t) = cos t for 0 ≤ t < 2π and y(t) = cos t + 4 sin t for t ≥ 2π.
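The impulse response can be sanity-checked in the time domain: each piece solves the homogeneous equation y″ + y = 0, y is continuous at t = 2π, and y′ jumps there by exactly the impulse strength 4 (sketch assuming sympy):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

y_before = sp.cos(t)                  # 0 <= t < 2*pi
y_after = sp.cos(t) + 4*sp.sin(t)     # t >= 2*pi

# Both pieces satisfy y'' + y = 0 (no forcing away from the impulse)
assert sp.simplify(sp.diff(y_before, t, 2) + y_before) == 0
assert sp.simplify(sp.diff(y_after, t, 2) + y_after) == 0

# Continuity of y, and a jump of 4 in y' at t = 2*pi
assert (y_after - y_before).subs(t, 2*sp.pi) == 0
assert sp.diff(y_after - y_before, t).subs(t, 2*sp.pi) == 4
```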

9.4 CONVOLUTION

The convolution of two functions f(t) and g(t) is denoted by f ∗ g and defined by the integral

(f ∗ g)(t) = ∫₀^t f(τ) g(t − τ) dτ

Its Laplace transform is given by the convolution theorem: ℒ{(f ∗ g)(t)} = F(s)G(s).

Proof:

F(s)G(s) = (∫₀^∞ e^{−sτ} f(τ) dτ)(∫₀^∞ e^{−sσ} g(σ) dσ)

= ∫₀^∞ (∫₀^∞ e^{−s(τ+σ)} g(σ) dσ) f(τ) dτ

= ∫₀^∞ (∫_τ^∞ e^{−st} g(t − τ) dt) f(τ) dτ  [substituting t = σ + τ]

= ∫₀^∞ e^{−st} (∫₀^t f(τ) g(t − τ) dτ) dt

= ℒ{f ∗ g}

9.4.1 PROPERTIES OF CONVOLUTION

i. Commutative law: 𝑓 ∗ 𝑔 = 𝑔 ∗ 𝑓

ii. Distributive law: 𝑓 ∗ (𝑔1 + 𝑔2 ) = 𝑓 ∗ 𝑔1 + 𝑓 ∗ 𝑔2

iii. Associative law: (𝑓 ∗ 𝑔) ∗ 𝑣 = 𝑓 ∗ (𝑔 ∗ 𝑣)

iv. 𝑓 ∗ 0 = 0 ∗ 𝑓 = 0

v. 𝑓 ∗ 1 ≠ 𝑓

9.4.2 INTEGRAL EQUATIONS

Convolution also helps in solving certain integral equations, that is, equations in which the unknown
function y(t) appears in an integral.

Example: Volterra integral equation of the second kind

Solve y(t) − ∫₀^t y(τ) sin(t − τ) dτ = t.

Solution:

y − y ∗ sin t = t

Applying the Laplace transform and the convolution theorem, we obtain

Y(s) − Y(s)/(s² + 1) = 1/s²

Y(s) · s²/(s² + 1) = 1/s²

Y(s) = (s² + 1)/s⁴ = 1/s² + 1/s⁴

Therefore, y(t) = t + t³/6.
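Both the transform-side algebra and the final answer can be verified, the latter by substituting y(t) back into the integral equation (sketch assuming sympy):

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# Transform side: Y(s)(1 - 1/(s^2 + 1)) = 1/s^2
Y = (1/s**2)/(1 - 1/(s**2 + 1))
assert sp.simplify(Y - (1/s**2 + 1/s**4)) == 0

# Time side: y(t) = t + t^3/6 satisfies the original integral equation
y = lambda x: x + x**3/6
residual = y(t) - sp.integrate(y(tau)*sp.sin(t - tau), (tau, 0, t)) - t
assert sp.simplify(residual) == 0
```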

9.5 SYSTEM OF ODEs

We consider a first-order linear system with constant coefficients:

𝑦1′ = 𝑎11 𝑦1 + 𝑎12 𝑦2 + 𝑔1

𝑦2′ = 𝑎21 𝑦1 + 𝑎22 𝑦2 + 𝑔2

If we transform it,

𝑠𝑌1 − 𝑦1 (0) = 𝑎11 𝑌1 + 𝑎12 𝑌2 + 𝐺1

𝑠𝑌2 − 𝑦2 (0) = 𝑎21 𝑌1 + 𝑎22 𝑌2 + 𝐺2

By collecting the Y₁- and Y₂-terms we have

(a₁₁ − s)Y₁ + a₁₂Y₂ = −y₁(0) − G₁(s)

a₂₁Y₁ + (a₂₂ − s)Y₂ = −y₂(0) − G₂(s)

By solving this system algebraically for Y1(s), Y2(s) and taking the inverse transform we obtain the
solution y1 and y2 of the system.

Steps of the Laplace transform method, illustrated on y″ − y = t, y(0) = 1, y′(0) = 1:

1. Given problem in t-space: y″ − y = t, y(0) = 1, y′(0) = 1.
2. Laplace transform → subsidiary equation in s-space: (s² − 1)Y = s + 1 + 1/s².
3. Solve the subsidiary equation algebraically: Y = 1/(s − 1) + 1/(s² − 1) − 1/s².
4. Inverse Laplace transform → solution of the given problem: y(t) = eᵗ + sinh t − t.

Examples:

1. Damped Forced Vibrations

Solve the initial value problem for a damped mass–spring system acted upon by a sinusoidal force for some time interval:

y″ + 2y′ + 2y = r(t),  r(t) = 10 sin 2t if 0 < t < π and 0 if t > π;

y(0) = 1, y′(0) = −5

Solution:

y″ + 2y′ + 2y = 10 sin 2t (u(t) − u(t − π))

Using Laplace transform,

(s²Y − s + 5) + 2(sY − 1) + 2Y = (20/(s² + 4))(1 − e^{−πs})

(s² + 2s + 2)Y = s − 3 + (20/(s² + 4))(1 − e^{−πs})

Y = (s − 3)/(s² + 2s + 2) + 20/((s² + 2s + 2)(s² + 4)) − 20e^{−πs}/((s² + 2s + 2)(s² + 4))

The three terms are labelled part (a), part (b) and part (b1) respectively.

Applying the inverse Laplace transform:

Part (a): ℒ⁻¹((s − 3)/(s² + 2s + 2)) = ℒ⁻¹(((s + 1) − 4)/((s + 1)² + 1))

= e^{−t}(cos t − 4 sin t)

Part (b): Partial fraction expansion: 20/((s² + 2s + 2)(s² + 4)) = (As + B)/((s + 1)² + 1) + (Ms + N)/(s² + 4)

20 = (As + B)(s² + 4) + (Ms + N)(s² + 2s + 2)

20 = (A + M)s³ + (B + 2M + N)s² + (4A + 2M + 2N)s + (4B + 2N)

Equating the coefficients of each power of s on both sides gives the four equations:

A + M = 0;  B + 2M + N = 0;

4A + 2M + 2N = 0;  4B + 2N = 20;

We determine A = 2, B = 6, M = −2, N = −2, so

20/((s² + 2s + 2)(s² + 4)) = (2s + 6)/((s + 1)² + 1) − (2s + 2)/(s² + 4) = (2(s + 1) + 4)/((s + 1)² + 1) − (2s + 2)/(s² + 4)

ℒ⁻¹{20/((s² + 2s + 2)(s² + 4))} = e^{−t}(2 cos t + 4 sin t) − 2 cos 2t − sin 2t

Part (b1): From the second shift theorem, we have

ℒ⁻¹{20e^{−πs}/((s² + 2s + 2)(s² + 4))} = [e^{−(t−π)}(2 cos(t − π) + 4 sin(t − π)) − 2 cos 2(t − π) − sin 2(t − π)] u(t − π)

Using cos(t − π) = −cos t and sin(t − π) = −sin t,

= [e^{−(t−π)}(−2 cos t − 4 sin t) − 2 cos 2t − sin 2t] u(t − π)

Therefore, the solution is

y(t) = e^{−t}(cos t − 4 sin t) + e^{−t}(2 cos t + 4 sin t) − 2 cos 2t − sin 2t
     = 3e^{−t} cos t − 2 cos 2t − sin 2t   if 0 < t < π

y(t) = 3e^{−t} cos t − 2 cos 2t − sin 2t − [e^{−(t−π)}(−2 cos t − 4 sin t) − 2 cos 2t − sin 2t]
     = e^{−t}((3 + 2e^π) cos t + 4e^π sin t)   if t > π

(the cos 2t and sin 2t terms cancel for t > π, as they must once the forcing is switched off).
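Both pieces of the solution can be substituted back into the differential equation, along with the initial conditions and continuity at t = π (sketch assuming sympy):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

y1 = 3*sp.exp(-t)*sp.cos(t) - 2*sp.cos(2*t) - sp.sin(2*t)      # 0 < t < pi
y2 = sp.exp(-t)*((3 + 2*sp.exp(sp.pi))*sp.cos(t)
                 + 4*sp.exp(sp.pi)*sp.sin(t))                  # t > pi

def residual(y, r):
    # y'' + 2y' + 2y - r(t)
    return sp.simplify(sp.diff(y, t, 2) + 2*sp.diff(y, t) + 2*y - r)

assert residual(y1, 10*sp.sin(2*t)) == 0   # forced phase
assert residual(y2, 0) == 0                # free phase

# Initial conditions, and continuity of y at t = pi
assert y1.subs(t, 0) == 1
assert sp.diff(y1, t).subs(t, 0) == -5
assert sp.simplify((y1 - y2).subs(t, sp.pi)) == 0
```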

2. Mixing Problem Involving Two Tanks

Tank T1 initially contains 100 gal of pure water. Tank T2 initially contains 100 gal of water in which
150 lb of salt are dissolved. The inflow into T1 is 2 gal/min from T2 and 6 gal/min containing 6 lb of
salt from the outside. The inflow into T2 is 8 gal/min from T1. The outflow from T2 is 2 + 6 = 8 gal/min,
as shown in the figure below. The mixtures are kept uniform by stirring. Find and plot the salt
contents y1(t) and y2(t) in T1 and T2, respectively.

Solution:

The rate of change of the salt content in each tank is obtained from

Rate of change = inflow/min − outflow/min

Thus,

y₁′ = −(8/100)y₁ + (2/100)y₂ + 6

y₂′ = (8/100)y₁ − (8/100)y₂

(T2 loses 2 + 6 = 8 gal/min of its mixture), with the initial conditions y₁(0) = 0, y₂(0) = 150.

Using the Laplace transform,

sY₁ = −0.08Y₁ + 0.02Y₂ + 6/s

sY₂ − 150 = 0.08Y₁ − 0.08Y₂

Collecting terms and writing the two equations in matrix form:

[−0.08 − s      0.02    ] [Y₁]   [−6/s]
[  0.08      −0.08 − s  ] [Y₂] = [−150]
Solving for Y₁ and Y₂ algebraically by Cramer’s rule, with determinant (s + 0.08)² − 0.0016 = (s + 0.12)(s + 0.04):

Y₁ = [(−6/s)(−0.08 − s) + 0.02 · 150] / [(s + 0.12)(s + 0.04)]
   = (9s + 0.48)/(s(s + 0.12)(s + 0.04))
   = 100/s − 62.5/(s + 0.12) − 37.5/(s + 0.04)

Y₂ = [(−0.08 − s)(−150) + 0.08 · 6/s] / [(s + 0.12)(s + 0.04)]
   = (150s² + 12s + 0.48)/(s(s + 0.12)(s + 0.04))
   = 100/s + 125/(s + 0.12) − 75/(s + 0.04)

By taking inverse transform,

𝑦1 = 100 − 62.5𝑒 −0.12𝑡 − 37.5𝑒 −0.04𝑡

𝑦2 = 100 + 125𝑒 −0.12𝑡 − 75𝑒 −0.04𝑡
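The stated solutions can be verified against the balance laws y₁′ = −0.08y₁ + 0.02y₂ + 6 and y₂′ = 0.08y₁ − 0.08y₂ (the outflow coefficient on y₂ is 8/100, since T2 loses 8 gal/min in total). A sketch assuming sympy, using exact rationals:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r = sp.Rational

y1 = 100 - r(125, 2)*sp.exp(-r(3, 25)*t) - r(75, 2)*sp.exp(-r(1, 25)*t)
y2 = 100 + 125*sp.exp(-r(3, 25)*t) - 75*sp.exp(-r(1, 25)*t)

# Balance laws: y1' = -0.08*y1 + 0.02*y2 + 6 and y2' = 0.08*y1 - 0.08*y2
assert sp.simplify(sp.diff(y1, t) + r(2, 25)*y1 - r(1, 50)*y2 - 6) == 0
assert sp.simplify(sp.diff(y2, t) - r(2, 25)*y1 + r(2, 25)*y2) == 0

# Initial salt contents and the 100 lb long-run limit in each tank
assert y1.subs(t, 0) == 0 and y2.subs(t, 0) == 150
assert sp.limit(y1, t, sp.oo) == 100 and sp.limit(y2, t, sp.oo) == 100
```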

3. Electrical Network

Find the currents i₁(t) and i₂(t) in the network, with L and R measured in terms of the usual units, v(t) = 100 volts if 0 ≤ t ≤ 0.5 sec and 0 thereafter, and i₁(0) = i₂(0) = 0.

Solution:

The model of the network is obtained from Kirchhoff’s Voltage Law:

For the lower circuit:

0.8𝑖1′ + 1(𝑖1 − 𝑖2 ) + 1.4𝑖1 − 100[1 − 𝑢(𝑡 − 0.5)] = 0

For the upper circuit:

1𝑖2′ + 1(𝑖2 − 𝑖1 ) = 0

Applying the Laplace transform,

0.8sI₁ + (I₁ − I₂) + 1.4I₁ = 100(1/s − e^{−0.5s}/s)

sI₂ + (I₂ − I₁) = 0

Solving algebraically for I₁ and I₂:

I₁ = (500/(7s) − 125/(3(s + 0.5)) − 625/(21(s + 3.5)))(1 − e^{−0.5s})

I₂ = (500/(7s) − 250/(3(s + 0.5)) + 250/(21(s + 3.5)))(1 − e^{−0.5s})

The inverse transform for 0 ≤ t ≤ 0.5:

i₁(t) = 500/7 − (125/3)e^{−0.5t} − (625/21)e^{−3.5t}

i₂(t) = 500/7 − (250/3)e^{−0.5t} + (250/21)e^{−3.5t}

For t > 0.5 the response is the above response minus its copy delayed by 0.5 (second shift theorem):

i₁(t) = −(125/3)(1 − e^{0.25})e^{−0.5t} − (625/21)(1 − e^{1.75})e^{−3.5t}

i₂(t) = −(250/3)(1 − e^{0.25})e^{−0.5t} + (250/21)(1 − e^{1.75})e^{−3.5t}

Table of Laplace Transforms

f(t) → F(s):

1 → 1/s
t → 1/s²
tⁿ (n = 1, 2, 3, …) → n!/s^{n+1}
e^{at} → 1/(s − a)
te^{at} → 1/(s − a)²
tⁿe^{at} (n = 1, 2, 3, …) → n!/(s − a)^{n+1}
sin ωt → ω/(s² + ω²)
cos ωt → s/(s² + ω²)
e^{at} sin ωt → ω/((s − a)² + ω²)
e^{at} cos ωt → (s − a)/((s − a)² + ω²)
sinh ωt → ω/(s² − ω²)
cosh ωt → s/(s² − ω²)
δ(t) → 1
δ(t − a) → e^{−as}
u(t − a) → e^{−as}/s

af(t) + bg(t) → aF(s) + bG(s)
f(t − a)u(t − a) → e^{−as}F(s)
e^{at}f(t) → F(s − a)
df/dt → sF(s) − f(0)
d²f/dt² → s²F(s) − sf(0) − f′(0)
dⁿf/dtⁿ → sⁿF(s) − s^{n−1}f(0) − ⋯ − f^{(n−1)}(0)
∫₀^t f(τ) dτ → F(s)/s
t f(t) → −dF(s)/ds
f(t)/t → ∫ₛ^∞ F(s̃) ds̃
f(t) ∗ g(t) → F(s)G(s)
FROBENIUS METHOD
WEEK 10: FROBENIUS METHOD
10.1 SOLUTIONS ABOUT SINGULAR POINTS

The power series method for solving linear differential equations with variable coefficients no longer works when the equation is to be solved about a singular point. Yet some features of the solutions that matter most for applications are largely determined by their behavior near the singular points. The Frobenius method is usually used to solve a differential equation about a regular singular point. This method does not always yield two infinite series solutions; when only one solution is found, a certain formula can be used to obtain the second solution.

Reduction of Order
The “reduction of order method” is a method for converting any linear differential equation to
another linear differential equation of lower order, and then constructing the general solution to the
original differential equation using the general solution to the lower-order equation.

Reduction of Order for Homogeneous Linear Second-Order Equations


This method is for finding a general solution to some homogeneous linear second-order differential
equation
𝑎𝑦 ′′ + 𝑏𝑦 ′ + 𝑐𝑦 = 0
where a, b, and c are known functions, with a(x) never zero on the interval of interest. Assume that one nontrivial particular solution y₁(x) of this generic differential equation is already known.
Now, the details in using the reduction of order method to solve the above:
Step 1: Let
𝑦 = 𝑦1 𝑢
Then, using the product rule, derive y’ and y’’:
𝑦 ′ = (𝑦1 𝑢)′ = 𝑦1′ 𝑢 + 𝑦1 𝑢′
and
𝑦 ′′ = (𝑦′)′ = (𝑦1′ 𝑢 + 𝑦1 𝑢′)′
= (𝑦1′ 𝑢)′ + (𝑦1 𝑢′)′
= (𝑦1′′ 𝑢 + 𝑦1′ 𝑢′) + (𝑦1′ 𝑢′ + 𝑦1 𝑢′′)
= 𝑦1′′ 𝑢 + 2𝑦1′ 𝑢′ + 𝑦1 𝑢′′

Step 2: Plug the formulas just computed for y , y′ and y′′ into the differential equation, group
together the coefficients for u and each of its derivatives, and simplify as far as possible.

0 = ay″ + by′ + cy
= a[y₁″u + 2y₁′u′ + y₁u″] + b[y₁′u + y₁u′] + c[y₁u]
= ay₁″u + 2ay₁′u′ + ay₁u″ + by₁′u + by₁u′ + cy₁u
= ay₁u″ + [2ay₁′ + by₁]u′ + [ay₁″ + by₁′ + cy₁]u

The differential equation becomes

Au″ + Bu′ + Cu = 0

where

A = ay₁
B = 2ay₁′ + by₁
C = ay₁″ + by₁′ + cy₁
But remember 𝑦1 is a solution to the homogeneous equation
𝑎𝑦 ′′ + 𝑏𝑦 ′ + 𝑐𝑦 = 0
Consequently,
𝐶 = 𝑎𝑦1′′ + 𝑏𝑦1′ + 𝑐𝑦1 = 0
and the differential equation for u automatically reduces to
𝐴𝑢′′ + 𝐵𝑢′ = 0
The u term always drops out.
Step 3: Now find the general solution to the second-order differential equation just obtained for u
𝐴𝑢′′ + 𝐵𝑢′ = 0
via the substitution method:
(a) Let u′ = v. Thus,

u″ = v′ = dv/dx

This converts the second-order differential equation for u into the first-order differential equation for v:

A dv/dx + Bv = 0
Note: This first-order differential equation will be both linear and separable.
(b) Find the general solution v(x) to this first-order equation.
(c) Using the formula just found for v , integrate the substitution formula u′ = v to
obtain the formula for u

𝑢(𝑥) = ∫ 𝑣(𝑥)𝑑𝑥

Don’t forget all the arbitrary constants.

Step 4: Finally, plug the formula just obtained for u(x) into the first substitution 𝑦 = 𝑦1 𝑢 used to
convert the original differential equation for y to a differential equation for u. The resulting
formula for y(x) will be a general solution for that original differential equation.

To illustrate the method, use the differential equation


𝑥 2 𝑦 ′′ − 3𝑥𝑦 ′ + 4𝑦 = 0
Note that the first coefficient, x², vanishes when x = 0. So x = 0 ought not to be in any interval of interest for this equation, and solutions should be sought over the intervals (0, ∞) and (−∞, 0). Before starting the reduction of order method, one nontrivial solution y₁ of the differential equation is needed. Ways of finding that first solution will be discussed in later chapters. For now let us just observe that if
𝑦1 (𝑥) = 𝑥 2
then
x²y₁″ − 3xy₁′ + 4y₁ = x² (d²/dx²)[x²] − 3x (d/dx)[x²] + 4[x²]
= x²[2 · 1] − 3x[2x] + 4x²
= x²[2 − (3 · 2) + 4] = 0
Thus, one solution to the above differential equation is 𝑦1 (𝑥) = 𝑥 2
Step 1:
𝑦 = 𝑦1 𝑢 = 𝑥 2 𝑢
The derivatives of y are:
𝑦 ′ = (𝑥 2 𝑢)′ = 2𝑥𝑢 + 𝑥 2 𝑢′
and
𝑦 ′′ = (𝑦 ′ )′ = (2𝑥𝑢 + 𝑥 2 𝑢′)′
= (2𝑥𝑢)′ + (𝑥 2 𝑢′)′
= (2𝑢 + 2𝑥𝑢′) + (2𝑥𝑢′ + 𝑥 2 𝑢′′)
= 2𝑢 + 4𝑥𝑢′ + 𝑥 2 𝑢′′

Step 2:
0 = 𝑥 2 𝑦 ′′ − 3𝑥𝑦 ′ + 4𝑦
= 𝑥 2 [2𝑢 + 4𝑥𝑢′ + 𝑥 2 𝑢′′] − 3𝑥[2𝑥𝑢 + 𝑥 2 𝑢′ ] + 4[𝑥 2 𝑢]
= 2𝑥 2 𝑢 + 4𝑥 3 𝑢′ + 𝑥 4 𝑢′′ − 6𝑥 2 𝑢 − 3𝑥 3 𝑢′ + 4𝑥 2 𝑢
= 𝑥 4 𝑢′′ + [4𝑥 3 − 3𝑥 3 ]𝑢′ + [2𝑥 2 − 6𝑥 2 + 4𝑥 2 ]𝑢
= 𝑥 4 𝑢′′ + 𝑥 3 𝑢′ + 0 ∙ 𝑢
So, the resulting differential equation for u is
𝑥 4 𝑢′′ + 𝑥 3 𝑢′ = 0

Further simplify by dividing by x⁴:

u″ + (1/x)u′ = 0

Step 3: Let v = u′ and v′ = u″. Hence, the above differential equation becomes

dv/dx + (1/x)v = 0

Equivalently,

dv/dx = −(1/x)v

This is a separable first-order differential equation.
(1/v)(dv/dx) = −1/x

∫ (1/v) dv = ∫ −(1/x) dx

ln|v| = −ln|x| + C₀

v = ±e^{−ln|x| + C₀}

v = ±x⁻¹e^{C₀} = C₁/x
Since u′ = v, then

u(x) = ∫ v(x) dx = ∫ (C₁/x) dx = C₁ ln|x| + C₂

Step 4: Here, 𝑦1 (𝑥) = 𝑥 2


𝑦 = 𝑦1 𝑢 = 𝑥 2 [𝐶1 𝑙𝑛|𝑥| + 𝐶2 ]
= 𝐶1 𝑥 2 𝑙𝑛|𝑥| + 𝐶2 𝑥 2
This is the general solution to the differential equation 𝑥 2 𝑦 ′′ − 3𝑥𝑦 ′ + 4𝑦 = 0. The general solution
obtained can be viewed as a linear combination of the two functions
𝑦1 (𝑥) = 𝑥 2 and 𝑦2 = 𝑥 2 𝑙𝑛|𝑥|
Since the C1 and C2 in the above formula for y(x) are arbitrary constants, and y2 is given by that
formula for y with C1 = 1 and C2 = 0, it must be that this y2 is another particular solution to our
original homogeneous linear differential equation. What’s more, it is clearly not a constant multiple
of y1.
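As a sanity check, the general solution just obtained can be verified symbolically. The sketch below is not part of the original example and assumes sympy is available.

```python
# Verify that y = C1*x^2*ln(x) + C2*x^2 satisfies x^2 y'' - 3x y' + 4y = 0.
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2', positive=True)
y = C1 * x**2 * sp.log(x) + C2 * x**2   # general solution from the example
residual = x**2 * sp.diff(y, x, 2) - 3 * x * sp.diff(y, x) + 4 * y
assert sp.simplify(residual) == 0       # satisfied identically
```

The same check works for any candidate solution, which makes it a cheap guard against algebra slips in reduction of order.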

The two differential equations


(a) y″ + xy = 0    (b) xy″ + y = 0    (7)

are similar only in that they are both examples of simple linear second-order differential equations
with variable coefficients. For (7a), x = 0 is an ordinary point; hence, there is no problem in finding
two distinct power series solutions centered at that point. In contrast, x = 0 is a singular point for (7b),
and finding two infinite series solutions about that point becomes a more difficult task.

Types of Singular Points


A differential equation having a singular point at 0 ordinarily will not have power series solutions of
the form

𝑦(𝑥) = ∑ 𝑐𝑛 𝑥 𝑛

So the straightforward method of power series fails in this case.

A singular point x0 of a linear differential equation


𝐴(𝑥)𝑦 ′′ + 𝐵(𝑥)𝑦 ′ + 𝐶(𝑥)𝑦 = 0
is further classified as either regular or irregular. The classification depends on the functions P and Q
in the standard form
𝑦 ′′ + 𝑃(𝑥)𝑦 ′ + 𝑄(𝑥)𝑦 = 0

Definition (Regular or Irregular Singular Points)


A singular point x0 is said to be a regular singular point of the differential equation (8) if the
functions
𝑝(𝑥) = (𝑥 − 𝑥0 )𝑃(𝑥) and 𝑞(𝑥) = (𝑥 − 𝑥0 )2 𝑄(𝑥)
are both analytic at x₀. A singular point that is not regular is said to be an irregular singular point of
the equation.

Quick Visual Check (Regular or Irregular Singular Points)


If 𝑥 − 𝑥0 appears at most to the first power in the denominator of P(x) and at most to the second
power in the denominator of Q(x), then 𝑥 − 𝑥0 is a regular singular point.

Example:
Find the singular point(s) for the differential equation
(𝑥 2 − 4)2 𝑦 ′′ + 3(𝑥 − 2)𝑦 ′ + 5𝑦 = 0
Answer
Divide the equation by
(x² − 4)² = (x − 2)²(x + 2)²
and reduce the coefficients to lowest terms, producing

P(x) = 3 / [(x − 2)(x + 2)²]   and   Q(x) = 5 / [(x − 2)²(x + 2)²]

Test P(x) and Q(x)


(i) For x = 2 to be a regular point, the factor x – 2 can appear at most to the first power in the
denominator of P(x) and at most to the second power in the denominator of Q(x). A check
of the denominators of P(x) and Q(x) shows that both these conditions are satisfied, so x =
2 is a regular singular point. Alternatively, the same conclusion is made by noting that
both rational functions
p(x) = (x − 2)P(x) = 3 / (x + 2)²   and   q(x) = (x − 2)²Q(x) = 5 / (x + 2)²
are analytic at x =2.

(ii) Now since the factor x - (-2) = x + 2 appears to the second power in the denominator of P(x),
we can conclude immediately that x = -2 is an irregular singular point of the equation. This
also follows from the fact that
3
𝑝(𝑥) = (𝑥 + 2)𝑃(𝑥) =
(𝑥 − 2)(𝑥 + 2)
is not analytic at x = -2.
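The "quick visual check" above can also be carried out mechanically. The sketch below (not from the notes; sympy assumed) tests analyticity of p and q at a candidate point by checking that their limits there are finite, which suffices for rational functions like these.

```python
# Classify the singular points of (x^2 - 4)^2 y'' + 3(x - 2) y' + 5y = 0.
import sympy as sp

x = sp.symbols('x')
P = 3 / ((x - 2) * (x + 2)**2)
Q = 5 / ((x - 2)**2 * (x + 2)**2)

def is_regular(x0):
    p = sp.simplify((x - x0) * P)          # p(x) = (x - x0) P(x)
    q = sp.simplify((x - x0)**2 * Q)       # q(x) = (x - x0)^2 Q(x)
    # for rational functions, analytic at x0 means the limit there is finite
    return sp.limit(p, x, x0).is_finite and sp.limit(q, x, x0).is_finite

assert is_regular(2) is True       # x = 2: regular singular point
assert is_regular(-2) is False     # x = -2: irregular singular point
```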

10.2 FROBENIUS METHOD

If x = x0 is a singular point of the differential equation (8), then there exists at least one solution of
the form
y(x) = (x − x₀)^r ∑_{n=0}^∞ aₙ(x − x₀)ⁿ = ∑_{n=0}^∞ aₙ(x − x₀)^(n+r)

where the number r and the aₙ's are constants to be determined. The series will converge at least on
some interval 0 < x − x₀ < R.

An Introduction to the Method of Frobenius


Before actually starting the method, there are two “pre-steps”:
Pre-step 1: Choose a value for x₀. If conditions are given for y(x) at some point, then use that
point for x₀. Otherwise, choose x₀ as convenient, usually x₀ = 0.
Pre-step 2: Get the differential equation into the form
𝐴(𝑥)𝑦 ′′ + 𝐵(𝑥)𝑦 ′ + 𝐶(𝑥)𝑦 = 0
where A, B, and C are polynomials.

Now for the basic method of Frobenius:
Step 1: (a) Start by assuming a solution of the form

𝑦 = 𝑦(𝑥) = (𝑥 − 𝑥0 )𝑟 ∑ 𝑎𝑘 (𝑥 − 𝑥0 )𝑘
𝑘=0

where a₀ is an arbitrary constant. Since it is arbitrary, we can and will assume a₀ ≠ 0
in the following computations.
(b) Then simplify the formula for the following computations by bringing the
(𝑥 − 𝑥0 )𝑟 factor into the summation,

𝑦 = 𝑦(𝑥) = ∑ 𝑎𝑘 (𝑥 − 𝑥0 )𝑘+𝑟
𝑘=0

(c) And then compute the corresponding modified power series for y′ and y′′ from the
assumed series for y by differentiating “term-by-term”.
Step 2: Plug these series for y , y′ , and y′′ back into the differential equation, “multiply things out”,
and divide out the (𝑥 − 𝑥0 )𝑟 to get the left side of your equation in the form of the sum of
a few power series.
Some Notes:
i. Absorb any x's in A, B and C (of the differential equation) into the series.
ii. Dividing out the (x − x₀)^r isn't necessary, but it simplifies the expressions slightly
and reduces the chances of silly errors later.
iii. You may want to turn your paper sideways for more room!
Step 3: For each series in your last equation, do a change of index so that each series looks like

∑ [something not involving 𝑥](𝑥 − 𝑥0 )𝑛


𝑛=something

Be sure to appropriately adjust the lower limit in each series.

Step 4: Convert the sum of series in your last equation into one big series. The first few terms will
probably have to be written separately. Simplify what can be simplified.
Observe that the end result of this step will be an equation of the form
some big power series = 0
This, in turn, tells us that each term of that big power series must be 0.

Step 5: The first term in the last equation just derived will be of the form
𝑎0 [formula of 𝑟](𝑥 − 𝑥0 )something
But, remember, each term in that series must be 0. So we must have

𝑎0 [formula of 𝑟] = 0
Moreover, since a₀ ≠ 0 (by assumption), the above must reduce to
formula of 𝑟 = 0
This is the indicial equation for r. It will always be a quadratic equation for r (i.e., of the form
𝛼𝑟 2 + 𝛽𝑟 + 𝛿 = 0). Solve this equation for r. You will get two solutions (sometimes called
either the exponents of the solution or the exponents of the singularity). Denote them by r2
and r1 with r2 ≤ r1

Step 6: Using r1 , the larger r just found:


(a) Plug r1 into the last series equation (and simplify, if possible). This will give you an
equation of the form

∑_{n=n₀}^∞ [nth formula of aₖ's](x − x₀)ⁿ = 0

Since each term must vanish, we have


𝑛𝑡ℎ formula of 𝑎𝑘′ 𝑠 = 0 for 𝑛0 ≤ 𝑛
(b) Solve this for
𝑎highest index = formula of n and lower indexed 𝑎𝑘 ′𝑠
A few of these equations may need to be treated separately, but you will also obtain
a relatively simple formula that holds for all indices above some fixed value. This
formula is the recursion formula for computing each coefficient an from the
previously computed coefficients.
(c) To simplify things just a little, do another change of indices so that the recursion
formula just derived is rewritten as
𝑎k = formula of k and lower indexed coefficients
Step 7: Use the recursion formula (and any corresponding formulas for the lower-order terms) to
find all the ak ’s in terms of a0 and, possibly, one other am . Look for patterns!

Step 8: Using r = r1 along with the formulas just derived for the coefficients, write out the resulting
series for y. Try to simplify it and factor out the arbitrary constant(s).

Step 9: If the indicial equation had two distinct solutions, now repeat steps 6 through 8 with the
smaller r, r2. Sometimes (but not always) this will give you a second independent solution
to the differential equation. Sometimes, also, the series formula derived in this mega-step
will include the series formula already derived.

Step 10: If the last step yielded y as an arbitrary linear combination of two different series, then
that is the general solution to the original differential equation. If the last step yielded y as
just one arbitrary constant times a series, then the general solution to the original

differential equation is the linear combination of the two series obtained at the end of
steps 8 and 9. Either way, write down the general solution (using different symbols for the
two different arbitrary constants!). If step 9 did not yield a new series solution, then at
least write down the one solution previously derived, noting that a second solution is still
needed for the general solution to the differential equation.

Last Step: See if you recognize the series as the series for some well-known function (you
probably won’t!).

The following Bessel’s equation of order ½ will be solved to illustrate the method.
d²y/dx² + (1/x) dy/dx + [1 − 1/(4x²)] y = 0
Pre-step 1: There are no initial values at any point, so we will choose x₀ as simply as possible;
namely, x₀ = 0, which is a regular singular point.
Pre-step 2: To get the given differential equation into the form desired, we multiply the equation
by 4x2. That gives us the differential equation
4x² d²y/dx² + 4x dy/dx + [4x² − 1] y = 0
Step 1: Since we’ve already decided x0 = 0, we assume
y(x) = x^r ∑_{k=0}^∞ aₖ xᵏ = ∑_{k=0}^∞ aₖ x^(k+r)

with a₀ ≠ 0. Differentiating this term-by-term, we see that


y′ = d/dx ∑_{k=0}^∞ aₖ x^(k+r) = ∑_{k=0}^∞ d/dx [aₖ x^(k+r)] = ∑_{k=0}^∞ (k + r)aₖ x^(k+r−1)

y″ = d/dx ∑_{k=0}^∞ (k + r)aₖ x^(k+r−1) = ∑_{k=0}^∞ d/dx [(k + r)aₖ x^(k+r−1)] = ∑_{k=0}^∞ (k + r)(k + r − 1)aₖ x^(k+r−2)

Step 2: Combining the above series formulas for y , y′ and y′′ with our differential equation, we get
0 = 4x² d²y/dx² + 4x dy/dx + [4x² − 1]y

= 4x² ∑_{k=0}^∞ (k + r)(k + r − 1)aₖ x^(k+r−2) + 4x ∑_{k=0}^∞ (k + r)aₖ x^(k+r−1) + [4x² − 1] ∑_{k=0}^∞ aₖ x^(k+r)

= 4x² ∑_{k=0}^∞ (k + r)(k + r − 1)aₖ x^(k+r−2) + 4x ∑_{k=0}^∞ (k + r)aₖ x^(k+r−1) + 4x² ∑_{k=0}^∞ aₖ x^(k+r) − 1 ∙ ∑_{k=0}^∞ aₖ x^(k+r)

= ∑_{k=0}^∞ (k + r)(k + r − 1)4aₖ x^(k+r) + ∑_{k=0}^∞ (k + r)4aₖ x^(k+r) + ∑_{k=0}^∞ 4aₖ x^(k+2+r) + ∑_{k=0}^∞ (−1)aₖ x^(k+r)

Dividing out the x^r from each term then yields

0 = ∑_{k=0}^∞ (k + r)(k + r − 1)4aₖ xᵏ + ∑_{k=0}^∞ (k + r)4aₖ xᵏ + ∑_{k=0}^∞ 4aₖ x^(k+2) + ∑_{k=0}^∞ (−1)aₖ xᵏ

Step 3: In all but the third series, the “change of index” is trivial, n = k . In the third series, we will
set n = k + 2 (equivalently, n − 2 = k). This means, in the third series, replacing k with n −2 ,
and replacing k = 0 with n = 0 + 2 = 2:
0 = ∑_{k=0}^∞ (k + r)(k + r − 1)4aₖ xᵏ + ∑_{k=0}^∞ (k + r)4aₖ xᵏ + ∑_{k=0}^∞ 4aₖ x^(k+2) + ∑_{k=0}^∞ (−1)aₖ xᵏ

= ∑_{n=0}^∞ (n + r)(n + r − 1)4aₙ xⁿ + ∑_{n=0}^∞ (n + r)4aₙ xⁿ + ∑_{n=2}^∞ 4aₙ₋₂ xⁿ + ∑_{n=0}^∞ (−1)aₙ xⁿ

Step 4: Since one of the series in the last equation begins with n = 2, we need to separate out the
terms corresponding to n = 0 and n = 1 in the other series before combining series:
0 = 4a₀(0 + r)(0 + r − 1)x⁰ + 4a₁(1 + r)(1 + r − 1)x¹ + ∑_{n=2}^∞ (n + r)(n + r − 1)4aₙ xⁿ

+ 4a₀(0 + r)x⁰ + 4a₁(1 + r)x¹ + ∑_{n=2}^∞ (n + r)4aₙ xⁿ + ∑_{n=2}^∞ 4aₙ₋₂ xⁿ − a₀x⁰ − a₁x¹ − ∑_{n=2}^∞ aₙ xⁿ

So our differential equation reduces to

a₀[4r² − 1]x⁰ + a₁[4r² + 8r + 3]x¹ + ∑_{n=2}^∞ {aₙ[4(n + r)² − 1] + 4aₙ₋₂} xⁿ = 0    (∗)

Observe that the end result of this step will be an equation of the form
some big power series = 0

This, in turn, tells us that each term of that big power series must be 0.

Step 5: The first term in the “big series” is the first term in the equation
𝑎0 [4𝑟 2 − 1]𝑥 0
Since this must be zero (and a₀ ≠ 0 by assumption) the indicial equation is
4𝑟 2 − 1 = 0
Thus,

r = ±√(1/4) = ±1/2

Following the convention given,


r₂ = −1/2 and r₁ = 1/2

Step 6: Letting r = r₁ = 1/2, equation (*) yields

a₀[4(1/2)² − 1]x⁰ + a₁[4(1/2)² + 8(1/2) + 3]x¹ + ∑_{n=2}^∞ {aₙ[4(n + 1/2)² − 1] + 4aₙ₋₂} xⁿ = 0

a₀ ∙ 0 ∙ x⁰ + a₁ ∙ 8 ∙ x¹ + ∑_{n=2}^∞ [aₙ(4n² + 4n + 1 − 1) + 4aₙ₋₂] xⁿ = 0

The first term vanishes (as it should since r = 1/2 satisfies the indicial equation, which came from
making the first term vanish). Doing a little more simple algebra, we see that, with r = 1/2 , equation
(*) reduces to

0 ∙ a₀x⁰ + 8a₁x¹ + ∑_{n=2}^∞ 4[n(n + 1)aₙ + aₙ₋₂] xⁿ = 0    (∗∗)

From the above series, we must have


𝑛(𝑛 + 1)𝑎𝑛 + 𝑎𝑛−2 = 0 for 𝑛 = 2, 3, 4, …
Solving for an leads to the recursion formula
aₙ = −1/[n(n + 1)] ∙ aₙ₋₂    for n = 2, 3, 4, …
Using the trivial change of index, k = n, this is
aₖ = −1/[k(k + 1)] ∙ aₖ₋₂    for k = 2, 3, 4, …

Step 7: From the first two terms in equation (**),


0𝑎0 = 0 ⟹ 𝑎0 is arbitrary
8𝑎1 = 0 ⟹ 𝑎1 = 0

Using these values and the recursion formula with k = 2, 3, 4, . . . (and looking for patterns):
a₂ = −1/[2(2 + 1)] ∙ a₀ = −a₀/(2∙3)
a₃ = −1/[3(3 + 1)] ∙ a₁ = −1/(3∙4) ∙ 0 = 0
a₄ = −1/[4(4 + 1)] ∙ a₂ = (−1/(4∙5))(−1/(2∙3)) a₀ = (−1)² a₀/(5∙4∙3∙2) = (−1)² a₀/5!
a₅ = −1/[5(5 + 1)] ∙ a₃ = −1/(5∙6) ∙ 0 = 0
a₆ = −1/[6(6 + 1)] ∙ a₄ = (−1/(6∙7))((−1)²/5!) a₀ = (−1)³ a₀/7!

The patterns should be obvious here:
aₖ = 0 for k = 1, 3, 5, 7, …
and
aₖ = (−1)^(k/2)/(k + 1)! ∙ a₀ for k = 2, 4, 6, 8, …
Using k = 2m, this can be written more conveniently as
a₂ₘ = (−1)^m a₀/(2m + 1)! for m = 1, 2, 3, 4, 5, …
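The pattern just stated can be checked by simply iterating the recursion. This sketch is not in the notes; it uses exact rational arithmetic from the standard library.

```python
# Iterate a_k = -a_{k-2} / (k(k+1)) with a0 = 1, a1 = 0 and compare against
# the closed forms: odd coefficients vanish, a_{2m} = (-1)^m / (2m+1)!.
from fractions import Fraction
from math import factorial

a = {0: Fraction(1), 1: Fraction(0)}
for k in range(2, 11):
    a[k] = -a[k - 2] / (k * (k + 1))

for m in range(0, 6):
    assert a[2 * m] == Fraction((-1)**m, factorial(2 * m + 1))
for k in (1, 3, 5, 7, 9):
    assert a[k] == 0
```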

Step 8: Plugging r = 1/2 and the formulas just derived for the an’s into the formula originally
assumed for y, we get

y = x^r ∑_{k=0}^∞ aₖ xᵏ

= x^r [ ∑_{k odd} aₖ xᵏ + ∑_{k even} aₖ xᵏ ]

= x^(1/2) [ ∑_{k odd} 0 ∙ xᵏ + ∑_{m=0}^∞ (−1)^m a₀/(2m + 1)! ∙ x^(2m) ]

= x^(1/2) [ 0 + a₀ ∑_{m=0}^∞ (−1)^m 1/(2m + 1)! ∙ x^(2m) ]

So one solution to Bessel's equation of order 1/2 is given by

y = a₀ x^(1/2) ∑_{m=0}^∞ (−1)^m/(2m + 1)! ∙ x^(2m)

Step 9: Letting r = r₂ = −1/2 in equation (*) yields

a₀[4(−1/2)² − 1]x⁰ + a₁[4(−1/2)² + 8(−1/2) + 3]x¹ + ∑_{n=2}^∞ {aₙ[4(n − 1/2)² − 1] + 4aₙ₋₂} xⁿ = 0

a₀ ∙ 0 ∙ x⁰ + a₁ ∙ 0 ∙ x¹ + ∑_{n=2}^∞ [aₙ(4n² − 4n + 1 − 1) + 4aₙ₋₂] xⁿ = 0
Then …
⋮ {“Fill in the dots” in the last statement. That is, do all the computations that were omitted.}

yielding
y = a₀ x^(−1/2) ∑_{m=0}^∞ (−1)^m/(2m)! ∙ x^(2m) + a₁ x^(−1/2) ∑_{m=0}^∞ (−1)^m/(2m + 1)! ∙ x^(2m+1)

Note that the second series is the same as the series already found in Step 8 (slightly rewritten), since x^(−1/2) x^(2m+1) = x^(1/2) x^(2m).
Step 10: We are in luck. In the last step we obtained y as the linear combination of two different
series. So
y = a₀ x^(−1/2) ∑_{m=0}^∞ (−1)^m/(2m)! ∙ x^(2m) + a₁ x^(−1/2) ∑_{m=0}^∞ (−1)^m/(2m + 1)! ∙ x^(2m+1)

is the general solution to the original differential equation — Bessel’s equation of order 1/2.

Last Step: Our luck continues! The two series are easily recognized as the series for the sine and
the cosine functions:
𝑦 = 𝑎0 𝑥 −1/2 cos 𝑥 + 𝑎1 𝑥 −1/2 sin 𝑥
So the general solution to Bessel’s equation of order 1/2 is
y = a₀ (cos x)/√x + a₁ (sin x)/√x
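The closed form can be confirmed symbolically. The sketch below is not part of the notes (sympy assumed) and substitutes the answer back into Bessel's equation of order 1/2.

```python
# Check y = a0*cos(x)/sqrt(x) + a1*sin(x)/sqrt(x) against
# y'' + (1/x) y' + (1 - 1/(4x^2)) y = 0.
import sympy as sp

x, a0, a1 = sp.symbols('x a0 a1', positive=True)
y = a0 * sp.cos(x) / sp.sqrt(x) + a1 * sp.sin(x) / sp.sqrt(x)
bessel_half = sp.diff(y, x, 2) + sp.diff(y, x) / x + (1 - 1 / (4 * x**2)) * y
assert sp.simplify(bessel_half) == 0
```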

Advice and Comments


1. If you get something like
2𝑎1 = 0
then you know a1 = 0. On the other hand, if you get something like
0𝑎1 = 0
then you have an equation that tells you nothing about a1 . This means that a1 is an arbitrary
constant (unless something else tells you otherwise).

2. If the recursion formula blows up at some point, then some of the coefficients must be zero.
For example, if
aₙ = 3/[(n + 2)(n − 5)] ∙ aₙ₋₂
then, for n = 5,
a₅ = 3/[(7)(0)] ∙ a₃ = ∞ ∙ a₃
which can only make sense if a₃ = 0. Note also that, unless otherwise indicated, a₅ here would
be arbitrary. (Remember, the last equation is equivalent to (7)(0)a₅ = 3a₃.)
3. If you get a coefficient being zero, it is a good idea to check back using the recursion formula
to see if any of the previous coefficients must also be zero, or if many of the following
coefficients are zero. In some cases, you may find that an "infinite" series solution only
contains a finite number of nonzero terms, in which case we have a "terminating series"; i.e.,
a solution which is simply a polynomial.
On the other hand, obtaining a₀ = 0, contrary to our basic assumption that a₀ ≠ 0, tells you
that there is no series solution of the form assumed for the basic Frobenius method using that
value of r.
4. It is possible to end up with a three term recursion formula, say,
aₙ = 1/(n² + 1) ∙ aₙ₋₁ + 2/[3n(n + 3)] ∙ aₙ₋₂
This, naturally, makes “finding patterns” rather difficult.
5. Keep in mind that, even if you find that “finding patterns” and describing them by “nice”
formulas is beyond you, you can always use the recursion formulas to compute (or have a
computer compute) as many terms as you wish of the series solutions.
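The three-term recursion shown above is a hypothetical illustration, but the advice is easy to act on: with no closed-form pattern in sight, the coefficients can still be generated numerically. A minimal sketch, with a₀ = 1 and a₁ = 0 chosen purely for illustration:

```python
# Iterate a_n = a_{n-1}/(n^2 + 1) + 2 a_{n-2}/(3n(n + 3)) numerically.
a = [1.0, 0.0]
for n in range(2, 12):
    a.append(a[n - 1] / (n**2 + 1) + 2 * a[n - 2] / (3 * n * (n + 3)))

# e.g. a_2 = a_1/5 + 2 a_0/30 = 1/15
assert abs(a[2] - 1.0 / 15.0) < 1e-12
```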
6. The computations can become especially messy and confusing when x₀ ≠ 0. In this case,
simplify matters by using the substitutions
Y(X) = y(x) with X = x − x₀
You can then easily verify that, under these substitutions,
Y′(X) = dY/dX = dy/dx = y′(x)
and the differential equation
A(x) d²y/dx² + B(x) dy/dx + C(x)y = 0
becomes
A₁(X) d²Y/dX² + B₁(X) dY/dX + C₁(X)Y = 0
with
A₁(X) = A(X + x₀), B₁(X) = B(X + x₀) and C₁(X) = C(X + x₀)

Use the method of Frobenius to find the modified power series solutions
Y(X) = X^r ∑_{k=0}^∞ aₖ Xᵏ
for the equation A₁(X) d²Y/dX² + B₁(X) dY/dX + C₁(X)Y = 0. The corresponding solutions to the
original differential equation, A(x) d²y/dx² + B(x) dy/dx + C(x)y = 0, are then given from
this via the above substitution,
y(x) = Y(X) = X^r ∑_{k=0}^∞ aₖ Xᵏ = (x − x₀)^r ∑_{k=0}^∞ aₖ(x − x₀)ᵏ

The Big Theorem on the Frobenius Method


Let x0 be a regular singular point (on the real line) for
a(x) d²y/dx² + b(x) dy/dx + c(x)y = 0
𝑑𝑥 𝑑𝑥
Then the indicial equation arising in the basic method of Frobenius exists and is a quadratic
equation with two solutions r1 and r2 (which may be one solution, repeated). If r2 and r1 are real,
assume r2 ≤ r1. Then:
1. The basic method of Frobenius will yield at least one solution of the form

𝑦1 (𝑥) = (𝑥 − 𝑥0 )𝑟1 ∑ 𝑎𝑘 (𝑥 − 𝑥0 )𝑘
𝑘=0

where a0 is the one and only arbitrary constant.

2. If r1 − r2 is not an integer, then the basic method of Frobenius will yield a second independent
solution of the form

𝑦2 (𝑥) = (𝑥 − 𝑥0 )𝑟2 ∑ 𝑎𝑘 (𝑥 − 𝑥0 )𝑘
𝑘=0

where a0 is an arbitrary constant.

3. If r₁ − r₂ = N is a positive integer, then the method of Frobenius might yield a second
independent solution of the form

𝑦2 (𝑥) = (𝑥 − 𝑥0 )𝑟2 ∑ 𝑎𝑘 (𝑥 − 𝑥0 )𝑘
𝑘=0

where a0 is an arbitrary constant. If it doesn’t, then a second independent solution exists of


the form

𝑦2 (𝑥) = 𝑦1 (𝑥) ln|𝑥 − 𝑥0 | + (𝑥 − 𝑥0 )𝑟2 ∑ 𝑏𝑘 (𝑥 − 𝑥0 )𝑘


𝑘=0

37
or, equivalently

𝑦2 (𝑥) = 𝑦1 (𝑥) [ln|𝑥 − 𝑥0 | + (𝑥 − 𝑥0 )−𝑁 ∑ 𝑐𝑘 (𝑥 − 𝑥0 )𝑘 ]


𝑘=0

where b0 and c0 are nonzero constants.

4. If r1 = r2, then there is a second solution of the form


𝑦2 (𝑥) = 𝑦1 (𝑥) ln|𝑥 − 𝑥0 | + (𝑥 − 𝑥0 )1+𝑟1 ∑ 𝑏𝑘 (𝑥 − 𝑥0 )𝑘


𝑘=0

or, equivalently

y₂(x) = y₁(x) [ln|x − x₀| + (x − x₀) ∑_{k=0}^∞ cₖ(x − x₀)ᵏ]

In this case, b0 and c0 might be zero.


Moreover, if we let R be the distance between x0 and the nearest singular point (other than
x0 ) in the complex plane (with R = ∞ if x0 is the only singular point), then the series solutions
described above converge at least on the intervals (x0 − R, x0) and (x0, x0 + R).

Example 1: Apply the power series method to the following differential equation:
y″ + (1/(2x))y′ + (1/(4x))y = 0
Answer:
Regular singular point at x = 0.
Multiply the equation by 4x:
4𝑥𝑦 ′′ + 2𝑦′ + 𝑦 = 0
Assume solution:

𝑦 = ∑ 𝑎𝑚 𝑥 𝑚+𝑟
𝑚=0

Then,
y′ = ∑_{m=0}^∞ (m + r)aₘ x^(m+r−1)
y″ = ∑_{m=0}^∞ (m + r)(m + r − 1)aₘ x^(m+r−2)

Hence,
∞ ∞ ∞
𝑚+𝑟−2 𝑚+𝑟−1
4𝑥 ∑ (𝑚 + 𝑟) (𝑚 + 𝑟 − 1)𝑎𝑚 𝑥 + 2 ∑ (𝑚 + 𝑟) 𝑎𝑚 𝑥 + ∑ 𝑎𝑚 𝑥 𝑚+𝑟 = 0
𝑚=0 𝑚=0 𝑚=0

∞ ∞ ∞
𝑚+𝑟−1 𝑚+𝑟−1
4 ∑ (𝑚 + 𝑟) (𝑚 + 𝑟 − 1)𝑎𝑚 𝑥 + 2 ∑ (𝑚 + 𝑟) 𝑎𝑚 𝑥 + ∑ 𝑎𝑚 𝑥 𝑚+𝑟 = 0
𝑚=0 𝑚=0 𝑚=0

For the first and second summation, let k + r = m + r – 1 which implies m = k + 1


For the third summation, let k + r = m + r, which implies m = k
4 ∑_{k=−1}^∞ (k + 1 + r)(k + r)aₖ₊₁ x^(k+r) + 2 ∑_{k=−1}^∞ (k + 1 + r)aₖ₊₁ x^(k+r) + ∑_{k=0}^∞ aₖ x^(k+r) = 0

4r(r − 1)a₀ x^(r−1) + 4 ∑_{k=0}^∞ (k + 1 + r)(k + r)aₖ₊₁ x^(k+r) + 2r a₀ x^(r−1)
+ 2 ∑_{k=0}^∞ (k + 1 + r)aₖ₊₁ x^(k+r) + ∑_{k=0}^∞ aₖ x^(k+r) = 0

[4r(r − 1) + 2r]a₀ x^(r−1) + ∑_{k=0}^∞ [4(k + 1 + r)(k + r)aₖ₊₁ + 2(k + 1 + r)aₖ₊₁ + aₖ] x^(k+r) = 0

Indicial Equation:
4r(r − 1) + 2r = 0
⇒ 4r² − 4r + 2r = 0
⇒ r² − (1/2)r = 0
⇒ r(r − 1/2) = 0
∴ r₂ = 0 ; r₁ = 1/2    (distinct roots not differing by an integer, including complex conjugates)

4(k + 1 + r)(k + r)aₖ₊₁ + 2(k + 1 + r)aₖ₊₁ + aₖ = 0

aₖ₊₁ = −aₖ / [4(k + 1 + r)(k + r) + 2(k + 1 + r)]    k = 0, 1, 2, …

First solution: r = 1/2

aₖ₊₁ = −aₖ / [4(k + 1 + 1/2)(k + 1/2) + 2(k + 1 + 1/2)]    k = 0, 1, 2, …

a₁ = −a₀/(3∙2) = −a₀/3!
a₂ = −a₁/(5∙4) = a₀/(5∙4∙3!) = a₀/5!
a₃ = −a₂/(7∙6) = −a₀/(7∙6∙5!) = −a₀/7!

In general, let a₀ = 1

y₁(x) = x^(1/2) (1 − (1/6)x + (1/120)x² − (1/5040)x³ ± ⋯) = x^(1/2) ∑_{m=0}^∞ (−1)^m/(2m + 1)! ∙ x^m

Second solution: r = 0

Aₖ₊₁ = −Aₖ / [4(k + 1)k + 2(k + 1)]    k = 0, 1, 2, …

A₁ = −A₀/(2∙1) = −A₀/2!
A₂ = −A₁/(4∙3) = A₀/(4∙3∙2!) = A₀/4!
A₃ = −A₂/(6∙5) = −A₀/(6∙5∙4!) = −A₀/6!

In general, let A₀ = 1

y₂(x) = x⁰ (1 − (1/2)x + (1/24)x² − (1/720)x³ ± ⋯) = ∑_{m=0}^∞ (−1)^m/(2m)! ∙ x^m
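An observation not made in the notes (an assumption, but verified below): the two series are the Maclaurin expansions of sin(√x) and cos(√x), since substituting t = √x into the sine and cosine series reproduces them term by term. A sympy check that both closed forms solve 4xy″ + 2y′ + y = 0:

```python
# Substitute sin(sqrt(x)) and cos(sqrt(x)) into 4x y'' + 2y' + y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (sp.sin(sp.sqrt(x)), sp.cos(sp.sqrt(x))):
    residual = 4 * x * sp.diff(y, x, 2) + 2 * sp.diff(y, x) + y
    assert sp.simplify(residual) == 0
```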

Example 2: Solve the following differential equation using the power series method:
𝑥(𝑥 − 1)𝑦 ′′ + (3𝑥 − 1)𝑦 ′ + 𝑦 = 0
Answer:
Assume solution:

𝑦 = ∑ 𝑎𝑚 𝑥 𝑚+𝑟
𝑚=0

Hence,
∞ ∞ ∞
𝑚+𝑟−2 𝑚+𝑟−1
𝑥(𝑥 − 1) ∑ (𝑚 + 𝑟) (𝑚 + 𝑟 − 1)𝑎𝑚 𝑥 + (3𝑥 − 1) ∑ (𝑚 + 𝑟) 𝑎𝑚 𝑥 + ∑ 𝑎𝑚 𝑥 𝑚+𝑟
𝑚=0 𝑚=0 𝑚=0
=0

∞ ∞ ∞
𝑚+𝑟 𝑚+𝑟−1
∑ (𝑚 + 𝑟) (𝑚 + 𝑟 − 1)𝑎𝑚 𝑥 − ∑ (𝑚 + 𝑟) (𝑚 + 𝑟 − 1)𝑎𝑚 𝑥 + 3 ∑ (𝑚 + 𝑟)𝑎𝑚 𝑥 𝑚+𝑟
𝑚=0 𝑚=0 𝑚=0
∞ ∞

− ∑ (𝑚 + 𝑟) 𝑎𝑚 𝑥 𝑚+𝑟−1 + ∑ 𝑎𝑚 𝑥 𝑚+𝑟 = 0
𝑚=0 𝑚=0

For the second and fourth summation, let k + r = m + r – 1 which implies m = k + 1


For the first, third, and fifth summation, let k + r = m + r, which implies m = k

∞ ∞ ∞
𝑘+𝑟 𝑘+𝑟
∑(𝑘 + 𝑟) (𝑘 + 𝑟 − 1)𝑎𝑘 𝑥 − ∑ (𝑘 + 1 + 𝑟)(𝑘 + 𝑟) 𝑎𝑘+1 𝑥 + 3 ∑(𝑘 + 𝑟) 𝑎𝑘 𝑥 𝑘+𝑟
𝑘=0 𝑘=−1 𝑘=0
∞ ∞

− ∑ (𝑘 + 1 + 𝑟) 𝑎𝑘+1 𝑥 𝑘+𝑟 + ∑ 𝑎𝑘 𝑥 𝑘+𝑟 = 0


𝑘=−1 𝑘=0

∞ ∞
𝑘+𝑟 𝑟−1
∑(𝑘 + 𝑟) (𝑘 + 𝑟 − 1)𝑎𝑘 𝑥 − (𝑟)(𝑟 − 1)𝑎0 𝑥 − ∑(𝑘 + 1 + 𝑟)(𝑘 + 𝑟) 𝑎𝑘+1 𝑥 𝑘+𝑟
𝑘=0 𝑘=0
∞ ∞ ∞
𝑘+𝑟 𝑟−1 𝑘+𝑟
+ 3 ∑(𝑘 + 𝑟) 𝑎𝑘 𝑥 − (𝑟)𝑎0 𝑥 − ∑(𝑘 + 1 + 𝑟) 𝑎𝑘+1 𝑥 + ∑ 𝑎𝑘 𝑥 𝑘+𝑟
𝑘=0 𝑘=0 𝑘=0
=0

(−(𝑟)(𝑟 − 1) − 𝑟)𝑎0 𝑥 𝑟−1 +


[∑(𝑘 + 𝑟) (𝑘 + 𝑟 − 1)𝑎𝑘 − (𝑘 + 1 + 𝑟)(𝑘 + 𝑟)𝑎𝑘+1 + 3(𝑘 + 𝑟)𝑎𝑘 − (𝑘 + 1 + 𝑟)𝑎𝑘+1 + 𝑎𝑘 ] 𝑥 𝑘+𝑟 = 0


𝑘=0

Indicial Equation:
−(𝑟)(𝑟 − 1) − 𝑟 = 0
⇒ −𝑟 2 + 𝑟 − 𝑟 = 0
⇒ 𝑟2 = 0
∴ 𝑟1 = 0 ; 𝑟2 = 0 Double root

(𝑘 + 𝑟)(𝑘 + 𝑟 − 1)𝑎𝑘 − (𝑘 + 1 + 𝑟)(𝑘 + 𝑟)𝑎𝑘+1 + 3(𝑘 + 𝑟)𝑎𝑘 − (𝑘 + 1 + 𝑟)𝑎𝑘+1 + 𝑎𝑘 = 0


𝑎𝑘+1 = 𝑎𝑘 𝑘 = 0, 1, 2, …

First solution: r=0


𝑎𝑘+1 = 𝑎𝑘 𝑘 = 0, 1, 2, …

𝑎1 = 𝑎0
𝑎2 = 𝑎1 = 𝑎0
𝑎3 = 𝑎2 = 𝑎0

In general, let a0 = 1

y₁(x) = x⁰(1 + x + x² + x³ + ⋯) = ∑_{m=0}^∞ xᵐ = 1/(1 − x)

Second solution: r = 0 (double root)

Since the indicial equation has the double root r = 0, the second independent solution has the
logarithmic form of case 4 of the Big Theorem,

y₂(x) = y₁(x) ln x + ∑_{m=1}^∞ Aₘ xᵐ

Substituting this form into the differential equation (or, more quickly, applying reduction of order
with y₁(x) = 1/(1 − x)) shows that all the Aₘ vanish. Hence the second solution is

y₂(x) = y₁(x) ln x = (ln x)/(1 − x)
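A quick symbolic check, not in the notes (sympy assumed): both 1/(1 − x) and the logarithmic second solution (ln x)/(1 − x) satisfy the equation identically.

```python
# Substitute both solutions into x(x-1) y'' + (3x-1) y' + y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (1 / (1 - x), sp.log(x) / (1 - x)):
    residual = x * (x - 1) * sp.diff(y, x, 2) + (3 * x - 1) * sp.diff(y, x) + y
    assert sp.simplify(residual) == 0
```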

Example 3: Solve the following differential equation using the power series method:
(𝑥 2 − 1)𝑥 2 𝑦 ′′ − (𝑥 2 + 1)𝑥𝑦 ′ + (𝑥 2 + 1)𝑦 = 0
Answer:
Assume solution:

𝑦 = ∑ 𝑎𝑚 𝑥 𝑚+𝑟
𝑚=0

Hence,
(x² − 1)x² ∑_{m=0}^∞ (m + r)(m + r − 1)aₘ x^(m+r−2) − (x² + 1)x ∑_{m=0}^∞ (m + r)aₘ x^(m+r−1) + (x² + 1) ∑_{m=0}^∞ aₘ x^(m+r) = 0

∑_{m=0}^∞ (m + r)(m + r − 1)aₘ x^(m+r+2) − ∑_{m=0}^∞ (m + r)(m + r − 1)aₘ x^(m+r) − ∑_{m=0}^∞ (m + r)aₘ x^(m+r+2)
− ∑_{m=0}^∞ (m + r)aₘ x^(m+r) + ∑_{m=0}^∞ aₘ x^(m+r+2) + ∑_{m=0}^∞ aₘ x^(m+r) = 0

∑_{m=0}^∞ (m + r − 1)² aₘ x^(m+r+2) − ∑_{m=0}^∞ (m + r − 1)(m + r + 1) aₘ x^(m+r) = 0

For the first summation, let k + r = m + r +2 which implies m = k - 2


For the second summation, let k + r = m + r, which implies m = k

∑_{k=2}^∞ (k + r − 3)² aₖ₋₂ x^(k+r) − ∑_{k=0}^∞ (k + r − 1)(k + r + 1) aₖ x^(k+r) = 0

∑_{k=2}^∞ [(k + r − 3)² aₖ₋₂ − (k + r − 1)(k + r + 1)aₖ] x^(k+r) − (r − 1)(r + 1)a₀ x^r − r(r + 2)a₁ x^(r+1) = 0

Indicial Equation:
−(𝑟 − 1)(𝑟 + 1) = 0
∴ 𝑟1 = 1 ; 𝑟2 = −1 Roots differing by an integer

(k + r − 3)² aₖ₋₂ − (k + r − 1)(k + r + 1)aₖ = 0

aₖ = (k + r − 3)² aₖ₋₂ / [(k + r − 1)(k + r + 1)]    k = 2, 3, 4, …

−r(r + 2)a₁ = 0
(−r² − 2r)a₁ = 0
a₁ = 0 since (−r² − 2r) ≠ 0 for r = ±1

First solution: r = 1

aₖ = (k − 2)² aₖ₋₂ / [k(k + 2)]    k = 2, 3, 4, …

𝑎1 = 0
𝑎2 = 0
𝑎3 = 0

In general,
𝑦1 (𝑥) = 𝑥 1 (𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2 + ⋯ ) = 𝑎0 𝑥

Second solution: r = −1
Aₖ = (k − 4)² Aₖ₋₂ / [(k − 2)k]    k = 2, 3, 4, …
When k = 2, the denominator vanishes while (k − 4)² A₀ = 4A₀ ≠ 0, so the recursion has no solution.
Hence there is no second solution in the form of a power series, and reduction of order should be
used to get the second solution.
Let 𝑦2 (𝑥) = 𝑥𝑢(𝑥) be a solution to the differential equation
Then 𝑦2′ (𝑥) = 𝑥𝑢′ (𝑥) + 𝑢(𝑥)
And 𝑦2′′ (𝑥) = 𝑥𝑢′′ (𝑥) + 𝑢′ (𝑥) + 𝑢′ (𝑥) = 𝑥𝑢′′ (𝑥) + 2𝑢′ (𝑥)
Substitute into the equation,
(𝑥 2 − 1)𝑥 2 [𝑥𝑢′′ (𝑥) + 2𝑢′ (𝑥)] − (𝑥 2 + 1)𝑥[𝑥𝑢′ (𝑥) + 𝑢(𝑥)] + (𝑥 2 + 1)[𝑥𝑢(𝑥)] = 0
𝑥 2 (𝑥 3 − 𝑥)𝑢′′ + 𝑥 2 (𝑥 2 − 3)𝑢′ = 0
(𝑥 3 − 𝑥)𝑢′′ + (𝑥 2 − 3)𝑢′ = 0
u″/u′ = −(x² − 3)/(x³ − x) = −3/x + 1/(x + 1) + 1/(x − 1)

∫ (u″/u′) dx = ∫ −(3/x) dx + ∫ 1/(x + 1) dx + ∫ 1/(x − 1) dx

ln u′ = −3 ln x + ln(x + 1) + ln(x − 1) = ln [(x + 1)(x − 1)/x³] = ln [(x² − 1)/x³]

u′ = (x² − 1)/x³ = 1/x − 1/x³, so

u = ln x + 1/(2x²)
Therefore, the second solution is
y₂(x) = x u(x) = x (ln x + 1/(2x²)) = x ln x + 1/(2x)
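Both solutions of Example 3 can be verified symbolically. The sketch below is not part of the notes and assumes sympy is available.

```python
# Substitute y1 = x and y2 = x ln(x) + 1/(2x) into
# (x^2 - 1) x^2 y'' - (x^2 + 1) x y' + (x^2 + 1) y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (x, x * sp.log(x) + 1 / (2 * x)):
    residual = ((x**2 - 1) * x**2 * sp.diff(y, x, 2)
                - (x**2 + 1) * x * sp.diff(y, x)
                + (x**2 + 1) * y)
    assert sp.simplify(residual) == 0
```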

WEEK 11: FOURIER SERIES
11.1 INTRODUCTION

Fourier series are infinite series designed to represent general periodic functions in terms of simple
ones, namely, cosines and sines. This trigonometric system is orthogonal, allowing the computation
of the coefficients of the Fourier series by use of the well-known Euler formulas. Fourier series are
very important to the engineer and physicist, for example in finding the solution of ordinary differential
equations (ODEs) in connection with forced oscillations and in the approximation of periodic functions.
Fourier series are, in a certain sense, more universal than the familiar Taylor series in calculus
because many discontinuous periodic functions that come up in applications can be developed in
Fourier series but do not have Taylor series expansions.

11.2 PERIODIC FUNCTIONS

Fourier series are infinite series that represent periodic functions in terms of cosines and sines. As
such, Fourier series are of greatest importance to the engineer and applied mathematician. To
define Fourier series, we first need some background material. A function 𝑓(𝑥) is called a periodic
function if f(x) is defined for all real x, except possibly at some points, and if there is some positive
number 𝑝, called a period of 𝑓(𝑥) , such that

𝑓(𝑥 + 𝑝) = 𝑓(𝑥) (1)

for all x. The graph of a periodic function has the characteristic that it can be obtained by periodic
repetition of its graph on any interval of length p, as shown below.

Periodic Function of period 𝑝

The smallest positive period is often called the fundamental period. Familiar periodic functions are
the cosine, sine, tangent, and cotangent. Examples of functions that are not periodic are x, x², x³,
𝑒 𝑥 and ln 𝑥, to mention just a few.

If 𝑓(𝑥) has period 𝑝, it also has the period 2𝑝 because (1) implies 𝑓(𝑥 + 2𝑝) = 𝑓([𝑥 + 𝑝] + 𝑝) =
𝑓(𝑥), etc.; thus for any integer 𝑛 = 1,2,3…,

𝑓(𝑥 + 𝑛𝑝) = 𝑓(𝑥) (2)

for all x. Furthermore, if f(x) and g(x) have period p, then af(x) + bg(x) with any constants a and
b also has the period p.
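The periodicity properties (1) and (2) are easy to check numerically for a familiar example; the sketch below uses sin with its period p = 2π.

```python
# sin has period p = 2*pi, hence also period n*p for every integer n >= 1.
import math

p = 2 * math.pi
for x in (0.0, 0.7, 2.5):
    for n in (1, 2, 3):
        assert abs(math.sin(x + n * p) - math.sin(x)) < 1e-9
```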

11.3 TRIGONOMETRIC SERIES

Our problem in the first few sections of this chapter will be the representation of various functions
f(x) of period 2π in terms of the simple functions

1, cos 𝑥, sin 𝑥, cos 2𝑥, sin 2𝑥, … , cos 𝑛𝑥, sin 𝑛𝑥, … . (3)

All these functions have the period 2𝜋. They form the so-called trigonometric system. The figure
below shows the first few of them (except for the constant 1, which is periodic with any period).

Cosine and sine functions having the period 2𝜋 (the first few members
of the trigonometric system (3), except for the constant 1)

The series to be obtained will be a trigonometric series, that is, a series of the form

a₀ + a₁ cos x + b₁ sin x + a₂ cos 2x + b₂ sin 2x + ⋯ = a₀ + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)    (4)

𝑎0 , 𝑎1 , 𝑏1 , 𝑎2 , 𝑏2 , … are constants, called the coefficients of the series. We see that each term has
the period 2𝜋. Hence if the coefficients are such that the series converges, its sum will be a function of
period 2𝜋.

Expressions such as (4) will occur frequently in Fourier analysis. To compare the expression on the
right with that on the left, simply write the terms in the summation. Convergence of one side
implies convergence of the other and the sums will be the same.

11.4 FOURIER SERIES

Now suppose that 𝑓(𝑥) is a given function of period 2𝜋 and is such that it can be represented by a
series (4), that is, (4) converges and, moreover, has the sum 𝑓(𝑥). Then, using the equality sign, we
write


f(x) = a₀ + ∑_{n=1}^∞ (aₙ cos nx + bₙ sin nx)    (5)

and call (5) the Fourier series of 𝑓(𝑥). We shall prove that in this case the coefficients of (5) are the
so-called Fourier coefficients of 𝑓(𝑥), given by the Euler formulas

a₀ = (1/2π) ∫_{−π}^{π} f(x) dx    (6.0)

aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx,    n = 1, 2, …    (6a)

bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx,    n = 1, 2, …    (6b)

The name “Fourier series” is sometimes also used in the exceptional case that (5) with coefficients
(6) does not converge or does not have the sum f(x); this may happen but is merely of theoretical
interest.
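The Euler formulas (6) can be evaluated directly. A sketch, not from the notes (sympy assumed), for the sample function f(x) = x on (−π, π), whose well-known coefficients are a₀ = aₙ = 0 and bₙ = 2(−1)^(n+1)/n:

```python
# Compute a0, a_k, b_k from the Euler formulas for f(x) = x on (-pi, pi).
import sympy as sp

x = sp.symbols('x')
f = x  # sample function, extended 2*pi-periodically

a0 = sp.integrate(f, (x, -sp.pi, sp.pi)) / (2 * sp.pi)
assert a0 == 0
for k in (1, 2, 3, 4):
    ak = sp.integrate(f * sp.cos(k * x), (x, -sp.pi, sp.pi)) / sp.pi
    bk = sp.integrate(f * sp.sin(k * x), (x, -sp.pi, sp.pi)) / sp.pi
    assert ak == 0                                  # f is odd, so no cosine terms
    assert bk == 2 * sp.Integer(-1)**(k + 1) / k    # 2, -1, 2/3, -1/2, ...
```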

Example 1: Periodic Rectangular Wave

Find the Fourier coefficients of the periodic function 𝑓(𝑥) in the figure above. The formula is

f(x) = { −k if −π < x < 0;  k if 0 < x < π }    and    f(x + 2π) = f(x)    (7)

Functions of this kind occur as external forces acting on mechanical systems, electromotive forces
in electric circuits, etc. (The value of f(x) at a single point does not affect the integral; hence we can
leave f(x) undefined at x = 0 and x = ±π.)

Solution: From (6.0) we obtain a₀ = 0. This can also be seen without integration, since the area
under the curve of f(x) between −π and π (taken with a minus sign where f(x) is negative) is zero. From
(6a) we obtain the coefficients a₁, a₂, … of the cosine terms. Since f(x) is given by two expressions,
the integral from −π to π splits into two integrals:

aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx = (1/π) [ ∫_{−π}^{0} (−k) cos nx dx + ∫_{0}^{π} k cos nx dx ]

= (1/π) [ −k (sin nx)/n |_{−π}^{0} + k (sin nx)/n |_{0}^{π} ] = 0
because 𝑠𝑖𝑛 𝑛𝑥 = 0 at - 𝜋, 0, and 𝜋 for all 𝑛 = 1,2, … . We see that all these cosine coefficients are
zero. That is, the Fourier series of (7) has no cosine terms, just sine terms, it is a Fourier sine series
with coefficients 𝑏1 , 𝑏2 ,… obtained from (6b);

bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (1/π) [ ∫_{−π}^{0} (−k) sin nx dx + ∫_{0}^{π} k sin nx dx ]

= (1/π) [ k (cos nx)/n |_{−π}^{0} − k (cos nx)/n |_{0}^{π} ]

Since cos (−𝛼) = cos 𝛼 and cos 0 = 1, this yields

bₙ = k/(nπ) [cos 0 − cos(−nπ) − cos nπ + cos 0] = 2k/(nπ) (1 − cos nπ)

Now, cos 𝜋 = −1, cos 2𝜋 = 1, cos 3𝜋 = −1, etc.; in general,

cos nπ = −1 for odd n and 1 for even n;  thus  1 − cos nπ = 2 for odd n and 0 for even n.

Hence the Fourier coefficients bₙ for our function are

b₁ = 4k/π,  b₂ = 0,  b₃ = 4k/(3π),  b₄ = 0,  b₅ = 4k/(5π), … .

Since 𝑎𝑛 are zero, the Fourier Series of 𝑓(𝑥) is

(4k/π) (sin x + (1/3) sin 3x + (1/5) sin 5x + ⋯).    (8)

The partial sums are

S₁ = (4k/π) sin x,   S₂ = (4k/π) (sin x + (1/3) sin 3x),   etc.
Their graphs in the figure below seem to indicate that the series is convergent and has the sum 𝑓(𝑥),
the given function. We notice that at 𝑥 = 0 and 𝑥 = 𝜋, the points of discontinuity of 𝑓(𝑥), all partial
sums have the value zero, the arithmetic mean of the limits −𝑘 and 𝑘 of our function, at these
points. This is typical.
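The behavior of the partial sums can be reproduced numerically. A sketch, not from the notes (k = 1 assumed): convergence at a point of continuity, and the value 0 at the discontinuity x = 0.

```python
# N-term partial sums of (8): (4k/pi) * sum over odd j of sin(jx)/j.
import math

def S(N, x, k=1.0):
    return (4 * k / math.pi) * sum(math.sin(j * x) / j for j in range(1, 2 * N, 2))

x = math.pi / 2                     # point of continuity, f(x) = k = 1
errors = [abs(S(N, x) - 1.0) for N in (5, 50, 500)]
assert errors[0] > errors[1] > errors[2]   # error shrinks as N grows
assert S(100, 0.0) == 0.0                  # at the jump, every partial sum is 0
```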

Furthermore, assuming that f(x) is the sum of the series and setting x = π/2, we have

f(π/2) = k = (4k/π) (1 − 1/3 + 1/5 − 1/7 + ⋯).

Thus

1 − 1/3 + 1/5 − 1/7 + − ⋯ = π/4.

This is a famous result obtained by Leibniz in 1673 from geometric considerations. It illustrates that
the values of various series with constant terms can be obtained by evaluating Fourier series at
specific points.
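The Leibniz result is easy to confirm numerically; the alternating series converges slowly, so many terms are summed.

```python
# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... approach pi/4.
import math

s = sum((-1)**m / (2 * m + 1) for m in range(200000))
assert abs(s - math.pi / 4) < 1e-5
```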

11.5 DERIVATION OF THE EULER FORMULA (6)

The key to the Euler formulas (6) is the orthogonality of (3), a concept of basic importance, as
follows. Here we generalize the concept of inner product to functions.

Theorem 1: Orthogonality of the Trigonometric System (3)


The trigonometric system (3) is orthogonal on the interval −𝜋 ≤ 𝑥 ≤ 𝜋 (hence also on 0 ≤ 𝑥 ≤ 2𝜋
or any other interval of length 2𝜋 because of periodicity); that is, the integral of the product of any
two functions in (3) over that interval is 0, so that for any integers n and m,

(9a)  ∫_{−π}^{π} cos nx cos mx dx = 0    (n ≠ m)

(9b)  ∫_{−π}^{π} sin nx sin mx dx = 0    (n ≠ m)

(9c)  ∫_{−π}^{π} sin nx cos mx dx = 0    (n ≠ m or n = m)

Proof: This follows simply by transforming the integrands trigonometrically from products into
sums. In (9a) and (9b),

∫_{−𝜋}^{𝜋} cos 𝑛𝑥 cos 𝑚𝑥 𝑑𝑥 = (1⁄2) ∫_{−𝜋}^{𝜋} cos(𝑛 + 𝑚)𝑥 𝑑𝑥 + (1⁄2) ∫_{−𝜋}^{𝜋} cos(𝑛 − 𝑚)𝑥 𝑑𝑥

∫_{−𝜋}^{𝜋} sin 𝑛𝑥 sin 𝑚𝑥 𝑑𝑥 = (1⁄2) ∫_{−𝜋}^{𝜋} cos(𝑛 − 𝑚)𝑥 𝑑𝑥 − (1⁄2) ∫_{−𝜋}^{𝜋} cos(𝑛 + 𝑚)𝑥 𝑑𝑥
Since 𝑚 ≠ 𝑛, the integrals on the right are all 0. Similarly, in (9c), for all integer 𝑚 or 𝑛

∫_{−𝜋}^{𝜋} sin 𝑛𝑥 cos 𝑚𝑥 𝑑𝑥 = (1⁄2) ∫_{−𝜋}^{𝜋} sin(𝑛 + 𝑚)𝑥 𝑑𝑥 + (1⁄2) ∫_{−𝜋}^{𝜋} sin(𝑛 − 𝑚)𝑥 𝑑𝑥 = 0 + 0.
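The orthogonality relations (9) can also be verified numerically. Here is a minimal sketch in plain Python (the trapezoidal helper `integral` is our own, not a library routine), checking (9a) and (9c) for a few sample integers:

```python
import math

def integral(f, a, b, n=20000):
    # Composite trapezoidal rule over [a, b]
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

for n_, m_ in [(1, 2), (2, 3), (1, 3)]:
    cc = integral(lambda x: math.cos(n_ * x) * math.cos(m_ * x), -math.pi, math.pi)
    sc = integral(lambda x: math.sin(n_ * x) * math.cos(m_ * x), -math.pi, math.pi)
    print(round(cc, 6), round(sc, 6))  # both essentially 0
```

For 𝑛 = 𝑚 the cosine–cosine integral is 𝜋 rather than 0, which is exactly the fact used below in deriving (6a).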

Application of Theorem 1 to the Fourier Series (5)

We prove (6.0). Integrating on both sides of (5) from −𝜋 to 𝜋, we get

∫_{−𝜋}^{𝜋} 𝑓(𝑥) 𝑑𝑥 = ∫_{−𝜋}^{𝜋} [𝑎0 + ∑_{𝑛=1}^{∞} (𝑎𝑛 cos 𝑛𝑥 + 𝑏𝑛 sin 𝑛𝑥)] 𝑑𝑥.

We now assume that termwise integration is allowed. (We shall say in the proof of Theorem 2 when
this is true.) Then we obtain

∫_{−𝜋}^{𝜋} 𝑓(𝑥) 𝑑𝑥 = 𝑎0 ∫_{−𝜋}^{𝜋} 𝑑𝑥 + ∑_{𝑛=1}^{∞} (𝑎𝑛 ∫_{−𝜋}^{𝜋} cos 𝑛𝑥 𝑑𝑥 + 𝑏𝑛 ∫_{−𝜋}^{𝜋} sin 𝑛𝑥 𝑑𝑥). (10)

The first term on the right equals 2𝜋𝑎0 . Integration shows that all the other integrals are 0. Hence
division by 2𝜋 gives (6.0).

We prove (6a). Multiplying (5) on both sides by cos 𝑚𝑥 with any fixed positive integer 𝑚 and
integrating from −𝜋 to 𝜋, we have

∫_{−𝜋}^{𝜋} 𝑓(𝑥) cos 𝑚𝑥 𝑑𝑥 = ∫_{−𝜋}^{𝜋} [𝑎0 + ∑_{𝑛=1}^{∞} (𝑎𝑛 cos 𝑛𝑥 + 𝑏𝑛 sin 𝑛𝑥)] cos 𝑚𝑥 𝑑𝑥.

We now integrate term by term. Then on the right we obtain an integral of 𝑎0 cos 𝑚𝑥, which is 0; an integral of 𝑎𝑛 cos 𝑛𝑥 cos 𝑚𝑥, which is 𝑎𝑚𝜋 for 𝑛 = 𝑚 and 0 for 𝑛 ≠ 𝑚 by (9a); and an integral of 𝑏𝑛 sin 𝑛𝑥 cos 𝑚𝑥, which is 0 for all 𝑛 and 𝑚 by (9c). Hence the right side of the equation above equals 𝑎𝑚𝜋. Division by 𝜋 gives (6a) (with 𝑚 instead of 𝑛).

We finally prove (6b). Multiplying (5) on both sides by sin 𝑚𝑥 with any fixed positive integer 𝑚 and
integrating from −𝜋 to 𝜋 , we get

∫_{−𝜋}^{𝜋} 𝑓(𝑥) sin 𝑚𝑥 𝑑𝑥 = ∫_{−𝜋}^{𝜋} [𝑎0 + ∑_{𝑛=1}^{∞} (𝑎𝑛 cos 𝑛𝑥 + 𝑏𝑛 sin 𝑛𝑥)] sin 𝑚𝑥 𝑑𝑥.

Integrating term by term, we obtain on the right an integral of 𝑎0 sin 𝑚𝑥, which is 0; an integral of 𝑎𝑛 cos 𝑛𝑥 sin 𝑚𝑥, which is 0 by (9c); and an integral of 𝑏𝑛 sin 𝑛𝑥 sin 𝑚𝑥, which is 𝑏𝑚𝜋 for 𝑛 = 𝑚 and 0 for 𝑛 ≠ 𝑚, by (9b). This implies (6b) (with 𝑛 denoted by 𝑚). This completes the proof of the Euler formulas (6) for the Fourier coefficients.

11.6 CONVERGENCE AND SUM OF A FOURIER SERIES

The class of functions that can be represented by Fourier series is surprisingly large and general.
Sufficient conditions valid in most applications are as follows.

Theorem 2: Representation by a Fourier Series


Let 𝑓(𝑥) be periodic with period 2𝜋 and piecewise continuous in the interval −𝜋 ≤ 𝑥 ≤ 𝜋.
Furthermore, let 𝑓(𝑥) have a left-hand derivative and a right-hand derivative at each point of that
interval. Then the Fourier series (5) of 𝑓(𝑥) [with coefficients (6)] converges. Its sum is 𝑓(𝑥), except
at points 𝑥0 where 𝑓(𝑥) is discontinuous. There the sum of the series is the average of the left- and
right-hand limits¥ of 𝑓(𝑥) at 𝑥0 .

¥
The left-hand limit of 𝑓(𝑥) at 𝑥0 is defined as the limit of 𝑓(𝑥) as 𝑥 approaches 𝑥0 from the left and is commonly denoted by 𝑓(𝑥0 − 0). Thus
𝑓(𝑥0 − 0) = lim 𝑓(𝑥0 − ℎ) as ℎ → 0 through positive values.

The right-hand limit is denoted by 𝑓(𝑥0 + 0) and


𝑓(𝑥0 + 0) = limℎ→0 𝑓(𝑥0 + ℎ) as ℎ → 0 through positive values.

The left- and right-hand derivatives of 𝑓(𝑥) at 𝑥0 are defined as the limits of

[𝑓(𝑥0 − ℎ) − 𝑓(𝑥0 − 0)]⁄(−ℎ) and [𝑓(𝑥0 + ℎ) − 𝑓(𝑥0 + 0)]⁄ℎ

respectively, as ℎ → 0 through positive values. Of course if 𝑓(𝑥) is continuous at 𝑥0, the last term in both numerators is simply 𝑓(𝑥0).

Proof: We prove convergence, but only for a continuous function 𝑓(𝑥) having continuous first and second derivatives. And we do not prove that the sum of the series is 𝑓(𝑥), because those proofs are much more advanced.

Integrating (6a) by parts, we obtain

𝑎𝑛 = (1⁄𝜋) ∫_{−𝜋}^{𝜋} 𝑓(𝑥) cos 𝑛𝑥 𝑑𝑥 = [𝑓(𝑥) sin 𝑛𝑥 ⁄ (𝑛𝜋)]_{−𝜋}^{𝜋} − (1⁄𝑛𝜋) ∫_{−𝜋}^{𝜋} 𝑓′(𝑥) sin 𝑛𝑥 𝑑𝑥

The first term on the right is zero. Another integration by parts gives

𝑎𝑛 = [𝑓′(𝑥) cos 𝑛𝑥 ⁄ (𝑛²𝜋)]_{−𝜋}^{𝜋} − (1⁄𝑛²𝜋) ∫_{−𝜋}^{𝜋} 𝑓″(𝑥) cos 𝑛𝑥 𝑑𝑥.

The first term on the right is zero because of the periodicity and continuity of 𝑓′(𝑥) . Since 𝑓 ′′ is
continuous in the interval of integration, we have

|𝑓 ′′ (𝑥)| < 𝑀.

for an appropriate constant 𝑀. Furthermore, |cos 𝑛𝑥| ≤ 1. It follows that

|𝑎𝑛| = (1⁄𝑛²𝜋) |∫_{−𝜋}^{𝜋} 𝑓″(𝑥) cos 𝑛𝑥 𝑑𝑥| < (1⁄𝑛²𝜋) ∫_{−𝜋}^{𝜋} 𝑀 𝑑𝑥 = 2𝑀⁄𝑛².

Similarly, |𝑏𝑛 | < 2𝑀⁄𝑛2 for all 𝑛. Hence the absolute value of each term of the Fourier series of
𝑓(𝑥) is at most equal to the corresponding term of the series

|𝑎0| + 2𝑀 (1 + 1 + 1⁄2² + 1⁄2² + 1⁄3² + 1⁄3² + ⋯)

which is convergent. Hence that Fourier series converges and the proof is complete.

Example 2: Convergence at a Jump as Indicated in Theorem 2

The rectangular wave in Example 1 has a jump at 𝑥 = 0. Its left-hand limit there is −𝑘 and its right-
hand limit is 𝑘. Hence the average of these limits is 0. The Fourier series (8) of the wave does indeed
converge to this value when 𝑥 = 0 because then all its terms are 0. Similarly for the other jumps,
this is in agreement with Theorem 2.
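The jump behavior is easy to observe numerically. Below is a minimal sketch in plain Python (the partial-sum helper `S` is our own naming, with a sample amplitude 𝑘 = 1) evaluating the partial sums of (8) at the jump and away from it:

```python
import math

k = 1.0  # sample amplitude for the square wave

def S(x, n_terms):
    # Partial sum of the square-wave series (8)
    return (4 * k / math.pi) * sum(math.sin((2 * j + 1) * x) / (2 * j + 1)
                                   for j in range(n_terms))

print(S(0.0, 200))          # exactly 0: the average of -k and k at the jump
print(S(math.pi / 2, 200))  # close to k away from the jump
```

At 𝑥 = 0 every term vanishes, so all partial sums equal the arithmetic mean of the one-sided limits, exactly as Theorem 2 predicts.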

Summary. A Fourier series of a given function 𝑓(𝑥) of period 2𝜋 is a series of the form (5) with
coefficients given by the Euler formulas (6). Theorem 2 gives conditions that are sufficient for this
series to converge and at each 𝑥 to have the value 𝑓(𝑥) , except at discontinuities of 𝑓(𝑥), where
the series equals the arithmetic mean of the left-hand and right-hand limits of 𝑓(𝑥) at that point.

11.7 ARBITRARY PERIOD (FROM PERIOD 2𝜋 TO ANY PERIOD 𝑝 = 2𝐿)

Clearly, periodic functions in applications may have any period, not just 2𝜋 as in the last section (chosen to have simple formulas). The transition from period 2𝜋 to period 𝑝 = 2𝐿 is effected by a suitable change of scale, as follows. Let 𝑓(𝑥) have period 𝑝 = 2𝐿. Then we can introduce a new variable 𝑣 such that 𝑓(𝑥), as a function of 𝑣, has period 2𝜋. If we set

(a) 𝑥 = (𝑝⁄2𝜋)𝑣 so that (b) 𝑣 = (2𝜋⁄𝑝)𝑥 = (𝜋⁄𝐿)𝑥 (1)

then 𝑣 = ±𝜋 corresponds to 𝑥 = ±𝐿. This means that 𝑓 , as a function of 𝑣, has period 2𝜋 and,
therefore, a Fourier series of the form


𝑓(𝑥) = 𝑓(𝐿𝑣⁄𝜋) = 𝑎0 + ∑_{𝑛=1}^{∞} (𝑎𝑛 cos 𝑛𝑣 + 𝑏𝑛 sin 𝑛𝑣) (2)

with coefficients obtained from (6) in the last section

𝑎0 = (1⁄2𝜋) ∫_{−𝜋}^{𝜋} 𝑓(𝐿𝑣⁄𝜋) 𝑑𝑣, 𝑎𝑛 = (1⁄𝜋) ∫_{−𝜋}^{𝜋} 𝑓(𝐿𝑣⁄𝜋) cos 𝑛𝑣 𝑑𝑣,

𝑏𝑛 = (1⁄𝜋) ∫_{−𝜋}^{𝜋} 𝑓(𝐿𝑣⁄𝜋) sin 𝑛𝑣 𝑑𝑣 (3)

We could use these formulas directly, but the change to x simplifies calculations. Since

𝑣 = (𝜋⁄𝐿)𝑥, we have 𝑑𝑣 = (𝜋⁄𝐿) 𝑑𝑥 (4)

and we integrate over 𝑥 from −𝐿 to 𝐿. Consequently, we obtain for a function 𝑓(𝑥) of period 2𝐿 the Fourier series


𝑓(𝑥) = 𝑎0 + ∑_{𝑛=1}^{∞} (𝑎𝑛 cos(𝑛𝜋𝑥⁄𝐿) + 𝑏𝑛 sin(𝑛𝜋𝑥⁄𝐿)) (5)

with the Fourier coefficients of 𝑓(𝑥) given by the Euler formulas (𝜋⁄𝐿 in 𝑑𝑥 cancels 1⁄𝜋 in (3))

𝑎0 = (1⁄2𝐿) ∫_{−𝐿}^{𝐿} 𝑓(𝑥) 𝑑𝑥 (6)(0)

𝑎𝑛 = (1⁄𝐿) ∫_{−𝐿}^{𝐿} 𝑓(𝑥) cos(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥, 𝑛 = 1, 2, … (6)(a)

𝑏𝑛 = (1⁄𝐿) ∫_{−𝐿}^{𝐿} 𝑓(𝑥) sin(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥, 𝑛 = 1, 2, … (6)(b)

Just as in the previous section, we continue to call (5) with any coefficients a trigonometric series. And we can integrate from 0 to 2𝐿 or over any other interval of length 2𝐿.

Example 1: Periodic Rectangular Wave

Find the Fourier series of the function

𝑓(𝑥) = { 0 if −2 < 𝑥 < −1;  𝑘 if −1 < 𝑥 < 1;  0 if 1 < 𝑥 < 2 },  𝑝 = 2𝐿 = 4, 𝐿 = 2

Solution: From (6.0) we obtain 𝑎0 = 𝑘⁄2. From (6.a) we obtain

𝑎𝑛 = (1⁄2) ∫_{−2}^{2} 𝑓(𝑥) cos(𝑛𝜋𝑥⁄2) 𝑑𝑥 = (1⁄2) ∫_{−1}^{1} 𝑘 cos(𝑛𝜋𝑥⁄2) 𝑑𝑥 = (2𝑘⁄𝑛𝜋) sin(𝑛𝜋⁄2)

Thus 𝑎𝑛 = 0 if 𝑛 is even and

𝑎𝑛 = 2𝑘⁄𝑛𝜋 𝑖𝑓 𝑛 = 1, 5, 9, … 𝑎𝑛 = −2𝑘⁄𝑛𝜋 𝑖𝑓 𝑛 = 3, 7, 11, …

From (6b) we find that 𝑏𝑛 = 0 for 𝑛 = 1, 2, 3, … . Hence the result is a Fourier cosine series (that is, it has no sine terms):

𝑓(𝑥) = 𝑘⁄2 + (2𝑘⁄𝜋)(cos(𝜋𝑥⁄2) − (1⁄3) cos(3𝜋𝑥⁄2) + (1⁄5) cos(5𝜋𝑥⁄2) − + ⋯).
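The coefficients of Example 1 can be checked against the Euler formulas (6) by numerical quadrature. This is a minimal sketch in plain Python (the trapezoidal helper `integral` and the sample value 𝑘 = 1 are our own choices):

```python
import math

k, L = 1.0, 2.0  # sample amplitude; period p = 2L = 4

def f(x):
    return k if -1 < x < 1 else 0.0

def integral(g, a, b, n=40000):
    # Composite trapezoidal rule over [a, b]
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

a0 = integral(f, -L, L) / (2 * L)                                     # expect k/2
a1 = integral(lambda x: f(x) * math.cos(math.pi * x / L), -L, L) / L  # expect 2k/pi
print(a0, a1)
```

Both values agree with the closed forms 𝑎0 = 𝑘⁄2 and 𝑎1 = 2𝑘⁄𝜋 derived above, to within quadrature error.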

Example 2: Periodic Rectangular Wave. Change of Scale

Find the Fourier series of the function

𝑓(𝑥) = { −𝑘 if −2 < 𝑥 < 0;  𝑘 if 0 < 𝑥 < 2 },  𝑝 = 2𝐿 = 4, 𝐿 = 2

Solution. Since 𝐿 = 2, we have in (3) 𝑣 = 𝜋𝑥⁄2 and obtain from (8), with 𝑣 instead of 𝑥, that is,

𝑔(𝑣) = (4𝑘⁄𝜋)(sin 𝑣 + (1⁄3) sin 3𝑣 + (1⁄5) sin 5𝑣 + ⋯)

the present Fourier Series

𝑓(𝑥) = (4𝑘⁄𝜋)(sin(𝜋𝑥⁄2) + (1⁄3) sin(3𝜋𝑥⁄2) + (1⁄5) sin(5𝜋𝑥⁄2) + ⋯)

Confirm this by using (6) and integrating.

Example 3: Half-Wave Rectifier

A sinusoidal voltage 𝐸 sin 𝜔𝑡 , where 𝑡 is time, is passed through a half-wave rectifier that clips the
negative portion of the wave as shown in the Figure above. Find the Fourier series of the resulting
periodic function

𝑢(𝑡) = { 0 if −𝐿 < 𝑡 < 0;  𝐸 sin 𝜔𝑡 if 0 < 𝑡 < 𝐿 },  𝑝 = 2𝐿 = 2𝜋⁄𝜔, 𝐿 = 𝜋⁄𝜔

Solution. Since 𝑢 = 0 when −𝐿 < 𝑡 < 0 , we obtain from (6.0), with 𝑡 instead of 𝑥,

𝑎0 = (𝜔⁄2𝜋) ∫_0^{𝜋⁄𝜔} 𝐸 sin 𝜔𝑡 𝑑𝑡 = 𝐸⁄𝜋

and from (6a), by using formula below with 𝑥 = 𝜔𝑡 and 𝑦 = 𝑛𝜔𝑡 ,

1
sin 𝑥 cos 𝑦 = [sin(𝑥 + 𝑦) + sin(𝑥 − 𝑦)]
2

𝑎𝑛 = (𝜔⁄𝜋) ∫_0^{𝜋⁄𝜔} 𝐸 sin 𝜔𝑡 cos 𝑛𝜔𝑡 𝑑𝑡 = (𝜔𝐸⁄2𝜋) ∫_0^{𝜋⁄𝜔} [sin(1 + 𝑛)𝜔𝑡 + sin(1 − 𝑛)𝜔𝑡] 𝑑𝑡.

If 𝑛 = 1, the integral on the right is zero, and if 𝑛 = 2, 3, …, we readily obtain

𝑎𝑛 = (𝜔𝐸⁄2𝜋) [−cos(1 + 𝑛)𝜔𝑡⁄((1 + 𝑛)𝜔) − cos(1 − 𝑛)𝜔𝑡⁄((1 − 𝑛)𝜔)]_0^{𝜋⁄𝜔}

= (𝐸⁄2𝜋) [(−cos(1 + 𝑛)𝜋 + 1)⁄(1 + 𝑛) + (−cos(1 − 𝑛)𝜋 + 1)⁄(1 − 𝑛)]

If 𝑛 is odd, this is equal to zero, and for even 𝑛 we have

𝑎𝑛 = (𝐸⁄2𝜋) (2⁄(1 + 𝑛) + 2⁄(1 − 𝑛)) = −2𝐸⁄((𝑛 − 1)(𝑛 + 1)𝜋) (𝑛 = 2, 4, …).

In a similar fashion we find from (6b) that 𝑏1 = 𝐸⁄2 and 𝑏𝑛 = 0 for 𝑛 = 2, 3, … . Consequently,

𝑢(𝑡) = 𝐸⁄𝜋 + (𝐸⁄2) sin 𝜔𝑡 − (2𝐸⁄𝜋) ((1⁄(1·3)) cos 2𝜔𝑡 + (1⁄(3·5)) cos 4𝜔𝑡 + ⋯).
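The rectifier coefficients can be verified numerically from (6) with sample values 𝐸 = 𝜔 = 1 (a sketch in plain Python; the helper `integral` is our own trapezoidal rule, not part of the notes):

```python
import math

E, omega = 1.0, 1.0  # sample values
L = math.pi / omega

def u(t):
    return E * math.sin(omega * t) if 0.0 < t < L else 0.0

def integral(g, a, b, n=40000):
    # Composite trapezoidal rule over [a, b]
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

a0 = integral(u, -L, L) / (2 * L)                                  # expect E/pi
b1 = integral(lambda t: u(t) * math.sin(omega * t), -L, L) / L     # expect E/2
a2 = integral(lambda t: u(t) * math.cos(2 * omega * t), -L, L) / L # expect -2E/(1*3*pi)
print(a0, b1, a2)
```

The three numbers match 𝐸⁄𝜋, 𝐸⁄2 and −2𝐸⁄(1·3·𝜋), the leading terms of the series just derived.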

11.8 EVEN AND ODD FUNCTION

[Figures: an even function (left) and an odd function (right)]

If 𝑓(𝑥) is an even function, that is, 𝑓(−𝑥) = 𝑓(𝑥) (as indicated in the figure above), its Fourier series
(5) reduces to a Fourier cosine series


𝑓(𝑥) = 𝑎0 + ∑_{𝑛=1}^{∞} 𝑎𝑛 cos(𝑛𝜋𝑥⁄𝐿) (𝑓 even) (5*)

with coefficients (note: integration from 0 to 𝐿 only!)

𝑎0 = (1⁄𝐿) ∫_0^{𝐿} 𝑓(𝑥) 𝑑𝑥 (6*)(i)

𝑎𝑛 = (2⁄𝐿) ∫_0^{𝐿} 𝑓(𝑥) cos(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥, 𝑛 = 1, 2, … (6*)(ii)

If 𝑓 is an odd function, that is, 𝑓(−𝑥) = −𝑓(𝑥) (as indicated in the figure above), its Fourier series (5) reduces to a Fourier sine series


𝑓(𝑥) = ∑_{𝑛=1}^{∞} 𝑏𝑛 sin(𝑛𝜋𝑥⁄𝐿) (𝑓 odd) (5**)

with coefficients

𝑏𝑛 = (2⁄𝐿) ∫_0^{𝐿} 𝑓(𝑥) sin(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥, 𝑛 = 1, 2, … (6**)

These formulas follow from (5) and (6) by remembering from calculus that the definite integral gives
the net area (area above the axis minus area below the axis) under the curve of a function between
the limits of integration. This implies

∫_{−𝐿}^{𝐿} 𝑔(𝑥) 𝑑𝑥 = 2 ∫_0^{𝐿} 𝑔(𝑥) 𝑑𝑥 for even 𝑔 (7)(a)

∫_{−𝐿}^{𝐿} ℎ(𝑥) 𝑑𝑥 = 0 for odd ℎ (7)(b)

Formula (7b) implies the reduction to the cosine series (even 𝑓 makes 𝑓(𝑥) sin(𝑛𝜋𝑥⁄𝐿) odd since sin is odd) and to the sine series (odd 𝑓 makes 𝑓(𝑥) cos(𝑛𝜋𝑥⁄𝐿) odd since cos is even). Similarly, (7a) reduces the integrals in (6*) and (6**) to integrals from 0 to 𝐿. These reductions are obvious from the graphs of an even and an odd function.

Summary
Even Function of Period 𝟐𝛑. If 𝑓 is even and 𝐿 = 𝜋, then

𝑓(𝑥) = 𝑎0 + ∑_{𝑛=1}^{∞} 𝑎𝑛 cos 𝑛𝑥 (𝑓 even)

with coefficients

𝑎0 = (1⁄𝜋) ∫_0^{𝜋} 𝑓(𝑥) 𝑑𝑥, 𝑎𝑛 = (2⁄𝜋) ∫_0^{𝜋} 𝑓(𝑥) cos 𝑛𝑥 𝑑𝑥, 𝑛 = 1, 2, …

Odd Function of Period 𝟐𝝅. If 𝑓 is odd and 𝐿 = 𝜋, then



𝑓(𝑥) = ∑_{𝑛=1}^{∞} 𝑏𝑛 sin 𝑛𝑥 (𝑓 odd)

with coefficients

𝑏𝑛 = (2⁄𝜋) ∫_0^{𝜋} 𝑓(𝑥) sin 𝑛𝑥 𝑑𝑥, 𝑛 = 1, 2, …

Example 4: The rectangular wave in Example 1 is even. Hence it follows without calculation that its
Fourier series is a Fourier cosine series, the 𝑏𝑛 are all zero. Similarly, it follows that the Fourier series
of the odd function in Example 2 is a Fourier sine series.
In Example 3 you can see that the Fourier cosine series represents 𝑢(𝑡) − 𝐸⁄𝜋 − (𝐸⁄2) sin 𝜔𝑡. Can you prove that this is an even function?

Further simplifications result from the following property, whose very simple proof is left to the
student.

Sum and Scalar Multiple

The Fourier coefficients of a sum 𝑓1 + 𝑓2 are the sums of the corresponding Fourier coefficients of 𝑓1
and 𝑓2.

The Fourier coefficients of 𝑐𝑓 are 𝑐 times the corresponding Fourier coefficients of 𝑓.

Example 5:

[Figures: the sawtooth wave 𝑓(𝑥) (left); partial sums 𝑆1, 𝑆2, 𝑆3, 𝑆20 (right)]

Find the Fourier Series of the function

𝑓(𝑥) = 𝑥 + 𝜋 𝑖𝑓 − 𝜋 < 𝑥 < 𝜋 𝑎𝑛𝑑 𝑓(𝑥 + 2𝜋) = 𝑓(𝑥)

Solution. We have 𝑓 = 𝑓1 + 𝑓2, where 𝑓1 = 𝑥 and 𝑓2 = 𝜋. The Fourier coefficients of 𝑓2 are zero, except for the first one (the constant term), which is 𝜋. Hence, by the sum rule above, the Fourier coefficients 𝑎𝑛, 𝑏𝑛 are those of 𝑓1, except for 𝑎0, which is 𝜋. Since 𝑓1 is odd, 𝑎𝑛 = 0 for 𝑛 = 1, 2, …, and

𝑏𝑛 = (2⁄𝜋) ∫_0^{𝜋} 𝑓1(𝑥) sin 𝑛𝑥 𝑑𝑥 = (2⁄𝜋) ∫_0^{𝜋} 𝑥 sin 𝑛𝑥 𝑑𝑥

Integrating by parts, we obtain

𝑏𝑛 = (2⁄𝜋) [−𝑥 cos 𝑛𝑥⁄𝑛 |_0^{𝜋} + (1⁄𝑛) ∫_0^{𝜋} cos 𝑛𝑥 𝑑𝑥] = −(2⁄𝑛) cos 𝑛𝜋

Hence 𝑏1 = 2, 𝑏2 = −2⁄2, 𝑏3 = 2⁄3, 𝑏4 = −2⁄4, …, and the Fourier series of 𝑓(𝑥) is

𝑓(𝑥) = 𝜋 + 2 (sin 𝑥 − (1⁄2) sin 2𝑥 + (1⁄3) sin 3𝑥 − + ⋯).
2 3

which is illustrated in the figure above on the right.
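The closed form 𝑏𝑛 = −(2⁄𝑛) cos 𝑛𝜋 can be confirmed by evaluating the coefficient integral numerically (a sketch in plain Python; the helpers `integral` and `b` are our own names):

```python
import math

def integral(g, a, b, n=20000):
    # Composite trapezoidal rule over [a, b]
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

def b(n_):
    # (2/pi) * integral of x sin(nx) over [0, pi]; should equal -(2/n) cos(n pi)
    return (2 / math.pi) * integral(lambda x: x * math.sin(n_ * x), 0.0, math.pi)

print([round(b(n_), 4) for n_ in (1, 2, 3, 4)])  # ~ [2, -1, 0.6667, -0.5]
```

The alternating signs and the 1⁄𝑛 decay of the coefficients mirror the sawtooth's jump discontinuity at 𝑥 = ±𝜋.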

11.9 HALF-RANGE EXPANSIONS

(0) The given function 𝑓(𝑥)

(a) 𝑓(𝑥) continued as an even periodic function of period 2𝐿

(b) 𝑓(𝑥) continued as an odd periodic function of period 2𝐿

Half-range expansions are Fourier series. The idea is simple and useful. The figures above explain it. We want to represent 𝑓(𝑥) in Figure (0) by a Fourier series, where 𝑓(𝑥) may be the shape of a distorted violin string or the temperature in a metal bar of length 𝐿, for example. Now comes the idea.

We could extend 𝑓(𝑥) as a function of period 𝐿 and develop the extended function into a Fourier
series. But this series would, in general, contain both cosine and sine terms. We can do better and
get simpler series. Indeed, for our given 𝑓 we can calculate Fourier coefficients from (6*) or from
(6**). And we have a choice and can take what seems more practical. If we use (6*), we get (5*).
This is the even periodic extension 𝑓1 of 𝑓 in Figure (a). If we choose (6**) instead, we get the odd
periodic extension 𝑓2 of 𝑓 in Figure (b).

Both extensions have period 2𝐿. This motivates the name half-range expansions: 𝑓 is given (and of
physical interest) only on half the range, that is, on half the interval of periodicity of length 2𝐿.

Example 6.

Find the two half-range expansions of the function

𝑓(𝑥) = { (2𝑘⁄𝐿)𝑥 if 0 < 𝑥 < 𝐿⁄2;  (2𝑘⁄𝐿)(𝐿 − 𝑥) if 𝐿⁄2 < 𝑥 < 𝐿 }

Solution.

(a) Even periodic extension. From (6*) we obtain

𝑎0 = (1⁄𝐿) [(2𝑘⁄𝐿) ∫_0^{𝐿⁄2} 𝑥 𝑑𝑥 + (2𝑘⁄𝐿) ∫_{𝐿⁄2}^{𝐿} (𝐿 − 𝑥) 𝑑𝑥] = 𝑘⁄2

𝑎𝑛 = (2⁄𝐿) [(2𝑘⁄𝐿) ∫_0^{𝐿⁄2} 𝑥 cos(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥 + (2𝑘⁄𝐿) ∫_{𝐿⁄2}^{𝐿} (𝐿 − 𝑥) cos(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥].

We consider 𝑎𝑛. For the first integral we obtain by integration by parts

∫_0^{𝐿⁄2} 𝑥 cos(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥 = (𝐿𝑥⁄𝑛𝜋) sin(𝑛𝜋𝑥⁄𝐿) |_0^{𝐿⁄2} − (𝐿⁄𝑛𝜋) ∫_0^{𝐿⁄2} sin(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥

= (𝐿²⁄2𝑛𝜋) sin(𝑛𝜋⁄2) + (𝐿²⁄𝑛²𝜋²) (cos(𝑛𝜋⁄2) − 1).

Similarly, for the second integral we obtain

∫_{𝐿⁄2}^{𝐿} (𝐿 − 𝑥) cos(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥 = (𝐿⁄𝑛𝜋)(𝐿 − 𝑥) sin(𝑛𝜋𝑥⁄𝐿) |_{𝐿⁄2}^{𝐿} + (𝐿⁄𝑛𝜋) ∫_{𝐿⁄2}^{𝐿} sin(𝑛𝜋𝑥⁄𝐿) 𝑑𝑥

= (𝐿⁄𝑛𝜋)(0 − (𝐿 − 𝐿⁄2) sin(𝑛𝜋⁄2)) − (𝐿²⁄𝑛²𝜋²) (cos 𝑛𝜋 − cos(𝑛𝜋⁄2)).

We insert these two results into the formula for 𝑎𝑛 . The sine terms cancel and so does a factor 𝐿2 .
This gives

𝑎𝑛 = (4𝑘⁄𝑛²𝜋²) (2 cos(𝑛𝜋⁄2) − cos 𝑛𝜋 − 1).

Thus,

𝑎2 = −16𝑘⁄(22 𝜋 2 ), 𝑎6 = −16𝑘⁄(62 𝜋 2 ), 𝑎10 = −16𝑘⁄(102 𝜋 2 ), …

and 𝑎𝑛 = 0 if 𝑛 ≠ 2, 6, 10, 14, … . Hence the first half-range expansion of 𝑓(𝑥) is (Figure (a))

𝑓(𝑥) = 𝑘⁄2 − (16𝑘⁄𝜋²) ((1⁄2²) cos(2𝜋𝑥⁄𝐿) + (1⁄6²) cos(6𝜋𝑥⁄𝐿) + ⋯).

The Fourier cosine series represents the even periodic extension of the given function 𝑓(𝑥), of period 2𝐿.

(b) Odd periodic extension. Similarly, from (6**) we obtain

𝑏𝑛 = (8𝑘⁄𝑛²𝜋²) sin(𝑛𝜋⁄2).

Hence the other half-range expansion of 𝑓(𝑥) is (Figure (b))

𝑓(𝑥) = (8𝑘⁄𝜋²) ((1⁄1²) sin(𝜋𝑥⁄𝐿) − (1⁄3²) sin(3𝜋𝑥⁄𝐿) + (1⁄5²) sin(5𝜋𝑥⁄𝐿) − + ⋯).

The series represents the odd periodic extension of 𝑓(𝑥), of period 2𝐿.
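The even-extension coefficients of Example 6 can be checked numerically against the closed form just derived (a sketch in plain Python with sample values 𝑘 = 1, 𝐿 = 2; the helper names are our own):

```python
import math

k, L = 1.0, 2.0  # sample values

def f(x):
    # The triangular function of Example 6 on [0, L]
    return 2 * k * x / L if x < L / 2 else 2 * k * (L - x) / L

def integral(g, a, b, n=20000):
    # Composite trapezoidal rule over [a, b]
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

def a(n_):
    # (6*)(ii) applied to f on [0, L]
    return (2 / L) * integral(lambda x: f(x) * math.cos(n_ * math.pi * x / L), 0.0, L)

def closed(n_):
    # Closed form from Example 6: (4k/(n^2 pi^2)) (2 cos(n pi/2) - cos(n pi) - 1)
    return 4 * k / (n_**2 * math.pi**2) * (2 * math.cos(n_ * math.pi / 2)
                                           - math.cos(n_ * math.pi) - 1)

print(round(a(2), 4), round(closed(2), 4))  # both ~ -16k/(2^2 pi^2)
```

Numerically, 𝑎1 comes out as 0 and 𝑎2 as −16𝑘⁄(2²𝜋²), matching the pattern 𝑎𝑛 = 0 unless 𝑛 = 2, 6, 10, … .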

PARTIAL DIFFERENTIAL EQUATIONS
(PDEs)
WEEK 12: PARTIAL DIFFERENTIAL EQUATIONS
12.1 BASIC CONCEPTS OF PDE

A partial differential equation (PDE) is an equation involving one or more partial derivatives of an
(unknown) function, call it 𝑢, that depends on two or more variables, often time 𝑡 and one or several
variables in space. The order of the highest derivative is called the order of the PDE. Just as was the
case for ODEs, second-order PDEs will be the most important ones in applications.

Just as for ordinary differential equations (ODEs) we say that a PDE is linear if it is of the first degree
in the unknown function u and its partial derivatives. Otherwise we call it nonlinear. Thus, all the
equations in Example 1 are linear. We call a linear PDE homogeneous if each of its terms contains
either 𝑢 or one of its partial derivatives. Otherwise we call the equation nonhomogeneous. Thus, (4)
in Example 1 (with 𝑓 not identically zero) is nonhomogeneous, whereas the other equations are
homogeneous.

Example 1: Important Second-Order PDEs

∂²𝑢⁄∂𝑡² = 𝑐² ∂²𝑢⁄∂𝑥²   One-dimensional wave equation (1)

∂𝑢⁄∂𝑡 = 𝑐² ∂²𝑢⁄∂𝑥²   One-dimensional heat equation (2)

∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² = 0   Two-dimensional Laplace equation (3)

∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² = 𝑓(𝑥, 𝑦)   Two-dimensional Poisson equation (4)

∂²𝑢⁄∂𝑡² = 𝑐² (∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦²)   Two-dimensional wave equation (5)

∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² + ∂²𝑢⁄∂𝑧² = 0   Three-dimensional Laplace equation (6)

*Here 𝑐 is a positive constant, 𝑡 is time, 𝑥, 𝑦, 𝑧 are Cartesian coordinates, and dimension is the number of these coordinates in the equation.

A solution of a PDE in some region 𝑅 of the space of the independent variables is a function that has
all the partial derivatives appearing in the PDE in some domain 𝐷 containing R, and satisfies the
PDE everywhere in 𝑅.

Often one merely requires that the function is continuous on the boundary of 𝑅, has those
derivatives in the interior of 𝑅, and satisfies the PDE in the interior of R. Letting 𝑅 lie in 𝐷 simplifies
the situation regarding derivatives on the boundary of 𝑅, which is then the same on the boundary
as it is in the interior of 𝑅.

In general, the totality of solutions of a PDE is very large. For example, the functions

𝑢 = 𝑥2 − 𝑦2 𝑢 = 𝑒 𝑥 cos 𝑦 𝑢 = sin 𝑥 cosh 𝑦 𝑢 = ln(𝑥 2 + 𝑦 2 ) (7)

which are entirely different from each other, are solutions of (3), as you may verify. We shall see
later that the unique solution of a PDE corresponding to a given physical problem will be obtained
by the use of additional conditions arising from the problem. For instance, this may be the condition
that the solution 𝑢 assume given values on the boundary of the region 𝑅 (“boundary conditions”). Or,
when time 𝑡 is one of the variables, 𝑢 (or 𝑢𝑡 = 𝜕𝑢⁄𝜕𝑡 or both) may be prescribed at 𝑡 = 0 (“initial
conditions”).

We know that if an ODE is linear and homogeneous, then from known solutions we can obtain
further solutions by superposition. For PDEs the situation is quite similar:

Theorem 1: Fundamental Theorem on Superposition

If 𝑢1 and 𝑢2 are solutions of a homogeneous linear PDE in some region 𝑅, then

𝑢 = 𝑐1 𝑢1 + 𝑐2 𝑢2

with any constants 𝑐1 and 𝑐2 is also a solution of that PDE in the region 𝑅.

In PDE, the notational convention is as the following examples:

∂𝑢⁄∂𝑥 = 𝑢𝑥, ∂𝑢⁄∂𝑡 = 𝑢𝑡, ∂²𝑢⁄∂𝑥² = 𝑢𝑥𝑥, ∂²𝑢⁄∂𝑥∂𝑡 = 𝑢𝑥𝑡

Example 2: Solving 𝑢𝑥𝑥 − 𝑢 = 0 Like an ODE. Find solutions 𝑢 of the PDE 𝑢𝑥𝑥 − 𝑢 = 0 depending
on 𝑥 and 𝑦.

Solution: Since no 𝑦-derivatives occur, we can solve this PDE like 𝑢″ − 𝑢 = 0. The characteristic equation is 𝑟² − 1 = 0. Its roots are 𝑟1 = 1 and 𝑟2 = −1. Hence a basis of solutions is 𝑒^𝑥 and 𝑒^(−𝑥):

𝑢(𝑥, 𝑦) = 𝐴(𝑦)𝑒 𝑥 + 𝐵(𝑦)𝑒 −𝑥

with arbitrary functions 𝐴 and 𝐵. We thus have a great variety of solutions. Check the result by differentiation.
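The suggested check can be done numerically as well as by hand. Below is a minimal sketch in plain Python: the choices of 𝐴(𝑦) and 𝐵(𝑦) are arbitrary (hypothetical) examples, and 𝑢𝑥𝑥 is approximated by a central second difference:

```python
import math

# Arbitrary (hypothetical) choices for the free functions A(y) and B(y)
def A(y): return y * y
def B(y): return math.sin(y)

def u(x, y):
    return A(y) * math.exp(x) + B(y) * math.exp(-x)

def u_xx(x, y, h=1e-4):
    # Central second difference in x
    return (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / (h * h)

for x, y in [(0.3, 1.2), (-1.0, 0.5)]:
    print(u_xx(x, y) - u(x, y))  # residual of u_xx - u = 0, ~ 0
```

Any other smooth 𝐴(𝑦), 𝐵(𝑦) would give the same near-zero residual, illustrating that the PDE has arbitrary functions (not just constants) in its general solution.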

Example 3: Solving 𝑢𝑥𝑦 = −𝑢𝑥 Like an ODE. Find solutions 𝑢 = 𝑢(𝑥, 𝑦) of this PDE.

Solution: Setting 𝑢𝑥 = 𝑝, we have 𝑝𝑦 = −𝑝, 𝑝𝑦 ⁄𝑝 = −1, ln|𝑝| = −𝑦 + ã(𝑥), 𝑝 = 𝑎(𝑥)𝑒 −𝑦 and by


integration with respect to 𝑥,

𝑢(𝑥, 𝑦) = 𝑓(𝑥)𝑒^(−𝑦) + 𝑔(𝑦), where 𝑓(𝑥) = ∫ 𝑎(𝑥) 𝑑𝑥;

here, 𝑓(𝑥) and 𝑔(𝑦) are arbitrary.

12.2 SOLUTION BY DIRECT INTEGRATION

Some PDEs can be solved by direct partial integration. However, only simple PDEs can be solved
with this approach. Consider the following examples.

Example 4: Find the solution of 𝑢𝑥𝑥 = 6𝑥𝑒^(−𝑡) given the boundary conditions 𝑢(0, 𝑡) = 𝑡 and 𝑢𝑥(0, 𝑡) = 𝑒^(−𝑡).

Solution: Integrate the given PDE with respect to 𝑥:

𝑢𝑥 = 3𝑥 2 𝑒 −𝑡 + 𝑓(𝑡)

This equation can be integrated again with respect to 𝑥:

𝑢(𝑥, 𝑡) = 𝑥 3 𝑒 −𝑡 + 𝑥𝑓(𝑡) + 𝑔(𝑡)

Using the boundary conditions 𝑢(0, 𝑡) = 𝑡 and 𝑢𝑥 (0, 𝑡) = 𝑒 −𝑡 , 𝑓(𝑡) = 𝑒 −𝑡 and 𝑔(𝑡) = 𝑡. The exact
solution is found to be:

𝑢(𝑥, 𝑡) = 𝑥 3 𝑒 −𝑡 + 𝑥𝑒 −𝑡 + 𝑡
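The exact solution of Example 4 can be verified directly; a minimal sketch in plain Python (our own check, approximating 𝑢𝑥𝑥 by a central second difference):

```python
import math

def u(x, t):
    # Candidate solution from Example 4
    return x**3 * math.exp(-t) + x * math.exp(-t) + t

def u_xx(x, t, h=1e-4):
    # Central second difference in x
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / (h * h)

x, t = 0.7, 0.4
print(u_xx(x, t) - 6 * x * math.exp(-t))  # PDE residual, ~ 0
print(u(0.0, t) - t)                      # boundary condition u(0, t) = t, exactly 0
```

Checking the solution against both the PDE and its conditions is good practice whenever direct integration introduces arbitrary functions 𝑓(𝑡) and 𝑔(𝑡).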

Example 4*: Find the solution of the partial differential equation 𝑢𝑥𝑦 = sin 𝑥 cos 𝑦, given the boundary conditions 𝑢𝑥 = 2𝑥 when 𝑦 = 𝜋⁄2, and 𝑢 = 2 sin 𝑦 when 𝑥 = 𝜋.

Solution:

Integrate the given PDE with respect to y:

𝑢𝑥 = sin 𝑥 sin 𝑦 + 𝑓(𝑥)


Using the first boundary condition, 𝑢𝑥 = 2𝑥 at 𝑦 = 𝜋⁄2:

2𝑥 = sin 𝑥 sin(𝜋⁄2) + 𝑓(𝑥)

The value of 𝑓(𝑥) is found to be 2𝑥 − sin 𝑥 and the resulting equation is:

𝑢𝑥 = sin 𝑥 sin 𝑦 + 2𝑥 − sin 𝑥

𝑢(𝑥, 𝑦) can be found by integrating this equation with respect to 𝑥:

𝑢(𝑥, 𝑦) = 𝑥² − cos 𝑥 sin 𝑦 + cos 𝑥 + 𝑔(𝑦)

The value of 𝑔(𝑦) can be found by using the second boundary condition 𝑢 = 2 sin 𝑦 at 𝑥 = 𝜋.

2 sin 𝑦 = 𝜋 2 − cos 𝜋 sin 𝑦 + cos 𝜋 + 𝑔(𝑦)

𝑔(𝑦) = 1 − 𝜋 2 + sin 𝑦

The exact solution is:

𝑢(𝑥, 𝑦) = 1 + 𝑥² − 𝜋² − cos 𝑥 sin 𝑦 + cos 𝑥 + sin 𝑦

12.3 SEPARABLE PARTIAL DIFFERENTIAL EQUATIONS

Partial differential equations (PDEs), like ordinary differential equations (ODEs), are classified as either linear or nonlinear. Analogous to a linear ODE, the dependent variable and its partial derivatives in a linear PDE occur only to the first power.

LINEAR PARTIAL DIFFERENTIAL EQUATION

Let u(x,y) denote the dependent variable and let x and y denote the independent variables. Then the
general form of a linear second-order partial differential equation is given by

𝐴 ∂²𝑢⁄∂𝑥² + 𝐵 ∂²𝑢⁄∂𝑥∂𝑦 + 𝐶 ∂²𝑢⁄∂𝑦² + 𝐷 ∂𝑢⁄∂𝑥 + 𝐸 ∂𝑢⁄∂𝑦 + 𝐹𝑢 = 𝐺 (1)

where the coefficients A, B, C, …, G are functions of x and y. When G(x, y) = 0, equation (1) is said to be homogeneous; otherwise, it is nonhomogeneous.

SOLUTION OF A PDE

A solution of a linear partial differential equation (1) is a function u(x, y) of two independent
variables that possesses all partial derivatives occurring in the equation and that satisfies the
equation in some region of the xy-plane.

Not only is it often difficult to obtain a general solution of a linear second-order PDE, but a general solution is usually not all that useful in applications. Thus the focus throughout will be on finding particular solutions of some of the more important linear PDEs, that is, equations that appear in many applications.

SEPARATION OF VARIABLES

There are several methods that can be tried to find particular solutions of a linear PDE. One of them is called the method of separation of variables. In this method, a particular solution is sought in the form of a product of a function of x and a function of y:

𝑢(𝑥, 𝑦) = 𝑋(𝑥)𝑌(𝑦)

With this assumption it is sometimes possible to reduce a linear PDE in two variables to two ODEs.

Note that:

∂𝑢⁄∂𝑥 = 𝑋′𝑌, ∂𝑢⁄∂𝑦 = 𝑋𝑌′, ∂²𝑢⁄∂𝑥² = 𝑋″𝑌, ∂²𝑢⁄∂𝑦² = 𝑋𝑌″

where the primes denote ordinary differentiation.

Example 5:

Find product solutions of

∂²𝑢⁄∂𝑥² = 4 ∂𝑢⁄∂𝑦

Solution:

Substituting 𝑢(𝑥, 𝑦) = 𝑋(𝑥)𝑌(𝑦) into the partial differential equation yields

𝑋 ′′ 𝑌 = 4𝑋𝑌 ′ .

After dividing both sides by 4𝑋𝑌, we have separated the variables:

𝑋″⁄4𝑋 = 𝑌′⁄𝑌
Since the left-hand side of the last equation is independent of 𝑦 and is equal to the right-hand side,
which is independent of 𝑥, we conclude that both sides of the equation are independent of 𝑥 and 𝑦.

In other words, each side of the equation must be a constant. In practice it is convenient to write
this real separation constant as −𝜆 (using 𝜆 would lead to the same solutions).

From the two equalities

𝑋″⁄4𝑋 = 𝑌′⁄𝑌 = −𝜆
we obtain the two linear ordinary differential equations

𝑋″ + 4𝜆𝑋 = 0 and 𝑌′ + 𝜆𝑌 = 0 (2)

Now, we consider three cases for 𝜆: zero, negative, or positive, that is, 𝜆 = 0, 𝜆 = −𝛼² < 0, and 𝜆 = 𝛼² > 0, where 𝛼 > 0.

Case 1: If 𝜆 = 0, then the two ODEs in (2) are

𝑋″ = 0 and 𝑌′ = 0

Solving each equation (by, say integration), we find 𝑋 = 𝑐1 + 𝑐2 𝑥 and 𝑌 = 𝑐3 . Thus a particular
product solution of the given PDE is

𝑢 = 𝑋𝑌 = (𝑐1 + 𝑐2 𝑥)𝑐3 = 𝐴1 + 𝐵1 𝑥 , (3)

where we have replaced 𝑐1𝑐3 and 𝑐2𝑐3 by 𝐴1 and 𝐵1, respectively.

Case 2: If 𝜆 = −𝛼², then the DEs in (2) are

𝑋″ − 4𝛼²𝑋 = 0 and 𝑌′ − 𝛼²𝑌 = 0.

Their general solutions

𝑋 = 𝑐4 cosh 2𝛼𝑥 + 𝑐5 sinh 2𝛼𝑥 and 𝑌 = 𝑐6 𝑒^(𝛼²𝑦)

give yet another particular solution

𝑢 = 𝑋𝑌 = (𝑐4 cosh 2𝛼𝑥 + 𝑐5 sinh 2𝛼𝑥) 𝑐6 𝑒^(𝛼²𝑦),

𝑢 = 𝐴2 𝑒^(𝛼²𝑦) cosh 2𝛼𝑥 + 𝐵2 𝑒^(𝛼²𝑦) sinh 2𝛼𝑥, (4)

where 𝐴2 = 𝑐4𝑐6 and 𝐵2 = 𝑐5𝑐6.

Case 3: If 𝜆 = 𝛼², then the DEs

𝑋″ + 4𝛼²𝑋 = 0 and 𝑌′ + 𝛼²𝑌 = 0

and their general solutions

𝑋 = 𝑐7 cos 2𝛼𝑥 + 𝑐8 sin 2𝛼𝑥 and 𝑌 = 𝑐9 𝑒^(−𝛼²𝑦)

give yet another particular solution

𝑢 = 𝐴3 𝑒^(−𝛼²𝑦) cos 2𝛼𝑥 + 𝐵3 𝑒^(−𝛼²𝑦) sin 2𝛼𝑥, (5)

where 𝐴3 = 𝑐7𝑐9 and 𝐵3 = 𝑐8𝑐9.
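That the Case 3 product actually solves 𝑢𝑥𝑥 = 4𝑢𝑦 can be checked numerically. This is a minimal sketch in plain Python with arbitrary (hypothetical) sample constants, using finite differences for the derivatives:

```python
import math

alpha, A3, B3 = 0.8, 1.5, -0.7  # arbitrary sample constants

def u(x, y):
    # Case 3 product solution (5)
    return math.exp(-alpha**2 * y) * (A3 * math.cos(2 * alpha * x)
                                      + B3 * math.sin(2 * alpha * x))

def u_xx(x, y, h=1e-4):
    # Central second difference in x
    return (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / (h * h)

def u_y(x, y, h=1e-5):
    # Central first difference in y
    return (u(x, y + h) - u(x, y - h)) / (2 * h)

x, y = 0.4, 0.9
print(u_xx(x, y) - 4 * u_y(x, y))  # residual of u_xx = 4 u_y, ~ 0
```

The residual stays near zero for any 𝛼, 𝐴3, 𝐵3, since 𝑢𝑥𝑥 = −4𝛼²𝑢 and 𝑢𝑦 = −𝛼²𝑢 for this product.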

SUPERPOSITION PRINCIPLE

If u1, u2, …, uk are solutions of a homogeneous linear partial differential equation, then the linear
combination

u = c1u1 + c2u2 + … + ckuk

where the ci , i = 1, 2, …, k are constants, is also a solution.

In addition, whenever there is an infinite set u1, u2, u3, … of solutions of a homogeneous linear equation, then another solution u can be constructed by forming the infinite series

𝑢 = ∑_{𝑘=1}^{∞} 𝑐𝑘 𝑢𝑘

where the ck, k = 1, 2, 3, … are constants.

CLASSIFICATION OF EQUATIONS

A linear second-order partial differential equation in two independent variables with constant
coefficients can be classified as one of three types. This classification depends only on the
coefficients of the second-order derivatives with the assumption that at least one of the coefficients
A, B, and C is not zero.

The linear second-order partial differential equation

𝐴 ∂²𝑢⁄∂𝑥² + 𝐵 ∂²𝑢⁄∂𝑥∂𝑦 + 𝐶 ∂²𝑢⁄∂𝑦² + 𝐷 ∂𝑢⁄∂𝑥 + 𝐸 ∂𝑢⁄∂𝑦 + 𝐹𝑢 = 0

where A, B, C, D, E, and F are real constants, is said to be

hyperbolic if 𝐵2 − 4𝐴𝐶 > 0

parabolic if 𝐵2 − 4𝐴𝐶 = 0

elliptic if 𝐵2 − 4𝐴𝐶 < 0

Example 6:

Classify the following equations:

1. 3 ∂²𝑢⁄∂𝑥² = ∂𝑢⁄∂𝑦

[Solution: parabolic]

2. ∂²𝑢⁄∂𝑥² = ∂²𝑢⁄∂𝑦²

[Solution: hyperbolic]

3. ∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² = 0

[Solution: elliptic]
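The discriminant test is mechanical enough to express as a small function. A sketch in plain Python (the function name `classify` is our own), applied to the three equations of Example 6:

```python
def classify(A, B, C):
    # Discriminant test B^2 - 4AC for A u_xx + B u_xy + C u_yy + ... = 0
    d = B * B - 4 * A * C
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

print(classify(3, 0, 0))   # equation 1: parabolic
print(classify(1, 0, -1))  # equation 2, rewritten as u_xx - u_yy = 0: hyperbolic
print(classify(1, 0, 1))   # equation 3: elliptic
```

Note that equation 2 must first be moved to one side (u_xx − u_yy = 0) so that A, B, C can be read off, which is why C = −1 there.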

12.4 CLASSICAL PDES AND BOUNDARY VALUE PROBLEMS

If u(x, t) is a solution of a PDE, where x represents a spatial dimension and t represents time, then the value of u, of ∂u⁄∂x, or of a linear combination of u and ∂u⁄∂x may be prescribed at a specified boundary, and u and ∂u⁄∂t may be prescribed at a given time t (usually t = 0). In other words, a "boundary value problem" may consist of a PDE, along with boundary conditions and initial conditions.

CLASSICAL EQUATIONS

Principally, the method of separation of variables will be applied to find product solutions of the
following classical equations of mathematical physics:

(I) The One-Dimensional Heat Equation (Parabolic)


𝑘 ∂²𝑢⁄∂𝑥² = ∂𝑢⁄∂𝑡 (1)

{x denotes a spatial variable, whereas t represents time}

(II) The One-Dimensional Wave Equation (Hyperbolic)

𝑎² ∂²𝑢⁄∂𝑥² = ∂²𝑢⁄∂𝑡² (2)

{x denotes a spatial variable, whereas t represents time}

(III) The Two-Dimensional form of Laplace’s Equation (Elliptic)


∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² = 0 (3)

{x and y are both spatial variables}

BOUNDARY VALUE PROBLEMS

Problems such as the following are called boundary-value problems.

Solve:

𝑎² ∂²𝑢⁄∂𝑥² = ∂²𝑢⁄∂𝑡², 0 < 𝑥 < 𝐿, 𝑡 > 0
Subject to: (Boundary condition, BC)

𝑢(0, 𝑡) = 0, 𝑢(𝐿, 𝑡) = 0, 𝑡>0

(Initial condition, IC)

𝑢(𝑥, 0) = 𝑓(𝑥), ∂𝑢⁄∂𝑡 |_{𝑡=0} = 𝑔(𝑥), 0 < 𝑥 < 𝐿

and

Solve:

∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² = 0, 0 < 𝑥 < 𝑎, 0 < 𝑦 < 𝑏

Subject to: (BC)

∂𝑢⁄∂𝑥 |_{𝑥=0} = 0, ∂𝑢⁄∂𝑥 |_{𝑥=𝑎} = 0, 0 < 𝑦 < 𝑏

𝑢(𝑥, 0) = 0, 𝑢(𝑥, 𝑏) = 𝑓(𝑥), 0<𝑥<𝑎

Example 7

A rod of length L coincides with the interval [0, L] on the x-axis. Set up the boundary value problem
for the temperature u(x, t) when the left end is held at temperature zero, and the right end is
insulated. The initial temperature is f(x) throughout.

Solution

𝑘 ∂²𝑢⁄∂𝑥² = ∂𝑢⁄∂𝑡, 0 < 𝑥 < 𝐿, 𝑡 > 0

𝑢(0, 𝑡) = 0, ∂𝑢⁄∂𝑥 |_{𝑥=𝐿} = 0, 𝑡 > 0

𝑢(𝑥, 0) = 𝑓(𝑥), 0 < 𝑥 < 𝐿

Example 8

A string of length L coincides with interval [0, L] on the x-axis. Set up the boundary value problem for
the displacement u(x, t) when the ends are secured to the x-axis. The string is released from rest
from the initial displacement x(L – x).

Solution

𝑎² ∂²𝑢⁄∂𝑥² = ∂²𝑢⁄∂𝑡², 0 < 𝑥 < 𝐿, 𝑡 > 0

𝑢(0, 𝑡) = 0, 𝑢(𝐿, 𝑡) = 0, 𝑡 > 0

𝑢(𝑥, 0) = 𝑥(𝐿 − 𝑥), ∂𝑢⁄∂𝑡 |_{𝑡=0} = 0, 0 < 𝑥 < 𝐿

Example 9

Set up the boundary value problem for the steady-state temperature u(x, y) for a thin rectangular plate that coincides with the region defined by 0 ≤ x ≤ 4, 0 ≤ y ≤ 2. The left end and the bottom of the plate are insulated. The top of the plate is held at temperature zero, and the right end of the plate is held at temperature f(y).

Solution

∂²𝑢⁄∂𝑥² + ∂²𝑢⁄∂𝑦² = 0, 0 < 𝑥 < 4, 0 < 𝑦 < 2

∂𝑢⁄∂𝑥 |_{𝑥=0} = 0, 𝑢(4, 𝑦) = 𝑓(𝑦), 0 < 𝑦 < 2

∂𝑢⁄∂𝑦 |_{𝑦=0} = 0, 𝑢(𝑥, 2) = 0, 0 < 𝑥 < 4

HEAT & WAVE EQUATIONS
WEEK 13: HEAT & WAVE EQUATIONS

13.1: HEAT EQUATION

This equation occurs in the theory of heat flow, that is, heat transferred by conduction in a rod or in
a thin wire. The function u(x, t) represents temperature at a point along the rod/thin wire at some
time t.
Suppose a thin circular rod of length L has a cross-sectional area A and coincides with the x-axis on the interval [0, L] as shown in Figure 1. Suppose that the following conditions apply to the rod:
• The flow of heat within the rod takes place only in the x-direction
• The lateral, or curved, surface of the rod is insulated; that is, no heat escapes from this surface
• No heat is being generated within the rod
• The rod is homogeneous; that is, its mass 𝜌 per unit volume is constant
• The specific heat 𝛾 and thermal conductivity 𝐾 of the material of the rod are constants

Mass, 𝑚 = 𝜌𝐴∆𝑥

Figure 1. One-dimensional flow of heat.

To derive the partial differential equation satisfied by the temperature u(x, t), two empirical laws of
heat conduction are needed:
(i) The quantity of heat Q in an element of mass m is
𝑄 = 𝛾𝑚𝑢 (1)

where u is the temperature of the element

(ii) The rate of heat flow Qt through the cross-section indicated in Figure 1 is proportional
to the area A of the cross section and the partial derivative with respect to x of the
temperature
𝑄𝑡 ∝ 𝐴𝑢𝑥
or
𝑄𝑡 = −𝐾𝐴𝑢𝑥 (2)

Since heat flows in the direction of decreasing temperature, the minus sign in equation (2) is used to
ensure that Qt is
 positive for ux < 0 (heat flow to the right) and

 negative for ux > 0 (heat flow to the left)

If the circular slice of the rod in the figure between 𝑥 and 𝑥 + ∆𝑥 is very thin, then 𝑢(𝑥, 𝑡) can be taken as the approximate temperature at each point in the interval. The mass of the slice is
𝑚 = 𝜌(𝐴Δ𝑥)
and from equation (1)
𝑄 = 𝛾 𝜌𝐴Δ𝑥 𝑢(𝑥, 𝑡) (3)

Furthermore, when heat flows in the positive x-direction, from equation (3), it is seen that heat
builds up in the slice at the net rate:

𝑑𝑄 𝑑𝑢
= 𝛾 𝜌𝐴Δ𝑥
𝑑𝑡 𝑑𝑡
or
𝑄𝑡 = 𝛾 𝜌𝐴Δ𝑥 𝑢𝑡 (4)

In addition, when heat flows from left to right, using equation (2) the net rate is given by

𝑄𝑡 = −𝐾𝐴𝑢𝑥(𝑥, 𝑡) − [−𝐾𝐴𝑢𝑥(𝑥 + ∆𝑥, 𝑡)] = 𝐾𝐴[𝑢𝑥(𝑥 + ∆𝑥, 𝑡) − 𝑢𝑥(𝑥, 𝑡)] (5)

Equating (4) and (5) gives

𝛾𝜌𝐴∆𝑥 𝑢𝑡 = 𝐾𝐴[𝑢𝑥(𝑥 + ∆𝑥, 𝑡) − 𝑢𝑥(𝑥, 𝑡)]

which implies

(𝐾⁄𝛾𝜌) [𝑢𝑥(𝑥 + ∆𝑥, 𝑡) − 𝑢𝑥(𝑥, 𝑡)]⁄∆𝑥 = 𝑢𝑡 (6)

Finally, by taking the limit of (6) as ∆𝑥 → 0, the heat equation is obtained in the form of

(𝐾⁄𝛾𝜌) 𝑢𝑥𝑥 = 𝑢𝑡

It is customary to let
𝑘 = 𝐾⁄𝛾𝜌
and this positive constant is called the thermal diffusivity.

Problem Set 1:
Consider a thin rod of length L with an initial temperature f(x) throughout and whose ends are held
at temperature zero for all time t > 0. If the rod shown in Figure 2 satisfies the assumptions given
before, the temperature u(x, t) in the rod is determined from the boundary value problem

𝑘 ∂²𝑢⁄∂𝑥² = ∂𝑢⁄∂𝑡, 0 < 𝑥 < 𝐿, 𝑡 > 0 (7)

(𝑎) 𝑢(0, 𝑡) = 0, (𝑏) 𝑢(𝐿, 𝑡) = 0, 𝑡>0 (8)

𝑢(𝑥, 0) = 𝑓(𝑥), 0 < 𝑥<𝐿 (9)

Figure 2. Temperature in rod of length L

Solution

Use the product u(x, t) = X(x)T(t), or u(x, t) = XT, as the assumed solution to separate variables in (7). Then the partial derivatives of u(x, t) needed are
∂𝑢⁄∂𝑥 = 𝑋′𝑇, ∂𝑢⁄∂𝑡 = 𝑋𝑇′, ∂²𝑢⁄∂𝑥² = 𝑋″𝑇
Substitute back into equation (7):
𝑘𝑋′′𝑇 = 𝑋𝑇′
or
𝑋″⁄𝑋 = 𝑇′⁄𝑘𝑇 = −𝜆 (10)

where 𝜆 is a constant. Equation (10) will lead to the two ordinary differential equations, which are

𝑋 ′′ + 𝜆𝑋 = 0 (11)
𝑇 ′ + 𝑘𝜆𝑇 = 0 (12)

Equation (11) is a linear second order homogeneous ODE while equation (12) is a linear first order
ODE.

For parameter, three cases are possible which are  = 0,  < 0, and  > 0. They are studied below.

Firstly, since Equation (12) is a first order ODE, it is easier to solve it in general terms as follows:

𝑇′ + 𝑘𝜆𝑇 = 0 or 𝑑𝑇⁄𝑑𝑡 = −𝑘𝜆𝑇
To get the solution, the method of separation of variables can be used. Integrating both sides of the equation

∫ 𝑑𝑇⁄𝑇 = ∫ −𝑘𝜆 𝑑𝑡
gives ln T = −kλt + C, so that T = e^(−kλt+C) = e^C e^(−kλt). The final solution is
T(t) = c₃e^(−kλt) (13)

Next, the three cases are considered.


Case 1:  = 0

When  = 0, Equation (11) reduces to 𝑋 ′′ = 0 , which is a second order homogeneous linear ODE.
Its characteristic equation is 𝑚2 = 0 with roots 𝑚1 = 0, 𝑚2 = 0 (real and repeating roots).
Hence the solution is 𝑋(𝑥) = 𝑐1 + 𝑐2 𝑥.

When  = 0, Equation (13) becomes to 𝑇(𝑡) = 𝑐3 𝑒 −𝑘(0)𝑡 = 𝑐3


Finally, the solution for Case 1 is
𝑢(𝑥, 𝑡) = 𝑋𝑇 = (𝑐1 + 𝑐2 𝑥)(𝑐3 ) = 𝐴1 + 𝐵1 𝑥

Case 2:  < 0 or 𝜆 = −𝛼 2

Equation (11) becomes 𝑋 ′′ − 𝛼 2 𝑋 = 0 for  < 0, which is again a second order homogeneous
linear ODE. Its characteristic equation becomes 𝑚2 − 𝛼 2 = 0 with roots 𝑚1 = 𝛼, 𝑚2 = −𝛼
(real and distinct roots – special case). Therefore, the solution is
𝑋(𝑥) = 𝑐1 cosh 𝛼𝑥 + 𝑐2 sinh 𝛼𝑥.
2) 𝑡 2𝑡
For  < 0, Equation (13) turns into 𝑇(𝑡) = 𝑐3 𝑒 −𝑘(−𝛼 = 𝑐3 𝑒 𝑘𝛼

Subsequently, the solution for Case 2 is


2
𝑢(𝑥, 𝑡) = 𝑋𝑇 = (𝑐1 cosh 𝛼𝑥 + 𝑐2 sinh 𝛼𝑥)(𝑐3 𝑒 𝑘𝛼 𝑡 )
2
= 𝑒 𝑘𝛼 𝑡 (𝐴2 cosh 𝛼𝑥 + 𝐵2 sinh 𝛼𝑥)

Case 3:  > 0 or 𝜆 = 𝛼 2

For  > 0, Equation (10) turns into a second order homogeneous linear ODE, which is
𝑋 ′′ + 𝛼 2 𝑋 = 0, and the characteristic equation is given by 𝑚2 + 𝛼 2 = 0. The two roots are
𝑚1 = 𝛼𝑖, 𝑚2 = −𝛼𝑖 (complex conjugates roots), and the solution is 𝑋(𝑥) =
𝑐1 cos 𝛼𝑥 + 𝑐2 sin 𝛼𝑥.
2) 𝑡 2𝑡
For Equation (13), 𝑇(𝑡) = 𝑐3 𝑒 −𝑘(𝛼 = 𝑐3 𝑒 −𝑘𝛼
Consequently, the solution for Case 3 is
2
𝑢(𝑥, 𝑡) = 𝑋𝑇 = (𝑐1 cos 𝛼𝑥 + 𝑐2 sin 𝛼𝑥)(𝑐3 𝑒 −𝑘𝛼 𝑡 )
2
= 𝑒 −𝑘𝛼 𝑡 (𝐴3 cos 𝛼𝑥 + 𝐵3 sin 𝛼𝑥)

78
Cases 1 to 3 give the general solutions to the PDE, and by the superposition principle any linear
combination of them is also a solution to the PDE.
However, in many situations a particular solution is more desirable. To get the particular solution,
all the boundary and initial conditions are applied to the general solution. First, the boundary
conditions of Equations (8a) and (8b), u(0, t) = 0 and u(L, t) = 0, are applied.
The three cases above are examined again.
Case 1:  = 0
Applying Equation (8a) to u(x, t) of case 1;
𝑢(0, 𝑡) = 𝐴1 + 𝐵1 (0) = 0, which implies 𝐴1 = 0
Now, the solution becomes 𝑢(𝑥, 𝑡) = 𝐵1 𝑥.
Applying Equation (8b) to the updated u(x, t);
𝑢(𝐿, 𝑡) = 𝐵1 (𝐿) = 0, that gives 𝐵1 = 0.
Hence, there is no solution for Case 1.

Case 2:  < 0 (𝜆 = −𝛼 2 )
Applying Equation (8a) to u(x, t) of case 2;
2
𝑢(0, 𝑡) = 𝑒 𝑘𝛼 𝑡 ( 𝐴2 cosh 𝛼(0) + 𝐵2 sinh 𝛼(0)) = 0;
2𝑡
implying 𝐴2 𝑒 𝑘𝛼 = 0 and 𝐴2 = 0.
2
The solution now is reduced to 𝑢(𝑥, 𝑡) = 𝑒 𝑘𝛼 𝑡 (𝐵2 sinh 𝛼𝑥).
Applying Equation (8b) to the updated u(x, t);
2
𝑢(𝐿, 𝑡) = 𝑒 𝑘𝛼 𝑡 (𝐵2 sinh 𝛼(𝐿)) = 0.
2
This will lead to 𝐵2 𝑒 𝑘𝛼 𝑡 sinh 𝛼𝐿 = 0 with 𝐵2 sinh 𝛼𝐿 = 0 or 𝐵2 = 0.
This route is chosen instead of sinh 𝛼𝐿 = 0 which is more difficult to manage due to the
existence of hyperbolic function.
Hence, there is also no solution for Case 2.

Case 3:  > 0 (𝜆 = 𝛼 2 )
Applying Equation (8a) to u(x, t) of case 3;
2
𝑢(0, 𝑡) = 𝑒 −𝑘𝛼 𝑡 (𝐴3 cos 𝛼(0) + 𝐵3 sin 𝛼(0)) = 0;
2𝑡
the outcome is 𝐴3 𝑒 −𝑘𝛼 = 0 and 𝐴3 = 0.
2
The solution becomes 𝑢(𝑥, 𝑡) = 𝑒 𝑘𝛼 𝑡 (𝐵3 sin 𝛼𝑥).
Applying Equation (8b) to the updated u(x, t);
2
𝑢(𝐿, 𝑡) = 𝐵2 𝑒 −𝑘𝛼 𝑡 sin 𝛼(𝐿) = 0
2
which turns out to be 𝐵3 𝑒 −𝑘𝛼 𝑡 sin 𝛼𝐿 = 0 or 𝐵3 sin 𝛼𝐿 = 0 or 𝐵3 = 0.
Hence, there is no solution for Case 3, too.

Subsequently, Cases 1, 2, and 3 all give trivial solutions. However, a non-trivial solution is needed. For a non-trivial solution, let

B₃ ≠ 0, which requires sin αL = 0, so that αL = nπ and α = nπ/L for n = 1, 2, …

Equation (11) with boundary conditions X(0) = 0 and X(L) = 0 possesses non-trivial solutions when
λ_n = α_n² = n²π²/L², n = 1, 2, …
These values are referred to as the eigenvalues of the problem, and the eigenfunctions of the problem are
X(x) = B_n sin(nπx/L), n = 1, 2, 3, …
𝐿
Hence, the general solution for the PDE becomes
u_n(x, t) = B_n e^(−kα²t) sin αx = B_n e^(−k(n²π²/L²)t) sin(nπx/L), n = 1, 2, …

Each u_n(x, t) is a particular solution of the PDE and satisfies the boundary conditions of Equations
(8a) and (8b), u(0, t) = 0 and u(L, t) = 0. For the solution to satisfy the initial condition of Equation (9),
u(x, 0) = f(x), the coefficient B_n would have to be chosen so that
u_n(x, 0) = B_n sin(nπx/L) = f(x), n = 1, 2, … (14)

Equation (14) cannot be expected to hold for an arbitrary, but reasonable, choice of f. Therefore, a
single u_n(x, t) is not the solution to the given problem.
However, by the superposition principle,

u(x, t) = Σ_{n=1}^{∞} u_n(x, t) = Σ_{n=1}^{∞} B_n e^(−k(n²π²/L²)t) sin(nπx/L)

must also satisfy the PDE and its boundary conditions, and hence is also a solution to the
differential equation.
Applying the initial condition from Equation (9), u(x, 0) = f(x):

u(x, 0) = Σ_{n=1}^{∞} B_n e^(−k(n²π²/L²)(0)) sin(nπx/L) = Σ_{n=1}^{∞} B_n sin(nπx/L) = f(x) (15)
The last expression in Equation (15) is a half-range expansion of f in a Fourier sine series,
where the coefficients B_n can be obtained from
B_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx
Finally, a solution of the boundary value problem set 1 is given by
𝐿 ∞
2 𝑛𝜋 2 2 2 𝑛𝜋
𝑢(𝑥, 𝑡) = ∑ (∫ 𝑓(𝑥) sin 𝐿 𝑥 𝑑𝑥) 𝑒 −𝑘(𝑛 𝜋 ⁄𝐿 )𝑡 sin 𝐿 𝑥
𝐿 0𝑛=0

Problem Set 2:
Consider a thin rod of length L = π with an initial temperature of 100 throughout, and whose ends are held
at temperature zero for all time t > 0. If the rod shown in Figure 2 satisfies the assumptions given
before, the temperature u(x, t) in the rod is determined from the boundary value problem

∂²u/∂x² = ∂u/∂t, 0 < x < π, t > 0 (16)
(𝑎) 𝑢(0, 𝑡) = 0, (𝑏) 𝑢(𝜋, 𝑡) = 0, 𝑡>0 (17)

𝑢(𝑥, 0) = 100, 0 < 𝑥<𝜋 (18)

Problem Set 2 is a special case of Problem Set 1 with initial temperature u(x, 0) = 100, L = π,
and k = 1. The solution from Problem Set 1 can therefore be reused up to the point where the
initial condition is applied.

Applying the initial condition from Equation (18): 𝑢(𝑥, 0) = 100:


u(x, 0) = Σ_{n=1}^{∞} B_n e^(−n²(0)) sin nx = Σ_{n=1}^{∞} B_n sin nx = 100 (19)
The last expression in Equation (19) is a half-range expansion of the constant function f(x) = 100 in a
Fourier sine series, where the coefficients B_n can be obtained from
B_n = (2/π) ∫₀^π 100 sin nx dx
    = (200/π) [−cos nx / n]₀^π
    = (200/π) [−cos nπ/n + cos 0/n]
    = (200/π) [(1 − (−1)^n)/n]
Finally, a solution of the boundary value problem of Problem Set 2 is given by

u(x, t) = Σ_{n=1}^{∞} (200/π)[(1 − (−1)^n)/n] e^(−n²t) sin nx (20)

Since u is a function of two variables, the graph of the solution (20) is a surface in 3-space. Any 3D-
plot application of a computer algebra system could be used to approximate this surface by
graphing partial sums S_n(x, t) over a rectangular region defined by 0 ≤ x ≤ π, 0 ≤ t ≤ T.
Alternatively, with the aid of the 2D-plot application of a CAS, the solution u(x, t) on the x-interval
[0, π] for increasing values of time t can be drawn. See Figure 3(a). In Figure 3(b) the solution u(x, t) is
graphed on the t-interval [0, 6] for increasing values of x (x = 0 is the left end and x = π/2 is the
midpoint of the rod of length L = π). Both sets of graphs verify what is apparent in (20): namely,
u(x, t) → 0 as t → ∞.
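The decay just described can also be checked without plotting, by evaluating partial sums of (20) directly. A short sketch in Python:

```python
import math

def u(x, t, n_terms=2000):
    # Partial sum of the series (20); only odd n contribute, since 1 - (-1)^n = 0 for even n.
    total = 0.0
    for n in range(1, n_terms + 1):
        total += (200.0 / math.pi) * (1 - (-1) ** n) / n * math.exp(-n * n * t) * math.sin(n * x)
    return total

mid_start = u(math.pi / 2, 0.0)   # midpoint at t = 0: close to 100
mid_later = u(math.pi / 2, 1.0)   # noticeably cooler
mid_late  = u(math.pi / 2, 10.0)  # essentially zero
```

Many terms are needed at t = 0 (the sine series of a constant converges slowly), but for t > 0 the factor e^(−n²t) makes the first one or two terms dominate.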

(a) u(x, t) graphed as a function of x for various fixed times
(b) u(x, t) graphed as a function of t for various fixed positions

Figure 3. Graphs of Equation (20) when one variable is held fixed

13.2: WAVE EQUATION

Consider a string of length L, stretched taut between two points on the x-axis, say x = 0 and x = L.
When the string starts to vibrate, assume that the motion takes place in the xu-plane in such a
manner that each point on the string moves in a direction perpendicular to the x-axis (transverse
vibrations).

Figure 1. Flexible string anchored at x = 0 and x = L

As shown in Figure 1 (a), let u(x, t) denote the vertical displacement of any point on the string
measured from the x-axis for t > 0. In addition, further assume the following:

- The string is perfectly flexible.
- The string is homogeneous; that is, its mass per unit length ρ is a constant.
- The displacements u are small in comparison to the length of the string.
- The slope of the curve is small at all points.
- The tension T acts tangent to the string, and its magnitude T is the same at all points.
- The tension is large compared with the force of gravity.
- No other external forces act on the string.

In Figure 1(b), the tensions T₁ and T₂ are tangent to the ends of the curve on the interval [x, x + Δx].
For small θ₁ and θ₂ the net vertical force acting on the corresponding element Δs of the string is then

T sin 2 – T sin 1  T tan 2 – T tan 1

83
= T [ux(x + x, t) - ux(x , t)]

where T = |T₁| = |T₂|. Now ρΔs ≈ ρΔx is the mass of the string on [x, x + Δx], so Newton's second
law gives

T[u_x(x + Δx, t) − u_x(x, t)] = ρΔx u_tt

or

[u_x(x + Δx, t) − u_x(x, t)]/Δx = (ρ/T) u_tt

If the limit is taken as Δx → 0, the last equation becomes u_xx = (ρ/T)u_tt. This is the classical wave
equation
a² ∂²u/∂x² = ∂²u/∂t²
with a² = T/ρ.

Problem Set 1
The vertical displacement u(x, t) of the vibrating string of length L is determined from

a² ∂²u/∂x² = ∂²u/∂t², 0 < x < L, t > 0 (1)
(𝑎) 𝑢(0, 𝑡) = 0, (𝑏) 𝑢(𝐿, 𝑡) = 0, 𝑡>0 (2)

(a) u(x, 0) = f(x), (b) ∂u/∂t|_(t=0) = g(x), 0 < x < L (3)

Solution of the Boundary Value Problem (BVP)

Use the product u(x, t) = X(x)T(t) to separate variables in (1). Then, if −λ is the separation constant,
the two equalities

X″/X = T″/(a²T) = −λ

lead to the two ordinary differential equations

𝑋 ′′ + 𝜆𝑋 = 0 (4)

𝑇 ′′ + 𝑎2 𝜆𝑇 = 0 (5)

Unlike the heat equation, Equations (4) and (5) are both linear second order homogeneous ODEs.
However, as in the heat equation, three cases are possible for the parameter λ: λ = 0, λ < 0, and
λ > 0. They are studied below.

Case 1:  = 0

84
When  = 0, Equation (4) reduces to 𝑋 ′′ = 0 . Its characteristic equation is 𝑚2 = 0 with roots
𝑚1 = 0, 𝑚2 = 0 (real and repeating roots). Hence the solution is 𝑋(𝑥) = 𝑐1 + 𝑐2 𝑥.

When  = 0, Equation (5) becomes 𝑇 ′′ = 0. Here, the characteristic equation is also 𝑚2 = 0 with
roots 𝑚1 = 0, 𝑚2 = 0 (real and repeating roots). Hence the solution is 𝑇(𝑡) = 𝑐3 + 𝑐4 𝑡.
Finally, the solution for Case 1 is
𝑢(𝑥, 𝑡) = 𝑋𝑇 = (𝑐1 + 𝑐2 𝑥)(𝑐3 + 𝑐4 𝑡)

Case 2:  < 0 or 𝜆 = −𝛼 2

Equation (4) becomes 𝑋 ′′ − 𝛼 2 𝑋 = 0 for  < 0. Its characteristic equation is 𝑚2 − 𝛼 2 = 0 with


roots 𝑚1 = 𝛼, 𝑚2 = −𝛼 (real and distinct roots – special case). Therefore, the solution is
𝑋(𝑥) = 𝑐1 cosh 𝛼𝑥 + 𝑐2 sinh 𝛼𝑥.

For  < 0, Equation (5) turns into 𝑇 ′′ − 𝑎2 𝛼 2 𝑇 = 0. The characteristic equation is 𝑚2 −


𝑎2 𝛼 2 = 0 with roots 𝑚1 = 𝑎𝛼, 𝑚2 = −𝑎𝛼 (real and distinct roots – special case). Therefore, the
solution is 𝑇(𝑡) = 𝑐3 cosh 𝑎𝛼𝑡 + 𝑐4 sinh 𝑎𝛼𝑡.

Subsequently, the solution for Case 2 is


𝑢(𝑥, 𝑡) = 𝑋𝑇 = (𝑐1 cosh 𝛼𝑥 + 𝑐2 sinh 𝛼𝑥)(𝑐3 cosh 𝑎𝛼𝑡 + 𝑐4 sinh 𝑎𝛼𝑡)

Case 3:  > 0 or 𝜆 = 𝛼 2

For  > 0, Equation (4) turns into 𝑋 ′′ + 𝛼 2 𝑋 = 0, and the characteristic equation is given by
𝑚2 + 𝛼 2 = 0. The two roots are 𝑚1 = 𝛼𝑖, 𝑚2 = −𝛼𝑖 (complex conjugates roots), and the
solution is 𝑋(𝑥) = 𝑐1 cos 𝛼𝑥 + 𝑐2 sin 𝛼𝑥.

For Equation (5), 𝑇 ′′ + 𝑎2 𝛼 2 𝑇 = 0, the characteristic equation is 𝑚2 + 𝑎2 𝛼 2 = 0 with roots


𝑚1 = 𝑎𝛼𝑖, 𝑚2 = −𝑎𝛼𝑖 (complex conjugates roots). Therefore, the solution is 𝑇(𝑡) =
𝑐3 cos 𝑎𝛼𝑡 + 𝑐4 sin 𝑎𝛼𝑡.
Consequently, the solution for Case 3 is
𝑢(𝑥, 𝑡) = 𝑋𝑇 = (𝑐1 cos 𝛼𝑥 + 𝑐2 sin 𝛼𝑥)(𝑐3 cos 𝑎𝛼𝑡 + 𝑐4 sin 𝑎𝛼𝑡)

As in the heat equation, Cases 1 to 3 give the general solutions to the PDE, and by the superposition
principle any linear combination of them is also a solution to the PDE.
However, in many situations a particular solution is more desirable. To get the particular solution,
all the boundary and initial conditions are applied to the general solution. First, the boundary
conditions of Equations (2a) and (2b), u(0, t) = 0 and u(L, t) = 0, are applied.
The three cases above are examined again.
Case 1:  = 0
Applying Equation (2a) to u(x, t) of case 1;
𝑢(0, 𝑡) = (𝑐1 + 𝑐2 (0))(𝑐3 + 𝑐4 𝑡) = 0, which implies 𝑐1 = 0
Now, the solution becomes 𝑢(𝑥, 𝑡) = (𝑐2 𝑥)(𝑐3 + 𝑐4 𝑡) = 0.

85
Applying Equation (2b) to the updated u(x, t);
𝑢(𝐿, 𝑡) = (𝑐2 (𝐿))(𝑐3 + 𝑐4 𝑡) = 0, that gives 𝑐2 = 0.
Hence, there is no meaningful solution for Case 1.

Case 2:  < 0 (𝜆 = −𝛼 2 )
Applying Equation (2a) to u(x, t) of case 2;
𝑢(0, 𝑡) = ( 𝑐1 cosh 𝛼(0) + 𝑐2 sinh 𝛼(0))(𝑐3 cosh 𝑎𝛼𝑡 + 𝑐4 sinh 𝑎𝛼𝑡) = 0; implying 𝑐1 = 0.
The solution now is reduced to 𝑢(𝑥, 𝑡) = (𝑐2 sinh 𝛼𝑥)(𝑐3 cosh 𝑎𝛼𝑡 + 𝑐4 sinh 𝑎𝛼𝑡).
Applying Equation (2b) to the updated u(x, t);
𝑢(𝐿, 𝑡) = (𝑐2 sinh 𝛼(𝐿))(𝑐3 cosh 𝑎𝛼𝑡 + 𝑐4 sinh 𝑎𝛼𝑡) = 0.
𝑘𝛼 2 𝑡
This will lead to 𝑐2 𝑒 sinh 𝛼𝐿 = 0 with 𝑐2 sinh 𝛼𝐿 = 0 or 𝑐2 = 0.
This route is chosen instead of sinh 𝛼𝐿 = 0 which is more difficult to manage due to the
existence of hyperbolic function.
Hence, again there is also no meaningful solution for Case 2.

Case 3:  > 0 (𝜆 = 𝛼 2 )
Applying Equation (2a) to u(x, t) of case 3;
𝑢(0, 𝑡) = (𝑐3 cos 𝛼(0) + 𝑐4 sin 𝛼(0))(𝑐3 cos 𝑎𝛼𝑡 + 𝑐4 sin 𝑎𝛼𝑡) = 0; the outcome is 𝑐3 = 0.
The solution becomes 𝑢(𝑥, 𝑡) = (𝑐4 sin 𝛼𝑥)(𝑐3 cos 𝑎𝛼𝑡 + 𝑐4 sin 𝑎𝛼𝑡) = 0.
Applying Equation (2b) to the updated u(x, t);
𝑢(𝐿, 𝑡) = (𝑐4 sin 𝛼(𝐿))(𝑐3 cos 𝑎𝛼𝑡 + 𝑐4 sin 𝑎𝛼𝑡) = 0
which turns out to be 𝑐4 sin 𝛼𝐿 = 0 or 𝑐4 sin 𝛼𝐿 = 0 or 𝑐4 = 0.
Hence, there is also no meaningful solution for Case 3, too.

Subsequently, Case 1, 2, and 3 give all trivial solutions. However, a solution is needed; for non-trivial
solution, let
𝑛𝜋
𝑐4 ≠ 0 which gives sin 𝛼𝐿 = 0 which gives 𝛼𝐿 = 𝑛𝜋 and 𝛼 = 𝐿
for 𝑛 = 1, 2, …

Equation (4) with boundary conditions X(0) = 0 and X(L) = 0 possesses non-trivial solutions when
λ_n = α_n² = n²π²/L², n = 1, 2, …
These values are referred to as the eigenvalues of the problem, and the eigenfunctions of the problem are
X(x) = c_n sin(nπx/L), n = 1, 2, 3, …
Hence, the general solution for the PDE becomes

u_n(x, t) = (c_n sin(nπx/L))(c₃ cos(nπat/L) + c₄ sin(nπat/L))
          = (A_n cos(nπat/L) + B_n sin(nπat/L)) sin(nπx/L), n = 1, 2, … (6)

Each u_n(x, t) is a particular solution of the PDE and satisfies the boundary conditions of Equations
(2a) and (2b), u(0, t) = 0 and u(L, t) = 0. The coefficients A_n and B_n can be determined using the
initial conditions of Equation (3), u(x, 0) = f(x) and ∂u/∂t|_(t=0) = g(x). To find their values, the
particular solution in (6) needs to be rewritten. Hence, by the superposition principle,
u(x, t) = Σ_{n=1}^{∞} u_n(x, t) = Σ_{n=1}^{∞} (A_n cos(nπat/L) + B_n sin(nπat/L)) sin(nπx/L) (7)

must also satisfy the PDE and its boundary conditions; and hence, is also a solution to the
differential equation.
Applying the initial condition from Equation (3a), u(x, 0) = f(x):

u(x, 0) = Σ_{n=1}^{∞} (A_n cos 0 + B_n sin 0) sin(nπx/L) = Σ_{n=1}^{∞} A_n sin(nπx/L) = f(x) (8)
The last expression in Equation (8) is a half-range expansion of f in a Fourier sine series, where
the coefficients A_n can be obtained from
A_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx
To apply the initial condition from Equation (3b), ∂u/∂t|_(t=0) = g(x), Equation (7) needs to be
differentiated first:

∂u/∂t = Σ_{n=1}^{∞} (−(nπa/L)A_n sin(nπat/L) + (nπa/L)B_n cos(nπat/L)) sin(nπx/L)
And, at t = 0,

∂u/∂t|_(t=0) = Σ_{n=1}^{∞} (−(nπa/L)A_n sin 0 + (nπa/L)B_n cos 0) sin(nπx/L)
             = Σ_{n=1}^{∞} (nπa/L)B_n sin(nπx/L) = g(x) (9)
𝑛=1
Again, a Fourier sine series is identified in Equation (9). The coefficients B_n can be obtained from
(nπa/L)B_n = (2/L) ∫₀^L g(x) sin(nπx/L) dx
so that
B_n = (2/(nπa)) ∫₀^L g(x) sin(nπx/L) dx
Finally, a solution of the boundary value problem of Problem Set 1 is given by

u(x, t) = Σ_{n=1}^{∞} [((2/L) ∫₀^L f(x) sin(nπx/L) dx) cos(nπat/L)
          + ((2/(nπa)) ∫₀^L g(x) sin(nπx/L) dx) sin(nπat/L)] sin(nπx/L)
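As a numerical check of this formula, the sketch below evaluates the series for an assumed plucked-string profile (triangular f with height h = 1, zero initial velocity g = 0, and L = a = 1; all values are illustrative, not from the notes):

```python
import math

# Assumed plucked-string data (illustrative): L = a = 1, height h = 1, g(x) = 0.
L, a, h, N_TERMS, N_QUAD = 1.0, 1.0, 1.0, 50, 2000

def f(x):  # triangular initial displacement, plucked at the midpoint
    return 2 * h * x / L if x <= L / 2 else 2 * h * (L - x) / L

def trapezoid(g, lo, hi, n):
    step = (hi - lo) / n
    return step * (0.5 * g(lo) + sum(g(lo + i * step) for i in range(1, n)) + 0.5 * g(hi))

# A_n = (2/L) * integral of f(x) sin(n pi x/L); B_n = 0 because g(x) = 0.
A = [2.0 / L * trapezoid(lambda x, n=n: f(x) * math.sin(n * math.pi * x / L), 0.0, L, N_QUAD)
     for n in range(1, N_TERMS + 1)]

def u(x, t):
    return sum(A[n - 1] * math.cos(n * math.pi * a * t / L)
               * math.sin(n * math.pi * x / L) for n in range(1, N_TERMS + 1))

q0 = u(L / 4, 0.0)         # reproduces f(L/4) = 0.5
q1 = u(L / 4, 2 * L / a)   # one full period 2L/a later, the displacement repeats
```

Unlike the heat equation, the time factors here are cosines, so the solution does not decay: it is periodic with period 2L/a.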

Plucked String

A special case of the boundary value problem in (1)–(3) is the model of a plucked string. The motion
of the string can be seen by plotting the solution, that is the displacement u(x, t), for increasing values
of time t.

Standing Waves

The constant a appearing in the solution of the boundary value problem (1), (2), and (3) is given by
√(T/ρ), where ρ is the mass per unit length and T is the magnitude of the tension in the string. When T is
large enough, the vibrating string produces a musical sound. The sound is the result of standing
waves. The solution (6) is a superposition of product solutions called standing waves or normal
modes:

𝑢(𝑥, 𝑡) = 𝑢1 (𝑥, 𝑡) + 𝑢2 (𝑥, 𝑡) + 𝑢3 (𝑥, 𝑡) + ⋯

where
u₁(x, t) = C₁ sin(πat/L + φ₁) sin(πx/L)
and is called the first standing wave, the first normal mode, or the fundamental mode of vibration.

The frequency

f₁ = a/(2L) = (1/(2L))√(T/ρ)

of the first normal mode is called the fundamental frequency or first harmonic, and is directly related
to the pitch produced by a stringed instrument. It is apparent that the greater the tension on the
string, the higher the pitch of the sound. The frequencies f_n of the other normal modes, which are
integer multiples of the fundamental frequency, are called overtones. The second harmonic is the
first overtone, and so on.
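The relations f_n = n·f₁ = (n/(2L))√(T/ρ) are easy to tabulate; the string data below (tension, density, length) are assumed purely for illustration:

```python
import math

# Assumed string data (illustrative only): tension T in N, density rho in kg/m, length L in m.
T, rho, L = 100.0, 0.01, 0.65

a = math.sqrt(T / rho)                           # wave speed, a^2 = T/rho
freqs = [n * a / (2 * L) for n in range(1, 5)]   # f1 (fundamental) and its overtones
f1 = freqs[0]
```

Doubling the tension multiplies every f_n by √2, which matches the observation that a tighter string produces a higher pitch.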

WEEK 14: LAPLACE’S EQUATION

Laplace’s equation in two and three dimensions occurs in time-independent problems involving
potentials such as electrostatic, gravitational, and velocity in fluid mechanics. Moreover, a solution
of Laplace’s equation can also be interpreted as a steady-state temperature distribution. As in
Figure 1, a solution u(x, y) of

∂²u/∂x² + ∂²u/∂y² = 0

could represent the temperature that varies from point to point, but not with time, of a rectangular
plate. Laplace’s equation in two dimensions and three dimensions is abbreviated as ∇2 𝑢 = 0, where

∇²u = ∂²u/∂x² + ∂²u/∂y²

and

∇²u = ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²

are called the two-dimensional Laplacian and the three-dimensional Laplacian, respectively, of a
function u.

Figure 1. Steady-state temperatures in a rectangular plate

Problem Set 1
Find the steady-state temperature u(x, y) in a rectangular plate whose edges x = 0 and x = a are
insulated, as in Figure 1. Given that no heat escapes from the lateral faces of the plate, solve the
following boundary value problem:

∂²u/∂x² + ∂²u/∂y² = 0, 0 < x < a, 0 < y < b (1)

(a) ∂u/∂x|_(x=0) = 0, (b) ∂u/∂x|_(x=a) = 0, 0 < y < b (2)

(𝑎) 𝑢(𝑥, 0) = 0, (𝑏) 𝑢(𝑥, 𝑏) = 𝑓(𝑥) 0 < 𝑥<𝑎 (3)


Solution of the Boundary Value Problem (BVP)

Use the product u(x, y) = X(x)Y(y) to separate variables in (1). Then, if −λ is the separation constant,
the two equalities

X″/X = −Y″/Y = −λ
lead to the two ordinary differential equations

𝑋 ′′ + 𝜆𝑋 = 0 (4)

𝑌 ′′ − 𝜆𝑌 = 0 (5)

As in the wave equation, Equations (4) and (5) are both linear second order homogeneous ODEs
(unlike the heat equation, which has one first order and one second order ODE). However, as in the
heat and wave equations, three cases are possible for the parameter λ: λ = 0, λ < 0, and λ > 0.
They are studied below.

Case 1:  = 0

When  = 0, Equation (4) reduces to 𝑋 ′′ = 0 . Its characteristic equation is 𝑚2 = 0 with roots


𝑚1 = 0, 𝑚2 = 0 (real and repeating roots). Hence the solution is 𝑋(𝑥) = 𝑐1 + 𝑐2 𝑥.

91
When  = 0, Equation (5) becomes 𝑌 ′′ = 0. Here, the characteristic equation is also 𝑚2 = 0 with
roots 𝑚1 = 0, 𝑚2 = 0 (real and repeating roots). Hence the solution is 𝑇(𝑡) = 𝑐3 + 𝑐4 𝑦.
Finally, the solution for Case 1 is
𝑢(𝑥, 𝑦) = 𝑋𝑌 = (𝑐1 + 𝑐2 𝑥)(𝑐3 + 𝑐4 𝑦) (6)

Case 2:  < 0 or 𝜆 = −𝛼 2

Equation (4) becomes 𝑋 ′′ − 𝛼 2 𝑋 = 0 for  < 0. Its characteristic equation is 𝑚2 − 𝛼 2 = 0 with


roots 𝑚1 = 𝛼, 𝑚2 = −𝛼 (real and distinct roots – special case). Therefore, the solution is
𝑋(𝑥) = 𝑐1 cosh 𝛼𝑥 + 𝑐2 sinh 𝛼𝑥.

For  < 0, Equation (5) turns into 𝑌 ′′ + 𝛼 2 𝑌 = 0. The characteristic equation is 𝑚2 + 𝛼 2 = 0


with roots 𝑚1 = 𝛼𝑖, 𝑚2 = −𝛼𝑖 (complex conjugates roots), and the solution is 𝑌(𝑦) =
𝑐1 cos 𝛼𝑦 + 𝑐2 sin 𝛼𝑦

Subsequently, the solution for Case 2 is


𝑢(𝑥, 𝑦) = 𝑋𝑌 = (𝑐1 cosh 𝛼𝑥 + 𝑐2 sinh 𝛼𝑥)(𝑐3 cos 𝛼𝑡 + 𝑐4 sin 𝛼𝑡) (7)

Case 3:  > 0 or 𝜆 = 𝛼 2

For  > 0, Equation (4) turns into 𝑋 ′′ + 𝛼 2 𝑋 = 0, and the characteristic equation is given by
𝑚2 + 𝛼 2 = 0. The two roots are 𝑚1 = 𝛼𝑖, 𝑚2 = −𝛼𝑖 (complex conjugates roots), and the
solution is 𝑋(𝑥) = 𝑐1 cos 𝛼𝑥 + 𝑐2 sin 𝛼𝑥.

For Equation (5), 𝑌 ′′ − 𝛼 2 𝑇 = 0, the characteristic equation is 𝑚2 − 𝛼 2 = 0 with roots


𝑚1 = 𝛼, 𝑚2 = −𝛼 (real and distinct roots – special case). Therefore, the solution is
𝑌(𝑦) = 𝑐3 cosh 𝛼𝑦 + 𝑐4 sinh 𝛼𝑦.

Consequently, the solution for Case 3 is


𝑢(𝑥, 𝑦) = 𝑋𝑌 = (𝑐1 cos 𝛼𝑥 + 𝑐2 sin 𝛼𝑥)(𝑐3 cosh 𝛼𝑦 + 𝑐4 sinh 𝛼𝑦) (8)

As in the heat and wave equations, Cases 1 to 3 give the general solutions to the PDE, and by the
superposition principle any linear combination of them is also a solution to the PDE.
However, in many situations a particular solution is more desirable. To get the particular solution,
all the boundary conditions are applied to the general solution. First, the boundary conditions of
Equations (2a) and (2b), ∂u/∂x|_(x=0) = 0 and ∂u/∂x|_(x=a) = 0, are applied.

The three cases above are examined again. Before applying Equations (2a) and (2b), the general
solution for each case needs to be differentiated.
Case 1:  = 0
Differentiating Equation (6) gives:
𝜕𝑢
= (𝑐2 )(𝑐3 + 𝑐4 𝑦)
𝜕𝑥

92
Applying Equations (2a) and (2b) to the derivative u(x, y) of case 1;
𝜕𝑢
| = (𝑐2 )(𝑐3 + 𝑐4 𝑡) = 0 which implies 𝑐2 = 0
𝜕𝑥 𝑥=0
𝜕𝑢
| = (𝑐2 )(𝑐3 + 𝑐4 𝑡) = 0 which implies 𝑐2 = 0
𝜕𝑥 𝑥=𝑎
Hence, the updated solution for Case 1 is
𝑢(𝑥, 𝑦) = (𝑐1 )(𝑐3 + 𝑐4 𝑦) = 𝐴1 + 𝐵1 𝑦

Case 2:  < 0 (𝜆 = −𝛼 2 )
Differentiating Equation (7) gives:
𝜕𝑢
= (−𝑐1 𝛼 sinh 𝛼𝑥 + 𝑐2 𝛼 cosh 𝛼𝑥)(𝑐3 cos 𝛼𝑦 + 𝑐4 sin 𝛼𝑦)
𝜕𝑥
Applying Equations (2a) and (2b) to the derivative u(x, y) of case 1;
𝜕𝑢
| = (−𝑐1 𝛼 sinh 0 + 𝑐2 𝛼 cosh 0)(𝑐3 cos 𝛼𝑦 + 𝑐4 sin 𝛼𝑦) = 0 which implies 𝑐2 = 0
𝜕𝑥 𝑥=0
𝜕𝑢
| = (−𝑐1 𝛼 sinh 𝛼𝑎)(𝑐3 cos 𝛼𝑦 + 𝑐4 sin 𝛼𝑦) = 0 which implies 𝑐1 = 0
𝜕𝑥 𝑥=𝑎
Hence, there is no meaningful solution for Case 2.

Case 3:  > 0 (𝜆 = 𝛼 2 )
Differentiating Equation (8) gives:
𝜕𝑢
= (−𝑐1 𝛼 sin 𝛼𝑥 + 𝑐2 𝛼 cos 𝛼𝑥)(𝑐3 cosh 𝛼𝑦 + 𝑐4 sinh 𝛼𝑦)
𝜕𝑥
Applying Equations (2a) and (2b) to the derivative u(x, y) of case 1;
𝜕𝑢
| = (−𝑐1 𝛼 sin 0 + 𝑐2 𝛼 cos 0)(𝑐3 cosh 𝛼𝑦 + 𝑐4 sinh 𝛼𝑦) = 0 which implies 𝑐2 = 0
𝜕𝑥 𝑥=0
𝜕𝑢
| = (−𝑐1 𝛼 sin 𝛼𝑎)(𝑐3 cosh 𝛼𝑦 + 𝑐4 sinh 𝛼𝑦) = 0 which implies 𝑐1 = 0
𝜕𝑥 𝑥=𝑎
Hence, there is no meaningful solution for Case 3.

Subsequently, Case 1 gives a solution in terms of y only, while Cases 2 and 3 give trivial solutions.
However, a non-trivial solution is needed. For a non-trivial solution, let
c₁ ≠ 0 for Case 3, which requires sin αa = 0, so that αa = nπ and α = nπ/a for n = 1, 2, …

Hence, the general solution for the PDE becomes

u(x, y) = (c₁ cos(nπx/a))(c₃ cosh(nπy/a) + c₄ sinh(nπy/a)), n = 1, 2, …

or

u_n(x, y) = (A_n cosh(nπy/a) + B_n sinh(nπy/a)) cos(nπx/a), n = 1, 2, …

Each u_n(x, y) is a particular solution of the PDE and satisfies the boundary conditions of Equations
(2a) and (2b), ∂u/∂x|_(x=0) = 0 and ∂u/∂x|_(x=a) = 0. The coefficients A_n and B_n can be
determined using the boundary conditions of Equations (3a) and (3b), u(x, 0) = 0 and u(x, b) = f(x).

Firstly, apply the boundary condition of Equation (3a), u(x, 0) = 0, to Case 1 and Case 3.

Case 1:  = 0

𝑢(𝑥, 0) = 𝐴1 + 𝐵1 (0) = 0 which implies 𝐴1 = 0

The revise solution for case 1 becomes

𝑢(𝑥, 𝑦) = 𝐵1 𝑦 (9)

Case 3:  > 0 (𝜆 = 𝛼 2 )
𝑛𝜋
𝑢𝑛 (𝑥, 0) = (𝐴𝑛 cosh 0 + 𝐵𝑛 sinh 0) (cos 𝑎
𝑥) =0 which implies 𝐴𝑛 = 0

The revise solution for case 1 becomes


𝑛𝜋 𝑛𝜋
𝑢𝑛 (𝑥, 𝑦) = (𝐵𝑛 sinh 𝑎
𝑦) (cos 𝑎 𝑥) (10)

To find the values of the coefficients B_n, apply Equation (3b), u(x, b) = f(x), to (10):
u_n(x, b) = (B_n sinh(nπb/a)) cos(nπx/a) = f(x)

A single term above cannot match an arbitrary f(x). Hence, the superposition principle is utilized,
namely

u(x, b) = Σ_{n=1}^{∞} (B_n sinh(nπb/a)) cos(nπx/a) = f(x) (11)

Equation (11) can be solved using the half-range cosine series once the Case 1 solution (9) is
incorporated (writing B₁ = A₀):

u(x, y) = A₀y + Σ_{n=1}^{∞} (B_n sinh(nπy/a)) cos(nπx/a), with u(x, b) = f(x)

where

A₀ = (1/(ab)) ∫₀^a f(x) dx

and

B_n = (2/(a sinh(nπb/a))) ∫₀^a f(x) cos(nπx/a) dx

Dirichlet Problem

A Dirichlet problem is a boundary value problem in which a solution of an elliptic partial differential
equation, such as Laplace's equation ∇²u = 0, is sought within a bounded region R (in the plane or in
3-space) such that u takes on prescribed values on the entire boundary of the region.

The solution of a Dirichlet problem for a rectangular region

∂²u/∂x² + ∂²u/∂y² = 0, 0 < x < a, 0 < y < b

𝑢(0, 𝑦) = 0, 𝑢(𝑎, 𝑦) = 0, 0<𝑦<𝑏

𝑢(𝑥, 0) = 0, 𝑢(𝑥, 𝑏) = 𝑓(𝑥), 0 < 𝑥<𝑎

is

u(x, y) = Σ_{n=1}^{∞} A_n sinh(nπy/a) sin(nπx/a) (6)

where
A_n = (2/(a sinh(nπb/a))) ∫₀^a f(x) sin(nπx/a) dx (7)

In a special case when f(x) = 100, a = 1, b = 1, the coefficients An in (6) are given by

A_n = 200 (1 − (−1)^n) / (nπ sinh nπ)
The isotherms are the curves in the rectangular region along which the temperature u(x, y) is
constant [Figure 2(b)]. The isotherms can also be visualized as the curves of intersection (projected
into the xy-plane) of the horizontal planes u = 80, u = 60, and so on, with the surface in Figure 2(b).

Figure 2. Surface is graph of partial sums when f(x) = 100 and a = b = 1 in (6).

There is a maximum principle that states that a solution u of Laplace's equation within a bounded
region R with boundary B (such as a rectangle, circle, sphere, and so on) takes on its maximum and
minimum values on B. In addition, u can have no relative extrema (maxima or minima) in the
interior of the region R.

Superposition Principle

A Dirichlet problem for a rectangle can be readily solved by separation of variables when
homogeneous boundary conditions are specified on two parallel boundaries. However, the method
of separation of variables is not applicable to a Dirichlet problem when the boundary conditions on
all four sides of the rectangle are nonhomogeneous. To get around this difficulty, the problem

∂²u/∂x² + ∂²u/∂y² = 0, 0 < x < a, 0 < y < b

𝑢(0, 𝑦) = 𝐹(𝑦), 𝑢(𝑎, 𝑦) = 𝐺(𝑦), 0<𝑦<𝑏 (8)

𝑢(𝑥, 0) = 𝑓(𝑥), 𝑢(𝑥, 𝑏) = 𝑔(𝑥), 0 < 𝑥<𝑎

is broken into two problems, each of which has homogeneous boundary conditions on parallel
boundaries, as given below.

Problem 1:

∂²u₁/∂x² + ∂²u₁/∂y² = 0, 0 < x < a, 0 < y < b

𝑢1 (0, 𝑦) = 0, 𝑢1 (𝑎, 𝑦) = 0, 0<𝑦<𝑏

𝑢1 (𝑥, 0) = 𝑓(𝑥), 𝑢1 (𝑥, 𝑏) = 𝑔(𝑥), 0 < 𝑥<𝑎

Problem 2:

∂²u₂/∂x² + ∂²u₂/∂y² = 0, 0 < x < a, 0 < y < b

𝑢2 (0, 𝑦) = 𝐹(𝑦), 𝑢2 (𝑎, 𝑦) = 𝐺(𝑦), 0<𝑦<𝑏

𝑢2 (𝑥, 0) = 0, 𝑢2 (𝑥, 𝑏) = 0, 0 < 𝑥<𝑎

Suppose u1 and u2 are the solutions of Problem 1 and Problem 2, respectively. If

𝑢(𝑥, 𝑦) = 𝑢1 (𝑥, 𝑦) + 𝑢2 (𝑥, 𝑦)

is defined, it can be seen that u satisfies all boundary conditions in the original problem (8).

For example,

𝑢(0, 𝑦) = 𝑢1 (0, 𝑦) + 𝑢2 (0, 𝑦) = 0 + 𝐹(𝑦) = 𝐹(𝑦)

𝑢(𝑥, 𝑏) = 𝑢1 (𝑥, 𝑏) + 𝑢2 (𝑥, 𝑏) = 𝑔(𝑥) + 0 = 𝑔(𝑥)

and so on. Therefore, by solving Problems 1 and 2 and adding their solutions, the original problem is
solved. The additive property of solutions is known as the superposition principle.
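The bookkeeping above can be illustrated numerically by solving Problem 1, Problem 2, and the original problem with the same Jacobi iteration on a coarse grid and comparing. The boundary data f and G below are assumptions chosen for the demonstration:

```python
import math

def jacobi(bottom, top, left, right, n, iters=400):
    # Laplace's equation on the unit square with Dirichlet data on the four edges,
    # solved by Jacobi iteration; u[i][j] approximates u(i/n, j/n).
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        u[i][0], u[i][n] = bottom(i / n), top(i / n)
    for j in range(n + 1):
        u[0][j], u[n][j] = left(j / n), right(j / n)
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n):
            for j in range(1, n):
                new[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1])
        u = new
    return u

N = 20
zero = lambda s: 0.0
f = lambda x: math.sin(math.pi * x)   # assumed data on the edge y = 0
G = lambda y: y * (1.0 - y)           # assumed data on the edge x = 1

u1 = jacobi(f, zero, zero, zero, N)   # nonzero data on one edge only
u2 = jacobi(zero, zero, zero, G, N)   # nonzero data on the other edge only
u_full = jacobi(f, zero, zero, G, N)  # all the boundary data at once

gap = max(abs(u_full[i][j] - u1[i][j] - u2[i][j])
          for i in range(N + 1) for j in range(N + 1))
```

Because the iteration is linear in the boundary data, gap stays at rounding-error level: the full solution really is the sum of the two partial ones, which is the superposition principle in discrete form.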

Figure 3. Solution u = Solution u1 of Problem 1 + Solution u2 of Problem 2.

A solution of Problem 1 is

u₁(x, y) = Σ_{n=1}^{∞} {A_n cosh(nπy/a) + B_n sinh(nπy/a)} sin(nπx/a)

where

A_n = (2/a) ∫₀^a f(x) sin(nπx/a) dx

B_n = (1/sinh(nπb/a)) ((2/a) ∫₀^a g(x) sin(nπx/a) dx − A_n cosh(nπb/a))

and that a solution of Problem 2 is



u₂(x, y) = Σ_{n=1}^{∞} {A_n cosh(nπx/b) + B_n sinh(nπx/b)} sin(nπy/b)

where

A_n = (2/b) ∫₀^b F(y) sin(nπy/b) dy

B_n = (1/sinh(nπa/b)) ((2/b) ∫₀^b G(y) sin(nπy/b) dy − A_n cosh(nπa/b))

