Sunteți pe pagina 1din 103

Table of Contents

Table of Contents 1

Math 331: Applied Mathematics III Lecture Notes I Ordinary Differential Equations 6

1 Ordinary Differential Equations of the First Order 7


1.1 Basic Concepts and Ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
By 1.2 Separable Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Semu Mitiku and Tilahun Abebaw 1.2.1 Equation Reducible to Separable Form . . . . . . . . . . . . . . . . . . 13
1.3 Exact Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Department of Mathematics
1.4 Linear First Order Differential Equations . . . . . . . . . . . . . . . . . . . . . 25
Addis Ababa University 1.5 *Nonlinear Differential Equations of the First Order . . . . . . . . . . . . . . . 29
1.5.1 The Bernoulli Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 29
November 12, 2011 1.5.2 The Riccati Equation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.5.3 The Clairuat Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

2 Ordinary Differential Equations of The Second and Higher Order 33


2.1 Basic Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 General Solution of Homogeneous Linear ODEs . . . . . . . . . . . . . . . . . . 34
2.2.1 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3 Homogeneous LODE with Constant Coefficients . . . . . . . . . . . . . . . . . 41
2.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4 Nonhomogeneous Equations with Constant Coefficients . . . . . . . . . . . . . 45
2.4.1 The undetermined coefficient method . . . . . . . . . . . . . . . . . . . 46
2.4.2 Variation of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.5 The Laplace Transform Method to Solve ODEs . . . . . . . . . . . . . . . . . . 53
2.6 The Cauchy-Euler Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
CONTENTS 2 CONTENTS 3

2.7 *The Power Series Solution Method . . . . . . . . . . . . . . . . . . . . . . . . 61 5.3.1 Green’s Theorem for Multiply Connected Regions . . . . . . . . . . . . . 125
2.8 Systems of ODE of the First Order . . . . . . . . . . . . . . . . . . . . . . . . 63 5.4 Surface Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
2.8.1 Eigenvalue Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 5.4.1 Normal Vector and Tangent plane to a Surface . . . . . . . . . . . . . . 128
2.8.2 The Method of Elimination: . . . . . . . . . . . . . . . . . . . . . . . . 69 5.4.2 Applications of Surface Integrals . . . . . . . . . . . . . . . . . . . . . . 132
2.8.3 Reduction of higher order ODEs to systems of ODE of the first order . . 72 5.5 Divergence and Stock’s Theorems . . . . . . . . . . . . . . . . . . . . . . . . . 136
2.9 Numerical Methods to Solve ODEs . . . . . . . . . . . . . . . . . . . . . . . . 74 5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
2.9.1 Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.9.2 Runge-Kutta Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
III Complex Analysis 144
2.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
6 COMPLEX ANALYTIC FUNCTIONS 146
3 *Nonlinear ODEs and Qualitative Analysis 76
6.1 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
3.1 Critical Points and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.2 Complex Functions, Differential Calculus and Analyticity . . . . . . . . . . . . . 151
3.1.1 Stability for linear systems . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.2.1 Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
3.1.2 Stability for nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . 78
6.2.2 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3.2 Stability by Lyapunav’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.3 The Cauchy - Riemann Equation . . . . . . . . . . . . . . . . . . . . . . . . . 155
3.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.3.1 Test for Analyticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.4 Elementary Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
II Vector Analysis 82 6.4.1 Exponential Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.4.2 Trigonometric and Hyperbolic Functions . . . . . . . . . . . . . . . . . 160
4 Vector Differential Calculus 84
6.4.3 Polar form and Multi-Valuedness. . . . . . . . . . . . . . . . . . . . . . 162
4.1 Vector Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.4.4 The Logarithmic Functions . . . . . . . . . . . . . . . . . . . . . . . . 162
4.1.1 Vector Functions of One Variable in Space . . . . . . . . . . . . . . . . 84
6.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.1.2 Limit of A Vector Valued Function . . . . . . . . . . . . . . . . . . . . 85
4.1.3 Derivative of a Vector Function . . . . . . . . . . . . . . . . . . . . . . 87 7 COMPLEX INTEGRAL CALCULUS 166
4.1.4 Vector and Scalar Fields . . . . . . . . . . . . . . . . . . . . . . . . . . 88 7.1 Complex Integration: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.2 The Gradient Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 7.2 Cauchy’s Integral Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.2.1 Level Surfaces, Tangent Planes and Normal Lines . . . . . . . . . . . . 91 7.3 Cauchy’s Integral Formula and The Derivative of Analytic Functions. . . . . . . 173
4.3 Curves and Arc length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 7.4 Cauchy’s Theorem for Multiply Connected Domains . . . . . . . . . . . . . . . 176
4.4 Tangent, Curvature and Torsion . . . . . . . . . . . . . . . . . . . . . . . . . . 97 7.5 Fundamental Theorem of Complex Integral Calculus . . . . . . . . . . . . . . . 178
4.5 Divergence and Curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 7.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.5.1 Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 8 TAYLOR AND LAURENT SERIES 182
8.1 Sequence and Series of Complex Numbers . . . . . . . . . . . . . . . . . . . . 182
5 Line and Surface Integrals 110 8.2 Complex Taylor Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.1 Line Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 8.3 Laurent Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.2 Line Integrals Independent of Path . . . . . . . . . . . . . . . . . . . . . . . . 112 8.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.3 Green’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
CONTENTS 4 CONTENTS 5

9 INTEGRATION BY THE METHOD OF RESIDUE. 194 This page is left blank intensionally.
9.1 Zeros and Classification of Singularities. . . . . . . . . . . . . . . . . . . . . . . 194
9.2 The Residue Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
9.3 Evaluation of Real Integrals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
9.3.1 Improper Integrals: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Chapter 1

Ordinary Differential Equations of the


First Order
Part I
Part 1 of this material deals with equations that contain one or more derivatives of a function of

Ordinary Differential Equations a single variable and such equations are called ordinary differential equations, which can be used
to model a phenomena of interest in the sciences, engineering, economics, ecological studies, and
other areas.

In the first section we will see the basic concepts and ideas and in the remaining sections we will
consider equations which involve the first derivative of a given independent variable with respect
to an independent variable, which are called Ordinary Differential Equations of the First Order.

1.1 Basic Concepts and Ideas

The derivative 𝑑𝑦/𝑑𝑥 of a function 𝑦 = 𝑓 (𝑥) is itself another function 𝑓 ′ (𝑥) found by an appro-
2
priate rule of differentiation. For example, the function 𝑦 = 𝑒𝑥 is differentiable on the interval
2
(−∞, ∞) and by the Chain Rule its derivative is 𝑑𝑦/𝑑𝑥 = 2𝑥𝑒𝑥 . If we replace the right-hand
expression of the last equation by the symbol y, the equation becomes
𝑑𝑦
= 2𝑥𝑦. (1.1)
𝑑𝑥

In differentiation, the problem was ”Given a function 𝑦 = 𝑓 (𝑥), find its derivative.”
1.1 Basic Concepts and Ideas 8 1.1 Basic Concepts and Ideas 9

Now, the problem we face here is ”If we are given an equation such as (1.1), is there some way Classification by Order
or method by which we can find the unknown function 𝑦 = 𝑓 (𝑥) that satisfy the given equation,
The order of a differential equation (either ODE or PDE) is the order of the highest derivative
without prior knowledge how it was constructed?” These kind of problems are the ones we are
that appear in the equation. For example,
going to focus on in this part of the course.
𝑑𝑦 𝑑2 𝑦 𝑑𝑦
Definition 1.1.1. An equation involving derivatives of one or more dependent variables with 4𝑥 +𝑦 =𝑥
𝑑𝑥 𝑑𝑥2 𝑑𝑥
+ 4 − 6𝑦 = 𝑒𝑥
respect to one or more independent variables is called a Differential Equation ( DE).
are first and second-order ordinary differential equations respectively.
Example 1.1.1. The general 𝑛th −order ordinary differential equation in one dependent variable is given by the
𝑑𝑦 𝑑3 𝑥 𝑑2 𝑥 ∂𝑣 ∂𝑣 general form
+ 𝑦 = 𝑥, 3
+ 5 2 + 3𝑥 = sin 𝑡, + + 5𝑣 = 2. (1.2)
𝑑𝑥 𝑑𝑡 𝑑𝑡 ∂𝑠 ∂𝑡 𝐹 (𝑥, 𝑦, 𝑦 ′ , 𝑦 ′′ , ..., 𝑦 (𝑛) ) = 0, (1.3)
are all Differential Equations. where F is a real-valued function of 𝑛 + 2 variables 𝑥, 𝑦, 𝑦 ′ , 𝑦 ′′ , ..., 𝑦 (𝑛) .

Differential equations can be classified by their type, order, and in term of linearity. We will Remark 1.1.2. For both practical and theoretical reasons we shall also make the assumption
see these classifications before going to the solution concept. hereafter that it is possible to explicitly solve the differential equation of the form (1.3) uniquely
for the highest derivative 𝑦 (𝑛) in terms of the remaining 𝑛 + 1 variables 𝑥, 𝑦, 𝑦 ′ , 𝑦 ′′ , ..., 𝑦 (𝑛−1) .
Classification by Type Then the differential equation (1.3) becomes

𝑑𝑛 𝑦
∙ If an equation contains only ordinary derivatives of one or more dependent variables with = 𝑓 (𝑥, 𝑦, 𝑦 ′ , ..., 𝑦 (𝑛−1) ), (1.4)
𝑑𝑥𝑛
respect to a single independent variable, then it is said to be an ordinary differential
equation (ODE). where 𝑓 is a real-valued continuous function and this is referred to as the normal form of (1.3).

For example, Example 1.1.2. The normal form of the first-order equation 4𝑥𝑦 ′ + 𝑦 = 𝑥 is

𝑑𝑦 𝑑2 𝑦 𝑑𝑦 𝑑3 𝑥 𝑑2 𝑥
𝑦′ =
𝑥−𝑦
+ 𝑦 = 𝑥, 2
+ 𝑥𝑦( )2 = 0 and 3
+ 5 2 + 3𝑥 = sin 𝑡
𝑑𝑥 𝑑𝑥 𝑑𝑥 𝑑𝑡 𝑑𝑡 4𝑥

are all ordinary differential equations.


and the normal form of the second-order equation 𝑦 ′′ − 𝑦 + 6𝑦 = 0 is

∙ If a function is defined in terms of two or more independent variables, the corresponding 𝑦 ′′ = 𝑦 ′ − 6𝑦.
derivative will be a partial derivative with respect to each independent variable. An equation
The first order ordinary differential equation is generally expressed as:
involving partial derivatives of one or more dependent variables of two or more independent
variables is called a partial differential equation (PDE). 𝐹 (𝑥, 𝑦, 𝑦 ′ ) = 0 or 𝑦 ′ = 𝑓 (𝑥, 𝑦).
For example,
∂ 2𝑢 ∂ 2𝑢 ∂𝑢 ∂𝑣 ∂𝑣 For example, the differential equation 𝑦 ′ + 𝑦 = 𝑥 is equivalent to 𝑦 ′ + 𝑦 − 𝑥 = 0. If 𝐹 (𝑥, 𝑦, 𝑦 ′ ) =
and + + 5𝑣 = 2.
∂𝑥2 ∂𝑡 ∂𝑡 ∂𝑠 ∂𝑡
= 2 −2
𝑦 ′ + 𝑦 − 𝑥, then the given differential equation becomes of the form 𝐹 (𝑥, 𝑦, 𝑦 ′ ) = 0.
are both partial differential equations.

In this part we will only consider the case of ordinary differential equations.
1.1 Basic Concepts and Ideas 10 1.2 Separable Differential Equations 11

Classification by Linearity Example 1.1.3. Consider the differential equation 𝑦 ′′ + 𝑦 = 0.


Let ℎ(𝑥) = 2 sin 𝑥 + 3 cos 𝑥. Then 𝑦 = ℎ(𝑥), 𝑦 ′ = ℎ′ (𝑥) = 2 cos 𝑥 − 3 sin 𝑥, 𝑦 ′′ = ℎ′′ (𝑥) =
An 𝑛𝑡ℎ -order ordinary differential equation (1.3) is said to be linear if F is linear in 𝑦, 𝑦 ′ , ..., 𝑦 (𝑛) .
−2 sin 𝑥 − 3 cos 𝑥 and 𝑦 ′′ + 𝑦 = (−2 sin 𝑥 − 3 cos 𝑥) + (2 sin 𝑥 + 3 cos 𝑥) = 0, which means ℎ(𝑥)
This means that an 𝑛𝑡ℎ -order linear ordinary differential equation is of the form
satisfies the given DE and hence it is an explicit solution.
(𝑛) (𝑛−1) ′
𝑎𝑛 (𝑥)𝑦 + 𝑎𝑛−1 (𝑥)𝑦 + ⋅ ⋅ ⋅ + 𝑎1 (𝑥)𝑦 + 𝑎0 (𝑥)𝑦 − 𝑏(𝑥) = 0, (1.5)
Example 1.1.4. Consider the differential equation 𝑦𝑦 ′ = −𝑥.
Differentiating both sides of the equation 𝑥2 + 𝑦 2 − 1 = 0, (𝑦 > 0) with respect to 𝑥 we get
where 𝑎𝑛 (𝑥) ∕= 0.
𝑑 2 𝑑
If 𝑏(𝑥) ≡ 0, the equation (1.5) is called a homogeneous DE and otherwise it is called nonho- (0).
𝑑𝑥 𝑑𝑥
(𝑥 + 𝑦 2 − 1) =
mogeneous.
That is,
𝑑𝑦
Notation 2𝑥 + 2𝑦= 0,
𝑑𝑥
which is equivalent to the equation 𝑥 + 𝑦𝑦 ′ = 0 ⇐⇒ 𝑦𝑦 ′ = −𝑥.
We may equivalently use the notations
Hence, 𝑥2 + 𝑦 2 − 1 = 0 is an implicit solution of the DE 𝑦𝑦 ′ = −𝑥 on (−1, 1), since 𝑦 > 0.
𝑑𝑛 𝑦
and 𝑦 (𝑛)
𝑑𝑥𝑛 We are now in a position to solve some differential equations. There are different methods of
interchangeably for the 𝑛𝑡ℎ −order derivative of 𝑦 with respect to 𝑥. Using this notation, equation solving differential equations and one method that works for one DE may not work for another.
(1.5) can be equivalently written as In this chapter we will consider some of these methods for solving ordinary differential equations
of the first order.
𝑑𝑛 𝑦 𝑑𝑛−1 𝑦 𝑑𝑦
𝑎𝑛 (𝑥) 𝑛
𝑑𝑥 𝑑𝑥 𝑑𝑥
+ 𝑎𝑛−1 (𝑥) 𝑛−1 + ⋅ ⋅ ⋅ + 𝑎1 (𝑥) + 𝑎0 (𝑥)𝑦 = 𝑏(𝑥).

1.2 Separable Differential Equations


Solution Concept
Consider differential equation
Consider the equation 𝑦 ′ + 2𝑥𝑦 = 0, which is a first order differential equation for the unknown 𝑑𝑦
2 = 𝑓 (𝑥). (1.6)
function 𝑦(𝑥). One can easily check that the function 𝑦(𝑥) = 𝑒−𝑥 satisfies the given equation 𝑑𝑥
on (−∞, ∞) and we say that 𝑒−2𝑥 is a solution for the given differential equation. Then 𝑑𝑦 = 𝑓 (𝑥)𝑑𝑥 and it can be solved by integration. If 𝑓 (𝑥) is a continuous function, then
integrating both sides of (1.6) gives
Definition 1.1.3. Let ℎ(𝑥) be a real valued function defined on an interval [𝑎, 𝑏] and having 𝑛𝑡ℎ
order derivative for all 𝑥 ∈ (𝑎, 𝑏). If ℎ(𝑥) satisfies the 𝑛th order ODE (1.5) on (𝑎, 𝑏), that is, 𝑦= 𝑓 (𝑥)𝑑𝑥 = 𝐺(𝑥) + 𝑐,

1. 𝐹 (𝑥, ℎ(𝑥), ℎ′ (𝑥), ℎ′′ (𝑥), . . . , ℎ(𝑛) (𝑥)) is defined for all 𝑥 ∈ (𝑎, 𝑏) and where G(x) is an antiderivative (indefinite integral) of 𝑓 (𝑥).

2. 𝐹 (𝑥, ℎ(𝑥), ℎ′ (𝑥), ℎ′′ (𝑥), . . . , ℎ(𝑛) (𝑥)) = 0, for all 𝑥 ∈ (𝑎, 𝑏), Example 1.2.1.

then 𝑦 = ℎ(𝑥) is called a ( an Explicit) solution of the ODE on [𝑎, 𝑏]. 1. If 𝑦 ′ = 𝑥, then 𝑦(𝑥) = 0
𝑡𝑑𝑡 + 𝐶 = 12 𝑥2 + 𝐶
∫𝑥

Sometimes a solution of a differential equation may appear as an implicit function, i.e. the solution 2. If 𝑦 ′ = 𝑠𝑖𝑛(1 + 𝑥2 ), then 𝑦(𝑥) = 0
𝑠𝑖𝑛(1 + 𝑡2 )𝑑𝑡 + 𝐶. However, it is difficult to find an
can be expressed implicitly in the form: ℎ(𝑥, 𝑦) = 0, where ℎ is some continuous function of 𝑥 explicit solution formula for this problem. (In such cases one may use numerical methods
∫𝑥

and 𝑦, and such solution is called an Implicit Solution of the DE. to get approximate solutions.)
1.2 Separable Differential Equations 12 1.2 Separable Differential Equations 13

Many first-order ODEs can be reduced or transformed to the form which implies
𝑒2𝑥
= + 𝑐,
−1

𝑔(𝑦)𝑦 = 𝑓 (𝑥), 𝑦 2
where 𝑐 is a constant of integration. Then solve for 𝑦 to get
where 𝑔 and 𝑓 are continuous functions. Then, from elementary calculus we have:
𝑦(𝑥) = ,
−2
(𝑒2𝑥 + 𝑐)
𝑔(𝑦)𝑑𝑦 = 𝑓 (𝑥)𝑑𝑥. which is an explicit solution of the given first order differential equation.
Such type of equations are called separable equations. Integrating both sides we get:

𝑔(𝑦)𝑑𝑦 = 𝑓 (𝑥)𝑑𝑥 + 𝑐 Remark 1.2.1. It is recommended to write an explicit solution to the differential equation when
ever possible. However, sometimes solving for the dependent variable (in our case 𝑦) may not
� �

is the general solution of the given equation. be possible. In those cases one can represent the final solution by an implicit solution of the
Example 1.2.2. Solve the DE 6𝑦𝑦 + 4𝑥 = 0.
′ differential equation.

Solution: 1.2.1 Equation Reducible to Separable Form

The equation 6𝑦𝑦 ′ + 4𝑥 = 0 is equivalent to There are some differential equations which are not separable, but they can be transformed to
𝑑𝑦 a separable form by simple change of variables. We will see some of the possible substitutions
6𝑦
𝑑𝑥
= −4𝑥
hereunder.
and then 6𝑦𝑑𝑦 = −4𝑥𝑑𝑥. Integrating both sides,
A. Linear Substitution
6𝑦𝑑𝑦 = (−4𝑥)𝑑𝑥,
� �

Suppose we have a differential equation that can be written in the form:


gives
3𝑦 2 + 2𝑥2 = 𝐶, 𝑦 ′ = 𝑔(𝑎𝑥 + 𝑏𝑦 + 𝑐) (1.7)

which is an implicit solution of the given first order differential equation. Such an equation is not in general separable. However, if we set 𝑢 = 𝑎𝑥 + 𝑏𝑦 + 𝑐, we get
Example 1.2.3. Solve the DE 𝑦 ′ = 𝑦 2 𝑒−𝑥 . 𝑑𝑢 𝑑𝑦
=𝑎+𝑏 .
𝑑𝑥 𝑑𝑥
First rewrite the equation as Or
𝑑𝑦 𝑑𝑦 1 𝑑𝑢 𝑎
= 𝑦 2 𝑒2𝑥 . =
𝑑𝑥 𝑑𝑥 𝑏 𝑑𝑥 𝑏
− .
If 𝑦 ∕= 0, this has the differential form Thus (1.7) will be transformed into
1
d𝑦 = 𝑒2𝑥 d𝑥, 1 𝑑𝑢 𝑎
𝑦2
𝑏 𝑑𝑥 𝑏
− = 𝑔(𝑢),

where the variables have been separated. Integrating both sides we have where 𝑢 and 𝑥 can be separated.
1
𝑑𝑦 = 𝑒2𝑥 𝑑𝑥, Example 1.2.4. Solve the differential equation 𝑦 ′ = (𝑥 + 𝑦)2 .
𝑦2
� �
1.2 Separable Differential Equations 14 1.2 Separable Differential Equations 15

Solution: B. Quotient Substitution

Let 𝑢 = 𝑥 + 𝑦. Then 𝑢′ = 1 + 𝑦 ′ which implies 𝑦 ′ = 𝑢′ − 1. With this substitution the equation Suppose we have an equation that can be written in the form
2
𝑦 = (𝑥 + 𝑦) is equivalent to

𝑦
𝑦 ′ = 𝑔( ).
𝑥
𝑑𝑢
= 𝑢2 + 1. Let us substitute
𝑑𝑥
𝑢′ − 1 = 𝑢2 ⇐⇒
𝑦
𝑢= .
Then 𝑥
𝑑𝑢
= 𝑑𝑥 Then
𝑢2 + 1 𝑑𝑢 1 𝑦
=
𝑥𝑦 ′ − 𝑦
and integrate both sides, 𝑑𝑥 𝑥2 𝑥 𝑥
= 𝑦′ − 2 .
𝑑𝑢
= 𝑑𝑥 This implies,
𝑢2 + 1 𝑦
� �

𝑦 ′ = 𝑥𝑢′ + = 𝑥𝑢′ + 𝑢.
to get arctan 𝑢 = 𝑥 + 𝑐 for an arbitrary constant 𝑐. Substituting back 𝑢 = 𝑥 + 𝑦 in the last 𝑥
equation gives us the general solution of the given DE to be arctan(𝑥 + 𝑦) = 𝑥 + 𝑐. Thus, the differential equation
𝑦
𝑦 ′ = 𝑔( )
Example 1.2.5. Solve the differential equation (2𝑥 − 4𝑦 + 5)𝑦 ′ + 𝑥 − 2𝑦 + 3 = 0. 𝑥
is reduced to the equation 𝑥𝑢′ = 𝑔(𝑢) − 𝑢 which is equivalent to the differential equation

Solution 𝑑𝑥 𝑑𝑢
= .
𝑥 𝑔(𝑢) − 𝑢

Then by integrating we obtain a general solution.


Let 𝑢 = 𝑥 − 2𝑦. Then, 𝑢′ = 1 − 2𝑦 ′ which implies 𝑦 ′ = 12 (1 − 𝑢′ ). Therefore, the equation
(2𝑥 − 4𝑦 + 5)𝑦 ′ + 𝑥 − 2𝑦 + 3 = 0 becomes (2𝑢 + 5) 12 (1 − 𝑢′ ) + 𝑢 + 3 = 0. Simplifying this we
get (2𝑢 + 5) − (2𝑢 + 5)𝑢′ + 2𝑢 + 6 = 0 which implies Example 1.2.6. Solve 𝑥2 𝑦 ′ = 𝑥2 + 𝑥𝑦 + 𝑦 2 .

2𝑢 + 5 𝑑𝑢
= 1. Solution
4𝑢 + 11 𝑑𝑥
(2𝑢 + 5)𝑢′ = 4𝑢 + 11 ⇐⇒
� �

Then For 𝑥 ∕= 0, the differential equation 𝑥2 𝑦 ′ = 𝑥2 + 𝑥𝑦 + 𝑦 2 is equivalent to


4𝑢 + 10 1
)𝑑𝑢 = 2𝑑𝑥.
4𝑢 + 11 4𝑢 + 11
𝑑𝑢 = 2𝑑𝑥 ⇐⇒ (1 −
𝑦′ = 1 + + .
𝑥 𝑥
Now we integrate both sides
𝑦 � 𝑦 �2

1
)𝑑𝑢 = 2𝑑𝑥 Let 𝑢 = 𝑥𝑦 . Then 𝑔(𝑢) = 1 + 𝑢 + 𝑢2 and we get, 𝑥𝑢′ = (1 + 𝑢 + 𝑢2 ) − 𝑢 = 1 + 𝑢2 , which implies
4𝑢 + 11
(1 −
� �

𝑑𝑢 𝑑𝑥
and we get = .
1 1 + 𝑢2 𝑥
4
𝑢 − ln ∣4𝑢 + 11∣ = 2𝑥 + 𝑐1 .
We then integrate
𝑑𝑢 𝑑𝑥
But 𝑢 = 𝑥 − 2𝑦. Then substituting this in the above equation gives us =
1 + 𝑢2 𝑥
� �

1
and get arctan 𝑢 = ln ∣𝑥∣ + 𝑐.
4
𝑥 − 2𝑦 − ln ∣4𝑥 − 8𝑦 + 11∣ = 2𝑥 + 𝑐1
𝑦
Now substituting 𝑢 = 𝑥
gives us
for an arbitrary constant 𝑐1 , or equivalently 4𝑥 + 8𝑦 + ln ∣4𝑥 − 8𝑦 + 11∣ = 𝐶, where 𝐶 = −4𝑐1 .
𝑦
𝑥
arctan( ) = ln ∣𝑥∣ + 𝑐 = ln ∣𝑥∣ + ln 𝑘 = ln 𝑘∣𝑥∣, for some 𝑘 > 0.
1.2 Separable Differential Equations 16 1.2 Separable Differential Equations 17

That is, are called initial conditions (IC).


𝑦
= tan(ln 𝑘∣𝑥∣) A Differential Equation 𝐹 (𝑥, 𝑦, 𝑦 ′ , . . . , 𝑦 (𝑛) ) = 0 together with Initial Conditions is called an
𝑥
and solving for 𝑦 we get 𝑦(𝑥) = 𝑥 tan(ln 𝑘∣𝑥∣). Initial Value Problem (IVP) or Cauchy’s problem.

Example 1.2.7. Solve the DE: 2𝑥𝑦𝑦 ′ = 𝑦 2 − 𝑥2 . Remark 1.2.3. The number of initial conditions necessary to determine a unique solution equals
the order of the differential equation.
Solution:
Example 1.2.8. Solve the IVP 𝑦 ′′ + 𝑦 = 0, 𝑦(0) = 3 and 𝑦 ′ (0) = −4.
Divide both sides by 𝑥2 , for 𝑥 ∕= 0, to get
Solution:
2 𝑦′ =
𝑥 𝑥
− 1.
First find the general solution with two unknown constants. Given
�𝑦� � 𝑦 �2

Let 𝑢 = 𝑥𝑦 . Then 𝑔(𝑢) = 12 (𝑢 − 𝑢1 ) and we get

1 1 −(𝑢2 + 1) 𝑦(𝑥) = 𝑐1 cos 𝑥 + 𝑐2 sin 𝑥,


2 𝑢 2𝑢
𝑥𝑢′ = (𝑢 − ) − 𝑢 =
since 𝑦(𝑥) satisfy the Differential Equation 𝑦 ′′ +𝑦 = 0, it is a general solution. Now 𝑦(0) = 𝑐1 = 3
which implies
𝑑𝑥 and 𝑦 ′ (𝑥) = −𝑐1 sin 𝑥 + 𝑐2 cos 𝑥 implies 𝑦 ′ (0) = 𝑐2 = −4. Hence the particular solution of the
= .
−2𝑢𝑑𝑢
1 + 𝑢2 𝑥 equation is 𝑦(𝑥) = 3 cos 𝑥 − 4 sin 𝑥.
Then we integrate If, in addition, some conditions are imposed at 𝑥 = 𝑎 and at 𝑥 = 𝑏, where 𝑎 and 𝑏 are some real
𝑑𝑥
=
−2𝑢𝑑𝑢
1 + 𝑢2 𝑥 numbers, then the problem is called a Boundary-Value Problem (BVP).
� �

2
Remark 1.2.4. Total number of conditions that are required to solve the problem uniquely is
and get ln(1 + 𝑢 ) = − ln 𝑥 + 𝑐. This implies
2 (− ln 𝑥+𝑐) again equal to the order of the differential equation.
1+𝑢 =𝑒 = 𝐴𝑥, for a constant A.
𝑦 Example 1.2.9. Suppose 𝑦 ′′ + 𝑦 = 0, 𝑦(0) = 3 and 𝑦( 𝜋2 ) = 5.
Now we substitute 𝑢 = 𝑥
to get
𝑥2 + 𝑦 2 = 𝐴𝑥3 . This is a boundary value problem with 𝑦(0) = 𝑐1 which implies 𝑐1 = 3 and 𝑦( 𝜋2 ) = 𝑐2 , which
implies 𝑐2 = 5.
Notice that the solution of each of the previous examples contains arbitrary constants. To
determine the constants in these solutions we need to impose some additional conditions. For Hence, the particular solution of this BVP is

example, for the DE equation 6𝑦𝑦 ′ + 4𝑥 = 0, the equation 3𝑦 2 + 2𝑥2 = 𝐶 represents an implicit 𝑦(𝑥) = 3 cos 𝑥 + 5 sin 𝑥.
solution for an arbitrary constant 𝐶. But if 𝑦(0) = 3 is given in addition, then 𝐶 = 27 and
3𝑥2 + 2𝑥 = 27 will be a specific solution of the given DE. Two fundamental questions arise in considering an initial-value problem and these are:

Definition 1.2.2. For the differential equation


∙ Does a solution of the problem exist?
′ ′′ (𝑛)
𝐹 (𝑥, 𝑦, 𝑦 , 𝑦 , . . . , 𝑦 ) = 0,
∙ If a solution exists, is it unique?
conditions of the form:
Getting answer for these questions is crucial before we try find the solutions. The following
𝑦(𝑎) = 𝑦0 , 𝑦 ′ (𝑎) = 𝑦1 , . . . , 𝑦 (𝑛−1) (𝑎) = 𝑦(𝑛−1) theorem answers these questions.
1.3 Exact Differential Equations 18 1.3 Exact Differential Equations 19

Theorem 1.2.5 (Existence and uniqueness of a solution). If 𝑓 (𝑥, 𝑦) is continuous function on is called an exact differential equation in some domain 𝐷 (an open connected set of points)
some rectangular region 𝑅 in the 𝑥𝑦− plane containing the point (𝑎, 𝑏) in its interior , then the if there is a function 𝐹 (𝑥, 𝑦) such that
problem ∂𝐹 ∂𝐹
𝑦 ′ = 𝑓 (𝑥, 𝑦), with 𝑦(𝑎) = 𝑏 (1.8) = 𝑀 (𝑥, 𝑦) and = 𝑁 (𝑥, 𝑦),
∂𝑥 ∂𝑦
has at least one solution defined on some open interval of 𝑥 containing 𝑥 = 𝑎. for all (𝑥, 𝑦) ∈ 𝐷.
If, in addition, the function
∂𝑓 If we can find a function 𝐹 (𝑥, 𝑦) such that
∂𝑦
∂𝐹 ∂𝐹
is continuous on R, then the solution to the above equation (1.8) is unique on some open interval = 𝑀 (𝑥, 𝑦) and = 𝑁 (𝑥, 𝑦),
∂𝑥 ∂𝑦
containing 𝑥 = 𝑎.
then the differential equation 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 is just 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 =
Remark 1.2.6. The above condition for uniqueness can be eased by using a condition piecewise 𝑑𝐹 = 0. But recall that, if 𝑑𝐹 = 0, then 𝐹 (𝑥, 𝑦) = constant. The equation 𝐹 (𝑥, 𝑦) = 𝑐,
continuous instead of the condition that “ ∂𝑓
∂𝑦
is continuous”. where 𝑐 is an arbitrary constant, implicitly defines the general solution of the deferential equation
𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0.

1.3 Exact Differential Equations


Now let us ask the following two fundamental questions. Given a Differential Equation

Consider the differential equation:


sin 𝑦 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0
𝑦′ =
2𝑦 − 𝑥 cos 𝑦
1. How can we determine the existence of such a function 𝐹 (𝑥, 𝑦) ?
or equivalently
sin 𝑦𝑑𝑥 + (𝑥 cos 𝑦 − 2𝑦)𝑑𝑦 = 0. 2. If it exists, how can we find it ?
Notice that the left hand side is the (total) differential of the function
The following theorem will answer the first question.
2
∂𝑁
Theorem 1.3.2 (Test for Exactness). Let 𝑀 (𝑥, 𝑦), 𝑁 (𝑥, 𝑦), ∂𝑀 and be all continuous func-
𝐹 (𝑥, 𝑦) = 𝑥 sin 𝑦 − 𝑦 .
∂𝑦 ∂𝑥

Recall that the total differential of a function 𝐹 (𝑥, 𝑦) of two variables is tions within a rectangle 𝑅 (or some domain) in the 𝑥𝑦-plane. Then

∂𝐹 ∂𝐹 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦


𝑑𝐹 (𝑥, 𝑦) = 𝑑𝑥 + 𝑑𝑦,
∂𝑥 ∂𝑦
is an exact differential in 𝑅 if and only if
for all (x,y) in the domain of 𝐹 .
∂𝑀 ∂𝑁
=
∂𝑦 ∂𝑥
Thus for 𝐹 (𝑥, 𝑦) = 𝑥 sin 𝑦 − 𝑦 2 , 𝑑𝐹 (𝑥, 𝑦) = sin 𝑦𝑑𝑥 + (𝑥 cos 𝑦 − 2𝑦)𝑑𝑦 = 0. This implies that
every where in 𝑅.
𝐹 (𝑥, 𝑦) = 𝐶, that is, 𝑥 sin 𝑦 − 𝑦 2 = 𝐶, is the solution of the above DE.
Example 1.3.1. Consider the equation
Definition 1.3.1. The expression
𝑑𝑦 2𝑥𝑦 3 + 2
𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 .
𝑑𝑥 3𝑥 𝑦 + 8𝑒4𝑦
=− 2 2
1.3 Exact Differential Equations 20 1.3 Exact Differential Equations 21

Then write Solution


3 2 2 4𝑦
(2𝑥𝑦 + 2)𝑑𝑥 + (3𝑥 𝑦 + 8𝑒 )𝑑𝑦 = 0.
3
Let 𝑀 (𝑥, 𝑦) = sin 𝑦 and 𝑁 (𝑥, 𝑦) = 𝑥 cos 𝑦 − 2𝑦. Then 𝑀𝑦 = cos 𝑦 = 𝑁𝑥 . Since 𝑀, 𝑁, 𝑀𝑦 , 𝑁𝑥
Let 𝑀 (𝑥, 𝑦) = 2𝑥𝑦 + 2 and 𝑁 (𝑥, 𝑦) = 3𝑥3 𝑦 2 + 8𝑒4𝑦 . Then
are all continuous in R2 , the given equation is exact. Thus, there exists a function 𝐹 (𝑥, 𝑦) such
∂𝑀 ∂𝑁
= 6𝑥𝑦 = . that
∂𝑦 ∂𝑥 ∂𝐹 ∂𝐹
= sin 𝑦 and
∂𝑥 ∂𝑦
= 𝑥 cos 𝑦 − 2𝑦,
Therefore, the given differential equation is exact.
∂𝐹
which implies 𝐹 (𝑥, 𝑦) = sin 𝑦𝑑𝑥 = 𝑥 sin 𝑦 + 𝐴(𝑦) and ∂𝑦
= 𝑥 cos 𝑦 + 𝐴′ (𝑦). That is, 𝑥 cos 𝑦 −
Example 1.3.2. Consider the equation ′ ′

2𝑦 = 𝑥 cos 𝑦 + 𝐴 (𝑦) which implies 𝐴 (𝑦) = −2𝑦 and hence


−𝑥𝑦 1
)𝑑𝑥 + ( + 𝑥 ln 𝑦)𝑑𝑦 = 0.
𝑦
(𝑦 ln 𝑦 − 𝑒
𝐴(𝑦) = −2𝑦𝑑𝑦 = −𝑦 2 + 𝐵.
1 ∂𝑀 ∂𝑁

Let 𝑀 (𝑥, 𝑦) = 𝑦 ln 𝑦 − 𝑒−𝑥𝑦 and 𝑦


+ 𝑥 ln 𝑦. Then ∂𝑦
= ln 𝑦 + 𝑥𝑒−𝑥𝑦 + 𝑦 and ∂𝑥
= ln 𝑦, which
∂𝑀 ∂𝑁
implies ∂𝑦
∕= ∂𝑥
. Therefore, the given differential equation is not exact. Therefore, 𝐹 (𝑥, 𝑦) = 𝑥 sin 𝑦 − 𝑦 2 + 𝐵 = constant, which implies

After knowing the exactness of a differential equation, the next question is ”How can we solve 𝑥 sin 𝑦 − 𝑦 2 = 𝐶
the given equation?” The method for this is described here below.
determines 𝑦(𝑥) implicitly.
Suppose a differential equation 𝑀 (𝑥, 𝑦)𝑑𝑥+𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 is exact. Then, there exists a function
Example 1.3.4. Solve the differential equation
𝐹 (𝑥, 𝑦) such that
∂𝐹 ∂𝐹
𝑀= and 𝑁 = . (𝑥3 + 3𝑥𝑦 2 )𝑑𝑥 + (3𝑥2 𝑦 + 𝑦 3 )𝑑𝑦 = 0.
∂𝑥 ∂𝑦
∂𝐹
From 𝑀 = ∂𝑥
, we have (by integrating with respect to 𝑥)
Solution
𝐹 (𝑥, 𝑦) = 𝑀 𝑑𝑥 + 𝐴(𝑦), (1.9)
Step 1: Checking Exactness

where 𝐴(𝑦) is only a function of 𝑦 but constant with respect to 𝑥.


Now to determine 𝐴(𝑦) (the constant of integration), differentiate equation (1.9) with respect Let 𝑀 (𝑥, 𝑦) = 𝑥3 + 3𝑥𝑦 2 and 𝑁 (𝑥, 𝑦) = 3𝑥2 𝑦 + 𝑦 3 . Then
to 𝑦 to get ∂𝑀 ∂𝑁
= 6𝑥𝑦 = .
∂𝑦 ∂𝑥
∂𝐹 ∂ Therefore the given equation is exact.
= 𝑀 𝑑𝑥 + 𝐴′ (𝑦)
∂𝑦 ∂𝑦

which implies
∂𝑀 ∂𝑀 Step 2: Finding Implicit Solution
𝑁 (𝑥, 𝑦) = 𝑑𝑥
∂𝑦 ∂𝑦
𝑑𝑥 + 𝐴′ (𝑦) and hence 𝐴′ (𝑦) = 𝑁 (𝑥, 𝑦) −
� �

Then to find 𝐹 (𝑥, 𝑦) we use


by exactness. Therefore,
1 3
∂𝑀 𝐹 (𝑥, 𝑦) = 𝑀 𝑑𝑥 + 𝐴(𝑦) = (𝑥3 + 3𝑥𝑦 2 )𝑑𝑥 + 𝐴(𝑦) = 𝑥4 + 𝑥2 𝑦 2 + 𝐴(𝑦)
𝐴(𝑦) = 𝑑𝑥 𝑑𝑦. 4 2
� �

∂𝑦
𝑁 (𝑥, 𝑦) −
� � � �

where 𝐴(𝑦) is a function of 𝑦 only. To find A(y);


Example 1.3.3. Solve the differential equation
∂𝐹
= 3𝑥2 𝑦 + 𝐴′ (𝑦) = 𝑁 = 3𝑥2 𝑦 + 𝑦 3 ,
∂𝑦
sin 𝑦𝑑𝑥 + (𝑥 cos 𝑦 − 2𝑦)𝑑𝑦 = 0.
1.3 Exact Differential Equations 22 1.3 Exact Differential Equations 23

which implies that 𝐴′ (𝑦) = 𝑦 3 and then 𝐴(𝑦) = 𝑦 3 𝑑𝑦 = 14 𝑦 4 + 𝐶. (Of course 𝜇(𝑥, 𝑦) ∕= 0 so that the two equations are equivalent.)
Therefore,

Example 1.3.5. Consider the differential equation


1 4 3 2 2 1 4
𝐹 (𝑥, 𝑦) = 𝑥 + 𝑥 𝑦 + 𝑦 +𝐶
4 2 4 (3𝑦 + 4𝑥𝑦 2 )𝑑𝑥 + (2𝑥 + 3𝑥2 𝑦)𝑑𝑦 = 0.
1 4
= (𝑥 + 6𝑥2 𝑦 2 + 𝑦 4 ) + 𝐶 (1.10)
4 Let 𝑀 (𝑥, 𝑦) = 3𝑦 + 4𝑥𝑦 2 and 𝑁 (𝑥, 𝑦) = 2𝑥 + 3𝑥2 𝑦. Then

Step 3: Checking ∂𝑀 ∂𝑁
= 3 + 8𝑥𝑦 and = 2 + 6𝑥𝑦,
∂𝑦 ∂𝑥
Differentiate implicitly to check for 𝑦 ′ : which implies
∂𝑀 ∂𝑁
.
1 ∂𝑦 ∂𝑥
∕=
(4𝑥3 + 12𝑥𝑦 2 + 12𝑥2 𝑦𝑦 ′ + 4𝑦 3 𝑦 ′ ) = 0
4 Hence the DE is not exact.
which implies
But if 𝜇(𝑥, 𝑦) = 𝑥2 𝑦 then 𝜇(𝑥, 𝑦)𝑀 𝑑𝑥 + 𝜇(𝑥, 𝑦)𝑁 𝑑𝑦 = 0 is exact, since
3 2 2 3 ′
𝑥 + 3𝑥𝑦 + (3𝑥 𝑦 + 𝑦 )𝑦 = 0
∂(𝜇(𝑥, 𝑦)𝑀 ) ∂(𝜇(𝑥, 𝑦)𝑁 )
and then = 6𝑥2 𝑦 + 12𝑥3 𝑦 2 = .
∂𝑦 ∂𝑥
(𝑥3 + 3𝑥𝑦 2 )𝑑𝑥 + (3𝑥2 𝑦 + 𝑦 3 )𝑑𝑦 = 0.
Suppose we have a differential equation which is not exact but it can be made exact by an
Exercise 1.3.3. Solve each of the following differential equations. integrating factor. Then we can ask the following fundamental questions.

1. (𝑦 + 𝑒𝑦 )𝑑𝑥 + 𝑥(1 + 𝑒𝑦 )𝑑𝑦 = 0; 𝑦 = 0 when 𝑥 = 1. 1. How can we find the integrating factor 𝜇 ?
𝑑𝑦 2𝑥 + 1
2. = ; 𝑦(0) = 0. 2. Given 𝜇, how can we solve the problem?
𝑑𝑥 2𝑦 + 1
3. sin ℎ𝑥 cos 𝑦𝑑𝑥 = cos ℎ𝑥 sin 𝑦𝑑𝑦. The method is described below.
Clearly 𝜇(𝑥, 𝑦) is any (non-zero) solution of the equation
Integrating Factors
∂ ∂
(𝜇𝑁 ) = (𝜇𝑁 ) (1.11)
The differential equation 𝑦𝑑𝑥 + 2𝑥𝑑𝑦 = 0 is not exact. But if we multiply this equation by y, the ∂𝑦 ∂𝑥
equation is changed to exact equation. That is, which is equivalent to the equation

𝑦 2 𝑑𝑥 + 2𝑥𝑦𝑑𝑦 = 0 𝜇𝑦 𝑀 + 𝜇𝑀𝑦 = 𝜇𝑥 𝑁 + 𝜇𝑁𝑥 .

is exact, since
∂𝑦 2 ∂(2𝑥𝑦) This is a first-order partial differential equation in 𝜇. However the integrating factor 𝜇 can be
= 2𝑦 = .
∂𝑦 ∂𝑥 found to be a function of 𝑥 alone 𝜇(𝑥) (or a function of 𝑦 alone 𝜇(𝑦)).
Definition 1.3.4. If the differential equation 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 is not exact but the Then in this case equation (1.11) will be reduced to
differential equation 𝑑𝜇 𝑑𝜇
= 𝜇( ) (1.12)
𝑀 𝑦 − 𝑁𝑥
𝜇(𝑥, 𝑦)𝑀 (𝑥, )𝑑𝑥 + 𝜇(𝑥, 𝑦)𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 𝜇𝑀𝑦 = 𝑁 + 𝜇𝑁𝑥 or equivalently
𝑑𝑥 𝑑𝑥 𝑁
is exact, then the multiplicative function 𝜇(𝑥, 𝑦) is called an integrating factor of the DE. which is a separable differential equation.
1.3 Exact Differential Equations 24 1.4 Linear First Order Differential Equations 25

This idea works correctly if the ratio However,


=
𝑀𝑦 − 𝑁𝑥 𝑀 𝑦 − 𝑁𝑥 0−3
𝑁 𝑀 1
= −3

is a function of x only, that is, can be considered as a function of 𝑦 alone. Therefore, it is possible to solve for 𝜇(𝑦) and is given
by
𝑝(𝑥) = = a function of 𝑥.
𝑀 𝑦 − 𝑁𝑥
𝑁 (−3)𝑑𝑦
𝜇(𝑦) = 𝑒− = 𝑒3𝑦 .

In this case Now to solve the problem in (1.13), multiplying the given equation by 𝜇(𝑦) = 𝑒3𝑦 we get the
𝑑𝜇
= 𝑑𝑥,
𝑀 𝑦 − 𝑁𝑥
𝜇 𝑁 equation
� �

which implies 𝑒3𝑦 𝑑𝑥 + (3𝑥 − 𝑒−2𝑦 )𝑒3𝑦 𝑑𝑦 = 0,


𝑝(𝑥)𝑑𝑥
𝜇(𝑥) = 𝑒 . which is an exact differential equation. Thus, there exists 𝐹 (𝑥, 𝑦) such that

If the quotient
𝑀 𝑦 − 𝑁𝑥
is not a function of 𝑥 alone, then the integrating factor 𝜇 can not be ∂𝐹 ∂𝐹
𝑁 = 𝑒3𝑦 and
∂𝑥 ∂𝑦
= (3𝑥 − 𝑒−2𝑦 )𝑒3𝑦
obtained using the above procedure, but we can try to find 𝜇 as a function of 𝑦 alone, 𝜇(𝑦).
Then when 𝜇(𝑦) is only a function of 𝑦, equation (1.11) will be reduced to which implies that
∂𝐹
𝐹 (𝑥, 𝑦) = 𝑑𝑥 = 𝑒3𝑦 𝑑𝑥 = 𝑥𝑒3𝑦 + 𝐴(𝑦).
𝑑𝜇 ∂𝑥
� �

𝑀 + 𝜇𝑀𝑦 = 𝜇𝑁𝑥
𝑑𝑦 To determine 𝐴(𝑦) we use 𝐹 (𝑥, 𝑦) which is obtained above and differentiate it with respect to 𝑦
which implies and equate the result with (3𝑥 − 𝑒−2𝑦 )𝑒3𝑦 . Hence we have
∂𝐹
𝑑𝜇 = 3𝑥𝑒3𝑦 + 𝐴′ (𝑦).
∂𝑦
(3𝑥 − 𝑒−2𝑦 )𝑒3𝑦 =
, which is a separable differential equation.
𝑀 𝑦 − 𝑁𝑥
𝑑𝑦 𝑀
= −𝜇
� �

Then 3𝑥𝑒3𝑦 − 𝑒 = 3𝑥𝑒3𝑦 + 𝐴′ (𝑦) which implies that


If the fraction
𝑀𝑦 − 𝑁𝑥
𝑀 𝐴′ (𝑦) = −𝑒𝑦 and hence 𝐴(𝑦) = − 𝑒𝑦 𝑑𝑦 = −𝑒𝑦 + 𝐵.

is a function of 𝑦 alone, then


𝑞(𝑦)𝑑𝑦 Therefore, 𝐹 (𝑥, 𝑦) = 𝑥𝑒3𝑦 − 𝑒𝑦 + 𝐵 = 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡. That means 𝑥𝑒3𝑦 − 𝑒𝑦 = 𝐶, where 𝐶 is an
𝜇(𝑦) = 𝑒− .

arbitrary constant, defines the implicit solution of the Differential Equation in (1.13).
Example 1.3.6. Consider the equation

𝑑𝑥 + (3𝑥 − 𝑒−2𝑦 )𝑑𝑦 = 0. (1.13) 1.4 Linear First Order Differential Equations
∂𝑀 ∂𝑁 ∂𝑀 ∂𝑁
Let 𝑀 = 1 and 𝑁 = 3𝑥 − 𝑒−2𝑦 . Then ∂𝑦
= 0 and ∂𝑥
= 3 and hence ∂𝑦
∕= ∂𝑥
which implies Consider the general first-order linear differential equation
that the given differential equation is not exact.
Assume that the given equation has an integrating factor. But 𝑎1 (𝑥)𝑦 ′ + 𝑎0 (𝑥)𝑦 = 𝑓 (𝑥), 𝑎1 (𝑥) ∕= 0 (1.14)

= =
𝑀 𝑦 − 𝑁𝑥 0−3 −3𝑒2𝑦 By dividing both sides by 𝑎1 (𝑥) ∕= 0, we get 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥), where
𝑁 3𝑥 − 𝑒−2𝑦 3𝑥𝑒2𝑦
𝑎0 (𝑥) 𝑓 (𝑥)
which is not a function of 𝑥 alone. Hence obtaining 𝜇(𝑥) is not possible. 𝑝(𝑥) = and 𝑞(𝑥) = .
𝑎1 (𝑥) 𝑎1 (𝑥)
1.4 Linear First Order Differential Equations 26 1.4 Linear First Order Differential Equations 27

Here we assume that 𝑝(𝑥) and 𝑞(𝑥) are continuous. 2. If (𝑥 + 2)𝑦 ′ − 𝑥𝑦 = 0, then
𝑦′ 𝑥
There is a general approach to solve linear equations. To solve for 𝑦(𝑥) from the given equation = .
𝑦 𝑥+2
we start with the simplest case, when 𝑞(𝑥) = 0. That is, (1.14) becomes
We integrate
𝑑𝑦 𝑥
′ = 𝑑𝑥
𝑦 + 𝑝(𝑥)𝑦 = 0. (1.15) 𝑦 𝑥+2
� �

𝐴
This problem is called a homogeneous version of (1.14). Now to solve (1.15) first we get 𝑦 ′ = to get 𝑦(𝑥) = 𝐴𝑒(𝑥−2 ln(𝑥+2) ), or 𝑦(𝑥) = 𝑒𝑥 which is the general solution of the
(𝑥 + 2)2
−𝑝(𝑥)𝑦 and we divide both sides by 𝑦 and get given equation.

𝑦′
Now we want to solve the general first - order linear ordinary differential equation
𝑦
= −𝑝(𝑥).

Then by integrating 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥) (1.16)


𝑑𝑦
𝑝(𝑥)𝑑𝑥
𝑦
=−
This can be done in two steps.
� �

we get
𝑝(𝑥)𝑑𝑥 + 𝐶,
Step 1.
ln ∣𝑦∣ = −

which implies
𝑝(𝑥)𝑑𝑥
𝑝(𝑥)𝑑𝑥 𝑝(𝑥)𝑑𝑥 Consider the homogeneous version of (1.16) and find the solution to be 𝑦ℎ (𝑥) = 𝐴𝑒− ,
= 𝐵𝑒− , for 𝐵 > 0.

∣𝑦∣ = 𝑒𝑐−
∫ ∫

where ℎ indicate the general solution for the homogeneous part of the equation
Therefore,
𝑝(𝑥)𝑑𝑥
𝑦(𝑥) = 𝐴𝑒− , where 𝐴 is an arbitrary constant,
Step 2.

is a general solution of (1.15).


To get the solution for the non-homogeneous part of the equation we vary the constant 𝐴 with
Example 1.4.1. Solve the following differential equations. different value of 𝑥.
Hence we assume that
1. 𝑦 ′ + 2𝑥𝑦 = 0
𝑝(𝑥)𝑑𝑥
𝑦(𝑥) = 𝐴(𝑥)𝑒− (1.17)

2. (𝑥 + 2)𝑦 ′ − 𝑥𝑦 = 0
is a solution for (1.16). Then (1.17) must satisfy (1.16). i.e.

Solution: 𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥 ′


+ 𝑝(𝑥) 𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥 = 𝑞(𝑥),
∫ ∫

𝑦′
( ) ( )

1. If 𝑦 ′ + 2𝑥𝑦 = 0, then 𝑦
= −2𝑥. We integrate which implies
𝑑𝑦 𝑝(𝑥)𝑑𝑥 𝑝(𝑥)𝑑𝑥 𝑝(𝑥)𝑑𝑥
𝑥𝑑𝑥 𝐴′ (𝑥)𝑒− + 𝐴(𝑥)(−𝑝(𝑥))𝑒− + 𝑝(𝑥)𝐴(𝑥)𝑒− = 𝑞(𝑥).
𝑦
= −2
∫ ∫ ∫
� �

2
to get ln ∣𝑦∣ = 𝑥2 + 𝐶 and hence 𝑦(𝑥) = 𝐴𝑒−𝑥 is the general solution. Simplifying this gives us,
𝑝(𝑥)𝑑𝑥
𝐴′ (𝑥) = 𝑞(𝑥)𝑒 .

1.4 Linear First Order Differential Equations 28 1.5 *Nonlinear Differential Equations of the First Order 29

Now integrate both sides 𝐴′ (𝑥)𝑑𝑥 = 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥 to get Step 4. We integrate

𝑑(𝑦𝑒3𝑥 )
𝑑𝑥 = 6𝑒3𝑥
∫ ∫

𝐴(𝑥) = 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥 + 𝐶. 𝑑𝑥


� �

and get 𝑦(𝑥)𝑒3𝑥 = 2𝑒3𝑥 + 𝐶. Then solve for 𝑦(𝑥) to get the general solution 𝑦(𝑥) =
Hence the general solution of the non-homogeneous ODE (1.16) is given by: 𝐶𝑒−3𝑥 + 2 for an arbitrary constant 𝐶.

𝑦(𝑥) = 𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥 = 𝑒− 𝑝(𝑥)𝑑𝑥 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥 + 𝐶


∫ ∫ ∫
�� �

1.5 *Nonlinear Differential Equations of the First Order


= 𝐶𝑒− 𝑝(𝑥)𝑑𝑥 + 𝑒− 𝑝(𝑥)𝑑𝑥 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥
∫ ∫ ∫

= 𝑦ℎ (𝑥) + 𝑦𝑝 (𝑥) Some nonlinear differential equations can be reduced to linear form. In this section we will
consider three famous nonlinear equations: Bernoulli Equation, Riccati Equation and Clairuat
Remark 1.4.1. It may not be necessary to memorize this long formula for 𝑦(𝑥). Instead, we can
Equation
carry out the following procedure.

Step 1. If the differential equation is linear, 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥), then first compute 1.5.1 The Bernoulli Equation
𝑝(𝑥)𝑑𝑥
𝑒 . A differential equation of the form

This is called an integrating factor for the linear equation.


𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥)𝑦 𝛼 ,
Step 2. Multiply both sides of the differential equation by the integrating factor.
where 𝛼 is a constant, is called Bernoulli Equation.
Step 3. Write the left side of the resulting equation as the derivative of the product of 𝑦 and the If 𝛼 = 0, then the equation is linear and if 𝛼 = 1, then the equation is separable. We have seen
integrating factor. The integrating factor is designed to make this possible. The right side these two cases in the previous section.
is a function of just 𝑥. For 𝛼 ∕= 1, use the change of variable 𝑢 = 𝑦 1−𝛼 . Then by differentiating with respect to 𝑥, we

Step 4. Integrate both sides of this equation with respect to 𝑥 and solve the resulting equation for get 𝑢′ = (1 − 𝛼)𝑦 −𝛼 𝑦 ′ . But, from 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥)𝑦 𝛼 , we get 𝑦 ′ = 𝑞(𝑥)𝑦 𝛼 − 𝑝(𝑥)𝑦. Then

𝑦, obtaining the general solution.


𝑢′ = (1 − 𝛼)𝑦 −𝛼 𝑦 ′
Example 1.4.2. Solve the differential equation 𝑦 ′ + 3𝑦 = 6. = (1 − 𝛼)𝑦 −𝛼 (𝑞𝑦 𝛼 − 𝑝𝑦)
The given equation is linear with 𝑝(𝑥) = 3 and 𝑞(𝑥) = 6. = (1 − 𝛼)(𝑞 − 𝑝𝑦 −1−𝛼 )

Step 1. We compute the integrating factor = (1 − 𝛼)(𝑞 − 𝑝𝑢), since 𝑢 = 𝑦 1−𝛼

𝑝(𝑥)𝑑𝑥 3𝑑𝑥
𝑒 =𝑒 = 𝑒3𝑥 This implies that 𝑢′ + (1 − 𝛼)𝑢 = (1 − 𝛼)𝑞, which is a linear differential equation of first order
∫ ∫

and hence we can solve it using one of the methods we have seen in the previous sections.
Step 2. We multiply 𝑦 ′ + 3𝑦 = 6 by 𝑒3𝑥 to get 𝑦 ′ 𝑒3𝑥 + 3𝑦𝑒3𝑥 = 6𝑒3𝑥 .
Example 1.5.1. Solve the Bernoulli Equation 𝑦 ′ − 𝐴𝑦 = −𝐵𝑦 2 , where 𝐴 and 𝐵 are positive
Step 3. The above equation is equivalent to constants. This equation is called Varhulst Equation.
3𝑥
𝑑(𝑦𝑒 )
= 6𝑒3𝑥 .
𝑑𝑥
1.5 *Nonlinear Differential Equations of the First Order 30 1.5 *Nonlinear Differential Equations of the First Order 31

Solution 1.5.3 The Clairuat Equation

Since 𝛼 = 2, let 𝑢 = 𝑦 1−2 = 𝑦 −1 . Then 𝑢′ = −𝑦 −2 𝑦 ′ and substituting 𝐴𝑦 − 𝐵𝑦 2 for 𝑦 ′ we get A nonlinear differential equation of the form
′ −2 2 −1 ′
𝑢 = −𝑦 (𝐴𝑦 −𝐵𝑦 ) = 𝐵 −𝐴𝑦 = 𝐵 −𝐴𝑢. Then we get the differential equation 𝑢 +𝐴𝑢 = 𝐵
𝑦 = 𝑥𝑦 ′ + 𝑔(𝑦 ′ ),
which is equivalent to the equation 𝑑𝑢 − (𝐵 − 𝐴𝑢)𝑑𝑥 = 0. This implies

𝑑𝑢 is called Clairuat equation.


= 𝑑𝑥
𝐵 − 𝐴𝑢
and we integrate To solve such equation, let us differentiate both sides of the equation with respect to 𝑥. Then
𝑑𝑢
= 𝑑𝑥 we get, 𝑦 ′ = 𝑦 ′ + 𝑥𝑦 ′′ + 𝑔 ′ (𝑦 ′ )𝑦 ′′ ,
� �

𝐵 − 𝐴𝑢
𝐵 which implies that 𝑦 ′′ (𝑥 + 𝑔 ′ (𝑦 ′ )) = 0 and hence 𝑦 ′′ = 0 or 𝑥 + 𝑔 ′ (𝑦 ′ ) = 0.
and get 𝑢 = 𝐴
+ 𝐶𝑒−𝐴𝑥 .

Therefore, the general solution of the original differential equation is Solving 𝑦 ′′ = 0 gives us the general solution 𝑦 = 𝑎𝑥 + 𝑏 and solving 𝑥 + 𝑔 ′ (𝑦 ′ ) = 0 gives us a
singular solution (include definition of singular solution in the basic section ).
1 1
𝑦= = 𝐵
.
𝑢 𝐴
+ 𝐶𝑒−𝐴𝑥 Example 1.5.3. Solve the Clairuat equation
1
1.5.2 The Riccati Equation. 𝑦 = 𝑥𝑦 ′ + .
𝑦′
A differential equation of the form
Solution
′ 2
𝑦 = 𝑝(𝑥)𝑦 + 𝑞(𝑥)𝑦 + 𝑟(𝑥)
Differentiating both sides with respect to 𝑥 to get
is called Riccati equation. If 𝑝(𝑥) ≡ 0, then the equation is linear.
𝑦 ′′
𝑦 ′ = 𝑦 ′ + 𝑥′′ − .
(𝑦 ′ )2
If we can obtain one particular solution 𝑠(𝑥) of the Riccati equation, then the change of variables
This implies
1 1
𝑦 = 𝑠(𝑥) + 𝑦 ′′ 𝑥 − =0
𝑧 (𝑦 ′ )2
� �

transforms the Riccati equation in to a linear equation in 𝑥 and 𝑧. Then we find the general and then solving 𝑦 ′′ = 0 gives us a general solution 𝑦 = 𝑎𝑥 + 𝑏 and solving
solution of this linear equation and we use it to write the general solution of the original Riccati 1
𝑥− =0
equation. 1 − (𝑦 ′ )2
1 1 √
gives us a singular solution. Then (𝑦 ′ )2 = 𝑥
which implies 𝑦 ′ = √ . Hence 𝑦 = 2 𝑥 + 𝑐 is a
Example 1.5.2. Solve the Riccati equation ± 𝑥
singular solution.
′ 1 1
𝑥 𝑥
𝑦 = 2 𝑦 2 − 𝑦 + 1.
Remark 1.5.1. The general solution of the Clairuat equation is 𝑦 = 𝑎𝑥 + 𝑏. Therefore, our
(Hint: 𝑦 = 𝑥 is one solution.) main focus for such equation is the singular solution.
1.6 Exercises 32

1.6 Exercises

Exercise 1.6.1. Solve each of the following differential equations.

1. 𝑥𝑦 ′ + 𝑦 = 6𝑥2

2. 𝑥𝑦 ′ + 2𝑦 = 𝑥 + 2 with inital condition 𝑦(0) = 1. Chapter 2

Ordinary Differential Equations of The


Second and Higher Order

A second-order differential equation is a differential equation containing a second derivative of


a dependent variable with respect to the independent variable but no higher derivative. The
theory of second-order differential equations is vast, and here we will focus on linear second-order
equations, which have many important uses. Most of the results are given for a higher order
ODEs and second order ODEs are special cases, but most of our examples are for second order
ODEs.

2.1 Basic Theory

In this section, we will focus on the general theory of linear ordinary differential equations before
we start to discuss about solving such problems.

Definition 2.1.1. A linear ordinary differential equation of order 𝑛 in the dependent variable 𝑦
and independent variable 𝑥 is an equation which can be expressed in the form:

𝑎𝑛 (𝑥)𝑦 (𝑛) + 𝑎𝑛−1 (𝑥)𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 (𝑥)𝑦 ′ + 𝑎0 (𝑥) = 𝑓 (𝑥), (2.1)

where 𝑎𝑛 (𝑥) ∕≡ 0 and the functions 𝑎0 , . . . , 𝑎𝑛 are continuous real- valued functions of 𝑥 ∈ [𝑎, 𝑏].
The function 𝑓 (𝑥) is called the non-homogeneous term and all the points 𝑥𝜖 ∈ [𝑎, 𝑏] in which
𝑎𝑛 (𝑥𝜖 ) = 0 are called singular points of the DE (2.1).
2.2 General Solution of Homogeneous Linear ODEs 34 2.2 General Solution of Homogeneous Linear ODEs 35

If 𝑓 (𝑥) ≡ 0, then (2.1) is reduced to: Theorem 2.2.1 (Linear Combination of Solutions). If 𝑦1 , 𝑦2 , . . . , 𝑦𝑘 are solutions of the homo-
geneous linear ODE (2.1) and if 𝑐1 , 𝑐2 , . . . 𝑐𝑘 are arbitrary constants, then the linear combination
𝑎𝑛 (𝑥)𝑦 (𝑛) + 𝑎𝑛−1 (𝑥)𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 (𝑥)𝑦 ′ + 𝑎0 (𝑥) = 0 (2.2)
𝑘

This equation is known as homogeneous Linear ODE of order 𝑛. 𝑦 = 𝑐1 𝑦1 + 𝑐2 𝑦2 + . . . + 𝑐𝑘 𝑦𝑘 = 𝑐𝑖 𝑦𝑖


𝑖=1

Example 2.1.1. The equation 𝑦 ′′ + 3𝑥𝑦 ′ + 𝑥3 𝑦 = 𝑒𝑥 is a non homogeneous linear ordinary is also a solution of (2.1). That is, any linear combination of solutions of a linear homogeneous
differential equation of the 2nd order, whereas 𝑦 ′′ + 3𝑥𝑦 ′ + 𝑥3 𝑦 = 0 is a homogeneous linear differential equation is also a solution.
ordinary differential equation of the 2nd order.
Definition 2.2.2 (Linearly Dependent and Linearly Independent Functions).
Theorem 2.1.2 (Basic Existence Theorem for IVP). Consider the linear ODE given in (2.1),
where 𝑎0 , 𝑎1 , . . . , 𝑎𝑛−1 , 𝑎𝑛 and 𝑓 are continuous functions on the interval [𝑎, 𝑏] and 𝑎𝑛 (𝑥) ∕= 1. The functions 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are said to be Linearly Dependent (LD) on some interval [𝑎, 𝑏]
0, ∀𝑥 ∈ [𝑎, 𝑏]. Furthermore, let 𝑥0 be any point in [𝑎, 𝑏] and let 𝑐0 , 𝑐1 . . . 𝑐𝑛−1 be arbitrary real if there are constants 𝑐1 , 𝑐2 , . . . , 𝑐𝑛 , not all zero, such that
constants. Then there exists a unique solution function 𝑔(𝑥) of (2.1) on [𝑎, 𝑏] satisfying the initial
𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) + . . . + 𝑐𝑛 𝑓𝑛 (𝑥) = 0 (2.4)
conditions,
𝑔(𝑥0 ) = 𝑐0 , 𝑔 ′ (𝑥0 ) = 𝑐1 , . . . , 𝑔 𝑛−1 (𝑥0 ) = 𝑐𝑛−1 . for all 𝑥 ∈ [𝑎, 𝑏].

2. If the relation (2.4) is true only when 𝑐1 = 𝑐2 = . . . = 𝑐𝑛 = 0, then the functions


2.2 General Solution of Homogeneous Linear ODEs 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are said to be Linearly Independent (LI) on [𝑎, 𝑏].

Example 2.2.1. Examples of LD and LI functions.


Consider the linear differential equation

𝑦 ′′ + 𝑦 = 0. (2.3) 1. The functions 𝑓1 (𝑥) = 𝑒𝑥 and 𝑓2 (𝑥) = 4𝑒𝑥 are Linearly Dependent on R since

Then, 𝑦1 = cos 𝑥 and 𝑦2 = sin 𝑥 are solutions of the differential equation (2.3). Let 𝑐1 and 𝑐2 be −4𝑓1 (𝑥) + 𝑓2 (𝑥) = −4𝑒𝑥 + 4𝑒𝑥 = 0, for all 𝑥 ∈ R.
arbitrary constants. Then
2. The functions
𝑦 = 𝑐1 𝑦1 + 𝑐2 𝑦2 = 𝑐1 cos 𝑥 + 𝑐2 sin 𝑥 𝑓1 (𝑥) = 𝑒𝑥 , 𝑓2 (𝑥) = 𝑒−𝑥 , 𝑓3 (𝑥) = sinh 𝑥

is also a solution of (2.3). Indeed, 𝑦 ′ = −𝑐1 sin 𝑥 + 𝑐2 cos 𝑥, and 𝑦 ′′ = −𝑐1 cos 𝑥 − 𝑐2 sin 𝑥 which are linearly dependent on R since
implies that 𝑒𝑥 − 𝑒−𝑥
𝑓3 (𝑥) = sinh 𝑥 =
′′
𝑦 + 𝑦 = (−𝑐1 cos 𝑥 − 𝑐2 sin 𝑥) + (𝑐1 cos 𝑥 + 𝑐2 sin 𝑥) = 0 2

for all 𝑥. Therefore, any linear combination of the functions 𝑦1 = cos 𝑥 and 𝑦2 = cos 𝑥 is a and (1)𝑓1 (𝑥) + (−1)𝑓2 (𝑥) + (−2)𝑓3 (𝑥) = 0, ∀𝑥 ∈ R.
solution for the given differential equation. 3. The two functions 𝑓1 (𝑥) = 𝑥 and 𝑓2 (𝑥) = 𝑥3 are Linearly Independent on R, since for
This condition can be generalized for any homogenous linear differential equation in the following 𝑐1 , 𝑐2 ∈ R,
theorem. 𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) = 𝑐1 𝑥 + 𝑐2 𝑥3 = 0, ∀𝑥 ∈ R∖{0}

implies 𝑐1 = 0 and 𝑐2 = 0.
2.2 General Solution of Homogeneous Linear ODEs 36 2.2 General Solution of Homogeneous Linear ODEs 37

The following theorem guarantees that any 𝑛th order Linear Homogenous Ordinary Differential There is a simple test to determine whether a given set of functions is linearly independent or
Equation has 𝑛 linearly independent solutions. dependent on an open interval 𝐼 = (𝑎, 𝑏), for some real numbers 𝑎, 𝑏, by using the idea of
determinant of a matrix.
Theorem 2.2.3 (Existence of Linearly Independent Solutions for a LHODE). The Linear Ho-
mogenous Differential Equation (LHODE) (2.2) always has 𝑛 Linearly Independent (LI) solutions. Definition 2.2.5. Let 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑛 (𝑥) be 𝑛 real valued functions each of which has an
Furthermore, if 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑛 (𝑥) are 𝑛 LI solutions of (2.2), then every solution of (2.2) (𝑛 − 1)𝑡ℎ derivative on the interval [𝑎, 𝑏]. The determinant:
can be expressed as a linear combination of these solution functions. i.e. If y is a solution for 𝑓2 (𝑥) ... 𝑓𝑛 (𝑥)
(2.2), then
� �

𝑛 1 𝑓2

(𝑥) . . . 𝑓𝑛′ (𝑥)
� �

. . .
� 𝑓1 (𝑥) �

𝑦(𝑥) = 𝑐𝑖 𝑓𝑖 (𝑥) .. .. ..
� �
� 𝑓 ′ (𝑥) �

𝑖=1 (𝑛−1) (𝑛−1)


� � �

(𝑥) 𝑓2 (𝑥) . . . 𝑓𝑛 (𝑥)


W[f1 , f2 . . . , fn ] = � � = W(𝑥)
� �
� �

for some 𝑐1 , ..., 𝑐𝑛 ∈ R..


� (𝑛−1) �

is called the Wronskian of these 𝑛 functions.


� 𝑓1 �

Example 2.2.2. Consider the second order linear homogenous DE


Example 2.2.4. The function 𝑦1 (𝑥) = 𝑒2𝑥 and 𝑦2 (𝑥) = 𝑥𝑒4𝑥 are solutions of the second order
′′
𝑦 + 𝑦 = 0. linear homogenous differential equation 𝑦 ′′ − 4𝑦 ′ + 4𝑦 = 0. Then the Wronskian, W(x) of 𝑦1 and
𝑦2 is
Then 𝑓1 (𝑥) = sin 𝑥, 𝑓2 (𝑥) = cos 𝑥 are LI solutions of the given equation. Then {sin 𝑥, cos 𝑥} 𝑥𝑒4𝑥
is the fundamental set of solutions of the given DE and hence the general solution of the DE is 𝑒 + 4𝑥𝑒4𝑥
� �
� 𝑒2𝑥 �
� �

given by 𝑦(𝑥) = 𝑐1 sin 𝑥 + 𝑐2 cos 𝑥, for constants 𝑐1 , 𝑐2 ∈ R.


W(x) = � 2𝑥 4𝑥 � = 𝑒4𝑥 + 2𝑥𝑒4𝑥 − 2𝑥𝑒4𝑥 = 𝑒4𝑥
� 2𝑒 �

Definition 2.2.4. If 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑛 (𝑥) are 𝑛 linearly independent solutions of (2.2) on [𝑎, 𝑏], Question: Are the two functions 𝑦1 (𝑥) = 𝑒2𝑥 and 𝑦2 (𝑥) = 𝑥𝑒4𝑥 linearly independent?

The above question can be easily answered using the following theorem.
then the set {𝑓1 (𝑥), 𝑓2 (𝑥), . . . 𝑓𝑛 (𝑥)} is called the Fundamental Set of Solutions of (2.2) and
the function
𝑓 (𝑥) = 𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑓𝑛 (𝑥), 𝑥 ∈ [𝑎, 𝑏], Theorem 2.2.6 (Wronskian Test for Linearly Independence). The 𝑛 functions 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are
Linearly Independent on an interval [𝑎, 𝑏] if and only if the Wronskian of 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 is different
where 𝑐1 , 𝑐2 , . . . , 𝑐𝑛 are arbitrary constants is called a General Solution of (2.2) on [𝑎, 𝑏]. and from zero for some 𝑥 ∈ [𝑎, 𝑏]. That is, 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are LI if and only if there exists 𝑥 ∈ [𝑎, 𝑏]
each 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are called particular solutions. such that W(𝑥) ∕= 0.

Example 2.2.3. Consider the third order linear homogenous DE Example 2.2.5. 1. Show that 𝑥 and 𝑥2 are Linearly Independent.
′′′
𝑦 − 2𝑦 ′′ − 𝑦 ′ + 2𝑦 = 0.
Solution
𝑥 2𝑥
a) The functions 𝑒 , 𝑒 , 𝑒 −𝑥
are (particular) solutions (check!)
Consider the Wronskian of 𝑥 and 𝑥2 ,
b) 𝑒𝑥 , 𝑒−𝑥 and 𝑒2𝑥 are LI (check!)
2
� �

c) Therefore, the general solution of the given equation is given by:


� 𝑥 𝑥2 �
� �
W(x, x ) = � � = 2𝑥2 − 𝑥2 = 𝑥2
� 1 2𝑥 �

This implies W(𝑥, 𝑥2 ) = 𝑥2 ∕= 0, ∀𝑥 ∕= 0 and hence 𝑥 and 𝑥2 are LI.


𝑦(𝑥) = 𝑐1 𝑒𝑥 + 𝑐2 𝑒−𝑥 + 𝑐3 𝑒2𝑥 .
2. Show that 𝑒𝑥 , 𝑒−𝑥 , 𝑒2𝑥 are Linearly Independent.
2.2 General Solution of Homogeneous Linear ODEs 38 2.2 General Solution of Homogeneous Linear ODEs 39

Solution 2.2.1 Reduction of Order

Consider the Wronskian of 𝑒𝑥 , 𝑒−𝑥 and 𝑒2𝑥 In the preceding section we saw that the general solution of a homogeneous linear second-order
differential equation
𝑒𝑥 𝑒−𝑥 𝑒2𝑥
𝑦 ′′ + 𝑝(𝑥)𝑦 + 𝑞(𝑥)𝑦 = 0 (2.5)
� �
� �

𝑒𝑥 −𝑒−𝑥 2𝑒2𝑥
� �

𝑒𝑥 𝑒−𝑥 4𝑒2𝑥 is a linear combination 𝑦(𝑥) = 𝑐1 𝑦1 (𝑥) + 𝑐2 𝑦2 (𝑥), where 𝑦1 and 𝑦2 are linearly independent solu-
� �
�,

tions on some interval I.


W(x) = �� �
� �

which is equal to
� �

𝑒𝑥 (−4𝑒𝑥 + 2𝑒𝑥 ) − 𝑒−𝑥 (4𝑒3𝑥 − 2𝑒3𝑥 ) + 𝑒2𝑥 (1 + 1) In this method we can construct a second solution 𝑦2 of a homogeneous equation (2.5) (even
when the coefficients in (2.5) are variable) provided that we know a nontrivial solution 𝑦1 of the
= −2𝑒2𝑥 − 2𝑒2𝑥 + 2𝑒2𝑥 DE. The basic idea described in this section is that equation (2.5) can be reduced to a linear
2𝑥 first-order DE by means of a substitution involving the known solution 𝑦1 . A second solution 𝑦2
= −2𝑒 ∕= 0, ∀𝑥 ∈ R.
of (2.5) is apparent after this first-order differential equation is solved.
Hence, 𝑒𝑥 , 𝑒−𝑥 and 𝑒2𝑥 are Linearly Independent.

3. Show that the functions The method is described bellow.

𝑓1 (𝑥) = 𝑒𝑥 , 𝑓2 (𝑥) = 𝑒−𝑥 , 𝑓3 (𝑥) = sinh 𝑥


Suppose that 𝑦1 denotes a nontrivial solution of (2.5) and that 𝑦1 is defined on an interval I. We
are linearly dependent on R, since want to find a second solution 𝑦2 so that the set consisting of 𝑦1 and 𝑦2 is linearly independent on I.

.
𝑒𝑥 − 𝑒−𝑥
𝑓3 (𝑥) = sinh 𝑥 = The quotient 𝑦2 /𝑦1 is nonconstant on I, that is,
2
𝑦2 (𝑥)
Solution = 𝑢(𝑥)
𝑦1 (𝑥)

Consider the Wronskian of 𝑒𝑥 , 𝑒−𝑥 and sinh 𝑥 or 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥). The function 𝑢(𝑥) can be found by substituting
𝑥 −𝑥
𝑒−𝑥 𝑒 −𝑒 2
𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥)
𝑥 −𝑥
� �

2
� 𝑥 �

𝑥 −𝑥 into the given differential equation.


� 𝑒 �

𝑒−𝑥 𝑒 −𝑒
� �

2
W(x) = �� 𝑒𝑥 −𝑒−𝑥 𝑒 +𝑒 � = 0, ∀𝑥 ∈ R.

Consider the derivatives 𝑦2′ = 𝑢′ 𝑦1 + 𝑢𝑦1′ and 𝑦2′′ = 𝑢′′ 𝑦1 + 2𝑢′ 𝑦1′ + 𝑢𝑦1′′ and substituting these in
𝑥
� 𝑥 �

Hence 𝑒 , 𝑒 −𝑥
and sinh 𝑥 are linearly dependent.
� 𝑒 �

(2.5) we get
Remark 2.2.7. The Wronskian of 𝑛 solutions 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 of the DE (2.2) is either identically
zero on [𝑎, 𝑏] or else is never zero on [𝑎, 𝑏]. That is, if 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are solutions of the DE (2.2), (𝑢′′ 𝑦1 + 2𝑢′ 𝑦1′ + 𝑢𝑦1′′ ) + 𝑝(𝑥)(𝑢′ 𝑦1 + 𝑢𝑦1′ ) + 𝑞(𝑥)𝑢𝑦1 = 0

and simplifying this gives us


then 𝑊 (𝑥) = 0, ∀𝑥 ∈ [𝑎, 𝑏] if 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are LD. or 𝑊 (𝑥) ∕= 0, ∀𝑥 ∈ [𝑎, 𝑏] if 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are
LI.
𝑢′′ 𝑦1 + 𝑢′ (2𝑦1′ + 𝑝(𝑥)𝑦1 ) + 𝑢(𝑦1′′ + 𝑝(𝑥)𝑦1′ + 𝑞(𝑥)𝑦1 ) = 0.
2.2 General Solution of Homogeneous Linear ODEs 40 2.3 Homogeneous LODE with Constant Coefficients 41

But 𝑦1 , by assumption, is a solution of (2.5) and hence .... Other possible reduction methods, such as Bernoli or Ricati (one of the two) should be
mentioned here and one should be indicated in the exercises .......
𝑢(𝑦1′′ + 𝑝(𝑥)𝑦1′ + 𝑞(𝑥)𝑦1 ) = 0

which implies
𝑢′′ 𝑦1 + 𝑢′ (2𝑦1′ + 𝑝(𝑥)𝑦1 ) = 0. 2.3 Homogeneous LODE with Constant Coefficients
This is a second order DE in 𝑢. Let 𝑢′ = 𝑧. Then 𝑢′′ = 𝑧 ′ . Using separation of variables we get
Definition 2.3.1. A Differential Equation
𝑧′
=
−2𝑦1′
𝑧 (2.6)
−𝑝
𝑦1 𝑏𝑛 𝑦 (𝑛) + 𝑏𝑛−1 𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑏1 𝑦 ′ + 𝑏0 𝑦 = 0,
which is a first order DE and hence the name reduction of order and integrating and taking
where 𝑏0 , 𝑏1 , . . . , 𝑏𝑛 are all real constants, is called a Homogenous Linear Differential Equation of
constant coefficients.
the constant zero gives us ln 𝑧 = −2 ln 𝑦1 − 𝑝𝑑𝑥. This implies

𝑧 = 2 𝑒− 𝑝𝑑𝑥 .
Let 𝑓 (𝑥) be any solution of (2.6) in [𝑎, 𝑏]. Then
1 ∫
𝑦1
But 𝑧 = 𝑢′ . Then
𝑢′ = 𝑒 𝑏𝑛 𝑓 (𝑛) (𝑥) + 𝑏𝑛−1 𝑓 (𝑛−1) (𝑥) + ⋅ ⋅ ⋅ + 𝑏1 𝑓 ′ (𝑥) + 𝑏0 𝑓 (𝑥) = 0 for all 𝑥 ∈ [𝑎, 𝑏].
1 − ∫ 𝑝𝑑𝑥
𝑦12
and then Hence the derivatives of 𝑓 are linearly dependent since at least one of the coefficients 𝑏0 , 𝑏1 , . . . , 𝑏𝑛
𝑦2
=𝑢= 𝑒 𝑑𝑥. is different from zero.
𝑦1
1 − ∫ 𝑝𝑑𝑥
𝑦12
� � �

Therefore, the second solution for the given equation is The simplest case with this property is a function 𝑓 such that

𝑦2 = 𝑦 1 𝑒 𝑑𝑥. 𝑓 (𝑘) (𝑥) = 𝑐𝑓 (𝑥), ∀𝑥 ∈ [𝑎, 𝑏]


1 − ∫ 𝑝𝑑𝑥
𝑦12
� � �

for some constant 𝑐.


Example 2.2.6. The function 𝑦1 (𝑥) = 𝑥 is a solution of the homogenous DE
Let 𝑓 (𝑥) = 𝑒𝜆𝑥 . Then 𝑓 𝑘 (𝑥) = 𝜆𝑘 𝑓 (𝑥) = 𝜆𝑘 𝑒𝜆𝑥 which implies 𝑐 = 𝜆𝑘 .
(𝑥2 − 1)𝑦 ′′ − 2𝑥𝑦 ′ + 2𝑦 = 0.
Thus we will look for the solution of (2.6) in the form 𝑦 = 𝑒𝜆𝑥 where the constant 𝜆 will be
Solve the given DE. chosen so that 𝑦 = 𝑒𝜆𝑥 does satisfy the equation (2.6).
Now insert 𝑦 = 𝑒𝜆𝑥 into (2.6) to get;
Solution
(𝑏𝑛 𝜆𝑛 + 𝑏𝑛−1 𝜆𝑛−1 + ⋅ ⋅ ⋅ + 𝑏1 𝜆 + 𝑏0 )𝑒𝜆𝑥 = 0.
The given equation is equivalent to 𝑦 ′′ + 𝑝(𝑥)𝑦 ′ + 𝑞(𝑥)𝑦 = 0, where
Hence, if 𝑒𝜆𝑥 is a solution of the equation in (2.6), then 𝜆 should satisfy:
2
𝑝(𝑥) = and .
−2𝑥
𝑞(𝑥) = 2
𝑥2 − 1 𝑥 −1
If 𝑦2 is a second solution of the given equation then 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥), where 𝑏0 𝜆𝑛 + 𝑏1 𝜆𝑛−1 + ⋅ ⋅ ⋅ + 𝑏𝑛−1 𝜆 + 𝑏𝑛 = 0, (2.7)
1 ln ∣𝑥2 −1∣ 1 1
𝑢(𝑥) = 2
𝑒 2 −1 𝑑𝑥
𝑑𝑥 = 2
𝑒 since 𝑒𝜆𝑥 ∕= 0 for all 𝑥 ∈ R.
1 − ∫ 𝑥−2𝑥
𝑥 𝑥 𝑥 𝑥
𝑑𝑥 = (1 − 2 )𝑑𝑥 = 𝑥 + .
� � ( ) � � � � �

Definition 2.3.2. The algebraic equation (2.7) is called an Auxiliary equation or character-
Therefore, 𝑦1 (𝑥) = 𝑥 and 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥) = 𝑥2 + 1 are two linearly independent solutions of
istic equation of the given differential equation in (2.6).
the given equation and hence the general solution of the given equation is 𝑦(𝑥) = 𝑐1 𝑥+𝑐2 (𝑥2 +1),
where 𝑐1 and 𝑐2 are constants. There are 3 different cases of the roots of (2.7).
2.3 Homogeneous LODE with Constant Coefficients 42 2.3 Homogeneous LODE with Constant Coefficients 43

Case 1. Distinct Real Roots 2. if the characteristic equation has triple root 𝜆, then the corresponding linearly independent
solutions are 𝑒𝜆𝑥 , 𝑥𝑒𝜆𝑥 and 𝑥2 𝑒𝜆𝑥 .
Suppose that (2.7) has 𝑛 distinct roots, 𝜆1 , 𝜆2 , . . . 𝜆𝑛 where 𝜆𝑖 ∕= 𝜆𝑗 , for 𝑖 ∕= 𝑗. Then, the
solutions 𝑒𝜆1 𝑥 , 𝑒𝜆2 𝑥 , . . . , 𝑒𝜆𝑛 𝑥 are linearly independent. (Use the Wronskian to prove this.) Let us proof the first part of the above remark for a second order linear homogenous differential
equation .
If 𝜆1 , 𝜆2 , . . . , 𝜆𝑛 are the 𝑛 distinct real roots of (2.7), then the general solution of (2.6) is:
𝑛
If the given DE is 𝑎𝑦 ′′ + 𝑏𝑦 ′ + 𝑐𝑦 = 0, then its characteristic equation is 𝑎𝜆2 + 𝑏𝜆 + 𝑐 = 0 and then
−𝑏
𝑦(𝑥) = 𝑐1 𝑒𝜆1 𝑥 + 𝑐2 𝑒𝜆2 𝑥 + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑒𝜆𝑛 𝑥 = 𝑐𝑖 𝑒𝜆𝑖 𝑥 , 𝜆 = 𝜆 1 = 𝜆2 = 2𝑎
. One of the solution of the given DE is 𝑦1 = 𝑒𝜆𝑥 . Then we can use the method
𝑖=1 of reduction of order to find a second solution 𝑦2 so that 𝑦1 and 𝑦2 are linearly independent.

where 𝑐1 , 𝑐2 , . . . , 𝑐𝑛 are arbitrary constants. The given equation is equivalent to


𝑏 𝑐
Example 2.3.1. 1. For the differential equation 𝑦 ′′ −3𝑦 ′ +2𝑦 = 0, the characteristic equation 𝑦 ′′ + 𝑦 ′ + 𝑦 = 0
𝑎 𝑎
is: 𝜆2 − 3𝜆 + 2 = 0, and 𝜆1 = 2 and 𝜆2 = 1 are the two distinct real roots of this
and 𝑦2 = 𝑢𝑦1 , where
characteristic equation. Hence, the general solution of the given differential equation is
2𝑥 𝑥 𝑒− 𝑎 𝑑𝑥 𝑒𝑎
𝑦(𝑥) = 𝑐1 𝑒 + 𝑐2 𝑒 . 𝑢= = 1𝑑𝑥 = 𝑥,
∫ 𝑏

𝑒 𝜆𝑥 𝑒2𝜆𝑥
� � � � −𝑏 𝑥 �

−𝑏
since 2𝜆 =
( )2 𝑑𝑥 =

and hence 𝑦2 = 𝑥𝑒𝜆𝑥 .


2. For the differential equation 𝑦 ′′′ − 4𝑦 ′′ + 𝑦 ′ + 6𝑦 = 0, the corresponding characteristic
𝑎
equation is: 𝜆3 − 4𝜆2 + 𝜆 + 6 = 0 with distinct real roots 𝜆1 = 2, 𝜆2 = 3 and 𝜆3 = −1.
The following theorem is a generalization for the above remark.
Therefore, the general solution of the give equation is 𝑦(𝑥) = 𝑐1 𝑒2𝑥 + 𝑐2 𝑒3𝑥 + 𝑐3 𝑒−𝑥 .
Theorem 2.3.4.

Case 2. Repeated Real Roots 1. If the characteristic equation (2.7) has the real root 𝜆 occurring k times (𝑖.𝑒.𝜆1 = 𝜆2 =

To understand the situation let us consider the following example.


⋅ ⋅ ⋅ = 𝜆𝑘 ) where 𝑘 ≤ 𝑛, then the part of the general solution for (2.6) corresponding to
this k fold repeated root is
Example 2.3.2. Consider the DE 𝑦 ′′ − 6𝑦 ′ + 9𝑦 = 0. Then, its characteristic equation is 𝜆2 −
(𝑐1 + 𝑐2 𝑥 + 𝑐3 𝑥2 + ⋅ ⋅ ⋅ + 𝑐𝑘 𝑥𝑘−1 )𝑒𝜆𝑥
6𝜆 + 9 = 0, which implies (𝜆 − 3)2 = 0. Therefore, 𝜆1 = 𝜆2 = 3, which is a repeated real root.
One of the solutions of the given linear differential equation is 𝑒3𝑥 . 2. If further, the remaining roots are the distinct real roots 𝜆𝑘+1 , 𝜆𝑘+2 , . . . , 𝜆𝑛 , the general
solution of (2.6) will be:
Let 𝑦1 (𝑥) = 𝑒3𝑥 . The given equation will have two linearly independent solutions and the second
solution can be found by using the method of reduction of order. Let 𝑦2 be another solution 𝑦(𝑥) = 𝑐1 𝑒𝜆𝑥 + 𝑐2 𝑥𝑒𝜆𝑥 + 𝑐3 𝑥2 𝑒𝜆𝑥 + ⋅ ⋅ ⋅ + 𝑐𝑘 𝑥𝑘−1 𝑒𝜆𝑥 + 𝑐𝑘+1 𝑒𝜆𝑘+1 𝑥 + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑒𝜆𝑛 𝑥 .
so that 𝑦1 and 𝑦2 are linearly independent. Then 𝑦2 = 𝑢𝑦1 , where Example 2.3.3.
𝑒 𝑒
𝑢(𝑥) = 𝑑𝑥 = 1𝑑𝑥 = 𝑥. 1. Consider the Differential Equation
𝑒3𝑥 𝑒6𝑥
� � − ∫ −6𝑑𝑥 � � � 6𝑥 � �
( )2 𝑑𝑥 =

𝑦 (4) − 5𝑦 ′′′ + 6𝑦 ′′ + 4𝑦 ′ − 8𝑦 = 0.
Therefore 𝑦2 (𝑥) = 𝑥𝑒3𝑥 and 𝑦(𝑥) = 𝑐1 𝑒3𝑥 + 𝑐2 𝑥𝑒3𝑥 is a general solution for constants 𝑐1 and 𝑐2 .
The corresponding characteristic equation is 𝜆4 − 5𝜆3 + 6𝜆2 − 8 = 0 and the roots are
Remark 2.3.3. Given a differential equation:
𝜆1 = 𝜆2 = 𝜆3 = 2, 𝜆4 = −1.
1. if the characteristic equation has double real root 𝜆, then 𝑒𝜆𝑥 and 𝑥𝑒𝜆𝑥 are two linearly Therefore, the general solution is given by 𝑦(𝑥) = 𝑐1 𝑒2𝑥 + 𝑐2 𝑥𝑒2𝑥 + 𝑐3 𝑥2 𝑒2𝑥 + 𝑐4 𝑒−𝑥 , where
independent solutions and; 𝑐1 , 𝑐2 , 𝑐3 and 𝑐4 are constants.
2.3 Homogeneous LODE with Constant Coefficients 44 2.4 Nonhomogeneous Equations with Constant Coefficients 45
2. Consider the differential equation y′′′ − 4y′′ − 3y′ + 18y = 0. The roots of the characteristic equation are λ1 = λ2 = 3 and λ3 = −2, and hence the general solution of the equation is:
y(x) = c1 e^{3x} + c2 x e^{3x} + c3 e^{−2x}, where c1, c2 and c3 are constants.

Case 3. Conjugate Complex Roots

Suppose the equation (2.7) has a complex root λ = a + ib, a, b ∈ R. Then (we know from the theory of algebraic equations that) the conjugate λ̄ = a − ib is also a root of (2.7) and the corresponding part of the general solution of (2.6) will be:

k1 e^{(a+ib)x} + k2 e^{(a−ib)x}.

But e^{a+ib} = e^{a} e^{ib} = e^{a}(cos b + i sin b) (by applying Euler's formula), and then

k1 e^{(a+ib)x} + k2 e^{(a−ib)x} = k1 e^{ax}(cos bx + i sin bx) + k2 e^{ax}(cos bx − i sin bx)
= e^{ax}[(k1 + k2) cos bx + i(k1 − k2) sin bx]
= e^{ax}(c1 cos bx + c2 sin bx),

where c1 = k1 + k2 and c2 = i(k1 − k2) are arbitrary constants from the set of complex numbers C.
On the other hand, if a + ib = λ and a − ib = λ̄ are each k-fold roots of (2.7), then the corresponding part of the general solution is

e^{ax}[(c1 + c2 x + ⋅ ⋅ ⋅ + ck x^{k−1}) cos bx + i(c_{k+1} + c_{k+2} x + ⋅ ⋅ ⋅ + c_{2k} x^{k−1}) sin bx].

Example 2.3.4. Solve y′′ − 2y′ + 10y = 0.

Solution

The characteristic equation of the given equation is λ² − 2λ + 10 = 0 with roots λ1 = 1 + 3i and λ2 = 1 − 3i. Then y1 = e^{(1+3i)x} and y2 = e^{(1−3i)x} are two independent solutions of the given equation. Therefore, y = c1 y1 + c2 y2, where c1 and c2 are constants, is a general solution of the given equation. That means

y(x) = e^{x}(c1 cos 3x + c2 sin 3x).

2.3.1 Exercises

Exercise 2.3.5. Solve each of the following differential equations.

1. y′′ + y = 0.

2. y′′ − 6y′ + 25y = 0.

3. y^{(4)} − 4y′′′ + 14y′′ − 20y′ + 25y = 0, where λ1 = λ2 = 1 + 2i and λ3 = λ4 = 1 − 2i.

2.4 Nonhomogeneous Equations with Constant Coefficients

Consider the equation

m (d²x/dt²) + c (dx/dt) + kx = F(t), (2.8)

which governs the displacement x(t) of a mechanical oscillator. Here F(t) is the forcing function, and the equation is a non-homogeneous linear ODE with constant coefficients. There are several practical problems which can be modeled in this form.
Recall that differential equations of the form

b_n(x) y^{(n)} + ⋅ ⋅ ⋅ + b_1(x) y′ + b_0(x) y = f(x), where f(x) ≠ 0, (2.9)

are called nonhomogeneous differential equations. In the previous sections we have seen how to solve homogeneous differential equations. In this section we are going to see how to solve differential equations of the form

b_n y^{(n)} + b_{n−1} y^{(n−1)} + ⋅ ⋅ ⋅ + b_1 y′ + b_0 y = f(x), (2.10)

where b_n, . . . , b_0 are constants, which is called a nonhomogeneous differential equation with constant coefficients. The following theorem is very important in such cases.

Theorem 2.4.1 (Homogeneous-Nonhomogeneous Solution Relation). Consider the nonhomogeneous differential equation

b_n(x) y^{(n)} + ⋅ ⋅ ⋅ + b_1(x) y′ + b_0(x) y = f(x), where f(x) ≢ 0.

(If f(x) ≡ 0, then the equation becomes a homogeneous equation.)

1. If y1 and y2 are solutions of the nonhomogeneous equation on an interval I, then y1 − y2 is also a solution of the homogeneous equation in the interval I.

2. If 𝑦1 is a solution of the nonhomogeneous equation and 𝑦2 is a solution of the homogeneous 2. Let f be an UC function. The set S of functions consisting of 𝑓 and all the derivatives of
equation in an interval I, then 𝑦1 + 𝑦2 is a solution of the nonhomogeneous equation in the 𝑓 which are mutually LI UC functions is said to be the UC set of function f, if S is a finite
interval I. set and we shall denote it by S.

The following remark follows directly from the theorem given above. Example 2.4.1.

Remark 2.4.2. Suppose 𝑦ℎ (𝑥) denotes the general solution of the homogeneous part and 𝑦𝑝 (𝑥) 1. Let 𝑓 (𝑥) = 𝑥3 . Then 𝑓 is a UC function.
denotes a particular solution of the DE. Then the general solution of (2.10) is given by 𝑦(𝑥) =
𝑦ℎ (𝑥) + 𝑦𝑝 (𝑥). 𝑓 ′ (𝑥) = 3𝑥2 , and 𝑥2 is a UC function.

Theorem 2.4.3 (Superposition Principle). If 𝑦ℎ (𝑥) is a general solution of the homogeneous 𝑓 ′′ (𝑥) = 6𝑥, 𝑥 is UC function.
part of (2.10) on an interval [𝑎, 𝑏] and 𝑦𝑝1 (𝑥), 𝑦𝑝2 (𝑥), . . . , 𝑦𝑝𝑘 (𝑥) are particular solutions of (2.10)
𝑓 ′′′ (𝑥) = 6, 1 is UC function.
corresponding to 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑘 (𝑥) respectively on the right hand side, then the general
solution of (2.10) where, 𝑓 (𝑥) = 𝑓1 (𝑥) + ⋅ ⋅ ⋅ + 𝑓𝑘 (𝑥) on [𝑎, 𝑏], is Therefore, 𝑆 = {1, 𝑥, 𝑥2 , 𝑥3 }.

𝑦(𝑥) = 𝑦ℎ (𝑥) + 𝑦𝑝1 (𝑥) + 𝑦𝑝2 (𝑥) + ⋅ ⋅ ⋅ + 𝑦𝑝𝑘 (𝑥). 2. Let 𝑓 (𝑥) = sin(2𝑥). Then 𝑓 is an UC function.

The above result is called a Superposition Principle. It tells us that the response 𝑦𝑝 to a 𝑓 ′ (𝑥) = 2 cos(2𝑥), cos(2𝑥) is UC function.

sin(2𝑥) = 𝑓 (𝑥).
superposition of inputs (the forcing functions 𝑓1 + 𝑓2 + ⋅ ⋅ ⋅ + 𝑓𝑘 ) is the superposition of their
𝑓 ′′ (𝑥) = −4 sin(2𝑥),
individual outputs (𝑦𝑝1 , . . . , 𝑦𝑝𝑘 ).
Therefore, 𝑆 = {sin(2𝑥), cos(2𝑥)}.
We are going to use these results in solving nonhomogeneous differential equations with constant
coefficients in the coming sections. 3. Let 𝑔(𝑥) = 2𝑥𝑒−𝑥 . 𝑔 is an UC function (as a product of UC function).

𝑔 ′ (𝑥) = 2𝑒−𝑥 − 2𝑥𝑒−𝑥 and 𝑒−𝑥 , 𝑥𝑒−𝑥 are UC functions.


2.4.1 The undetermined coefficient method
𝑔 ′′ (𝑥) = −4𝑒−𝑥 + 2𝑥𝑒−𝑥 and 𝑒−𝑥 , 𝑥𝑒−𝑥 are UC functions.
Definition 2.4.4.
Therefore, 𝑆 = {𝑒−𝑥 , 𝑥𝑒−𝑥 }.
1. A function is called an undetermined coefficient function (UC function) if it is either:
4. The function
a) a function defined by (a linear combination) of the following 1
𝑓 (𝑥) =
𝑥
i) 𝑥𝑛 , 𝑛 = 0, 1, 2, . . . , is not a UC function.
ii) 𝑒𝑎𝑥 , where 𝑎 is any non-zero constant
iii) sin(𝑏𝑥 + 𝑐), where 𝑏, 𝑐 are constants, such that 𝑏 ∕= 0. We outline the method by using the following example.

iv) cos(𝑏𝑥 + 𝑐), where 𝑏, 𝑐 are constants, such that 𝑏 ∕= 0. Example 2.4.2. Consider the differential equation
or
𝑦 (4) − 𝑦 ′′ = 3𝑥2 − sin 2𝑥 (2.11)
b) a function which is defined as a finite product of two or more functions of the above
4 types.
(4)
1) First find the general solution to the homogeneous part ∙ 𝑦𝑝1 − 𝑦𝑝′′1 (𝑥) = 3𝑥2 which implies 24𝐴 − 12𝐴𝑥2 − 6𝐵𝑥 − 2𝐶 = 3𝑥2 .
Equating the coefficients of like terms we get:
𝑦 (4) − 𝑦 ′′ = 0.

The characteristic equation of the given equation is 𝜆4 − 𝜆2 = 0. Then 𝜆2 (𝜆2 − 1) = 0 and


−6𝐵 = 0

hence 𝜆1 = 𝜆2 = 0 and 𝜆3 = 1, 𝜆4 = −1. Therefore, the general solution is:


⎨ −12𝐴 = 3

𝑦ℎ (𝑥) = 𝑐1 + 𝑐2 𝑥 + 𝑐3 𝑒𝑥 + 𝑐4 𝑒−𝑥 .

⎩ 24𝐴 − 2𝐶 = 0

−1
This implies, 𝐴 = 4
, 𝐵 = 0 and 𝐶 = −3.

2) The forcing function (non- homogeneous term) is a combination of 𝑥2 and sin 2𝑥. Therefore,
1
4
𝑦𝑝1 (𝑥) = − 𝑥4 − 3𝑥2 .
Next find the set of UC functions corresponding to the component functions
Next, we need to find 𝑦𝑝2 (𝑥) which corresponds to 𝑓2 (𝑥) = − sin 2𝑥. We seek 𝑦𝑝2 (𝑥) to be
2
𝑓1 (𝑥) = 3𝑥 a linear combination of the elements of 𝑆2 , that is,

with 𝑆1 = {𝑥2 , 𝑥, 1} and 𝑦𝑝2 (𝑥) = 𝐷 sin 2𝑥 + 𝐸 cos 2𝑥.


𝑓2 (𝑥) = − sin 2𝑥
– Check for a duplicate both in 𝑦ℎ (𝑥) and 𝑦𝑝1 (𝑥). – No duplicate.
with 𝑆2 = {sin 2𝑥, cos 2𝑥}.
– Then substitute in 𝑦 (4) − 𝑦 ′′ = − sin 2𝑥 which implies
To find a particular solution 𝑦𝑝1 (𝑥) corresponding to 𝑓1 (𝑥), tentatively we seek it to be a linear
combination of the functions in 𝑆1 , i.e. 𝑦𝑝(4)
2
(𝑥) − 𝑦𝑝′′2 (𝑥) = − sin 2𝑥.

𝑦𝑝1 (𝑥) = 𝐴𝑥2 + 𝐵𝑥 + 𝐶, Hence,

where A,B and C are called the undetermined constants. (24 𝐷 sin 2𝑥 + 24 𝐸 cos 2𝑥) − (−22 𝐷 sin 2𝑥 − 22 cos 2𝑥) = − sin 2𝑥.

∙ Check each term in 𝑦𝑝1 (𝑥) for duplication with terms in 𝑦ℎ (𝑥). Here the 𝐵𝑥 and C terms Therefore, 20𝐷 sin 2𝑥 + 20𝐸 cos 2𝑥 = − sin 2𝑥 and then
are constant multiples of 𝑐2 𝑥 and 𝑐1 respectively. 1
20
20𝐷 = −1 ⇐⇒ 𝐷 = −
∙ If there is any duplicate, then successively multiply each member of 𝑆𝑖 by the lowest positive
and 20𝐸 = 0. Therefore,
integral power of 𝑥, until (so that) the resulting revised set contains no duplicate of the 1
sin 2𝑥.
20
𝑦𝑝2 (𝑥) = −
terms in the homogeneous (and previously found particular 𝑦𝑝𝑖 ’s) solutions.

– 𝑦𝑝1 (𝑥) = 𝑥(𝐴𝑥2 + 𝐵𝑥 + 𝐶) = 𝐴𝑥3 + 𝐵𝑥2 + 𝐶𝑥 still a duplicate is there, Hence the general solution of (2.11) is:

– 𝑦𝑝1 (𝑥) = 𝑥2 (𝐴𝑥2 + 𝐵𝑥 + 𝐶) = 𝐴𝑥4 + 𝐵𝑥3 + 𝐶𝑥2 , no more duplicate. 1 1


sin 2𝑥.
4 20
𝑦(𝑥) = 𝑦ℎ (𝑥) + 𝑦𝑝1 (𝑥) + 𝑦𝑝2 (𝑥) = 𝐶1 + 𝐶2 𝑥 + 𝐶3 𝑒𝑥 + 𝐶4 𝑒−𝑥 − 𝑥4 − 3𝑥2 −

∙ Substitute the final revised form into the equation and determine the coefficients A,B and
Example 2.4.3. Solve 𝑦 ′′ − 2𝑦 ′ − 3𝑦 = 2𝑒−𝑥 − 10 sin 𝑥.
C.
Let 𝐹 (𝑥) = 2𝑒−𝑥 −10 sin 𝑥, 𝑓1 = 2𝑒−𝑥 , 𝑓2 = −10 sin 𝑥. Then 𝑆1 = {𝑒−𝑥 } and 𝑆2 = {sin 𝑥, cos 𝑥}

∙ Solution of the homogeneous part 𝑦 ′′ − 2𝑦 ′ − 3𝑦 = 0. 2.4.2 Variation of Parameters

The Undetermined Coefficient method is easier to apply, but works only for constant coefficients
Then 𝜆2 − 2𝜆 − 3 = 0 and hence 𝜆1 = 3, 𝜆2 = −1. Therefore, 𝑦ℎ (𝑥) = 𝐶1 𝑒3𝑥 + 𝐶2 𝑒−𝑥 .

−𝑥
: 𝑦𝑝1 (𝑥) = 𝐵𝑒 −𝑥
which duplicates with and certain types of non-homogeneous terms (or forcing functions). If the forcing function is, for
𝑥+1
∙ Particular solution corresponding to 𝑓1 (𝑥) = 2𝑒
−𝑥
𝐶2 𝑒 . example, of the form 𝑓 (𝑥) = tan 𝑥 or 𝑓 (𝑥) = 2 , then both of them are not UC functions.
𝑥 +1
and hence we can not employ the method of undetermined coefficients in those cases. Hence,
This implies, 𝑦𝑝1 (𝑥) = 𝐵𝑥𝑒−𝑥 , – no more duplicate.
we need another method which works for more general set of problems. In this subsection we will
′′ ′
∙ Insert this into 𝑦 − 2𝑦 − 3𝑦 = 2𝑒 −𝑥
to get consider the method of Variation of Parameters for a second order linear ordinary differential
equation.
𝑦𝑝′′1 − 2𝑦𝑝′ 1 − 3𝑦𝑝1 = 2𝑒−𝑥 .
Consider the following second order linear differential equation.
This implies, (−2𝐵𝑒−𝑥 + 𝐵𝑥𝑒−𝑥 ) − 2(𝐵𝑒−𝑥 − 𝐵𝑥𝑒−𝑥 ) − 3𝐵𝑥𝑒−𝑥 = 2𝑒−𝑥 .
Hence −4𝐵𝑒−𝑥 = 2𝑒−𝑥 ⇐⇒ 𝐵 = − 12 . Therefore, 𝑦 ′′ + 𝑏1 (𝑥)𝑦 ′ + 𝑏2 (𝑥)𝑦 = 𝑓 (𝑥), (2.12)
1
where 𝑏1 , 𝑏2 and 𝑓 are continuous functions. Suppose that the general solution for the homoge-
2
𝑦𝑝1 (𝑥) = − 𝑥𝑒−𝑥 .
neous part of (2.12) is
∙ Particular solution corresponding to 𝑓2 (𝑥) = −10 sin 𝑥. 𝑦ℎ (𝑥) = 𝑐1 𝑦1 (𝑥) + 𝑐2 𝑦2 (𝑥).
Let 𝑦𝑝2 (𝑥) = 𝐷 sin 𝑥 + 𝐸 cos 𝑥 (No duplicate both in 𝑦ℎ and 𝑦𝑝1 ).
Now we want to get a particular solution corresponding to 𝑓 (𝑥) and this can be done by varying
Then, 𝑦𝑝′′2 − 2𝑦𝑝′ 2 − 3𝑦𝑝2 = −10 sin 𝑥. This implies,
the constants, 𝑐1 and 𝑐2 with respect to 𝑥. If 𝑦𝑝 is a particular solution corresponding to 𝑓 (𝑥),
(−𝐷 sin 𝑥 − 𝐸 cos 𝑥) − 2(𝐷 cos 𝑥 − 𝐸 sin 𝑥) − 3(𝐷 sin 𝑥 + 𝐸 cos 𝑥) = −10 sin 𝑥. then
𝑦𝑝 (𝑥) = 𝑐1 (𝑥)𝑦1 (𝑥) + 𝑐2 (𝑥)𝑦2 (𝑥).
Simplifying this gives us, (2𝐸 − 4𝐷) sin 𝑥 + (−2𝐷 − 4𝐸) cos 𝑥 = −10 sin 𝑥.
We differentiate and substitute it in (2.12) to get
Therefore,
2𝐸 − 4𝐷 = −10 𝑦𝑝′′ (𝑥) + 𝑏1 (𝑥)𝑦𝑝′ + 𝑏2 (𝑥)𝑦𝑝 = 𝑓 (𝑥).

−2𝐷 − 4𝐸 = 0
20 10 20 10 But 𝑦𝑝′ = 𝑐1 𝑦1′ + 𝑐2 𝑦2′ + 𝑐′1 𝑦1 + 𝑐′2 𝑦2 .
which implies 𝐷 = and 𝐸 = . Then, 𝑦𝑝1 (𝑥) = sin 𝑥 + cos 𝑥.
3 3 3 3 Since we are going to have only one equation with two variable functions 𝑐1 and 𝑐2 , we are free
Therefore, the general solution is to choose a condition which simplifies the equation. Therefore, we take the condition
1 20 10 10
sin 𝑥 + sin 𝑥 + cos 𝑥. 𝑐′1 𝑦1 + 𝑐′2 𝑦2 = 0.
2 3 3 3
𝑦(𝑥) = 𝐶1 𝑒3𝑥 + 𝐶2 𝑒−𝑥 − 𝑥𝑒−𝑥 +

Exercise 2.4.5. Solve each of the following DEs. This will simplify the equation as

1. 𝑦 ′′ − 𝑔𝑦 = 4 + 5 sinh 3𝑥 𝑦𝑝′′ = 𝑐1 𝑦1′′ + 𝑐′1 𝑦1′ + 𝑐′2 𝑦2′ + 𝑐2 𝑦2′′

2. 𝑦 ′′ − 2𝑦 ′ + 2𝑦 = 2𝑥2 + 𝑒𝑥 + 2𝑥𝑒𝑥 + 4𝑒3𝑥 and after simplification, the equation (2.12) becomes

𝑐1 (𝑦1′′ + 𝑏1 𝑦1′ + 𝑏2 𝑦1 ) + 𝑐2 (𝑦2′′ + 𝑏1 𝑦2′ + 𝑏2 𝑦2 ) + 𝑐′1 𝑦1′ + 𝑐′2 𝑦2′ = 𝑓.



Since 𝑦1 and 𝑦2 are linearly independent solutions for the homogeneous part of equation (2.12) the general solution of the homogeneous equation is 𝑦ℎ (𝑥) = 𝑐1 𝑒2𝑥 + 𝑐2 𝑒−2𝑥 and
we have the following system of equations:
=
� � � �
� 𝑦 � � 2𝑥 𝑒−2𝑥 �

𝑐′1 𝑦1′ + 𝑐′2 𝑦2′ = 𝑓


� 1 𝑦2 � � 𝑒 �

(2.13)
W(x) = � ′ � � � = −2 − 2 = −4,
� � ‘𝑦1 𝑦2′ � � 2𝑒2𝑥 −2𝑒−2𝑥 �

𝑐′1 𝑦1 + 𝑐′2 𝑦2 = 0,
′ −2𝑥
� � � �

which is a system of two algebraic equations in 𝑐′1 and 𝑐′2 . Then (2.13) has a unique solution if
� 0 𝑦 � � 0 𝑒2𝑥 �
� 2 � � �

the determinant of the coefficient matrix is non-zero, that is, Therefore,


W1 (x) = � �=� � = −8𝑥𝑒−2𝑥
� 8𝑥 𝑦2 � � 8𝑥 −2𝑒 �

𝑊1 (𝑥)
𝑑𝑥 = 𝑑𝑥 = 2
−8𝑥𝑒−2𝑥
𝑐1 (𝑥) =
2 𝑊 (𝑥)
𝑥𝑒−2𝑥 𝑑𝑥 = −𝑥𝑒−2𝑥 + 𝑒−2𝑥
� � �

−4
� �

and similarly
� 𝑦 (𝑥) 𝑦 (𝑥) �
� 1 �

However, the above determinant is the Wronskian of the functions 𝑦1 and 𝑦2 . Since 𝑦1 and 𝑦2
� ′ � ∕= 0.

𝑊2 (𝑥) 8𝑥𝑒2𝑥
� 𝑦1 (𝑥) 𝑦2′ (𝑥) �

are LI functions, then 𝑐2 (𝑥) = 𝑑𝑥 =


𝑊 (𝑥)
𝑑𝑥 = −2 𝑥𝑒2𝑥 𝑑𝑥 = 𝑥𝑒−2𝑥 − 𝑒−2𝑥 .
� � �

−4
W[y1 ,y2 ] (𝑥) ∕= 0.
Therefore, 𝑦𝑝 (𝑥) = 𝑐1 (𝑥)𝑒2𝑥 + 𝑐2 (𝑥)𝑒−2𝑥 a particular solution and the general solution for the
Hence by Cramer’s rule we have: problem is 𝑦(𝑥) = 𝑦ℎ (𝑥) + 𝑦𝑝 (𝑥).
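As an independent check of Example 2.4.4 above (y′′ − 4y = 8x), a computer algebra system gives the same general solution directly. This is a minimal sketch assuming Python with SymPy, which the notes themselves do not use.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' - 4y = 8x from Example 2.4.4; expect y = C1*exp(2x) + C2*exp(-2x) - 2x
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) - 4*y(x), 8*x), y(x))
print(sol)   # an expression equivalent to C1*exp(-2*x) + C2*exp(2*x) - 2*x
```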

Remark 2.4.6. This method looks easier when the integrands (or the quotients of the Wronskians)
are simple. However, it could be very difficult to get the particular solution when the integrand
� �

𝑊1 (𝑥)
� 0 𝑦 �

c′1 (x) = =
� �

𝑊 (𝑥) 𝑊 (𝑥) is complicated.


� �
� 𝑓 𝑦2′ �

and � �

𝑊2 (𝑥)
� 𝑦 0 �

c′2 (x) = = 2.5 The Laplace Transform Method to Solve ODEs


� 1 �

𝑊 (𝑥) 𝑊 (𝑥)
� �
� 𝑦1 𝑓 �

By integrating both sides we will get: In the previous sections we have discussed how to solve differential equations of the form:
W1 (𝑥) W2 (𝑥)
𝑦𝑝 (𝑥) = 𝑑𝑥 𝑦1 (𝑥) + 𝑑𝑥 𝑦2 (𝑥). (2.14)
W(𝑥) W(𝑥)
𝑎𝑛 𝑦 (𝑛) + 𝑎𝑛−1 𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 𝑦 ′ + 𝑎0 𝑦 = 𝑓 (𝑥)
�� � �� �

Example 2.4.4. Solve the differential equation by finding the general solutions and then evaluate the arbitrary constants in accordance with the
given initial conditions. However, the solution methods mainly dependent on the structure of the
𝑦 ′′ − 4𝑦 = 8𝑥. forcing function 𝑓 (𝑥). Moreover, all the coefficients are assumed to be constants. To address
problems with more general forcing function and some form of variable coefficients, we discuss
Solution the use of Laplace transform as possible alternative.

First solve the homogeneous equation 𝑦 ′′ − 4𝑦 = 0. Definition 2.5.1 (Laplace Transform). The Laplace Transform of a function 𝑓 (𝑡), if it exists, is
Then the characteristic equation is 𝜆2 − 4 = 0, which implies 𝜆 = ±2. If 𝑦1 and 𝑦2 are two denoted by ℒ{𝑓 (𝑡)} is given by,

𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡,
linearly independent solutions of the equation 𝑦 ′′ − 4𝑦 = 0, then 𝑦1 = 𝑒2𝑥 , 𝑦2 = 𝑒−2𝑥 . Therefore,
ℒ{𝑓 (𝑡)} =
0


where 𝑠 is a real number called a parameter of the transform. For short we may write, From the table above we have
1
ℱ(𝑠) to denote 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡. i.e., ℱ(𝑠) = 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡. ℒ{𝑒𝑎𝑡 } = for 𝑠 > 𝑎.
0 0 𝑠−𝑎
� ∞ � ∞

Example 2.5.1. Find the Laplace Transform of the constant function 𝑓 (𝑡) = 1. 1
Thus the inverse operator applied on 𝑠−𝑎
will give us back the function 𝑒𝑎𝑡

∞ 1
ℒ{1} = 𝑒−𝑠𝑡 × 1𝑑𝑡. i.e., ℒ−1 { } = 𝑒𝑎𝑡 for 𝑠 > 𝑎.
0
𝑠−𝑎

In general, ℒ−1 , the inverse Laplace Operator, is given by


Solution
1
ℒ−1 {𝐹 (𝑠)} = 𝐹 (𝑠)𝑒𝑠𝑡 𝑑𝑠,
2𝜋𝑖 𝛾−𝑖∞
� 𝛾+𝑖∞


(where 𝛾 is a positive real number), which is a complex improper integral.
ℒ{1} = 𝑒−𝑠𝑡 × 1𝑑𝑡
0

= lim 𝑒−𝑠𝑡 𝑑𝑡 Properties


𝑇 →∞ 0
� 𝑇

Here below we state some important properties of the transform in a series of theorems without
= lim
0 proof.
�𝑇
𝑒−𝑠𝑡 ��

= lim 𝑒 +
−1 −𝑠𝑇 1
𝑇 →∞ −𝑠 �

𝑇 →∞ 𝑠 𝑠 Theorem 2.5.2 (Linearity).


� �

1
𝑠
; if 𝑠 > 0
= (a) If 𝑢(𝑡), 𝑣(𝑡) are functions and 𝛼, 𝛽 are any constants, then

∞; otherwise

1 ℒ{𝛼𝑢(𝑡) + 𝛽𝑣(𝑡)} = 𝛼ℒ{𝑢(𝑡)} + 𝛽ℒ{𝑣(𝑡)}.


𝑠
Therefore, ℒ{1} = , if 𝑠 > 0.

(b) For any functions 𝑈 (𝑠), 𝑉 (𝑠) and any given scalars 𝛼, 𝛽, we have
Table of some basic Laplace Transforms
ℒ−1 {𝛼𝑈 (𝑠) + 𝛽𝑉 (𝑠)} = 𝛼ℒ−1 {𝑈 (𝑠)} + 𝛽ℒ−1 {𝑉 (𝑠)}.
Function (𝑓 (𝑡))   Laplace Transform (𝐹 (𝑠))
1 Example 2.5.2. Evaluate the following transforms
1 𝑠
,𝑠 >0
𝑛!
𝑡𝑛 , 𝑛 ∈ N ,𝑠 > 0
𝑠𝑛+1
1
1. ℒ{3𝑡 + 5𝑒−2𝑡 }.
𝑒𝑡𝑘 𝑠−𝑘
,𝑠 > 𝑘
𝑛!
𝑡𝑛 𝑒𝑘𝑡 (𝑠−𝑘)𝑛+1
,𝑠 > 𝑘 2. ℒ{cos2 3𝑡}.
𝑘
sin(𝑘𝑡) 𝑠2 +𝑘2
,𝑠 > 0 𝑠
𝑠
3. ℒ−1 (𝑠+1) 3 .
cos(𝑘𝑡) 𝑠2 +𝑘2
,𝑠 > 0
� 2 �

𝑘
sinh(𝑘𝑡) 𝑠2 −𝑘2
, 𝑠 > ∣𝑘∣
𝑠
cosh(𝑘𝑡) 𝑠2 −𝑘2
, 𝑠 > ∣𝑘∣

Solutions Theorem 2.5.3 (Transform of the derivative). Let 𝑓 (𝑡) be continuous and 𝑓 ′ (𝑡) be piecewise
continuous on some interval [0, 𝑡𝑜 ] for every finite 𝑡𝑜 , and let ∣𝑓 (𝑡)∣ < 𝐾𝑒𝑐𝑡 for some constants
1. ℒ{3𝑡 + 5𝑒−2𝑡 }; applying the linearity property we get,
𝐾, 𝑇 , and 𝑐 and for all 𝑡 > 𝑇 . Then the transform ℒ{𝑓 ′ (𝑡)} exists for all 𝑠 > 𝑐 and
ℒ{3𝑡 + 5𝑒−2𝑡 } = 3ℒ{𝑡} + 5ℒ{𝑒−2𝑡 } ℒ{𝑓 ′ (𝑡)} = 𝑠ℒ{𝑓 (𝑡)} − 𝑓 (0).
1 1
= 3 2 +5
𝑠 𝑠+2 Example 2.5.3. Use the Laplace transform method to solve the initial-value problem.
� � � �

3 5 5𝑠2 + 3𝑠 + 6
= 2+ =
𝑠 𝑠+2 𝑠2 (𝑠 + 2) 𝑦 ′ + 2𝑦 = 0 with 𝑦(0) = 1.

Solution
2. ℒ{cos2 3𝑡}; Using half angle formula we can get
1 + cos 6𝑡 1 1
ℒ{cos2 3𝑡} = ℒ{
2 2 2
} = ℒ{ + cos 6𝑡}. Applying Laplace transform on both sides of the equation we have
1 1 ℒ{𝑦 ′ + 2𝑦} = ℒ{0} ⇒ ℒ{𝑦 ′ (𝑡)} + 2ℒ{𝑦(𝑡)}.
2 2
Then by linearity we have ℒ{cos2 3𝑡} = ℒ{1} + ℒ{cos 6𝑡}.
Then we are now able to read the transforms of 1 and cos 6𝑡 from the table and get, Now, letting ℒ{𝑦(𝑡)} := 𝑌 (𝑠), we get the algebraic equation,
1 1 1 𝑠 𝑠2 + 18 1
+ 2 2
= . 𝑌 (𝑠) = .
2 𝑠 2 𝑠 +6 𝑠(𝑠2 + 36)
ℒ{cos2 3𝑡} =
𝑠+2
𝑠𝑌 (𝑠) − 𝑦(0) + 2𝑌 (𝑠) = 0 ⇒
� � � �

Therefore, reading from the transform table we get


3. Using partial fractions we have
1
𝑠2 𝐴 𝐵 𝐶 𝑦(𝑡) = ℒ−1 {𝑌 (𝑠)} = ℒ−1 {
𝑠+2
} = 𝑒−2𝑡 .
= + +
(𝑠 + 1)3 𝑠 + 1 (𝑠 + 1)2 (𝑠 + 1)3
i.e., 𝑦(𝑡) = 𝑒−2𝑡 is the solution for the differential equation. ⊡
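The same initial-value problem can also be pushed through a computer algebra system, either directly or via the transform pair used above. A minimal sketch assuming Python with SymPy (an assumption made only for this illustration; the notes prescribe no software):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
y = sp.Function('y')

# Example 2.5.3: y' + 2y = 0, y(0) = 1
print(sp.dsolve(sp.Eq(y(t).diff(t) + 2*y(t), 0), y(t), ics={y(0): 1}))
# Eq(y(t), exp(-2*t))

# The algebraic step Y(s) = 1/(s + 2), inverted back to the t-domain:
print(sp.inverse_laplace_transform(1/(s + 2), s, t))
# exp(-2*t)*Heaviside(t)  (the Heaviside factor reflects t >= 0)
```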
2 2 2
⇒ 𝑠 = 𝐴(𝑠 + 1) + 𝐵(𝑠 + 1) + 𝐶 = 𝐴𝑠 + (2𝐴 + 𝐵)𝑠 + (𝐴 + 𝐵 + 𝐶). We can also use the Laplace method to solve higher order equations with constant coefficients.

⇒ 𝐴 = 1, 𝐵 = −2, 𝐶 = 1. The following property of the transform, which is the continuation of the above theorem, is
required.
Hence we can rewrite the inverse transform and apply linearity to get
Theorem 2.5.4. Let 𝑓 (𝑡) be continuous and 𝑓 (𝑛) (𝑡) be piecewise continuous on some interval
𝑠2 −1 1 2 1
= + [0, 𝑡𝑜 ] for every finite 𝑡𝑜 , and let ∣𝑓 (𝑡)∣ < 𝐾𝑒𝑐𝑡 for some constants 𝐾, 𝑇 , and 𝑐 and for all 𝑡 > 𝑇 .
(𝑠 + 1)3 𝑠 + 1 (𝑠 + 1)2 (𝑠 + 1)3
ℒ−1 ℒ −
{ } { }

Then we have
1 −1 1
+
−2
𝑠+1 (𝑠 + 1)2 (𝑠 + 1)3
= ℒ−1 + ℒ−1 ℒ
{ } { } { }

ℒ{𝑓 (𝑛) (𝑡)} = 𝑠𝑛 ℒ{𝑓 (𝑡)} − 𝑠𝑛−1 𝑓 (0) − 𝑠𝑛−2 𝑓 ′ (0) − ⋅ ⋅ ⋅ − 𝑓 (𝑛−1) (0).
1
2
= 𝑒−𝑡 − 2𝑡𝑒−𝑡 + 𝑡2 𝑒−𝑡
Theorem 2.5.5 (First shifting theorem). If ℒ{𝑓 (𝑡)} = ℱ(𝑠) for 𝑅𝑒(𝑠) > 𝑏, then ℒ{𝑒𝑎𝑡 𝑓 (𝑡)} =
1 2 −𝑡
2
= (1 − 2𝑡 + 𝑡 )𝑒 . ℱ(𝑠 − 𝑎) for 𝑅𝑒(𝑠) > 𝑎 + 𝑏.

The other important property that leads us to use the Laplace transform in solving ordinary The proof of this theorem is easy to see using the definition.
differential equation is how the transform performs on the derivative.
Example 2.5.4. Find the Laplace transform for the function 𝑓 (𝑡) = 𝑒3𝑡 cos 4𝑡.

Solution Theorem 2.5.6 (Derivative of the transform). For a piecewise continuous function 𝑓 (𝑡) and for
𝑠 any positive integer 𝑛, it holds that
.
+ 42𝑠2
Recall that ℒ{cos 4𝑡} =
Then using the first shifting theorem we get ℒ{(−1)𝑛 𝑡𝑛 𝑓 (𝑡)} = ℱ (𝑛) (𝑠).

.
𝑠−3
ℒ{𝑒3𝑡 cos 4𝑡} = The formula in this theorem can be used to find transforms of functions of the form 𝑥𝑛 𝑓 (𝑥) when
(𝑠 − 3)2 + 42
the Laplace transform of 𝑓 (𝑡) is known.
𝑠
.
𝑠2 + 𝑠 + 1
Example 2.5.5. Find the inverse Laplace transform for the function ℱ(𝑠) =
Exercise 2.5.7. Use the Laplace transform method to solve

Solution 𝑥𝑦 ′′ + (2𝑥 + 3)𝑦 ′ + (𝑥 + 3)𝑦 = 3𝑒−𝑥 ; 𝑦(0) = 0, 𝑦 ′ (0) = 1.

First let us rewrite the function ℱ(𝑠) as Remark 2.5.8. The main idea in using Laplace transform in solving ODEs is that, it transforms
1 the differential equation into an algebraic equation. Once the transformation is completed, we
𝑠 𝑠 𝑠 + 12 2
= 3 = 1 2 1 2 3
𝑠2 + 𝑠 + 1
ℱ(𝑠) = −
(𝑠 + 12 )2 + 4
(𝑠 + 2
) + 34 (𝑠 + 2
) + 4
seek for a solution to ℒ{𝑦(𝑡)} algebraically. Then the final step will be to get back the value of
𝑦(𝑡) using the inverse Laplace transform.
and hence,

3
−1 𝑠 −1 𝑠 + 12 −1 1 2
3
√ 1 2 3 . 2.6 The Cauchy-Euler Equation
𝑠2 + 𝑠 + 1
ℒ =ℒ −ℒ
(𝑠 + 12 )2 + 4 3 (𝑠 + 2 ) + 4
{ } { } � �

Then using the first shifting theorem, we have, In this section we are going to consider linear differential equations where the coefficients are
√ √ variables with some special forms.
−1 𝑠 −𝑡/2 3𝑡 1 −𝑡/2 3𝑡
=𝑒 cos sin .
𝑠2 + 𝑠 + 1 2 2
ℒ −√ 𝑒
3 Definition 2.6.1. The linear differential equation with variable coefficient of the form:
{ }

𝑎𝑛 𝑥𝑛 𝑦 (𝑛) + 𝑎𝑛−1 𝑥𝑛−1 𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 𝑥𝑦 ′ + 𝑎0 𝑦 = 𝐹 (𝑥) (2.15)

Consider the general Laplace transform formula where 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 are constants is called the Cauchy-Euler Equation.

∞ Example 2.6.1. The linear differential equation 3𝑥2 𝑦 ′′ − 11𝑥𝑦 ′ + 2𝑦 = sin 𝑥 is a Cauchy- Euler
ℱ(𝑠) = 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡.
0 equation.

Taking the derivative with respect to 𝑠 on both sides we get,


To solve Cauchy-Euler DEs first we reduce the given DE into a linear differential equation with
ℱ ′ (𝑠) = (−𝑡)𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡 = ℒ{−𝑡𝑓 (𝑡)}. constant coefficients and solve the given equation with the methods derived in the previous
0
� ∞

sections.
By further differentiating the above equation with respect to 𝑠, we get
Theorem 2.6.2. The transformation 𝑥 = 𝑒𝑡 , 𝑡 ∈ R reduces the Cauchy-Euler DE to a linear DE
ℱ ′′ (𝑠) = ℒ{𝑡2 𝑓 (𝑡)}. with constant coefficients.

In general we have

Let us consider the case when 𝑛 = 2. In this case the equation is: Therefore, the differential equation 𝑦 ′′ − 3𝑦 ′ + 2𝑦 = 0 has a general solution 𝑦(𝑡) =
𝑐1 𝑒𝑡 + 𝑐2 𝑒2𝑡 and since 𝑥 = 𝑒𝑡 the DE 𝑥2 𝑦 ′′ − 2𝑥𝑦 ′ + 2𝑦 = 0 has a general solution
𝑎2 𝑥2 𝑦 ′′ + 𝑎1 𝑥𝑦 ′ + 𝑎0 𝑦 = 𝐹 (𝑥) (2.16)
𝑦(𝑥) = 𝑐1 𝑥 + 𝑐2 𝑥2 ,
𝑡 𝑡
Let 𝑥 = 𝑒 . Then by solving for 𝑡 we get 𝑡 = ln 𝑥 for 𝑥 > 0 (or 𝑥 = −𝑒 if 𝑥 < 0) and
where 𝑐1 and 𝑐2 are arbitrary constants.
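The Cauchy-Euler equation x²y′′ − 2xy′ + 2y = 0 just solved can also be handed to a computer algebra system as a cross-check. A minimal sketch, assuming Python with SymPy (not part of the notes):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

eq = sp.Eq(x**2*y(x).diff(x, 2) - 2*x*y(x).diff(x) + 2*y(x), 0)
print(sp.dsolve(eq, y(x)))   # a form equivalent to y(x) = c1*x + c2*x**2
```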
𝑑𝑦 𝑑𝑦 𝑑𝑡 1 𝑑𝑦
= . =
𝑑𝑥 𝑑𝑡 𝑑𝑥 𝑥 𝑑𝑡 2. Let 𝑥 = 𝑒𝑡 . Since 𝑎2 = 3, 𝑎1 = −11, 𝑎0 = 2, we have 𝐴2 = 𝑎2 = 3, 𝐴1 = 𝑎1 − 𝑎2 = −11 −
and 3 = −14, and 𝐴0 = 𝑎0 = 2, which reduce the given equation to 3𝑦 ′′ − 14𝑦 ′ + 2𝑦 = sin(𝑒𝑡 )
2
𝑑𝑦 1 𝑑 𝑑𝑦 𝑑𝑦 𝑑 1 1 𝑑2 𝑦 𝑑𝑡 1 𝑑𝑦 1 𝑑2 𝑦 𝑑𝑦 which is a DE with constant coefficients.
= + . = . = .
𝑑𝑥2 𝑥 𝑑𝑥 𝑑𝑡 𝑑𝑡 𝑑𝑥 𝑥 𝑥 𝑑𝑡2 𝑑𝑥 𝑥2 𝑑𝑡 𝑥2 𝑑𝑡2 𝑑𝑡
− −
� � � � � �

Substituting into (2.15) we get:


3. The given equation is transformed into 𝑦 ′′ − 3𝑦 ′ + 2𝑦 = 𝑒3𝑡 , with the substitution 𝑥 = 𝑒𝑡 ,
which is a DE with constant coefficients.
2
1 𝑑 𝑦 𝑑𝑦 1 𝑑𝑦
𝑎2 𝑥2 + 𝑎1 𝑥. + 𝑎2 𝑦 = 𝐹 (𝑒𝑡 ). Example 2.6.3 (Application). Consider a mechanical oscillator.
𝑥2 𝑑𝑡2 𝑑𝑡 𝑥 𝑑𝑡

� �

This implies, Let 𝐹𝐺 = 𝑚𝑔, where 𝑚 is mass of the object on the spring and 𝑔 is the gravity and ∣𝐹ˆ𝑅 ∣ = 𝑘𝑥,
2
𝑑𝑦 𝑑𝑦 (Hook’s law) where 𝑘 is the spring stiffness constant and 𝑥 is the the distance moved by the mass
𝑎2
𝑑𝑡2 𝑑𝑡
+ (𝑎1 − 𝑎2 ) + 𝑎0 𝑦 = 𝐹 (𝑒𝑡 ).
𝑚.
Then
𝑑𝑡
� �

a damping constant.
� 𝑑𝑥 �

𝑑2 𝑦 𝑑𝑦
𝐴2
+ 𝐴1 + 𝐴0 𝑦 = 𝐺(𝑡), (2.17)
If 𝐹𝐷 is a damping force for small velocity of mass, then ∣𝐹𝐷 ∣ = 𝐶 �� �� , where 𝐶 > 0 is called

𝑑𝑡2 𝑑𝑡 Therefore, the final form of our governing equation of motion is of the form:
where 𝐴2 = 𝑎2 , 𝐴1 = 𝑎1 − 𝑎2 , 𝐴0 = 𝑎0 and 𝐹 (𝑒𝑡 ) = 𝐺(𝑡), which is a second order linear
differential equation with constant coefficients. 𝑚𝑥′′ + 𝐶𝑥′ + 𝑘𝑥 = 𝑚𝑔 = 𝐹 (𝑡).

Example 2.6.2. Solve each of the following DEs. If 𝐹 is a variable force then the problem will be a second order nonhomogeneous differential
equation, and can be solved using one of the previously discussed methods.
1. 𝑥2 𝑦 ′′ − 2𝑥𝑦 ′ + 2𝑦 = 0.

2. 3𝑥2 𝑦 ′′ − 11𝑥𝑦 ′ + 2𝑦 = sin 𝑥. 2.7 *The Power Series Solution Method


3. 𝑥2 𝑦 ′′ − 2𝑥𝑦 ′ + 2𝑦 = 𝑥3 .
Recall that, the Taylor series of a given smooth function 𝑓 about a point 𝑎𝑜 is:

Solution 𝑓 (𝑛) (𝑎𝑜 )
𝑛!
𝑇 𝑆 𝑓 ∣ 𝑎𝑜 = (𝑥 − 𝑎𝑜 )𝑛 ;
𝑛=0

1. Let 𝑥 = 𝑒𝑡 . Since 𝑎2 = 1, 𝑎1 = −2 and 𝑎0 = 2 we have 𝐴2 = 𝑎2 = 1, 𝐴1 = 𝑎1 − 𝑎2 = −3


If this series converges in some interval ∣𝑥 − 𝑎𝑜 ∣ < 𝑟, and is equal to 𝑓 , then we call 𝑓 is analytic
and 𝐴0 = 𝑎0 = 2 which reduces the given equation to 𝑦 ′′ − 3𝑦 ′ + 2𝑦 = 0 which is
at 𝑎𝑜 and 𝑟 is the radius of convergence.
a homogenous second order linear differential equation with constant coefficients. Then
the characteristic equation of the equation is 𝜆2 − 3𝜆 + 2 = 0 which has eigenvalues If 𝑓 is not analytic at 𝑎𝑜 , we call it is singular at 𝑎𝑜 .
𝜆1 = 1, 𝜆2 = 2. The method mainly uses the following theorem

Theorem 2.7.1 (Power Series Solution). If the functions 𝑝 and 𝑞 are analytic at a point 𝑐𝑜 , then Definition 2.7.2.
every solution of the DE 𝑝(𝑥) 𝑞(𝑥)
1. A point 𝑥𝑜 is said to be an ordinary point of equation (2.20) if ℎ(𝑥𝑜 ) ∕= 0 and ,
ℎ(𝑥) ℎ(𝑥)
are
𝑦 ′′ + 𝑝(𝑥)𝑦 ′ + 𝑞(𝑥)𝑦 = 0 (2.18)
analytic at 𝑥𝑜 . Otherwise, it is called a singular point of equation (2.20).
is also analytic at 𝑐𝑜 , and can be found in the form
∞ 2. A singular point 𝑥𝑜 is said to be a regular singular point of equation (2.20) if the function
𝑛
𝑦(𝑥) = 𝑎𝑛 (𝑥 − 𝑐𝑜 ) . 𝑝(𝑥) 𝑞(𝑥)
𝑛=0 ℎ(𝑥) ℎ(𝑥)
(𝑥 − 𝑥𝑜 ) and (𝑥 − 𝑥𝑜 )2

Moreover, the radius of convergence of every solution is at least as large as the smaller of the are analytic at 𝑥𝑜 . A non regular singular point is called an irregular singular point of
radii of convergence of 𝑇 𝑆 𝑝∣𝑐𝑜 and 𝑇 𝑆 𝑞∣𝑐𝑜 . equation (2.20).
Example 2.7.1. Solve the DE If equation (2.20) has a regular singular point at 𝑥𝑜 , then use the power series:
′′ ′ ∞
(𝑥 − 1)𝑦 + 𝑦 + 2(𝑥 − 1)𝑦 = 0 (2.19)
𝑦(𝑥) = 𝑎𝑛 (𝑥 − 𝑥𝑜 )𝑛+𝑟
′ 𝑛=0

on the interval [4, ∞) with initial conditions 𝑦(4) = 5 and 𝑦 (4) = 0.


and determine the values of 𝑟 and 𝑎𝑛 , 𝑛 = 0, 1, 2, . . .
Solution Procedure:
This last series is called a Frobenius series solution.
∙ Convert the problem to the form of equation (2.18) in the above Theorem Example 2.7.2. Use Frobenius method to solve

∙ Check analyticity of the coefficient functions 𝑝(𝑥) and 𝑞(𝑥) at the point 𝑥𝑜 (the given initial 𝑥2 𝑦 ′′ + 5𝑥𝑦 ′ + (𝑥 + 4)𝑦 = 0
point).

1 𝑛−2
∙ Substitute into the equation (2.19) the general solution Ans.: 𝑦(𝑥) = 𝑎𝑜 (−1)𝑛 𝑥
𝑛=0
(𝑛!)2

𝑦(𝑥) = 𝑎𝑛 (𝑥 − 𝑥𝑜 )𝑛 The general solution (also the number of different solutions) of differential equations using Frobe-
𝑛=0

nius Method depend upon the solution to the equation (which is called the indicial equation)
and determine the values of the coefficients 𝑎𝑛 , for each 𝑛 = 0, 1, 2, . . ..
𝑟(𝑟 − 1) + 𝑏𝑜 𝑟 + 𝑐𝑜 = 0,
Frobenius Method which forces the coefficient of 𝑥𝑟 to be zero.

Consider again a second order equation with variable coefficients,

ℎ(𝑥)𝑦 ′′ + 𝑝(𝑥)𝑦 ′ + 𝑞(𝑥)𝑦 = 0 (2.20) 2.8 Systems of ODE of the First Order
If ℎ(𝑥) ∕= 0 for some 𝑥 we can equivalently have A system of 𝑛 linear first-order equations in the 𝑛 unknowns 𝑥1 (𝑡), 𝑥2 (𝑡), . . . , 𝑥𝑛 (𝑡) is a system
𝑝(𝑥) ′ 𝑞(𝑥) that can be written in the form:
𝑦 ′′ + 𝑦 +
ℎ(𝑥) ℎ(𝑥)
𝑦 = 0, for ℎ(𝑥) ∕= 0.
𝑥′1 = 𝑎11 (𝑡)𝑥1 + 𝑎12 (𝑡)𝑥2 + ⋅ ⋅ ⋅ + 𝑎1𝑛 (𝑡)𝑥𝑛 + 𝑓1 (𝑡)
𝑥′2 = 𝑎21 (𝑡)𝑥1 + 𝑎22 (𝑡)𝑥2 + ⋅ ⋅ ⋅ + 𝑎2𝑛 (𝑡)𝑥𝑛 + 𝑓2 (𝑡)
(2.21)
If ℎ(𝑥) ∕= 0 for all 𝑥 we can simply apply the power series solution method. But if ℎ(𝑥) = 0
..
for some 𝑥 the resulting equation will be different from the original one at those points 𝑥 where .
ℎ(𝑥) = 0. 𝑥′𝑛 = 𝑎𝑛1 (𝑡)𝑥1 + 𝑎𝑛2 (𝑡)𝑥2 + ⋅ ⋅ ⋅ + 𝑎𝑛𝑛 (𝑡)𝑥𝑛 + 𝑓𝑛 (𝑡),

which is called the normal form. In vector form this system becomes: Example 2.8.1. Solve each of the following systems of linear differential equations.


X′ = AX + F(𝑡),
𝑦1′ = −3𝑦1 + 𝑦2
1. 2.

𝑦 ′ = 𝑦1 + 2𝑦2 + 𝑦3
� 

where

𝑦2′ = 𝑦1 − 3𝑦2
⎨ 𝑦1 = 2𝑦1 + 𝑦2 + 𝑦3

3 1 2 3
 2

X = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 )𝑇 , A = (𝑎𝑖𝑗 )𝑛×𝑛 , and F(𝑡) = (𝑓1 (𝑡), 𝑓2 (𝑡), . . . , 𝑓𝑛 (𝑡))𝑇


⎩ 𝑦 ′ = 𝑦 + 𝑦 + 2𝑦

Solution:
The system (2.21) is called homogeneous if F(𝑡) ≡ 0, so that X′ = AX and if F(𝑡) ∕≡ 0 for
1. The system can be written as:
some 𝑡, the system is nonhomogeneous.
𝑦1′ 1 𝑦1
Definition 2.8.1. A solution vector of the system of differential equation in (2.21) over some
−3
=
1 𝑦2
� � �

𝑇 𝑦2′ −3
interval 𝐼 is a a vector (𝑥1 (𝑡), 𝑥2 (𝑡), . . . , 𝑥𝑛 (𝑡)) whose entries are differentiable functions that
satisfies the system in (2.21) on the interval 𝐼. Let y(𝑡) = x𝑒𝜆𝑡 . Then the corresponding eigenvalue problem will be:

In this section, we are going to see how to solve such systems of equations. Two methods are −3 1 𝑥1 𝑥1
Ax = 𝜆x ⇐⇒ =𝜆
going to be considered, the Eigenvalue Method and Elimination Method. 𝑥2 𝑥2
� � �

1 −3

which is equivalent to:


2.8.1 Eigenvalue Method −3 − 𝜆 1 𝑥1 0
=
1 𝑥2 0
� � �

−3 − 𝜆
A. Homogeneous Systems with Constant Coefficients
which has characteristic equation
Consider the system
y′ = Ay, (2.22)
� �
� � �−3 − 𝜆
� � � 1 ��

with A = (𝑎𝑖𝑗 )𝑛×𝑛 be a constant matrix, that is, all the entries of A are constants.
� 𝐴 − 𝜆𝐼 � = 0 ⇐⇒ � � = (3 + 𝜆)2 − 1 = 0.
� 1 −3 − 𝜆�

This implies 𝜆2 + 6𝜆 + 8 = 0 ⇐⇒ (𝜆 + 2)(𝜆 + 4) = 0. Therefore, the eigenvalues are


Recall that, in the scalar case if 𝑦 ′ = 𝑘𝑦, then 𝑦 = 𝑐𝑒𝑘𝑡 , where 𝑐 is a constant (by integration). 𝜆1 = −2 and 𝜆2 = −4
Let y = x𝑒𝜆𝑡 , where x = (𝑥1 , 𝑥2 , . . . , 𝑥𝑛 )𝑇 . Substituting this into (2.22) we get:
a) Now, let us find an eigenvector corresponding to 𝜆1 = −2.
𝜆x𝑒𝜆𝑡 = y′ = Ay = Ax𝑒𝜆𝑡 and 𝑒𝜆𝑡 ∕= 0.
−3 − (−2) 1 𝑥1 0
=
This implies, 1 𝑥2 0
� � �

−3 − (−2)
Ax = 𝜆x,
or equivalently,
which is an eigenvalue problem. Once we find the eigenvalues 𝜆𝑖 and a corresponding eigenvector
𝑥𝑖 , the general solution will be −1 1 𝑥1 0
= ⇐⇒ 𝑥1 − 𝑥2 = 0,
1 𝑥2 0
� � �

−1
y(𝑡) = 𝑐1 𝑥1 𝑒𝜆1 𝑡 + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑥𝑛 𝑒𝜆𝑛 𝑡 ,
which implies 𝑥1 = 𝑥2 and hence we have (𝑥1 , 𝑥2 )𝑇 = 𝑥1 (1, 1)𝑇 . Therefore, the vector
where 𝑐1 , ..., 𝑐𝑛 are constants. (1, 1)𝑇 is an eigenvector corresponding to 𝜆1 = −2.
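The eigen-decomposition being computed by hand in Example 2.8.1(1) can be cross-checked numerically; the hand computation of the remaining eigenvector continues below. This is a minimal sketch assuming Python with NumPy, used here only as an illustration.

```python
import numpy as np

A = np.array([[-3.0, 1.0],
              [1.0, -3.0]])              # matrix of y1' = -3y1 + y2, y2' = y1 - 3y2
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # [-2., -4.]  (order may vary)
print(eigenvectors)   # columns proportional to (1, 1) and (1, -1)
```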

b) Next, let us find an eigenvector corresponding to 𝜆2 = −4. Eigenvector corresponding to the eigenvalue 𝜆1 = 1 can be found as follows.

−3 − (−4) 1 𝑥1 0 1 1 𝑥1 0 2−1 1 1 𝑥1 0 1 1 1 𝑥1 0
= ⇐⇒ =
⎛ ⎞⎛ ⎞ ⎛ ⎞ ⎛ ⎞⎛ ⎞ ⎛ ⎞

1 𝑥2 0 1 1 𝑥2 0
� � � � � �

−3 − (−4) 2−1
1 1 𝑥3 0 1 1 1 𝑥3 0
⎜ ⎟⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎜ ⎟ ⎜ ⎟

2−1
⎜ 1 ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜
⎝ 1 ⎠ ⎝𝑥2 ⎠ = ⎝0⎠ ⇐⇒ ⎝1 1 1 ⎠ ⎝𝑥 2 ⎠ = ⎝0 ⎟ ⎠

This implies 𝑥1 + 𝑥2 = 0 and hence 𝑥1 = −𝑥1 which implies (𝑥1 , 𝑥2 )𝑇 = 𝑥1 (1, −1)𝑇 .
Therefore, the vector (1, −1)𝑇 is an eigenvector corresponding to the eigenvalue 𝜆2 = This implies, 𝑥1 + 𝑥2 + 𝑥3 = 0.
−4 i.e., (𝑥1 , 𝑥2 , 𝑥3 )𝑇 = (−𝑥2 − 𝑥3 , 𝑥2 , 𝑥3 )𝑇 = 𝑥2 (−1, 1, 0)𝑇 + 𝑥3 (−1, 0, 1)𝑇 .
Hence, the general solution of the given system is Similarly, eigenvector corresponding to 𝜆2 = 4 is obtained to be (1, 1, 1)𝑇 .
Thus, the general solution will be
1 1
y(𝑡) = 𝑐1 𝑒−2𝑡 + 𝑐2 𝑒4𝑡
1
� �

1
−1
𝑦1 (𝑡) −1 −1
which is equivalent to
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞

𝑦1 (𝑡) = 𝑐1 𝑒−2𝑡 + 𝑐2 𝑒−4𝑡


𝑦3 (𝑡) 0 1 1
⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟
⎜𝑦2 (𝑡)⎟ = 𝑐1 ⎜ 1 ⎟ 𝑒𝑡 + 𝑐2 ⎜ 0 ⎟ 𝑡𝑒𝑡 + 𝑐3 ⎜1⎟ 𝑒4 𝑡.
⎝ ⎠ ⎝ ⎠ ⎝ ⎠ ⎝ ⎠

𝑦2 (𝑡) = 𝑐1 𝑒−2𝑡 − 𝑐2 𝑒−4𝑡

2. The given system is equivalent to the system B. Nonhomogeneous Systems

𝑦1′ 2 1 1 𝑦1 Suppose a non homogeneous system of linear ODEs is given by:


⎛ ⎞ ⎛ ⎞⎛ ⎞

y′ = Ay + F(𝑡), with F(𝑡) ∕≡ 0.


𝑦3′ 1 1 2 𝑦3
⎜ ⎟ ⎜ ⎟⎜ ⎟
⎜𝑦 ′ ⎟ = ⎜1 2 1⎟ ⎜𝑦2 ⎟
⎝ 2⎠ ⎝ ⎠⎝ ⎠

The general solution for this system takes the form y = yℎ + y𝑝 , where yℎ is the general
That is y′ = Ay, where
solution of the corresponding homogeneous system y′ = Ay and y𝑝 is any particular solution of
𝑦′ 2 1 1 𝑦1 y′ = Ay + F.

𝑎𝑛𝑑
⎛ ⎞ ⎛ ⎞ ⎛ ⎞

Now to find a particular solution vector y𝑝 we use the method of Undetermined Coefficients.
𝑦3′ 1 1 2 𝑦3
⎜ 1⎟ ⎜ ⎟ ⎜ ⎟

As in the scalar case, first assuming that y𝑝 has the same general form as 𝐹 and then find the

y = ⎝𝑦2′ ⎟ A =⎜ ⎟ ⎜
⎠, ⎝1 2 1⎠ y = ⎝𝑦2 ⎟
⎠.

Let y(𝑡) = x𝑒𝜆𝑡 , y, x ∈ R3 . Then corresponding eigenvalue problem is AX = 𝜆X with constants. We will illustrate the method by the following example.

characteristic equation � � Example 2.8.2. Solve the following system of DEs.

which is equivalent to the equation


� �
� A − 𝜆I � = 0

𝑦1′ = 𝑦1 + 𝑓1 (𝑡)
1 𝑦2′ = 6𝑦1 − 𝑦2 + 𝑓2 (𝑡),
� �
� �

2−𝜆
�2 − 𝜆 1 �

1 where
� �
� 1
� 1 �� = 0

2 𝑓1 (𝑡) 2 𝑓1 (𝑡) 2 𝑓1 (𝑡) 2 0


� �

2𝑡
� 1 2 − 𝜆�

This equation is reduced to (𝜆 − 1) (𝜆 − 4) = 0. Therefore, 𝜆1 = 1, 𝜆2 = 4 are the a) = , b) = 𝑒 and c) = sin 𝑡 +


𝑓2 (𝑡) 1 𝑓2 (𝑡) 1 𝑓2 (𝑡) 1 1
� � � � � � �

eigenvalues.

Solution c) For the given DE we have


2 0
The system is y′ = Ay + F(𝑡), where F(𝑡) = sin 𝑡 + 𝑒𝑡
1 1
� �

1 0 Then we get
A= .

6 −1
𝑝1 𝑞1 𝑟1
Then, the corresponding homogeneous system 𝑦 ′ = 𝐴𝑦 has characteristic equation y𝑝 1 = sin 𝑡 + cos 𝑡, 𝑎𝑛𝑑 𝑦 𝑝2 = 𝑒𝑡
𝑝2 𝑞2 𝑟2
� � �

and solve for the constants.


� �
�1 − 𝜆
� 0 ��
� � = 1 − 𝜆2 = 0,
� 6 −1 − 𝜆�
with eigenvalues 𝜆 = ±1.
An eigenvector corresponding to the eigenvalue 𝜆 = 1 is (1, 3)𝑇 and an eigenvector corresponding
2.8.2 The Method of Elimination:
to the eigenvalue 𝜆 = −1 is (0, 1)𝑇 and hence the solution to homogenous system is given by In some cases it could be preferable to consider a higher order differential equation in stead of a
1 𝑡 0 −𝑡 system of first order equations (especially when the characteristic polynomial is easier to solve).
yℎ = 𝑐 1 𝑒 + 𝑐2 𝑒 .
3 1 It is possible to transform a system of 𝑛 first order linear ODE to an 𝑛th order linear ODE. The
� �

transformation requires the idea of the differential operators.


Then for each given case, we are going to find y𝑝
𝑑𝑛
The operator , which is denoted by 𝐷𝑛 , is called a differential operator and the natural
a) Since F(𝑡) = (2, 1)𝑇 which is a constant vector, then y𝑝 will take the form (𝑝1 , 𝑝2 )𝑇 which 𝑑𝑥𝑛
number 𝑛 is the power of the operator.
is also a constant vector. Now, substituting into the equation y′ = Ay + F we have:
Example 2.8.3.
0 1 0 𝑝1 2
= +
0 𝑝2 1
� � � �

6 −1 1. 𝐷2 (𝑥3 + 3𝑥) = 𝐷(3𝑥2 + 3) = 6𝑥.

This implies that 𝑝1 + 2 = 0 and 6𝑝1 − 𝑝2 + 1 = 0 and solving this gives us 𝑝1 = −2 2. 𝐷2 (2𝑥3 − 2𝑥2 + 3) = 12𝑥 − 4.
𝑇 𝑇
and 𝑝2 = −11. Therefore, the particular solution is yp =(−2, −11) = −(2, 11) and the
general solution is A linear combination of differential operators of the form
1 0 2
y(𝑡) = 𝐶1 𝑒 𝑡 + 𝐶2 𝑒−𝑡 − .
3 1 11
𝑎𝑛 𝐷𝑛 + 𝑎𝑛−1 𝐷𝑛−1 + ⋅ ⋅ ⋅ + 𝑎1 𝐷 + 𝐷0 ,
� � �

b) Here we have where 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 are constants is called an 𝑛th order polynomial operator and is denoted 𝑃 (𝐷)
𝑝1 and
F(𝑡) = p𝑒2𝑡 = 𝑒2𝑡 . 𝑑𝑛 𝑦 𝑑𝑦
𝑝2 + 𝑎0 𝑦

𝑑𝑥𝑛 𝑑𝑥
𝑃 (𝐷)𝑦 = (𝑎𝑛 𝐷𝑛 + ⋅ ⋅ ⋅ + 𝑎1 𝐷 + 𝑎1 )𝑦 = 𝑎𝑛 + ⋅ ⋅ ⋅ + 𝑎1
This implies Example 2.8.4.
2 2𝑝1 𝑝1 2
2p𝑒2𝑡 = Ap𝑒2𝑡 + 𝑒2𝑡 ⇐⇒ = +
1 2𝑝2 1
� � � �

1. 𝑦 ′′ + 3𝑦 ′ − 𝑦 = 0 implies (𝐷2 + 3𝐷 − 1)𝑦 = 0


6𝑝1 − 𝑝2
2. 𝑦 ′′′ − 4𝑦 ′ = cos 𝑥 implies (𝐷3 − 4𝐷)𝑦 = cos 𝑥.
Since 𝑒𝑡 ∕= 0, by simplifying the given equation we get 2𝑝1 = 𝑝1 + 2, 2𝑝2 = 6𝑝1 − 𝑝2 + 1
13
and solving for 𝑝1 and 𝑝2 gives us 𝑝1 = 2 and 𝑝2 = . Definition 2.8.2.
2

1. Two polynomial operators 𝑃1 (𝐷) and 𝑃2 (𝐷) are equal if and only if 𝑃1 (𝐷)𝑦 = 𝑃2 (𝐷)𝑦 for Solution
all functions 𝑦.
1. The system is equivalent to
2. The sum 𝑃1 (𝐷) + 𝑃2 (𝐷) is obtained by first expressing 𝑃1 and 𝑃2 as linear combinations
𝐷𝑦1 − 𝑦2 = 𝑥2 𝐷𝑦1 − 𝑦2 = 𝑥2
of the operator D and adding the coefficients of like powers of D ⇐⇒
𝐷𝑦2 + 4𝑦1 = 𝑥 4𝑦1 + 𝐷𝑦2 = 𝑥
} {

3. The product 𝑃1 (𝐷)𝑃2 (𝐷) is obtained by using the operator 𝑃2 (𝐷) followed by 𝑃1 (𝐷), i.e.
To eliminate 𝑦2 , apply 𝐷 on the first equation. Then the equation is equivalent to:

[𝑃1 (𝐷)𝑃2 (𝐷)]𝑦 = 𝑃1 (𝐷)[𝑃2 (𝐷)𝑦]. 𝐷2 𝑦1 − 𝐷𝑦2 = 2𝑥


4𝑦1 + 𝐷𝑦2 = 𝑥
Example 2.8.5. Let us illustrate the sum and product of operators
Adding the two equations gives 𝐷2 𝑦1 + 4𝑦1 = 3𝑥 ⇐⇒ (𝐷2 + 4)𝑦1 = 3𝑥 and the
2 3 2
1. If 𝑃1 (𝐷) = 3𝐷 + 7𝐷 − 5, 𝑃2 (𝐷) = 𝐷 + 6𝐷 − 2𝐷 − 3, then characteristic equation for the homogenous part is 𝜆2 + 4 = 0, which implies 𝜆 = ±2𝑖 and
𝑦1ℎ = 𝐶1 cos 2𝑥 + 𝐶2 sin 2𝑥
𝑃1 (𝐷) + 𝑃2 (𝐷) = 𝐷3 + 𝑔𝐷2 + 5𝐷 − 8.
Then using undetermined constants we get: 𝑦1𝑝 = 𝐴𝑥 + 𝐵 which implies
3
2. If 𝑃1 (𝐷) = 2𝐷 + 3, 𝑃2 (𝐷) = 𝐷 − 5, then 𝐴 = ,𝐵 = 0
4
𝑃1 (𝐷)𝑃2 (𝐷) = (2𝐷 + 3)(𝐷 − 5) = 2𝐷2 − 7𝐷 − 15 Therefore, 𝑦1 = 𝐶1 cos 2𝑥 + 𝐶2 sin 2𝑥 + 34 𝑥 and from 4𝑦1 + 𝐷𝑦2 = 𝑥 we have:

Basic Properties
𝑦2′ = 𝑥 − 4𝑦1 = 𝑥 − 4𝐶1 cos 2𝑥 − 4𝐶2 sin 2𝑥 − 3𝑥,

If 𝑃 (𝐷) a differential operator, 𝑦1 , 𝑦2 and 𝑦 are functions and 𝑐 is a constant then:


which implies 𝑦2′ = −4𝐶1 cos 2𝑥 − 4𝐶2 sin 2𝑥 − 2𝑥.
By integrating both sides we get
a. 𝑃 (𝐷)(𝑦1 + 𝑦2 ) = 𝑃 (𝐷)𝑦1 + 𝑃 (𝐷)𝑦2 and
𝑦2 = −2𝐶1 sin 2𝑥 + 2𝐶2 cos 2𝑥 − 𝑥2 + 𝐶3 .
b. 𝑃 (𝐷)(𝑐𝑦) = 𝑐𝑃 (𝐷)𝑦.
Then substituting 𝑦1 and 𝑦2 into the first equation we obtain 𝐶3 = 34 . Therefore,
To solve any system of linear ODE with constant coefficients by elimination method, we first 3
write each equation using polynomial operators and treat the operators as simple constants and 𝑦1 = 𝐶1 cos 2𝑥 + 𝐶2 sin 2𝑥 + 𝑥
4
solve the system using linear algebraic solution methods. and
3
Example 2.8.6. Solve the following systems. 𝑦2 = −2𝐶1 sin 2𝑥 + 2𝐶2 cos 2𝑥 − 𝑥2 +
4
Remark 2.8.3. It is possible, and may be easier, to use Cramer’s rule in solving such
𝑦1′ − 𝑦2 = 𝑥2 𝑦1′ − 2𝑦1 + 2𝑦2′ = 2 − 4𝑒2𝑥
1. 2. non homogeneous equation systems.
𝑦2′ + 4𝑦1 = 𝑥 2𝑦1′ − 3𝑦1 + 3𝑦2′ − 𝑦2 = 0.

2. The system is equivalent to

𝐷𝑦1 − 2𝑦1 + 2𝐷𝑦2 = 2 − 4𝑒2𝑥 (𝐷 − 2)𝑦1 + 2𝐷𝑦2 = 2 − 4𝑒2𝑥


⇐⇒
2𝐷𝑦1 − 3𝑦1 + 3𝐷𝑦2 − 𝑦2 = 0 (2𝐷 − 3)𝑦1 + (3𝐷 − 1)𝑦2 = 0

And this can be written in matrix form as There is an equivalence between an 𝑛th order linear ODE and a system of 𝑛 ODEs of first order.
In subsection 2.8.2 we have seen how to transform a system of 𝑛 first order ODEs to an 𝑛th order
𝐷−2 2𝐷 𝑦1 2 − 4𝑒2𝑥
= linear ODE.
𝑦2 0
� � �

2𝐷 − 3 3𝐷 − 1
It is also possible to convert a higher order equation into a system of first order equations using
Then by Cramer’s rule
new variable definitions. To see this, consider a homogeneous 𝑛th order linear ODE:

𝑎𝑛 𝑦 (𝑛) + 𝑎𝑛−1 𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 𝑦 ′ + 𝑎0 𝑦 = 0, where 𝑡 is the independent variable.


� �
�2 − 4𝑒2𝑥

(3𝐷 − 1)(2 − 4𝑒2𝑥 )


Then redefine the variables as follows,
� 2𝐷 ��

(𝐷 2)(3𝐷
� �

− − 1) − (2𝐷 − 3)2𝐷
� 0 3𝐷 − 1�
𝑦1 = � � =
�𝐷−2 �
� 2𝐷 �

𝑥1 (𝑡) = 𝑦(𝑡)
� �
�2𝐷 − 3 3𝐷 − 1�

This implies [⟨3𝐷2 − 7𝐷 + 2⟩ − ⟨4𝐷2 − 6𝐷⟩]𝑦1 = (3𝐷 − 1)(2 − 4𝑒2𝑥 ). Simplifying this
2 2𝑥 2𝑥 2𝑥 𝑥2 (𝑡) = 𝑦 ′ (𝑡)
gives us (−𝐷 − 𝐷 + 2)𝑦1 = −12𝑥2𝑒 − 2 + 4𝑒 and then −𝑦1′′ − 𝑦1′ + 2𝑦1 = −20𝑒 − 2
which is reduced to a second order linear DE in 𝑦1 (Solve this equation.) 𝑥3 (𝑡) = 𝑦 ′′ (𝑡)
.. .
and . = ..
𝑥𝑛 (𝑡) = 𝑦 (𝑛−1) (𝑡)
� �
� 𝐷 − 2 2 − 4𝑒2𝑥 �

(2𝐷 − 3)(2 − 4𝑒2𝑥 )


Then the equation will be equivalent to the system
� �
� �

(−𝐷2 − 𝐷 + 2)
�2𝐷 − 3 0 �
𝑦2 = � � =
�𝐷−2
� 2𝐷 ��

This implies
� �

𝑥′1 (𝑡) = 𝑥2 (𝑡)


�2𝐷 − 3 3𝐷 − 1�

(−𝐷2 − 𝐷 + 2)𝑦2 = −8(2𝑒2𝑥 ) − 6 + 12𝑒2𝑥 𝑥′2 (𝑡) = 𝑥3 (𝑡)


and then −𝑦2′′ − 𝑦2′ + 2𝑦2 = −4𝑒2𝑥 − 6 which is reduced to a second order linear DE in 𝑦2 𝑥′3 (𝑡) = 𝑥4 (𝑡)
(Solve this equation.) .. .
. = ..
Therefore, the solution is: 𝑥′𝑛−1 (𝑡) = 𝑥𝑛 (𝑡)
𝑎𝑛−1 𝑎𝑛−2 𝑎1 𝑎0
𝑥′𝑛 (𝑡) = − 𝑥𝑛 − 𝑥𝑛−1 − ⋅ ⋅ ⋅ − 𝑥2 − 𝑥1
𝑎𝑛 𝑎𝑛 𝑎𝑛 𝑎𝑛
𝑦1 = 𝐶1 𝑒−2𝑥 + 𝐶2 𝑒𝑥 + 5𝑒2𝑥 − 1

Or in matrix notation:
𝑦2 = −𝐶1 𝑒−2𝑥 + 12 𝐶2 𝑒𝑥 − 𝑒2𝑥 + 3

𝑋 ′ = 𝐴𝑋,
2.8.3 Reduction of higher order ODEs to systems of ODE of the first
where the coefficient matrix is
order
0 1 0 0 ⋅⋅⋅ 0 0
0 1 0 0 0
⎛ ⎞

In the previous sections we used the characteristics equation to solve higher order ODEs. The ⋅⋅⋅
characteristic equations are polynomials of degree 𝑛, where 𝑛 is the order of the ODE. However, 0 0 1 0 0
⎜ ⎟

⋅⋅⋅
⎜ 0 ⎟

.. .. .. .. .. ..
⎜ ⎟

solving polynomials is a challenging task when the degree gets larger. Because of the techniques . . . . . .
⎜ ⎟
⎜ 0 ⎟
𝐴=⎜ ⎟,

developed in linear algebra to reduce matrices, it is preferable to solve eigenvalue problems when 0 0 0 1 0
⎜ .. ⎟

⋅⋅⋅
𝑎𝑛−3
⎜ . ⎟

the order of the ODE is higher.


⎜ ⎟

𝑎𝑛 𝑎𝑛 𝑎𝑛 𝑎𝑛
− 𝑎𝑎𝑛0 − 𝑎𝑎𝑛1 − 𝑎𝑛−4 − ⋅ ⋅ ⋅ − 𝑎𝑛−2 − 𝑎𝑛−1
⎜ 0 ⎟
⎝ ⎠

which is the so-called companion matrix of the nth degree characteristic equation of the differential equation. Such matrices have special features in matrix theory, and the eigenvalue problem could be solved by employing the Jordan form of the matrix.

2.9 Numerical Methods to Solve ODEs

It could be impossible to analytically solve many practical problems. But an approximate solution can be obtained using quantitative methods.

2.9.1 Euler's Method

Consider a first order initial-value problem:

y′ = f(x, y); y(a) = b.

Since y′(x) = lim_{△x→0} △y/△x, we can approximate y′ by the ratio △y/△x. Hence we have △y ≃ f(x, y)△x.
Let us denote the y values at different points as y_o, y_1, y_2, . . ., where y_o = y(x_o) is the initial value. And let △x = h denote the increment in x, called the step size.
Then we have:

y_1 = y_o + f(x_o, y_o)h
y_2 = y_1 + f(x_1, y_1)h;  x_1 = x_o + h
. . .

In general, the nth iteration will be

y_{n+1} = y_n + f(x_n, y_n)h;  x_n = x_{n−1} + h,  n = 0, 1, 2, . . .

This iterative method is known as Euler's method. Since Euler's method is based on a first order approximation, it may work only for a very small step size h, which makes the iterative scheme very slow.
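A minimal sketch of the iteration above in Python (an illustrative assumption; the notes do not prescribe any software). The test problem y′ = y, y(0) = 1 is only an illustrative choice, not taken from the notes.

```python
def euler(f, x0, y0, h, n_steps):
    """Euler's method: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)
        x = x + h
    return y

# Illustrative problem: y' = y, y(0) = 1; estimate y(1) with h = 0.1.
print(euler(lambda x, y: y, 0.0, 1.0, 0.1, 10))   # about 2.5937, versus e = 2.71828...
```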


2.9.2 Runge-Kutta Method

To improve the drawback in Euler's method, it is better to take the mid-point of Δy_n and Δy_{n+1} as an increment for y instead of simply Δy_n alone. Hence we have

y_{n+1} = y_n + (1/2)(k1 + k2),  where k1 = h f(x_n, y_n),  k2 = h f(x_{n+1}, y_n + k1),

which is called the Runge-Kutta method of second order.
The Runge-Kutta method works by using a weighted average of slopes in the basic Euler formula to estimate y(x_o + h) [or in general y(x_k + h)].
The fourth-order Runge-Kutta method is given by

y_{n+1} = y_n + (h/6)(m1 + 2m2 + 2m3 + m4),

where

m1 = f(x_n, y_n)
m2 = f(x_n + h/2, y_n + (h/2)m1)
m3 = f(x_n + h/2, y_n + (h/2)m2)
m4 = f(x_n + h, y_n + h m3).

This method is surprisingly accurate for values of h < 1.
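For comparison, a minimal sketch of the fourth-order formula above on the same illustrative problem (again assuming Python, which is not part of the notes):

```python
def rk4(f, x0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta steps for y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n_steps):
        m1 = f(x, y)
        m2 = f(x + h / 2, y + h / 2 * m1)
        m3 = f(x + h / 2, y + h / 2 * m2)
        m4 = f(x + h, y + h * m3)
        y = y + h / 6 * (m1 + 2 * m2 + 2 * m3 + m4)
        x = x + h
    return y

# y' = y, y(0) = 1; estimate y(1) with h = 0.1.
print(rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10))   # about 2.718279..., very close to e
```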
2.10 Exercises

∙ Any point (𝑥𝑜 , 𝑦𝑜 ) such that both 𝑃 and 𝑄 vanishes is called a critical (or singular or
equilibrium) point of the system (3.1).

∙ At a singular point (𝑥𝑜 , 𝑦𝑜 ), since 𝑥˙ = 0 and 𝑦˙ = 0, a particular solution of equation (3.1)


is simply the constant values 𝑥(𝑡) = 𝑥𝑜 , 𝑦(𝑡) = 𝑦𝑜 .

Chapter 3 ∙ An equilibrium point 𝑋𝑜 = (𝑥𝑜 , 𝑦𝑜 ) of system (3.1) is said to be stable if motions (or
trajectories) that start sufficiently close to 𝑋𝑜 remain close to 𝑋𝑜 .
Mathematically:
*Nonlinear ODEs and Qualitative
Definition 3.1.1. Let 𝑑(𝑃1 , 𝑃2 ) denote the distance between any two points 𝑃1 = (𝑥1 , 𝑦1 )
Analysis and 𝑃2 = (𝑥2 , 𝑦2 ) and let 𝑃 (𝑡) = (𝑥(𝑡), 𝑦(𝑡)) denote the representative point in the
phase plane corresponding to system (3.1). Then a singular (or an equilibrium) point
𝑋𝑜 = (𝑥𝑜 , 𝑦𝑜 ) is stable if for any given 𝜖 > 0, there is a 𝛿 > 0 such that

∙ Many Nonlinear equations cannot be solved in closed form. 𝑑(𝑃 (0), 𝑋𝑜 ) < 𝛿 ⇒ 𝑑(𝑃 (𝑡), 𝑋𝑜 ) < 𝜖, ∀𝑡 > 0

Otherwise, the equilibrium point 𝑋𝑜 is called unstable.


∙ Hence we need to develop qualitative methods to determine properties of solutions without
having them explicitly in hand.

∙ The properties tell us the way how all the trajectories (solution curves) behave near to some ∙ A singular point 𝑋𝑜 is called asymptotically stable if motions (or trajectories) that start out

points. sufficiently close to 𝑋𝑜 not only stay close to 𝑋𝑜 but actually approach 𝑋𝑜 as 𝑡 → ∞

𝑡→∞
i.e., ∃𝛿 > 0 s.t. 𝑑(𝑃 (𝑜), 𝑋𝑜 ) < 𝛿 ⇒ lim 𝑑(𝑃 (𝑡), 𝑋𝑜 ) = 0

Read on Phase portrait and Phase plane Analysis for ODEs.


Definition 3.1.2. A singular point is called:

1) a Center if it is surrounded by closed orbits (paths) corresponding to periodic motions.


3.1 Critical Points and Stability
A center is stable but not asymptotically stable.
∙ Consider the autonomous Nonlinear system in two variables 2) a Focus (or Spiral) if all trajectories around 𝑋𝑜 “focus” towards (or outward) it as
𝑑𝑥
= 𝑃 (𝑥, 𝑦)
𝑡 → ∞.
𝑑𝑡 A focus can be asymptotically stable or unstable.
𝑑𝑦
= 𝑄(𝑥, 𝑦) (3.1)
𝑑𝑡 3) a Node if there are infinitely many trajectories entering (or leaving) the point 𝑋𝑜 .
If we assume that 𝑦 is dependent on 𝑥, we can equivalently get There are four cases → Proper or Improper nodes with each could be stable or un-
𝑑𝑦 𝑄(𝑥, 𝑦) stable.
=
𝑑𝑥 𝑃 (𝑥, 𝑦)

4) a Saddle if all trajectories (paths) approach to 𝑋𝑜 in one direction and move away ∙ System (3.2) can be rewritten as
from it in the other direction.
𝑋˙ 𝑎 𝑏 𝑋
A saddle is always unstable. =
𝑌˙ 𝑐 𝑑 𝑌
� � �

– The two straight-line trajectories through the saddle (along which the flow is attracted
and repelled) are called the stable and unstable manifolds respectively. ∙ Clearly (0, 0) is a critical point for the linear system (3.2) [and hence the point (𝑥𝑜 , 𝑦𝑜 ) is
a critical point for the system (3.1)]

𝑎 𝑏
∙ In many practical problems we will be interested in the stability of equilibrium points. That
means, if we take an initial point near to an equilibrium point 𝑋𝑜 = (𝑥𝑜 , 𝑦𝑜 ), does the point ∙ Let 𝜆1 and 𝜆2 be the two eigenvalues of the coefficient matrix .
𝑐 𝑑

(𝑥(𝑡), 𝑦(𝑡)) on the solution curve (trajectory) remain near 𝑋𝑜 ?


∙ Then the nature of the critical point (0, 0) of the system (3.2) depends upon the nature of
∙ To study this, we approximate the nonlinear system (equation (3.1)) by its linear terms in the eigenvalues 𝜆1 and 𝜆2 .
the Taylor series expansion in the neighborhood of each singular point.
1. If 𝜆1 and 𝜆2 are real, unequal and of the same sign, then the critical point (0, 0) of the
3.1.1 Stability for linear systems linear system (3.2) is a node.

– If, in addition, both 𝜆1 and 𝜆2 are positive, then the critical point is an unstable node.
3.1.2 Stability for nonlinear systems
– If, both 𝜆1 and 𝜆2 are negative, then the critical point is a stable node.
∙ Consider again the system:
2. If 𝜆1 and 𝜆2 are real and of opposite sign, then the critical point (0, 0) of the linear system
𝑑𝑥
= 𝑃 (𝑥, 𝑦) (3.2) is a saddle point.
𝑑𝑡
𝑑𝑦
= 𝑄(𝑥, 𝑦) 3. If 𝜆1 and 𝜆2 are real and equal, then the critical point (0, 0) of the linear system (3.2) is a
𝑑𝑡
node.
∙ From the Taylor series we have
– If, in addition, 𝜆1 = 𝜆2 < 0, then it is a stable node and if 𝜆1 = 𝜆2 > 0, then it is an
𝑃 (𝑥, 𝑦) ≈ 𝑃𝑥 (𝑥𝑜 , 𝑦𝑜 )(𝑥 − 𝑥𝑜 ) + 𝑃𝑦 (𝑥𝑜 , 𝑦𝑜 )(𝑦 − 𝑦𝑜 ) unstable node.
𝑄(𝑥, 𝑦) ≈ 𝑄𝑥 (𝑥𝑜 , 𝑦𝑜 )(𝑥 − 𝑥𝑜 ) + 𝑄𝑦 (𝑥𝑜 , 𝑦𝑜 )(𝑦 − 𝑦𝑜 ) – If, 𝑎 = 𝑑 ∕= 0 and 𝑏 = 𝑐 = 0, then it is a proper node, otherwise an improper node.

∙ Now letting 𝑎 = 𝑃𝑥 (𝑥𝑜 , 𝑦𝑜 ), 𝑏 = 𝑃𝑦 (𝑥𝑜 , 𝑦𝑜 ), 𝑐 = 𝑄𝑥 (𝑥𝑜 , 𝑦𝑜 ) and 𝑑 = 𝑄𝑦 (𝑥𝑜 , 𝑦𝑜 ) we have 4. If 𝜆1 and 𝜆2 are complex conjugates with the real part not zero, then the equilibrium point
(0, 0) of the linear system (3.2) is a focus or spiral.
𝑋˙ = 𝑎𝑋 + 𝑏𝑌
– If, in addition, the real part is negative, then the critical point is a stable focus.
𝑌˙ = 𝑐𝑋 + 𝑑𝑌, (3.2)
– If, the real part is positive then it is an unstable focus.
where 𝑋 = 𝑥 − 𝑥𝑜 and 𝑌 = 𝑦 − 𝑦𝑜 .
5. If 𝜆1 and 𝜆2 are pure imaginary, then the equilibrium point (0, 0) of the linear system (3.2)
∙ The above process is called a linearization process. is a center.
A center is always stable even though it is not asymptotically stable.
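The case analysis above can be automated for a concrete linearized system. The following is a rough sketch assuming Python with NumPy (not part of the notes); it deliberately ignores borderline cases such as repeated or zero eigenvalues.

```python
import numpy as np

def classify_origin(a, b, c, d):
    """Classify (0, 0) for X' = aX + bY, Y' = cX + dY from the eigenvalues."""
    lam = np.linalg.eigvals(np.array([[a, b], [c, d]], dtype=float))
    re, im = lam.real, lam.imag
    if np.allclose(im, 0):                      # real eigenvalues
        if re[0] * re[1] < 0:
            return "saddle (unstable)"
        return "stable node" if re.max() < 0 else "unstable node"
    if np.allclose(re, 0):                      # purely imaginary
        return "center (stable)"
    return "stable focus" if re.max() < 0 else "unstable focus"

print(classify_origin(-3, 1, 1, -3))   # real, unequal, both negative -> stable node
print(classify_origin(0, 1, -1, 0))    # purely imaginary -> center
```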

Remark: Example 3.1.1. 1. The pair of differential equations


1
1. In the linearization process it was assumed that 𝑥˙ =
2
𝑥 − 𝑥𝑦
𝑦˙ = −2𝑦 + 𝑥𝑦, 𝑥, 𝑦 ≥ 0,
– the constants 𝑎, 𝑏, 𝑐, and 𝑑 are real numbers;
– the functions 𝑃 and 𝑄 have continuous first partial derivatives in the neighborhood occur in a study of interacting populations. Find the equilibrium points and determine their
of the critical points. nature.

The above two requirements will be met, if the Jacobian


Ans.: (0, 0) is a saddle equilibrium (not stable)
∕= 0. and the equilibrium point (2, 1/2) is a center (stable).

∂(𝑃, 𝑄) ��

2. The equation 𝑥¨ + 𝜖𝑥˙ 3 + 𝑥 = 0 models a harmonic oscillator with cubic damping - that
∂(𝑥, 𝑦) �(𝑥𝑜 ,𝑦𝑜 )

2. The constant terms in the linearized system are missing because 𝑃 (𝑥𝑜 , 𝑦𝑜 ) = 𝑄(𝑥𝑜 , 𝑦𝑜 ) = 0.
is, with a damping term proportional to the velocity cubed. Find the critical point(s) and
3. The nature of the equilibrium points of the nonlinear system (3.1) can be determined from determine their nature.
that of the linearized system (3.2) as in the following Theorems

Theorem 3.1.3 (Poincare’s Result). The classification of all singular points of the non-linear Ans.: the point (0, 0) is the only critical point and it is a center for the nonlinear equation.
system (3.1) correspond in both type and stability with the results obtained by considering the
linearized system (3.2) except for a center and a proper node.
In these exceptional cases 3.2 Stability by Lyapunav’s Method

(i) a center of the linearized system could be either a focus or a center for the nonlinear system.
3.3 Exercises
(ii) a proper node could also be either a spiral or a node for the nonlinear system (3.1).

To determine these exceptional cases one requires to study further the original nonlinear system
itself.

The above procedure (the linearization process) can also be used to solve a second order non-
linear ODE. This can be done by using substitution of variables 𝑦 = 𝑥,
˙ which imply that 𝑦˙ = 𝑥¨.
This will result in a nonlinear system of two first order equations.
However, if such an equation has no term in 𝑥,
˙ we need the following theorem.

Theorem 3.1.4. If the nonlinear equation 𝑥¨ + 𝑓 (𝑥) = 0 has a singular point in the 𝑥𝑥˙ plane
(phase plane), where the linearized system indicates a center or a proper node, the nonlinear
equation also has the same property.


Part II

Vector Analysis

denoted by r(𝑡) . If 𝑓 (𝑡), 𝑔(𝑡) and ℎ(𝑡) are the components of the vector r(𝑡), then 𝑓, 𝑔 and ℎ
are real-valued functions called the component functions of r and we can write

r(𝑡) = (𝑓 (𝑡), 𝑔(𝑡), ℎ(𝑡)) = 𝑓 (𝑡)𝑖 + 𝑔(𝑡)𝑗 + ℎ(𝑡)𝑘

Example 4.1.1. The function r(𝑡) = 𝑡3 𝑖 + 𝑒−𝑡 𝑗 + sin 𝑡𝑘 is a vector valued function and the
Chapter 4 component functions of r are 𝑡3 , 𝑒−𝑡 and sin 𝑡.

Remark 4.1.2. The domain of a vector valued function r consists of all values of 𝑡 for which the
expression r(𝑡) is defined, that is the values of 𝑡 for which all the component functions are defined.
Vector Differential Calculus
For example, if

r(𝑡) = √𝑡 𝑖 + ln(𝑡 − 2)𝑗 + 3𝑡𝑘,
4.1 Vector Calculus
then the domain of r(𝑡) is the set of points in R where √𝑡, ln(𝑡 − 2) and 3𝑡 are all defined. That
is, 𝑡 ≥ 0 and 𝑡 − 2 > 0, and hence the domain of r is (2, ∞).
In the previous Applied Mathematics courses, specifically in the linear algebra part, we have been
discussing about constant vectors, but the most interesting applications of vectors involve also For each 𝑡, where r is defined, draw r(𝑡) as as a vector from the origin to the point (𝑓 (𝑡), 𝑔(𝑡), ℎ(𝑡)).
vector functions. The end points of these vectors traces out a curve C as t varies.

The simplest example is a position vector that depends on time. We can differentiate such a
Example 4.1.2. The function r(𝑡) = (1 + 𝑡)𝑖 + 𝑡𝑗 + (3 − 𝑡)𝑘 is a vector valued function of one
function with respect to time and the first derivative of such function is the velocity and its
variable. The curve that is traced out by the heads of the position vectors of this vector valued
second derivative is the acceleration of the particle whose position is given by the position vector.
function is a line that passes through the point (1, 0, 3) and with directional vector (1, 1, −1).
In this case, the coordinates of the tip of the position vector are functions of time.

Therefore, it is worth to talk about such functions and in this course, specially in this chapter we 4.1.2 Limit of A Vector Valued Function
are going to address the calculus of vector fields (vector valued functions).
Definition 4.1.3. A vector valued function 𝑉 (𝑡) is said to have the limit 𝑙 as t approaches 𝑡0 , if
𝑣(𝑡) is defined in some neighborhood of 𝑡0 (possibly except at 𝑡0 ) and
4.1.1 Vector Functions of One Variable in Space
𝑡→𝑡0
lim ∥𝑉 (𝑡) − 𝑙∥ = 0.
First recall the definition of a function, that is, a function is a rule that assigns to each element
in the domain an element in the range. Then we write
lim 𝑉 (𝑡) = 𝑙.
Definition 4.1.1. A vector-valued function, or vector function, is a function whose domain is a 𝑡→𝑡0

set of real numbers and whose range is a set of vectors. A vector function 𝑣(𝑡) is said to be continuous at 𝑡 = 𝑡0 if it is defined in some neighborhood of
𝑡0 and
In this course, we are most interested in vector functions whose values are three-dimensional
lim 𝑉 (𝑡) = 𝑣(𝑡0 ).
vectors. This means that for every number 𝑡 in the domain of there is a unique vector in R3 𝑡→𝑡0
where
𝑥 = 𝑓 (𝑡), 𝑦 = 𝑔(𝑡) and 𝑧 = ℎ(𝑡) (4.1)

and 𝑡 varies in I is called a space curve.

The equations in 4.1 are called parametric equations of the curve 𝐶 and the variable 𝑡 is called
the parameter.

Example 4.1.4. The components of the vector valued function r(𝑡) = (2 cos 𝑡, 2 sin 𝑡, 0) are
parametric equations of a circle with center at the origin and radius 2 in space.

4.1.3 Derivative of a Vector Function
Figure 4.1: Graph of the line in Example 4.1.2
Recall that, if 𝑓 is a real-valued function of one variable, then the derivative of 𝑓 at any point 𝑡
in the domain of 𝑓 is
The following theorem is used as an alternative definition of limit of a vector valued function.
𝑓 ′ (𝑡) = lim ,
𝑓 (𝑡 + ℎ) − 𝑓 (𝑡)
ℎ→0 ℎ
Theorem 4.1.4. If r(𝑡) = (𝑓 (𝑡), 𝑔(𝑡), ℎ(𝑡)), then
provided that the limit exists. Now, let us define the derivative of a vector valued function of one
lim r(𝑡) = 𝑙 variable.
𝑡→𝑡0

if and only if Definition 4.1.6. A vector function 𝑉 (𝑡) is said to be differentiable at a point 𝑡 in the domain
lim 𝑓 (𝑡), lim 𝑔(𝑡), lim ℎ(𝑡) = 𝑙. of V if the limit
𝑡→𝑡0 𝑡→𝑡0 𝑡→𝑡0
lim
𝑉 (𝑡 + ℎ) − 𝑉 (𝑡)
( )

ℎ→0 ℎ
Example 4.1.3. Find lim𝑡→0 r(𝑡), if
exists and if the limit exists then it is denoted by 𝑉 ′ (𝑡). That is,
r(𝑡) = 𝑡3 𝑖 + 𝑒−𝑡 𝑗 + 𝑘.
𝑡
𝑉 ′ (𝑡) = lim
𝑉 (𝑡 + ℎ) − 𝑉 (𝑡)
ℎ→0
( sin 𝑡 )


Solution
Remark 4.1.7. If the function 𝑉 (𝑡) = (𝑉1 (𝑡), 𝑉2 (𝑡), 𝑉3 (𝑡)) is a vector field, then 𝑉 ′ (𝑡) = (𝑉1′ (𝑡), 𝑉2′ (𝑡), 𝑉3′ (𝑡)).
By Theorem 4.1.4 we have

Example 4.1.5. Consider the following functions.


lim r(𝑡) = lim 𝑡3 𝑖 + lim 𝑒−𝑡 𝑗 + lim 𝑘 = 𝑗 + 𝑘.
𝑡→0 𝑡→0 𝑡→0 𝑡→0 𝑡
( ) ( ) ( sin 𝑡 )

Remark 4.1.5. If r(𝑡) = (𝑓 (𝑡), 𝑔(𝑡), ℎ(𝑡)) = 𝑓 (𝑡)𝑖 + 𝑔(𝑡)𝑗 + ℎ(𝑡)𝑘 is a vector valued function and 𝑡0 is a point 1. If 𝑉 (𝑥) = (cos 𝑥, sin 𝑥) then 𝑉 ′ (𝑥) = (− sin 𝑥, cos 𝑥).
in the domain of r, then r is continuous at 𝑡0 if and only if its (three) component functions 𝑓, 𝑔
( )

2. If 𝑉 (𝑡) = (𝑎 cos 𝑡, 𝑎 sin 𝑡, 𝑐𝑡) then 𝑉 ′ (𝑡) = (−𝑎 sin 𝑡, 𝑎 cos 𝑡, 𝑐)
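The component-wise differentiation in Example 4.1.5 can be reproduced symbolically. A minimal sketch assuming Python with SymPy (an assumption made only for this illustration):

```python
import sympy as sp

t, a, c = sp.symbols('t a c')

V = sp.Matrix([a*sp.cos(t), a*sp.sin(t), c*t])   # V(t) from Example 4.1.5(2)
print(V.diff(t).T)                               # Matrix([[-a*sin(t), a*cos(t), c]])
```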


and ℎ are continuous at 𝑡0 .

Vector valued functions and curves in space have close connection. Suppose that 𝑓, 𝑔 and ℎ are
continuous real-valued functions on an interval I. Then the set 𝐶 of all points (𝑥, 𝑦, 𝑧) in space

Differentiation Rules Definition 4.1.10. Let 𝑉 : R𝑛 → R3 , 𝑉 = (𝑉𝑖 , 𝑉2 , 𝑉3 ) where each 𝑉𝑖 is a function of 𝑛 variables,
∂𝑉
𝑡1 , 𝑡2 , . . . , 𝑡𝑛 . Then the partial derivative of 𝑉 with respect to 𝑡𝑖 is denoted by ∂𝑡𝑖
and is defined
Let 𝑈 (𝑡) and 𝑉 (𝑡) be a vector valued functions in space and c be any constant. Then
as the vector function
∂𝑉 ∂𝑉1 ∂𝑉2 ∂𝑉3
=( , , )
a) (𝑐𝑉 ) = 𝑐𝑉
′ ′ ∂𝑡𝑖 ∂𝑡𝑖 ∂𝑡𝑖 ∂𝑡𝑖
Example 4.1.7. If 𝑓 (𝑥, 𝑦) = (𝑥2 + 𝑦), ln(𝑥2 + 𝑦 2 ), sin(𝑥 + 3𝑦) , then
b) (𝑈 + 𝑉 )′ = 𝑈 ′ + 𝑉 ′
( )

∂𝑓 2𝑥 ∂𝑓 2𝑦
c) (𝑈.𝑉 )′ = 𝑈 ′ .𝑉 + 𝑈.𝑉 ′ = 2𝑥, 2 , cos(𝑥 + 3𝑦) and = 𝑦, , 3 cos(𝑥 + 3𝑦) .
∂𝑥 𝑥 + 𝑦2 ∂𝑦 𝑥2 + 𝑦 2
� � � �

d) (𝑈 × 𝑉 ) = 𝑈 ′ × 𝑉 + 𝑈 × 𝑉 ′
4.2 The Gradient Field
Remark 4.1.8. Let V(t) be a vector function of constant norm. i.e. ∥𝑉 (𝑡)∥ = 𝑐 for a constant 𝑐
or 𝑉.𝑉 = 𝑐2 Then (𝑉.𝑉 )′ = (𝑐2 )′ = 0 which implies 2𝑉 ′ .𝑉 = 0. Then, either 𝑉 ′ = 0 or 𝑉 ′ ⊥ 𝑉. Let 𝐹 (𝑥, 𝑦, 𝑧) be a real valued functions of three variables (i.e. F is a scalar field defined from
Therefore, a nonzero vector field with constant norm is perpendicular to its derivative. 𝑋 ⊂ R3 into R.) The gradient of 𝐹 , denoted by ∇𝐹, is a vector field defined by

∂𝐹 ∂𝐹 ∂𝐹 ∂𝐹 ∂𝐹 ∂𝐹
, , )= 𝑖+ 𝑗+ 𝑘
4.1.4 Vector and Scalar Fields ∇𝐹 = (
∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧

Now, let us consider vector valued functions, called vector fields, of several variables. Vector and if P is a point in the domain of F, the gradient of F evaluated at P is denoted by ∇𝐹 (𝑃 )
valued functions of one variable are also called vector fields. and also if 𝑓 is a function of two variables, then the the gradient of 𝑓 , denoted by ∇𝑓, is defined
by
∂𝑓 ∂𝑓
Definition 4.1.9. A function 𝑓 whose value is a scalar (or a real number), say 𝑓 : 𝑋 → R, + .
∂𝑥 ∂𝑦
∇𝑓 =
𝑋 ⊂ R𝑛 , is called a scalar field.
But in this section we will focus on the gradient of functions of three variables.
A function 𝑣 whose value is a vector, say 𝑣 : 𝑋 → R𝑚 , 𝑋 ⊂ R𝑛 , is called a vector field. That
is a vector field is a vector valued function. Example 4.2.1. If 𝐹 (𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑥𝑦 − 𝑦𝑧 2 , then F is a scalar field and

Example 4.1.6. 1. The function 𝑇 : 𝑋 → R given by ∇𝐹 (𝑥, 𝑦, 𝑧) = (2 + 𝑦)𝑖 + (𝑥 − 𝑧 2 )𝑗 − (2𝑦𝑧)𝑘 = (2 + 𝑦, 𝑥 − 𝑧 2 , −2𝑦𝑧).


100
𝑇 (𝑥, 𝑦) =
(𝑥 + 1)2 + (𝑦 + 1)2 The point (2, −1, 0) is a point in the domain of F and ∇𝐹 (2, −1, 0) = 𝑖 + 𝑗.

where 𝑋 = {(𝑥, 𝑦)∣0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1}, which is a Temperature Field of a square plate Example 4.2.2. The gradient of 𝑓 (𝑥, 𝑦) = 𝑥𝑦 + 2𝑥3 is
is a scalar field. ∂𝑓 ∂𝑓
𝑖+ 𝑗 = (𝑦 + 6𝑥2 )𝑖 + 𝑥𝑗.
∂𝑥 ∂𝑥
∇𝑓 =
3 2 2 2
2. The function 𝑓 : 𝑋 → R given by 𝑓 (𝑥, 𝑦) = (𝑥 + 𝑦, ln(𝑥 + 𝑦 ), sin(𝑥 + 3𝑦)), where
𝑋 = R2 ∖{(0, 0)} is a vector field. Remark 4.2.1. Let F and G be scalar fields of three variables and c be a constant. Then

3. If (𝑥0 , 𝑦0 , 𝑧0 ) is a point in R3 , then the function 𝑑 : R3 → R given by 𝑑(𝑥, 𝑦, 𝑧) = 1. ∇(𝐹 + 𝐺) = ∇𝐹 + ∇𝐺 and


(𝑥 − 𝑥0 )2 + (𝑦 − 𝑦0 )2 + (𝑧 − 𝑧0 )2 is a scalar field. (𝑑 is called the Euclidean Distance.)
2. ∇(𝑐𝐹 ) = 𝑐∇𝐹.

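The gradient computations above are easy to cross-check with a computer algebra system. The following is a minimal sketch (not part of the original notes; the sympy library is an assumed dependency), using the field and point of Example 4.2.1.

import sympy as sp

x, y, z = sp.symbols('x y z')
F = 2*x + x*y - y*z**2                       # scalar field of Example 4.2.1

grad_F = [sp.diff(F, v) for v in (x, y, z)]  # (2 + y, x - z**2, -2*y*z)
print(grad_F)

# value of the gradient at the point (2, -1, 0)
print([g.subs({x: 2, y: -1, z: 0}) for g in grad_F])   # [1, 2, 0], i.e. i + 2j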
Let 𝑃(𝑥0, 𝑦0, 𝑧0) be a point and 𝑢 = 𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘 be a unit vector, i.e. 𝑎² + 𝑏² + 𝑐² = 1. Then the directional derivative of a scalar field F at the point P in the direction of u, denoted by 𝐷𝑢𝐹(𝑃), is defined by

    𝐷𝑢𝐹(𝑃) = 𝑎 ∂𝐹/∂𝑥(𝑥0, 𝑦0, 𝑧0) + 𝑏 ∂𝐹/∂𝑦(𝑥0, 𝑦0, 𝑧0) + 𝑐 ∂𝐹/∂𝑧(𝑥0, 𝑦0, 𝑧0) = ∇𝐹(𝑥0, 𝑦0, 𝑧0).𝑢,

the scalar product of the vectors ∇𝐹(𝑥0, 𝑦0, 𝑧0) and u.

Example 4.2.3. Given 𝐹(𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑥𝑦 − 𝑦𝑧², the directional derivative of F at the point (1, 2, 2) in the direction of the unit vector 𝑢 = (2/3, 1/3, 2/3) is

    𝐷𝑢𝐹(1, 2, 2) = (2/3) ∂𝐹/∂𝑥(1, 2, 2) + (1/3) ∂𝐹/∂𝑦(1, 2, 2) + (2/3) ∂𝐹/∂𝑧(1, 2, 2) = (2/3)(4) + (1/3)(−3) + (2/3)(−8) = −11/3.

Remark 4.2.2. If F is a scalar field of three variables and 𝑣 is any nonzero vector, then the directional derivative of F at a point P in the direction of v is given by 𝐷𝑢𝐹(𝑃), where 𝑢 = 𝑣/∥𝑣∥.

Let F be a scalar field such that F and its partial derivatives are continuous in some sphere about a point P, and let u be a unit vector. Then

    𝐷𝑢𝐹(𝑃) = ∇𝐹(𝑃).𝑢 = ∥∇𝐹(𝑃)∥ ∥𝑢∥ cos 𝜃 = ∥∇𝐹(𝑃)∥ cos 𝜃,

where 𝜃 is the angle between u and ∇𝐹(𝑃).

Therefore 𝐷𝑢𝐹(𝑃) has its maximum when cos 𝜃 = 1, which occurs when 𝜃 = 0, that is, when u is in the same direction as ∇𝐹(𝑃); and 𝐷𝑢𝐹(𝑃) has its minimum when cos 𝜃 = −1, that is, 𝜃 = 𝜋, and hence ∇𝐹(𝑃) and u are in opposite directions.

Therefore we have proved the following theorem.

Theorem 4.2.3. Let F be a scalar field such that F and its partial derivatives are continuous in some sphere about a point P, and suppose that ∇𝐹(𝑃) ≠ 0. Then

1. At P, F has its maximum rate of change in the direction of ∇𝐹(𝑃) and this maximum rate of change is ∥∇𝐹(𝑃)∥.

2. At P, F has its minimum rate of change in the direction of −∇𝐹(𝑃) and this minimum rate of change is −∥∇𝐹(𝑃)∥.

Example 4.2.4. Let 𝐹(𝑥, 𝑦, 𝑧) = 2𝑥𝑧 + 𝑦𝑧² and 𝑃(2, 1, 1). The gradient of F is

    ∇𝐹(𝑥, 𝑦, 𝑧) = 2𝑧𝑖 + 𝑧²𝑗 + (2𝑥 + 2𝑦𝑧)𝑘

and then ∇𝐹(2, 1, 1) = 2𝑖 + 𝑗 + 6𝑘. The maximum rate of change of F at (2, 1, 1) is in the direction of 2𝑖 + 𝑗 + 6𝑘 and this maximum rate of change is √(2² + 1² + 6²) = √41.
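As a quick numerical cross-check of Example 4.2.3 and of the "steepest ascent" statement in Theorem 4.2.3, the sketch below (an added illustration, not part of the original notes; numpy is an assumed dependency) evaluates ∇F, one directional derivative, and the maximum rate of change ∥∇F(P)∥.

import numpy as np

def grad_F(p):
    # gradient of F(x, y, z) = 2x + xy - y z**2  (Example 4.2.3)
    x, y, z = p
    return np.array([2 + y, x - z**2, -2*y*z])

P = np.array([1.0, 2.0, 2.0])
u = np.array([2/3, 1/3, 2/3])            # unit vector

print(grad_F(P) @ u)                     # directional derivative D_u F(P) = -11/3
print(np.linalg.norm(grad_F(P)))         # maximum rate of change ||grad F(P)||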
4.2.1 Level Surfaces, Tangent Planes and Normal Lines

The gradient of a scalar field can be used to find equations of tangent planes and equations of normal lines of a level surface defined by the scalar field at a given point.

Let F be a function of three variables and c be a number. The set of points (𝑥, 𝑦, 𝑧) such that 𝐹(𝑥, 𝑦, 𝑧) = 𝑐 is called a level surface of F.

Example 4.2.5. Let 𝐹(𝑥, 𝑦, 𝑧) = 𝑥² + 𝑦² + 𝑧². If 𝑐 > 0, then the level surface 𝐹(𝑥, 𝑦, 𝑧) = 𝑐 is a sphere with radius √𝑐; if 𝑐 = 0, then the level surface is just the point (0, 0, 0); and if 𝑐 < 0, then the level surface is the empty set.

For example, if 𝑐 = 9, then the surface 𝐹(𝑥, 𝑦, 𝑧) = 9 is the sphere 𝑥² + 𝑦² + 𝑧² = 9 with radius 3 and center at the origin.

Let F be a scalar function of three variables, 𝑐 be a constant and S be the level surface given by 𝐹(𝑥, 𝑦, 𝑧) = 𝑐. Let 𝑃0 = (𝑥0, 𝑦0, 𝑧0) be a point on S. Assume that there are smooth curves on the surface S passing through 𝑃0. Then each such curve has a tangent vector at 𝑃0.

The plane containing these tangent vectors is called the tangent plane to the surface S at 𝑃0, and a vector orthogonal to this tangent plane at 𝑃0 is called a normal vector, or normal, to the surface S at 𝑃0. The line through 𝑃0 in the direction of the normal vector is called a normal line to the surface S at the point 𝑃0.

Therefore, to determine the equation of the tangent plane and normal line to a surface S at a given point P, we need a normal vector to the tangent plane, and for this purpose we have the following theorem.
Figure 4.2: Normal vector to a surface.

Theorem 4.2.4. Let F be a function of three variables and suppose that F and its first partial derivatives are continuous at a point P on the level surface S given by 𝐹(𝑥, 𝑦, 𝑧) = 𝑐. Suppose that ∇𝐹(𝑃) ≠ 0. Then ∇𝐹(𝑃) is normal to the level surface S at the point P.

Example 4.2.6. Find the equation of the tangent plane and normal line to the surface

    3𝑥⁴ + 3𝑦⁴ + 6𝑧⁴ = 12

at the point (1, 1, 1).

Solution

Let 𝐹(𝑥, 𝑦, 𝑧) = 3𝑥⁴ + 3𝑦⁴ + 6𝑧⁴. Then

    ∂𝐹/∂𝑥(𝑥, 𝑦, 𝑧) = 12𝑥³,  ∂𝐹/∂𝑦(𝑥, 𝑦, 𝑧) = 12𝑦³  and  ∂𝐹/∂𝑧(𝑥, 𝑦, 𝑧) = 24𝑧³,

which are continuous at the point (1, 1, 1), and hence ∇𝐹(1, 1, 1) = 12𝑖 + 12𝑗 + 24𝑘.

Since ∇𝐹(1, 1, 1) is normal to the plane, the equation of the tangent plane is given by

    12𝑥 + 12𝑦 + 24𝑧 = 12 + 12 + 24 = 48,

which is equivalent to 𝑥 + 𝑦 + 2𝑧 = 4, and the equation of the normal line is

    (𝑥, 𝑦, 𝑧) = (1, 1, 1) + 𝑡(1, 1, 2), 𝑡 ∈ R.
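The steps of Example 4.2.6 can be automated: evaluate ∇F at the point, then write the plane ∇F(P).((x, y, z) − P) = 0 and the line P + t∇F(P). A minimal sympy sketch (added for illustration only; the variable names are ours, not the notes'):

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
F = 3*x**4 + 3*y**4 + 6*z**4
P = (1, 1, 1)

n = sp.Matrix([F.diff(v) for v in (x, y, z)]).subs(dict(zip((x, y, z), P)))
print(n.T)                                    # normal vector (12, 12, 24)

plane = sp.Eq(n.dot(sp.Matrix([x, y, z]) - sp.Matrix(P)), 0)
print(sp.simplify(plane))                     # 12x + 12y + 24z = 48, i.e. x + y + 2z = 4

line = sp.Matrix(P) + t*n                     # normal line (1, 1, 1) + t(12, 12, 24)
print(line.T)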
4.3 Curves and Arc length

Let

    𝑥 = 𝑥(𝑡), 𝑦 = 𝑦(𝑡) and 𝑧 = 𝑧(𝑡)    (4.2)

be continuous functions of a real parameter 𝑡 over a closed interval [𝑎, 𝑏]. The points

    r(𝑡) = (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)),

for 𝑎 ≤ 𝑡 ≤ 𝑏, are said to constitute a curve C joining the endpoints r(𝑎) and r(𝑏), and (4.2) is called a parametrization of the curve. We call the functions 𝑥, 𝑦 and 𝑧 coordinate functions.

Figure 4.3: A curve with initial point r(a) and terminal point r(b).

We call a curve C that is parameterized by r(𝑡) = (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)), for 𝑎 ≤ 𝑡 ≤ 𝑏:

∙ continuous if each coordinate function is continuous;

∙ differentiable if each coordinate function is differentiable;

∙ closed if the initial and terminal points coincide, that is, (𝑥(𝑎), 𝑦(𝑎), 𝑧(𝑎)) = (𝑥(𝑏), 𝑦(𝑏), 𝑧(𝑏)), and if a curve is not closed it is called an arc;

∙ simple if 𝑎 < 𝑡1 < 𝑡2 < 𝑏 implies that (𝑥(𝑡1 ), 𝑦)𝑡1 ), 𝑧(𝑡1 )) ∕= (𝑥(𝑡2 ), 𝑦(𝑡2 ), 𝑧(𝑡2 )), in other 1. Straight line:
words, if it does not intersect itself; A straight line L through a point P with position vector in the direction of a constant vector
A can be represented as
∙ smooth if the coordinate functions have continuous derivatives which are never all zero for
the same value of 𝑡, that is, it possesses a tangent vector that varies continuously along 𝑟(𝑡) = 𝑃 + 𝑡𝐴 = (𝑣1 + 𝑡𝑎1 , 𝑣2 + 𝑡𝑎2 , 𝑣3 + 𝑡𝑎3 ), for all 𝑡 ∈ R,
the length of C.
where 𝑃 = (𝑣1 , 𝑣2 , 𝑣3 ), 𝐴 = (𝑎1 , 𝑎2 , 𝑎3 ).
∙ piecewise smooth if it has continuous tangent at all but finitely many points. Such a curve
Z
is a curve with a finite number of corner at which there is no tangent.

If C is a curve which is divided into smooth curves 𝐶1 , 𝐶2 , ..., 𝐶𝑛 such C begins with 𝐶1 ,
𝐶2 begins where 𝐶1 ends and so on, but at the point where 𝐶𝑖 and 𝐶𝑖+1 join, there may P

A
be no tangent in the resulting curve , then C is piecewise smooth curve and we write such L

a curve as
Y
𝐶 = 𝐶1 ⊕ 𝐶2 ⊕ ... ⊕ 𝐶𝑛 .

Z X

Figure 4.5: A line passing through a point and parallel to a given vector.
C

C2 C
3
2. Ellipse, circle:
The vector function:
C1
Y 𝑟(𝑡) = (𝑎 cos 𝑡, 𝑏 sin 𝑡, 0) (4.4)

represents an ellipse and is a circle if 𝑎 = 𝑏.


X
3. Circular helix:
The twisted curve represented by the vector function:
Figure 4.4: A piecewise smooth curve.
𝑟(𝑡) = (𝑎 cos 𝑡, 𝑎 sin 𝑡, 𝑐𝑡), (4.5)
If the tail of the position vector
𝑐 ∕= 0 is a circular helix.
r(𝑡) = 𝑥(𝑡)𝑖 + 𝑦(𝑡)𝑗 + 𝑧(𝑡)𝑘 (4.3)
Consider a curve C that is parameterized by r(𝑡) = (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)), for 𝑎 ≤ 𝑡 ≤ 𝑏. If it exists,
is fixed at the origin, then the head of 𝑟(𝑡) generates the curve as 𝑡 varies from 𝑎 to 𝑏. the derivative of r(𝑡0 ) for 𝑡0 ∈ (𝑎, 𝑏) is given by

Example 4.3.1. The following are examples of curves.


r(𝑡0 + ℎ) − r(𝑡0 )
r′ (t0 ) = lim
ℎ→0 ℎ
which is equal to

Z moves into a tangent vector to C at the point (x(𝑡0 ), y(𝑡0 ), z(𝑡0 )).

Hence the derivative 𝑟′ (𝑡) (if it exists) of the curve is called the tangent vector to the curve at
the point 𝑟(𝑡) and the equation of the tangent line to the curve C at point P is

𝑞(𝑠) = 𝑟 + 𝑠𝑟′ (4.6)


Y
Example 4.3.2. For the curve C given by 𝐹 (𝑡) = 2𝑡𝑖−𝑡2 𝑗 +4𝑡𝑘, the vector 𝐹 ′ (𝑡) = 2𝑖−2𝑡𝑘 +4𝑘
is tangent to the curve at the point (2𝑡, −𝑡2 , 4𝑡).
X
Definition 4.3.1. The length ℓ of the curve C which is given by the parametrization r(𝑡) =
Figure 4.6: A circular helix in three dimensional space. 𝑥(𝑡)𝑖 + 𝑦(𝑡)𝑗 + 𝑧(𝑡)𝑘 on [𝑎, 𝑏] is defined by

ℓ = ∫_𝑎^𝑏 √( r′(𝑡).r′(𝑡) ) 𝑑𝑡.

lim 𝑖+lim 𝑗+ lim


x(𝑡0 + ℎ) − x(𝑡0 ) y(𝑡0 + ℎ) − y(𝑡0 ) z(𝑡0 + ℎ) − z(𝑡0 )
𝑘 = 𝑥′ (𝑡0 )𝑖+𝑦 ′ (𝑡0 )𝑗+𝑧 ′ (𝑡0 )𝑘.
ℎ→0 ℎ ℎ→0 ℎ ℎ→0 ℎ If we replace 𝑏 (the fixed upper limit of integration ) by a variable 𝑡, 𝑎 ≤ 𝑡 ≤ 𝑏, the integral
becomes a function of 𝑡.
That is, r′(𝑡0) = 𝑥′(𝑡0)𝑖 + 𝑦′(𝑡0)𝑗 + 𝑧′(𝑡0)𝑘.

    𝑠(𝑡) = ∫_𝑎^𝑡 √( r′.r′ ) 𝑑𝜏,  where r′ = 𝑑r/𝑑𝜏,

Recall from calculus that, the derivative of a function at a point is the slope of a tangent line to and is called the arc length function.
the graph of the function at the point. Differentiating the arc length function gives us
𝑑𝑠/𝑑𝑡 = √( r′.r′ ) = ∥r′(𝑡)∥ = ∥v(𝑡)∥.
Z

Example 4.3.3. Let r(𝑡) = (𝑎 cos 𝑡, 𝑎 sin 𝑡, 𝑐𝑡), 𝑐 ∕= 0. represent circular helix.
r(t0)
Then r′ (𝑡) = (−𝑎 sin 𝑡, 𝑎 cos 𝑡, 𝑐) and

r′.r′ = (−𝑎 sin 𝑡, 𝑎 cos 𝑡, 𝑐).(−𝑎 sin 𝑡, 𝑎 cos 𝑡, 𝑐) = 𝑎² sin² 𝑡 + 𝑎² cos² 𝑡 + 𝑐² = 𝑎² + 𝑐².


r(t0+h)
r(t0+h)−r(t)
Hence the arc length of the circular helix is:

    𝑠(𝑡) = ∫_0^𝑡 √( r′.r′ ) 𝑑𝜏 = ∫_0^𝑡 √(𝑎² + 𝑐²) 𝑑𝜏 = 𝑡 √(𝑎² + 𝑐²).
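The arc length formula can be sanity-checked numerically. The snippet below (an added illustration; numpy assumed, and the values a = 2, c = 3, t = 5 are chosen only for the check) approximates ∫_0^t ∥r′(τ)∥ dτ for the helix and compares it with t√(a² + c²).

import numpy as np

a, c = 2.0, 3.0
t_end = 5.0

tau = np.linspace(0.0, t_end, 200001)
# r'(tau) = (-a sin tau, a cos tau, c), so ||r'(tau)|| = sqrt(a**2 + c**2)
speed = np.sqrt((a*np.sin(tau))**2 + (a*np.cos(tau))**2 + c**2)

print(np.trapz(speed, tau))          # numerical arc length
print(t_end*np.sqrt(a**2 + c**2))    # closed form t*sqrt(a**2 + c**2)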

Figure 4.7: Derivative in diagram.


4.4 Tangent, Curvature and Torsion
Consider the figure above. As ℎ → 0, the vector r(t0 + h) − r(t0 ) moves toward r(𝑡0 ) along
the curve C and the vector Let F(𝑡) = 𝑥(𝑡)𝑖 + 𝑦(𝑡)𝑗 + 𝑧(𝑡)𝑘 be the position vector of a curve C for 𝑎 ≤ 𝑡 ≤ 𝑏. Assume that
1 the coordinate functions 𝑥, 𝑦 and 𝑧 are twice continuously differentiable.

(r(t0 + h) − r(t0 ))

If a particle is moving along the curve C with a position vector F(𝑡) = 𝑥(𝑡)𝑖 + 𝑦(𝑡)𝑗 + 𝑧(𝑡)𝑘, then Consider the relation
𝑑𝑇 𝑑𝑇 𝑑𝑡 𝑑𝑇 /𝑑𝑡
the the velocity v(𝑡) of the particle at time t is: = . = .
𝑑𝑆 𝑑𝑡 𝑑𝑆 𝑑𝑆/𝑑𝑡
v(𝑡) = F′ (𝑡) But 𝑑𝑆/𝑑𝑡 = ∥F′ (𝑡)∥ and hence we get
1
and the speed 𝑣(𝑡) of of the particle is the norm of the velocity, i.e 𝐾(𝑡) = ∥T′ (𝑡)∥
∥F′ (𝑡)∥

𝑣(𝑡) = ∥v(𝑡)∥ = ∥F′ (𝑡)∥, which is a function of t.

which is the rate of change of the distance covered by the particle along the curve with respect Example 4.4.1. Curvature of a line at any point is zero.
to the time and the acceleration a(𝑡) of the moving particle is the rate of change of the velocity To see this, let 𝑙 be a line that passes through a point 𝑃 (𝑥0 , 𝑦0 , 𝑧0 ) with directional vector
with respect to time, i.e. 𝐴 = (𝑎, 𝑏, 𝑐). Then the parametric equation of 𝑙 is given by
a(𝑡) = v′ (𝑡) = F′′ (𝑡).
F(𝑡) = (𝑥0 + 𝑡𝑎)𝑖 + (𝑦0 + 𝑡𝑏)𝑗 + (𝑧0 + 𝑡𝑐)𝑘, 𝑡 ∈ R.
If F′ (𝑡) ∕= 0, then the vector F′ (𝑡) is tangent to the curve C. Let T(𝑡) be a unit vector in the
Then F′ (𝑡) = 𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘 and
direction of F′ (𝑡), i.e.
1 1 1
T(𝑡) = F′ (𝑡) T(𝑡) = F′ (𝑡) = √ (𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘).
𝑎2 + 𝑏2 + 𝑐2
∥F′ (𝑡)∥ ∥F′ (𝑡)∥

Z This implies that, T′ (𝑡) = 0 for all t and hence 𝐾(𝑡) = 0 for all t. This is clear from the fact
that a particle moving on a straight line does not change its direction.
r(t)

Example 4.4.2 (Curvatures of ellipses and circles). Recall that the vector function
T

r(𝑡) = (𝑎 cos 𝑡, 𝑏 sin 𝑡, 0),


Y
𝑡∈R

represents an ellipse and it represents a circle if 𝑎 = 𝑏 in space. Then r′ (𝑡) = (−𝑎 sin 𝑡, 𝑏 cos 𝑡, 0)
X √
and ∥r′ (𝑡)∥ = 𝑎2 sin2 𝑡 + 𝑏2 cos2 𝑡. Therefore

Figure 4.8: Tangent unit vectors 1 1


T(𝑡) = r′ (𝑡) = √ (−𝑎 sin 𝑡, 𝑏 cos 𝑡, 0).
∥r′ (𝑡)∥ 𝑎2 sin2 𝑡 + 𝑏2 cos2 𝑡
Then as t varies, T turns with bending of the curve, but T is a unit vector and hence the length
If 𝑎 = 𝑏, then T(𝑡) = (− sin 𝑡, cos 𝑡, 0). This implies T′ (𝑡) = (− cos 𝑡, − sin 𝑡, 0) and hence the
of T remains constant.
curvature of the circle is 𝐾(𝑡) = 1/𝑎.
Definition 4.4.1. Let C be a smooth curve with parametrization F(𝑡) such that F(𝑡) is differ-
Remark 4.4.2. If C is a curve that is traced-out by a vector field r(𝑡), then the curvature K of
entiable. The norm of the rate of change of the unit vector T(𝑡) with respect to the arc length
the curve C is given by
function S is called the curvature K of the curve C. That is,
𝐾(𝑡) = ∥r′(𝑡) × r′′(𝑡)∥ / ∥r′(𝑡)∥³.
Example 4.4.3. Find the curvature of the helix
𝐾(𝑆) = ∥ 𝑑𝑇/𝑑𝑆 ∥.

r(𝑡) = (𝑎 cos 𝑡)𝑖 + (𝑎 sin 𝑡)𝑗 + 𝑏𝑡𝑘,



where 𝑎, 𝑏 ≥ 0 and 𝑎2 + 𝑏2 ∕= 0. N Z
T

First r′ (𝑡) = (−𝑎 sin 𝑡)𝑖 + (𝑎 cos 𝑡)𝑗 + 𝑏𝑘 and r′′ (𝑡) = (−𝑎 cos 𝑡)𝑖 − (𝑎 sin 𝑡)𝑗. Then

r(t)
∥r′(𝑡)∥ = √(𝑎² cos² 𝑡 + 𝑎² sin² 𝑡 + 𝑏²) = √(𝑎² + 𝑏²)

and Y
𝑗
� �
� �
� 𝑖 𝑘�

X
� �
r′ (𝑡) × r′′ (𝑡) = �� −𝑎 sin 𝑡 𝑎 cos 𝑡 𝑏 �� = 𝑎𝑏 sin 𝑡𝑖 + 𝑎𝑏 cos 𝑡𝑗 + 𝑎2 𝑘,


� �

Figure 4.9: Unit tangent vector and principal unit normal vector to a curve.
�−𝑎 cos 𝑡 −𝑎 sin 𝑡 0�

which implies ∥r′(𝑡) × r′′(𝑡)∥ = 𝑎 √(𝑎² + 𝑏²). Therefore the curvature 𝐾(𝑡) of the helix is

    𝐾(𝑡) = ∥r′(𝑡) × r′′(𝑡)∥ / ∥r′(𝑡)∥³ = 𝑎 / (𝑎² + 𝑏²).

Definition 4.4.3. The unit vector B = 𝑇 × 𝑁 is called the binormal vector of the curve 𝐶

In the case of plane curves, that is, graph of functions of the form 𝑦 = 𝑓 (𝑥) can be considered trace out by the vector field r(𝑡).

as curves traced out by a vector field r(𝑡) = 𝑡𝑖 + 𝑓 (𝑡)𝑗. Here the 𝑘 𝑡ℎ component is considered
Now, by using the rule of differentiation we have
to be zero. Therefore r′ (𝑡) = 𝑖 + 𝑓 ′ (𝑡)𝑗 and r′′ (𝑡) = 𝑓 ′′ (𝑡)𝑗 and then the curvature of this curve
𝑑B 𝑑 𝑑T 𝑑N
is given by = .
𝑑𝑆 𝑑𝑆 𝑑𝑆 𝑑𝑆
(T × N) = ×N + T×
� � � �

𝐾(𝑡) = ∣𝑓′′(𝑡)∣ / (1 + (𝑓′(𝑡))²)^{3/2},

since r′(𝑡) × r′′(𝑡) = 𝑓′′(𝑡)𝑘.
But 𝑑T/𝑑𝑆 × N = 0,

Example 4.4.4. Find the curvature of the parabola 𝑦 = 𝑎𝑥2 + 𝑏𝑥 + 𝑐, where 𝑎 ∕= 0. since they are of the same direction. This implies
The vector field that traces out the parabola is given by r(𝑡) = 𝑡𝑖 + 𝑓(𝑡)𝑗, where 𝑓(𝑥) = 𝑎𝑥² + 𝑏𝑥 + 𝑐.
𝑑B/𝑑𝑆 = T × 𝑑N/𝑑𝑆,
Then 𝑓 ′ (𝑡) = 2𝑎𝑡 + 𝑏 and 𝑓 ′′ (𝑡) = 2𝑎. Therefore
𝑑B 𝑑B
which implies 𝑑𝑆
⊥ 𝑇 and since B is vector of constant norm we get that 𝑑𝑆
⊥ B.
𝐾(𝑡) = 2∣𝑎∣ / (1 + (2𝑎𝑡 + 𝑏)²)^{3/2}.
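The plane-curve curvature formula just used can be verified symbolically. The sketch below (added illustration, sympy assumed) builds r(t) = t i + f(t) j for f(t) = a t² + b t + c and reduces ∥r′ × r′′∥ / ∥r′∥³ to the stated expression.

import sympy as sp

t, a, b, c = sp.symbols('t a b c', real=True)
f = a*t**2 + b*t + c

r = sp.Matrix([t, f, 0])                 # plane curve embedded in space
r1, r2 = r.diff(t), r.diff(t, 2)

K = r1.cross(r2).norm() / r1.norm()**3
print(sp.simplify(K))                    # 2*|a| / (1 + (2*a*t + b)**2)**(3/2)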
𝑑B 𝑑B
Given a curve C which is parameterized by the the position vector r(𝑡), we have a unit tangent Hence, the vector 𝑑𝑆
is parallel to N. This implies that 𝑑𝑆
= −𝜏 𝑁 for some constant 𝜏 (the

vector T at a point where the coordinate functions are differentiable. Now we are looking to get negative sign is traditional). Here the scalar 𝜏 is called the torsion along the curve and from

a unit normal vector to the curve at a point where the coordinate functions are differentiable. N. 𝑑B
𝑑𝑆
= −𝜏 N.N = −𝜏.1 we have
𝑑B
.N.
𝑑𝑆
𝜏 =−

Since T has constant length, 𝑑T


𝑑𝑠
is orthogonal to T. At a point where 𝐾(𝑆) ∕= 0, the vector Unlike 𝐾, which is always positive,𝜏 can be positive, negative or zero.

1 𝑑𝑇 𝑇 ′ (𝑡) 𝑟′′ (𝑆) Since B, T and N are mutually orthogonal, they are linearly independent. Hence any vector in
𝑁= . = =
𝐾 𝑑𝑆 ∥𝑇 ′ (𝑡)∥ ∥𝑟(𝑆)∥ R3 can be represented as a linear combination of these vectors.
is a unit vector parallel to 𝑇 ′ (𝑡) and hence normal to the curve and it is called principal unit
normal vector for a curve 𝐶. If B′ , T′ and N′ exist, then we get the following:

2. 𝑐𝑢𝑟𝑙𝐹 = 2𝑦𝑖 − 2𝑥𝑗 − 3𝑥𝑘, which is a vector in R³.


T′ = 𝐾N

N′ = −𝐾T + 𝜏 B Let ∇ be the operator defined from the set of scalar fields of three variables into the set of vectors

B′ = −𝜏 N in R3 by
∂ ∂ ∂
and this formula is called Frenet formula. 𝑖+ 𝑗 + 𝑘.
∂𝑥 ∂𝑦 ∂𝑧
∇=
∂ ∂ ∂
Example 4.4.5. Let F(𝑡) = 𝑡2 𝑖 − 2𝑡𝑗 + 𝑡𝑘. Find the curvature, principal unit vector, binomial If F is a scalar field of three variables, then the products ∂𝑥
(𝐹 ), ∂𝑦 (𝐹 ) and ∂𝑧
(𝐹 ) are defined to
∂𝐹 ∂𝐹 ∂𝐹
vector of the curve C with position vector F and the torsion along the curve. be ,
∂𝑥 ∂𝑦
and ∂𝑧
respectively.

Remark 4.5.2. The ∇ operator and gradient, divergence and curl.


Solution
√ 1. The product of ∇ and a scalar field F in the given order is the gradient of F, that is,
F′ (𝑡) = 2𝑡𝑖 − 2𝑗 + 𝑘 and then ∥F′ (𝑡)∥ = 4𝑡2 + 5. This implies
∂ ∂ ∂ ∂𝐹 ∂𝐹 ∂𝐹
1 1 𝑖+ + 𝐹 = + + = 𝑔𝑟𝑎𝑑𝐹.
∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧
∇𝐹 =
𝑇 (𝑡) =
� �

𝐹 ′ (𝑡) = √ (2𝑡𝑖 − 2𝑗 + 𝑘).


∥𝐹 ′ (𝑡)∥ 4𝑡2 + 5
.

4.5 Divergence and Curl 2. The product of ∇ and a vector field F in the given order is the divergence of F , that is, if
𝐹 = 𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘, then
Recall that, the gradient operator produces a vector field from a scalar field. We will discuss two ∂ ∂ ∂ ∂𝑓 ∂𝑔 ∂ℎ
𝑖+ 𝑗 + 𝑘 .(𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘) = + + = 𝑑𝑖𝑣𝐹.
other important vector operations. One produces a scalar field from a vector field and the other ∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧
∇.𝐹 =
� �

produces a vector field from a vector field.


Here even though ∇.F = 𝑑𝑖𝑣F this notation is not directly equivalent to the scalar (dot)
3 3 product. This is because
∂𝑓 ∂𝑔 ∂ℎ
Definition 4.5.1. Let 𝐹 : R → R be a differentiable vector field given by
+ +
∂𝑥 ∂𝑦 ∂𝑧
∇.F =
𝐹 (𝑥, 𝑦, 𝑧) = 𝑓 (𝑥, 𝑦, 𝑧)𝑖 + 𝑔(𝑥, 𝑦, 𝑧)𝑗 + ℎ(𝑥, 𝑦, 𝑧)𝑘.
which is a real number depending on values of 𝑓, 𝑔 and ℎ, whereas
1. The divergence of F, denoted by 𝑑𝑖𝑣𝐹 , is the scalar field defined by ∂ ∂ ∂
F.∇ = 𝑓 +𝑔 +ℎ
∂𝑓 ∂𝑔 ∂ℎ ∂𝑥 ∂𝑦 ∂𝑧
𝑑𝑖𝑣𝐹 = + + .
∂𝑥 ∂𝑦 ∂𝑧 which is an operator.

2. The curl of F, denoted by 𝑐𝑢𝑟𝑙𝐹 , is the vector field defined by 3. The cross product of ∇ and a vector field F is the curl of F, that is, if 𝐹 = 𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘,
∂ℎ ∂𝑔 ∂𝑓 ∂ℎ ∂𝑔 ∂𝑓 then
𝑐𝑢𝑟𝑙𝐹 = 𝑖+ 𝑗+ 𝑘.
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
− − −
� � � � � �

𝑗

∂ℎ ∂𝑔 ∂𝑓 ∂ℎ ∂𝑔 ∂𝑓
𝑖 + 𝑗 + 𝑘 = 𝑐𝑟𝑢𝑙𝐹.
� �

∂𝑦
� �

∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
Example 4.5.1. Let 𝐹(𝑥, 𝑦, 𝑧) = 3𝑥𝑦𝑖 − 2𝑦𝑧𝑗 + 𝑥²𝑘. Then 𝑓 = 3𝑥𝑦, 𝑔 = −2𝑦𝑧, ℎ = 𝑥² and
∂𝑓 ∂𝑓 ∂𝑓 ∂𝑔 ∂𝑔 ∂𝑔 ∂ℎ ∂ℎ ∂ℎ
�𝑖 𝑘� � � � � � �

hence = 3𝑦, = 3𝑥, = 0, = 0, = 2𝑥, = 0 and = 0. 𝑔 ℎ


� �

∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧
= −2𝑧, = −2𝑦,
�∂ ∂ � =
� ∂𝑥 ∂𝑧 �

Therefore
� �
�𝑓 �

4. The physical significance of 𝑑𝑖𝑣F at point P is that it describes the out flow of F per unit
1. 𝑑𝑖𝑣𝐹 = 3𝑦 − 2𝑧, which is a scalar in R. volume at P.
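The divergence and curl of Example 4.5.1 can be reproduced mechanically. A short sketch (added here for illustration; sympy's vector module is an assumed dependency):

import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

F = 3*x*y*N.i - 2*y*z*N.j + x**2*N.k     # vector field of Example 4.5.1

print(divergence(F))                     # 3*y - 2*z
print(curl(F))                           # 2*y*i - 2*x*j - 3*x*k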

Let 𝐺 be a continuous scalar field with continuous first and second partial derivatives. Then
��
∂𝐹 ∂𝐺 ∂𝐺
���� �� grad𝐺 = 𝑖+ 𝑗+ 𝑘
∂𝑥 ∂𝑦 ∂𝑧
�������
� and
𝑗
∂𝐹 ∂𝐺 ∂𝐺 ∂ ∂ ∂
𝑖 𝑗+
� �
� �

∂𝑥 ∂𝑦 ∂𝑧
∇ × (∇𝐺) = ∇ ×
� � 𝑖 𝑘��
� �
𝑘 = �� ∂𝑥 ∂𝑦 ∂𝑧 ��

2 2
∂ 𝐺 ∂ 𝐺 ∂ 𝐺 ∂ 𝐺 ∂ 𝐺 ∂ 2𝐺
� ∂𝐺 ∂𝐺 ∂𝐺 �

= 𝑖+ 𝑗+ 𝑘.
� ∂𝑥 ∂𝑦 ∂𝑧 �

∂𝑦∂𝑧 ∂𝑧∂𝑦 ∂𝑧∂𝑥 ∂𝑥∂𝑧 ∂𝑥∂𝑦 ∂𝑦∂𝑥


− − −
� 2 � � 2 � � 2 �

Figure 4.10: Geometrical Interpretation of curl.


But by assumption the function G is continuous with continuous first and second partial derivatives

Consider a particle moving around a circle. The rate of change the angular position of the particle and hence the mixed partial derivatives are equal, that is,

is called the angular speed 𝜔. ∂ 2𝐺 ∂ 2𝐺 ∂ 2𝐺 ∂ 2𝐺 ∂ 2𝐺 ∂ 2𝐺


= , = and = .
Consider the figure above. Then ∂𝑦∂𝑧 ∂𝑧∂𝑦 ∂𝑧∂𝑥 ∂𝑥∂𝑧 ∂𝑥∂𝑦 ∂𝑦∂𝑥
𝑑𝜃
𝜔= = lim .
△𝜃
Therefore ∇ × (∇𝐺) = 0.
𝑑𝑡 △𝑡→0 △𝑡
For any point (𝑥, 𝑦, 𝑧) on the rotating object, let r = 𝑥𝑖 + 𝑦𝑗 + 𝑧𝑘. Then the speed 𝑣 of the
Let F be a continuous vector field given by 𝐹 = 𝑓 𝑖+𝑔𝑗 +ℎ𝑘 such that 𝑓, 𝑔 and ℎ have continuous
particle is, by definition,
𝑅△𝜃 first and second partial derivatives. Then
𝑣 = lim = lim = 𝑅𝑤,
△𝑆
△𝑡→0
△𝑡 △𝑡→0 △𝑡
𝑗
∂ ∂ ∂
where 𝑅 is the radius of the circle. But 𝑅 = ∥r∥ sin 𝜃. and the angular velocity Ω of the moving
𝑖+ ∂ ∂
� �

particle has magnitude 𝜔 and is in the same direction as a vector drawn from the origin to the ∂𝑦
� �

∂𝑥 ∂𝑦 ∂𝑧
∇.(∇ × 𝐹 ) =
� � �𝑖 𝑘�

𝑔
� �

center of the circle, pointing in the positive direction of advance of a right-hand screw when
𝑗 + 𝑘 . �� ∂𝑥 ∂ �
∂𝑧 �

turned in the same sense as the rotation of the particle as shown above in the figure.
� �

∂ ∂ℎ ∂𝑔 ∂ ∂𝑓 ∂ℎ ∂ ∂𝑔 ∂𝑓
�𝑓 ℎ�

= + +
∂𝑥 ∂𝑦 ∂𝑧 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑧 ∂𝑥 ∂𝑦
− − −
� � � � � �

Therefore, the tangential velocity v is given by v = Ω × r. But, if Ω = 𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘, then ∂ 2ℎ ∂ 2𝑔 ∂ 2𝑓 ∂ 2ℎ ∂ 2𝑔 ∂ 2𝑓


= + + = 0,
∂𝑥∂𝑦 ∂𝑥∂𝑧 ∂𝑦∂𝑧 ∂𝑦∂𝑥 ∂𝑧∂𝑥 ∂𝑧∂𝑦
− − −
v = Ω × r = (𝑏𝑧 − 𝑐𝑦)𝑖 + (𝑐𝑥 − 𝑎𝑧)𝑗 + (𝑎𝑦 − 𝑏𝑥)𝑘.
since the component functions 𝑓, 𝑔 and ℎ have continuous first and second partial derivatives.
But
𝑗
Therefore we have proved the following relationships between gradient, divergence and curl that
� �

∂/∂𝑦
� �

are fundamental to vector analysis.


� 𝑖 𝑘 �
� �
∇ × v = �� ∂/∂𝑥 ∂/∂𝑧 �� = 2𝑎𝑖 + 2𝑏𝑗 + 2𝑐𝑘 = 2Ω.

This implies Theorem 4.5.3. Let F be a continuous vector field given by 𝐹 = 𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘 such that 𝑓, 𝑔
� �

1 1
�𝑏𝑧 − 𝑐𝑦 𝑐𝑥 − 𝑎𝑧 𝑎𝑦 − 𝑏𝑥�

and ℎ have continuous first and second partial derivatives and 𝐺 be a continuous scalar field
2 2
Ω = (∇ × v) = 𝑐𝑟𝑢𝑙v ⇐⇒ 𝑐𝑟𝑢𝑙v = 2Ω.
with continuous first and second partial derivatives. Then curl grad G = 0, the zero vector and
Hence, the Curl of the velocity of the particle is two times its angular velocity.
div curl F = 0, the number zero.

4.5.1 Potential 2. Let 𝑉 : R𝑛 → R3 , 𝑛 = 2, 3 be given by 𝑉 (𝑝) = (𝑉1 (𝑝), 𝑉2 (𝑝), 𝑉3 (𝑝)). Then V has a
potential function if and only if 𝑐𝑢𝑟𝑉 = ∇ × 𝑉 = 0 = (0, 0, 0).
Recall that, if a scalar field f is differentiable at every point D of its domain, then 𝑉 (𝑃 ) = ∇𝑓 (𝑃 )
defines a vector field V on D. Example 4.5.4. 1. Let 𝑉 (𝑥, 𝑦, 𝑧) = (2 + 𝑦, 𝑥 − 𝑧 2 , −2𝑦𝑧). Then

Example 4.5.2. If 𝑓 (𝑥, 𝑦) = 3𝑥2 + 𝑥𝑦 + 𝑦 3 , then ∇𝑓 (𝑥, 𝑦) = 𝑉 (𝑥, 𝑦) = (6𝑥 + 𝑦, 𝑥 + 3𝑦 2 ) is a


∂ ∂
∂𝑉3 ∂𝑉2 ∂𝑉1 ∂𝑉3 ∂𝑉2 ∂𝑉1
𝑖 + 𝑗+( )𝑘
� �

vector field. Here the function f is called a potential of the vector V. ∂𝑦


� �

∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
− − −
� 𝑖 𝑖 𝑘 � � � � �
� �
∇ × 𝑉 = �� ∂𝑥 ∂ � =
∂𝑧 �

However, not every vector field has a potential, that is, not every vector field is a gradient of
� �
� 𝑉 1 𝑉 2 𝑉3 �

= (−2𝑧 − (−2𝑧))𝑖 + (0 − 0)𝑗 + (1 − 1)𝑘 = (0, 0, 0).


some scalar field.
Therefore, V has a potential.
Example 4.5.3. The velocity field
2. Let 𝑉 (𝑥, 𝑦) = (−𝑐𝑦, 𝑐𝑥), 𝑐 ∈ R∖0. Then since
𝑉 (𝑥, 𝑦) = (−𝑐𝑦, 𝑐𝑥), 𝑐 ∈ R∖{0} ∂𝑉1 ∂𝑉2
and (𝑥, 𝑦) = 𝑐,
∂𝑦 ∂𝑥
(𝑥, 𝑦) = −𝑐
is not the gradient of any function 𝑓 .
but −𝑐 ∕= 𝑐, ∀𝑐 ∕= 0 and hence 𝑉 (𝑥, 𝑦) has no potential.

Suppose the contrary, i.e. suppose that there exists a function For the second question we illustrate the procedure by the considering the following two examples.

𝑓 : R2 → R such that ∇𝑓 = 𝑉. 1. Let 𝑉 (𝑥, 𝑦) = (6𝑥 + 𝑦, 𝑥 + 3𝑦 2 ) be given. Then if there is a potential function f for V it
must satisfy
That is, (𝑓𝑥 , 𝑓𝑦 ) = 𝑉. But 𝑓𝑥 (𝑥, 𝑦) = −𝑐𝑦 and 𝑓𝑦 (𝑥, 𝑦) = 𝑐𝑥. This implies 𝑓 (𝑥, 𝑦) = −𝑐𝑥𝑦+𝐴(𝑦), 𝑓𝑥 (𝑥, 𝑦) = 6𝑥 + 𝑦 and 𝑓𝑦 (𝑥, 𝑦) = 𝑥 + 3𝑦 2 .
where A is a function of 𝑦 only and 𝑓𝑦 (𝑥, 𝑦) = −𝑐𝑥+𝐴′ (𝑦) = 𝑐𝑥. This gives us −2𝑐𝑥+𝐴′ (𝑦) = 0
This implies
and hence 𝐴′ (𝑦) = 2𝑐𝑥. But this is a contradiction since A is a function of 𝑦 only.
𝑓 (𝑥, 𝑦) = (6𝑥 + 𝑦)𝑑𝑥 = 3𝑥2 + 𝑥𝑦 + 𝐴(𝑦),

Let V be a vector field. Let us ask the following two questions. were 𝐴(𝑦) is constant with respect to x(or, it is a function of y only).
Then from 𝑓𝑦 (𝑥, 𝑦) = 𝑥 + 3𝑦 2 , we get 𝑓𝑦 (𝑥, 𝑦) = 𝑥 + 𝐴′ (𝑦) = 𝑥 + 3𝑦 2 , which implies that
1. How do we check whether V has a potential or not?
𝐴′ (𝑦) = 3𝑦 2 and hence
2. How do we determine the potential f, given V?
𝐴(𝑦) = 3𝑦 2 𝑑𝑦 = 𝑦 3 + 𝐶, where C is a constant.

Now let us answer the first question. How do we check whether a vector field has a potential or
Therefore the scalar field 𝑓 (𝑥, 𝑦) = 3𝑥2 + 𝑥𝑦 + 𝑦 3 + 𝐶 is the potential of the vector field
not? The following proposition will answer this question.
𝑉 (𝑥, 𝑦) = (6𝑥 + 𝑦, 𝑥 + 3𝑦 2 ).
Proposition 4.5.4. (Test for Existence of a Potential)
2. Let V be a vector field given by 𝑉 (𝑥, 𝑦, 𝑧) = (2 + 𝑦, 𝑥 − 𝑧 2 , −2𝑦𝑧).
1. Let 𝑉 : R2 → R2 be a vector field given by 𝑉 (𝑝) = (𝑉1 (𝑝), 𝑉2 (𝑝)). Then V has a potential
Then if f is a potential, we must have 𝑓𝑥 (𝑥, 𝑦, 𝑧) = 2 + 𝑦, 𝑓𝑦 (𝑥, 𝑦, 𝑧) = 𝑥 − 𝑧 2 and
function if and only if
𝑓𝑧 (𝑥, 𝑦, 𝑧) = −2𝑦𝑧. We integrate 𝑓𝑥 (𝑥, 𝑦, 𝑧) = 2 + 𝑦 with respect to 𝑥 and get
∂𝑉1 ∂𝑉2
(𝑝) = (𝑝) for all p in the Domain of V.
∂𝑦 ∂𝑥 𝑓 (𝑥, 𝑦, 𝑧) = (2 + 𝑦)𝑑𝑥 = 2𝑥 + 𝑦𝑥 + 𝐴(𝑦, 𝑧).


But 𝑓𝑦 (𝑥, 𝑦, 𝑧) = 𝑥 − 𝑧 2 and This page is left blank intensionally.

∂𝐴
𝑓𝑦 (𝑥, 𝑦, 𝑧) = 𝑥 + (𝑦, 𝑧)
∂𝑦
which implies that
∂𝐴
𝑥+
∂𝑦
(𝑦, 𝑧) = 𝑥 − 𝑧 2

and hence
∂𝐴(𝑦, 𝑧)
∂𝑦
= −𝑧 2 .

We integrate this with respect to 𝑦 to get 𝐴(𝑦, 𝑧) = −𝑧 2 𝑦 + 𝐵(𝑧), where 𝐵 is a function


of 𝑧 only and hence 𝑓(𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑦𝑥 − 𝑧²𝑦 + 𝐵(𝑧).
Since 𝑓𝑧 (𝑥, 𝑦, 𝑧) = −2𝑦𝑧, we get 𝑓𝑧 (𝑥, 𝑦, 𝑧) = −2𝑧𝑦 + 𝐵 ′ (𝑧) = −2𝑦𝑧 which implies that
𝐵 ′ (𝑧) = 0, which means 𝐵(𝑧) = 𝐶, where C is a constant.
Therefore, 𝑓 (𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑦𝑥 − 𝑧 2 𝑦 + 𝐶.
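The integration procedure used above can also be checked with sympy, whose scalar_potential raises an error when the field is not conservative. (An added sketch; the field is the one treated above, V = (2 + y, x − z², −2yz).)

import sympy as sp
from sympy.vector import CoordSys3D, curl, scalar_potential

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

V = (2 + y)*N.i + (x - z**2)*N.j - 2*y*z*N.k

print(curl(V))                      # zero vector, so a potential exists
print(scalar_potential(V, N))       # 2*x + x*y - y*z**2, the potential up to a constant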

4.6 Exercises

Chapter 5 Figure 5.1: Oriented Curves.

is continuous and nonzero at every point of C.


Line and Surface Integrals Definition 5.1.1. The Line Integral of a vector valued function 𝐹 (𝑟) over a curve C parame-
terized by r(𝑡) = x(𝑡)𝑖 + y(𝑡)𝑗 + z(𝑡)𝑘 for 𝑎 ≤ 𝑡 ≤ 𝑏 is defined by
𝑑𝑟
𝐹 (𝑟).𝑑𝑟 = 𝐹 (𝑟(𝑡)). 𝑑𝑡, where 𝑑𝑟 = (𝑑𝑥, 𝑑𝑦, 𝑑𝑧).
𝐶 𝑎 𝑑𝑡
� � 𝑏

5.1 Line Integrals


When we write it componentwise, that is, if 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ), then it becomes:

Recall that, the integral


𝑏 𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = (𝐹1 𝑥′ + 𝐹2 𝑦 ′ + 𝐹3 𝑧 ′ )𝑑𝑡.
𝐶 𝐶 𝑎
𝑓 (𝑥)𝑑𝑥
� � � 𝑏

𝑎 The curve C is called path of integration.


of a continuous function 𝑓 represents the definite integral of the function 𝑓 over a closed interval
[𝑎, 𝑏] = {𝑥 : 𝑎 ≤ 𝑥 ≤ 𝑏}. Example 5.1.1. Let 𝐹 (𝑟) = (𝑦, 𝑥, 𝑧) and C be the helix 𝑟(𝑡) = (cos 𝑡, sin 𝑡, 3𝑡) for 0 ≤ 𝑡 ≤ 2𝜋.
Then 𝑟′ (𝑡) = (− sin 𝑡, cos 𝑡, 3) and
2𝜋
In the line integral we shall integrate a given function F along a curve, say C. So we need some 𝐹 (𝑟).𝑑𝑟 = 𝐹 (𝑟).𝑟′ (𝑡)𝑑𝑡 = (sin 𝑡, cos 𝑡, 3𝑡)(− sin 𝑡, cos 𝑡, 3)𝑑𝑡
preliminaries about curves. First, recall the points that we have raised about curves in Section 4.3. 𝐶 0 0
� � 2𝜋 �

2𝜋 2𝜋 2𝜋 2𝜋
2 2 2 2
= (− sin 𝑡 cos 𝑡 + 9𝑡)𝑑𝑡 = − sin 𝑡𝑑𝑡 + cos 𝑡𝑑𝑡 + 9𝑡𝑑𝑡
0 0 0 0
� � � �

Suppose C is a curve with parametrization


1 1 2𝜋 1 1 2𝜋 9 2 2𝜋 2
=
sin 2𝑡∣2𝜋 sin 2𝑡∣2𝜋
x = x(𝑡), y = y(𝑡), z = z(𝑡)) for 𝑎 ≤ 𝑡 ≤ 𝑏. 4 2
0 − 𝑡∣0 +
4 2 2
0 + 𝑡∣0 + 𝑡 ∣0 = 18𝜋

Example 5.1.2. Evaluate


Here the functions x, y and z are the coordinate functions, r(𝑎) is the initial point and r(𝑏) is the (𝑥𝑦𝑧𝑑𝑥 − cos(𝑥𝑦)𝑑𝑦 + 𝑦𝑑𝑧),
terminal point of C. In this case C is known as an oriented curve and the orientation is indicated 𝐿

by putting arrows along the path on the curve.


where L is the line segment from (0, 1, 1) to (2, 1, −3).

A curve C with parametrization


Solution
r(𝑡) = (x(𝑡), y(𝑡), z(𝑡)) = x(𝑡)𝑖 + y(𝑡)𝑗 + z(𝑡)𝑘
Parametric equation of L is given by 𝑥 = 2𝑡, 𝑦 = 1, 𝑧 = 1 − 4𝑡 for 0 ≤ 𝑡 ≤ 1 and 𝑑𝑥 = 2𝑑𝑡,
is called a smooth curve, if it has a unique tangent at each of its points. i.e. if r(𝑡) is differentiable 𝑑𝑦 = 0, 𝑑𝑧 = −4𝑑𝑡. Then the line integral is
and
𝑑𝑟
r′ (𝑡) = (𝑥𝑦𝑧𝑑𝑥 − cos(𝑥𝑦)𝑑𝑦 + 𝑦𝑑𝑧) = 2𝑡(1 − 4𝑡)(2) − cos(2𝑡)(0) + 1(−4) 𝑑𝑡
𝑑𝑡 𝐿 0
� � 1
( )

= ∫_0^1 (4𝑡 − 16𝑡² − 4) 𝑑𝑡 = [ 2𝑡² − (16/3)𝑡³ − 4𝑡 ]_0^1 = 2 − 16/3 − 4 = −22/3.
where C is the path of integration.

Remark 5.1.2. Let C be a curve with parametrization r on the closed interval [𝑎, 𝑏] and F be a The line integral (5.1) is said to be independent of path of integration in a domain D if

vector valued function defined on C. for every pair of endpoints A and B in D the integral (5.1) has the same value for all paths in D
that begin at A and end at B.
1. The integrand in the line integral is a scalar not a vector, because we take a dot (scalar)
product of two vectors, 𝐹 (𝑟(𝑡)).𝑟′ (𝑡).

2. If the integrand function F is a scalar valued function, the line integral will take the following Question: Which line integrals are independent of paths?
form.
The following theorem has an answer for this question.
𝑓 (𝑥, 𝑦, 𝑧, )𝑑𝑆 = 2
𝑓 (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)) (𝑟 (𝑡)) .𝑑𝑡 =
′ 𝑓 (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)) 𝑟′ (𝑡).𝑟′ (𝑡)𝑑𝑡
𝐶 𝑎 𝑎 Theorem 5.2.1. Let 𝐹1 , 𝐹2 and 𝐹3 be continuous functions in a set D and let 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ).
� � 𝑏 √ � 𝑏 √

A line integral (5.1) is independent of path in D if and only if 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ) is the gradient of
3. If the path of integration C is a closed curve, that is, if 𝑟(𝑎) = 𝑟(𝑏), then the line integral
some potential function f in D, i.e., if there exists a function f in D such that 𝐹 = ∇𝑓, which is
will be denoted by
equivalent to saying that
instead of .
𝐶 𝐶 ∂𝑓 ∂𝑓 ∂𝑓
� �

𝐹1 = , 𝐹2 = and 𝐹3 = .
Example 5.1.3 (Mass of a Helical wire). Determine the mass M of a wire that is in the shape ∂𝑥 ∂𝑦 ∂𝑧

Let F be a conservative vector valued function with potential function 𝑓 and let C be a curve
of a helical curve 𝐶 : 𝑟(𝑡) = (𝑎 cos 𝑡, 𝑎 sin 𝑡, 𝑏𝑡) 0 ≤ 𝑡 ≤ 2𝑛𝜋, 𝑛 ∈ N and that has a mass density
𝜎 = 𝑐𝑡 that varies along C.
with coordinate function 𝑥 = 𝑥(𝑡), 𝑦 = 𝑦(𝑡), 𝑧 = 𝑧(𝑡) for 𝑎 ≤ 𝑡 ≤ 𝑏. Then

∂𝑓 ∂𝑓 ∂𝑓
Solution 𝐹.𝑑𝑟 = 𝑑𝑥 + 𝑑𝑦 + 𝑑𝑧
𝐶 𝐶 ∂𝑥 ∂𝑦 ∂𝑧
� � � �

Recall that the mass M of the wire is given by ∂𝑓 𝑑𝑥 ∂𝑓 𝑑𝑦 ∂𝑓 𝑑𝑧


= + + 𝑑𝑡
𝑎 ∂𝑥 𝑑𝑡 ∂𝑦 𝑑𝑡 ∂𝑧 𝑑𝑡
� 𝑏� �

𝑀= 𝜎𝑑𝑆, where 𝑑𝑆 = 𝑟′ (𝑡).𝑟′ (𝑡)𝑑𝑡.


𝐶 𝑑

= 𝑓 (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡) 𝑑𝑡


But 𝑟′ (𝑡) = (−𝑎 sin 𝑡, 𝑎 cos 𝑡, 𝑏) and 𝑟′ (𝑡).𝑟′ (𝑡) = 𝑎2 sin2 𝑡 + 𝑎2 cos2 𝑡 + 𝑏2 = 𝑎2 + 𝑏2 . Therefore, 𝑎 𝑑𝑡
� 𝑏 � �

= 𝑓 (𝐵) − 𝑓 (𝐴),

𝑀 = ∫_𝐶 𝜎 𝑑𝑆 = ∫_0^{2𝑛𝜋} 𝑐𝑡 √( 𝑟′(𝑡).𝑟′(𝑡) ) 𝑑𝑡 = ∫_0^{2𝑛𝜋} 𝑐𝑡 √(𝑎² + 𝑏²) 𝑑𝑡 = 2𝑐𝑛²𝜋² √(𝑎² + 𝑏²),
where 𝑓(𝐵) = 𝑓(𝑥(𝑏), 𝑦(𝑏), 𝑧(𝑏)) and 𝑓(𝐴) = 𝑓(𝑥(𝑎), 𝑦(𝑎), 𝑧(𝑎)), the end points of the curve C.
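A quick numerical check of the helical-wire mass (an added sketch; the concrete values a = 1, b = 2, c = 0.5, n = 3 are chosen only for illustration, and numpy is assumed):

import numpy as np

a, b, c, n = 1.0, 2.0, 0.5, 3

t = np.linspace(0.0, 2*n*np.pi, 400001)
speed = np.sqrt(a**2 + b**2)            # ||r'(t)|| for the helix (a cos t, a sin t, b t)
mass = np.trapz(c*t*speed, t)           # integral of sigma ds with sigma = c t

print(mass)
print(2*c*n**2*np.pi**2*np.sqrt(a**2 + b**2))   # closed form 2 c n^2 pi^2 sqrt(a^2 + b^2)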

Therefore, we have the following theorem and some times it is called Fundamental Theorem
of Line Integrals
5.2 Line Integrals Independent of Path
Theorem 5.2.2. If the integral (5.1) is independent of path in D, then
Consider the line integral
𝐵
(𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = 𝑓 (𝐵) − 𝑓 (𝐴), where 𝐹 = ∇𝑓.
𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧), (5.1) 𝐴

𝑐 𝐶
� �

Example 5.2.1. Evaluate:



1. The integral 2. Let 𝐹1 = 𝑒𝑧 , 𝐹2 = 2𝑦 and 𝐹3 = 𝑥𝑒𝑧 . 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ) and


(2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧),
𝐶 𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = (𝑒𝑧 𝑑𝑥 + 2𝑦𝑑𝑦 + 𝑥𝑒𝑧 𝑑𝑧),

if C is a curve with initial point 𝐴 = (0, 0, 0) and terminal point 𝐵 = (2, 2, 2). 𝐶 𝐶 𝐶
� � �

where C is a curve with initial point 𝐴 = (0, 1, 0) and terminal point 𝐵 = (−2, 1, 0).
2. The integral
(𝑒𝑧 𝑑𝑥 + 2𝑦𝑑𝑦 + 𝑥𝑒𝑧 𝑑𝑧), But But
𝐶

if C is a curve with with initial point 𝐴 = (0, 1, 0) and terminal point 𝐵 = (−2, 1, 0). 𝑖 𝑘
∂ ∂ ∂
� �

∂𝑦 ∂𝑧
� �

∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
� 𝑖 � � � � �
� �

Solution
∇ × 𝐹 = �� ∂𝑥 � = ∂𝐹3 − ∂𝐹2 𝑖 + ∂𝐹1 − ∂𝐹3 𝑗 + ( ∂𝐹2 − ∂𝐹1 )𝑘 = 0.

� �
� 𝐹1 𝐹 2 𝐹3 �

1. Let 𝐹1 = 2𝑥, 𝐹2 = 2𝑦 and 𝐹3 = 4𝑧. Then 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ) and Then there exists a function 𝑓 such that 𝐹 = ∇𝑓 which implies
∂𝑓 ∂𝑓 ∂𝑓
𝐹1 = , 𝐹2 = and 𝐹3 =
𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = (2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧), ∂𝑥 ∂𝑦 ∂𝑧
𝐶 𝐶 𝐶
� � �

and hence by Fundamental Theorem of Line Integrals, we have


where C is a curve with initial point 𝐴 = (0, 0, 0) and terminal point 𝐵 = (2, 2, 2).
But (2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧) = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = 𝑓 (𝐵) − 𝑓 (𝐴).
𝐶 𝐶
� �

𝑖 𝑘
∂ ∂ ∂
Now, by using the same procedure as in Section ?? of the previous chapter, we can get
� �

∂𝑦 ∂𝑧
� �

∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥𝑒𝑧 + 𝑦 2 + 𝑘, where 𝑘 is a constant.


� 𝑖 � � � � �
� �
∇ × 𝐹 = �� ∂𝑥 � = ∂𝐹3 − ∂𝐹2 𝑖 + ∂𝐹1 − ∂𝐹3 𝑗 + ( ∂𝐹2 − ∂𝐹1 )𝑘 = 0.

� �
� 𝐹1 𝐹 2 𝐹3 �

Then there exists a function 𝑓 such that 𝐹 = ∇𝑓 which implies Therefore,


∂𝑓 ∂𝑓 ∂𝑓
𝐹1 = , 𝐹2 = and 𝐹3 =
(2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧) = 𝑓 (−2, 1, 0) − 𝑓 (0, 1, 0) = −2.
∂𝑥 ∂𝑦 ∂𝑧 𝐶

and hence by Fundamental Theorem of Line Integrals, we have The following remark is an immediate consequence of the Fundamental Theorem of Line Integrals.

(2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧) = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = 𝑓 (𝐵) − 𝑓 (𝐴). Remark 5.2.3. 1. The line integral (5.1) is independent of path in a domain D if and only if
𝐶 𝐶
its value around every closed path in D is zero.
� �

Now, by using the same procedure as in Section ?? of the previous chapter, we can get B

2 2 2
𝑓 (𝑥, 𝑦, 𝑧) = 𝑥 + 𝑦 + 2𝑧 + 𝑘, where 𝑘 is a constant.
C1
C 2

Therefore,

A
(2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧) = 𝑓(2, 2, 2) − 𝑓(0, 0, 0) = 4 + 4 + 8 = 16.
𝐶

Figure 5.2: Two different paths between two points
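Path independence is easy to see numerically: integrating F = (2x, 2y, 4z) along two different curves from (0, 0, 0) to (2, 2, 2) gives the same value f(2, 2, 2) − f(0, 0, 0) = 16. The sketch below is an added illustration (numpy assumed; the two sample paths are arbitrary choices).

import numpy as np

def line_integral(path, t):
    # integral of F . dr with F = (2x, 2y, 4z) along a sampled path
    x, y, z = path
    Fx, Fy, Fz = 2*x, 2*y, 4*z
    return np.trapz(Fx*np.gradient(x, t) + Fy*np.gradient(y, t) + Fz*np.gradient(z, t), t)

t = np.linspace(0.0, 1.0, 200001)
straight = (2*t, 2*t, 2*t)                   # straight segment from (0,0,0) to (2,2,2)
curved   = (2*t**2, 2*t, 2*t**3)             # another path with the same endpoints

print(line_integral(straight, t))            # 16
print(line_integral(curved, t))              # 16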



2. The line integral usually represents the work done by force F in the displacement of a body Remark 5.2.6. In the plane, R2 , the line integral 𝐶
𝐹 (𝑟).𝑑𝑟 = 𝐶
(𝐹1 𝑑𝑥+𝐹2 𝑑𝑦) and 𝑐𝑢𝑟𝑙𝐹 = 0
along path C. Hence if F has a potential function f, the line integral of F for displacement means
∫ ∫

∂𝐹2 ∂𝐹1
around any closed path is zero. = .
∂𝑥 ∂𝑦
In this case, the vector field F is called conservative, otherwise it is called nonconserva- Example 5.2.2. Show that the following integrands are exact and evaluate the integrals.
tive.
1.
(3, 𝜋2 )
Another way of checking the independence of path of a line integral is using exactness of a 𝑒𝑥 (cos 𝑦𝑑𝑥 − sin 𝑦𝑑𝑦)
differential form. Recall that a differential form (0,𝜋)

2. 𝐵
𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧
2𝑥𝑦𝑧² 𝑑𝑥 + (𝑥²𝑧² + 𝑧 cos 𝑦𝑧) 𝑑𝑦 + (2𝑥²𝑦𝑧 + 𝑦 cos 𝑦𝑧) 𝑑𝑧 ,
𝐴
� � �

is said to be exact in a domain D if there is a differentiable function f such that where 𝐴 = (0, 0, 1, ) and 𝐵 = (1, 𝜋4 , 2).
∂𝑓 ∂𝑓 ∂𝑓
𝐹1 = , 𝐹2 = and 𝐹3 = in D.
∂𝑥 ∂𝑦 ∂𝑧 Solution
Definition 5.2.4. A domain D is said to be simply connected if every closed curve in D can be
shrunk to any point in D. 1. Let 𝐹1 = 𝑒𝑥 cos 𝑦 and 𝐹2 = −𝑒𝑥 sin 𝑦. Then
∂𝐹1 ∂𝐹2
∂𝑦 ∂𝑥
= −𝑒𝑥 sin 𝑦 =

and hence the differential in the integral is exact.


Then let us find the function f.

MULTIPLY CONNECTED SIMPLY CONNECTED


𝑓 (𝑥, 𝑦) = 𝐹2 𝑑𝑦 = 𝑒𝑥 cos 𝑦 + 𝐴(𝑥)

Figure 5.3: Multiply and Simply connected regions. and 𝑓𝑥 = 𝑒𝑥 cos 𝑦 + 𝐴𝑥 = 𝑒𝑥 cos 𝑦 = 𝐹1 , which implies that 𝐴 = 𝐶, a constant.
Therefore, the potential function is 𝑓 (𝑥, 𝑦) = 𝑒𝑥 cos 𝑦 + 𝐶 and hence

2 𝜋
Theorem 5.2.5. Suppose 𝐹1 , 𝐹2 and 𝐹3 are continuous and having continuous first order partial
2
𝑒^𝑥 (cos 𝑦 𝑑𝑥 − sin 𝑦 𝑑𝑦) = 𝑓(3, 𝜋/2) − 𝑓(0, 𝜋) = 0 − (−1) = 1.
derivatives in a domain D and consider the line integral (0,𝜋)
� (3, 𝜋 )

𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) (5.2) 2. Let 𝐹1 = 2𝑥𝑦𝑧 2 , 𝐹2 = 𝑥2 𝑧 2 + 𝑧 cos 𝑦𝑧 and 𝐹3 = 2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧. Then since we have
(𝐹3)𝑦 = 2𝑥²𝑧 + cos 𝑦𝑧 − 𝑦𝑧 sin 𝑦𝑧 = (𝐹2)𝑧 , (𝐹1)𝑧 = 4𝑥𝑦𝑧 = (𝐹3)𝑥 and (𝐹2)𝑥 = 2𝑥𝑧² = (𝐹1)𝑦 ,
� �

1. If the line integral (5.2) is independent of path in D, then 𝐶𝑢𝑟𝑙𝐹 = 0. i.e. the differential in the integral is exact.
Then let us find the function f.
∂𝐹3 ∂𝐹2 ∂𝐹1 ∂𝐹3 ∂𝐹2 ∂𝐹1
= , = and = .
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦 𝑓 (𝑥, 𝑦, 𝑧) = 𝐹2 𝑑𝑦 = (𝑥2 𝑧 2 + 𝑧 cos 𝑦𝑧)𝑑𝑦 = 𝑥2 𝑧 2 𝑦 + 𝑧 sin 𝑦𝑧 + 𝐴(𝑥, 𝑧)
� �

2. If 𝐶𝑢𝑟𝑙𝐹 = 0 in D and if D is simply connected then the line integral (5.2) is independent
and 𝑓𝑥 = 2𝑥𝑧 2 𝑦 + 𝐴𝑥 (𝑥, 𝑧) = 2𝑥𝑦𝑧 2 = 𝐹1 , which implies 𝐴𝑥 = 0 and hence 𝐴 = 𝐵(𝑧).
of path in D.
Therefore 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥2 𝑧 2 𝑦 +sin 𝑦𝑧 +𝐵(𝑥) which means 𝑓𝑧 = 2𝑥2 𝑦𝑧 +𝑦 cos 𝑦𝑧 +𝐵 ′ (𝑧) =

2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧 that implies 𝐵 ′ (𝑧) = 0 and hence 𝐵 = 𝐶, a constant. Write 𝐶 = 𝐶1 ⊕ 𝐶2 , where 𝐶1 is the portion of the parabola and 𝐶2 is the line segment.
2 2
Therefore the potential function is 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥 𝑧 𝑦 + sin 𝑦 + 𝐶 and hence Parameterize 𝐶1 by 𝑥 = 𝑡, 𝑦 = 𝑡2 for 0 ≤ 𝑡 ≤ 2 and on 𝐶1 , 𝑑𝑥 = 𝑑𝑡, 𝑑𝑦 = 2𝑡𝑑𝑡. Therefore,
𝐵
32 72
(2𝑥𝑦𝑧 2 𝑑𝑥 + (𝑥2 𝑧 2 + 𝑧 cos 𝑦𝑧)𝑑𝑦 + (2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧)𝑑𝑧) (𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦) = +8= .
𝐴 𝐶1 0 5 2 0 5 5
� � � 2 �2
𝑡5 𝑡4 �

𝜋 𝜋
(𝑡4 + 2𝑡3 )𝑑𝑡 = ( + )�� =

Parameterize 𝐶2 by 𝑥 = 𝑡, 𝑦 = 2 for 2 ≤ 𝑡 ≤ 4 and on 𝐶2 , 𝑑𝑥 = 𝑑𝑡, 𝑑𝑦 = 0. Therefore,


4 4
= 𝑓 (𝐵) − 𝑓 (𝐴) = 1. .22 + (sin( × 2) − 0 + sin 0

= 𝜋 + 1 − 0 = 1 + 𝜋. (𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦) =
𝐶2 2 2
� � 4 �4

General Properties of Line integrals


(4 + 0)𝑑𝑡 = 4𝑡�� = 8.

Hence
72 112
(𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦) = +8= .
Let F and G be continuous vector fields, C be a path joining points A and B. Furthermore, 𝐶 5 5

suppose that C is subdivided into two arcs 𝐶1 and 𝐶2 that have the same orientation as C. Then
4. If C’ has an opposite orientation to that of C, then
1.
𝐹.𝑑𝑟 = − 𝐹.𝑑𝑟.
𝑘𝐹.𝑑𝑟 = 𝑘 𝐹.𝑑𝑟 𝐶 𝐶′
� �

𝐶 𝐶
� �

for any constant k.


5.3 Green’s Theorem
2.
(𝐹 + 𝐺).𝑑𝑡 = 𝐹.𝑑𝑟 + 𝐺.𝑑𝑟 Over a plane region, double integrals can be transformed into line integrals over the boundary
𝐶 𝐶 𝐶
� � �

of the regions and conversely. This can be done using Green’s Theorem, which is stated
3.
below.
𝐹.𝑑𝑟 = 𝐹.𝑑𝑟 + 𝐹.𝑑𝑟
𝐶 𝐶1 𝐶2
� � �

Theorem 5.3.1 (Green’s Theorem). Let R be a closed bounded region in the 𝑥𝑦−plane whose
B boundary C consists of finitely many smooth curves. Let 𝐹1 (𝑥, 𝑦) and 𝐹2 (𝑥, 𝑦) be functions that
are continuous and have continuous partial derivatives every where in some domain containing
C
C1 C 2 R, (i.e.(𝐹1 )𝑦 and (𝐹2 )𝑥 are continuous in R.) Then

∂𝐹2 ∂𝐹1
( )𝑑𝑥𝑑𝑦 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦), (5.3)
∂𝑦

A 𝑅 ∂𝑥 𝐶
�� �

where C is the boundary of R.


Figure 5.4: Subdivisions of a curve

Example 5.2.3. Let C be a curve consisting of portion of a parabola 𝑦 = 𝑥2 in the Example 5.3.1. 1. Use Green’s Theorem to evaluate
𝑥𝑦−plane from (0, 0) to (2, 4) and a horizontal line from (2, 4) to (4, 4). Evaluate
(𝑥2 𝑦𝑑𝑥 + 𝑥𝑑𝑦)
𝐶

(𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦).
𝐶 over the triangular path in the figure.


Solution
Y

Clearly the work W done by the field F is given by


R C1
C 2
𝑊 = 𝐹.𝑑𝑟 = (𝑒 − 𝑦 3 )𝑑𝑥 + (cos 𝑦 + 𝑥3 )𝑑𝑦 .
𝐶 𝐶
� �
( 𝑥 )

But since F is not conservative (verify) W need not be zero,though C is a simple closed

X
curve. If we parameterize the circle by 𝑥 = cos 𝑡, 𝑦 = sin 𝑡 on 0 ≤ 𝑡 ≤ 2𝜋, the integral will
be complicated and difficult to solve.
However if we use Green’s Theorem we have:
Figure 5.5: Two boundaries of a closed region
𝑊 = (𝑒𝑥 − 𝑦 3 )𝑑𝑥 + (cos 𝑦 + 𝑥3 )𝑑𝑦
Y

2 2
(1, 2) =
2𝑦
[ (cos 𝑦 + 𝑥3 ) − (𝑒𝑥 − 𝑦 3 )]𝑑𝐴.
𝑅 2𝑥
𝐶 ��

C3
C2

= (3𝑥2 + 3𝑦 2 )𝑑𝐴 = 3 (𝑥2 + 𝑦 2 )𝑑𝐴.


1 X
C1 𝑅 𝑅
�� ��

=3 𝑟2 𝑟𝑑𝑟𝑑𝜃 (using polar coordinates)


0 0
� 2𝜋 � 1

Figure 5.6: Triangular path


3 2𝜋 3𝜋
= 𝑑𝜃 = .
4 0 2

Solution
Here R is region bounded by a unit circle.
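Green's Theorem trades the awkward line integral for an easy double integral, and the agreement can be confirmed numerically. The sketch below (an added illustration, numpy assumed) integrates both sides for the force field of part 2 over the unit circle and the unit disk.

import numpy as np

# right-hand side: line integral of (e^x - y^3) dx + (cos y + x^3) dy around the unit circle
t = np.linspace(0.0, 2*np.pi, 200001)
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)
line = np.trapz((np.exp(x) - y**3)*dxdt + (np.cos(y) + x**3)*dydt, t)

# left-hand side: double integral of (dF2/dx - dF1/dy) = 3x^2 + 3y^2 over the unit disk
r = np.linspace(0.0, 1.0, 2001)
ring = 3*r**2 * r * 2*np.pi              # polar integrand times the full angle 2*pi
area_int = np.trapz(ring, r)

print(line, area_int, 3*np.pi/2)         # all three agree (= 3*pi/2)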
The curves are parameterized as: 𝐶1 : 𝑟(𝑡) = (𝑡, 0) for 0 ≤ 𝑡 ≤ 1, 𝐶2 : 𝑟(𝑡) = (1, 2𝑡) for
Remark 5.3.2. Green’s Theorem can be used to find areas of a plane region.
0 ≤ 𝑡 ≤ 1 and 𝐶3 : 𝑟(𝑡) = (1 − 𝑡, 2 − 2𝑡) for 0 ≤ 𝑡 ≤ 𝑡.
Let R be a plane region with boundary C.
Since 𝐹1 = 𝑥2 𝑦 and 𝐹2 = 𝑥, we have from Green’s Theorem that:

∂ ∂ 2 1. Area of the region R in cartesian coordinates.


(𝑥2 𝑦𝑑𝑥 + 𝑥𝑑𝑦) = (𝑥 𝑦) 𝑑𝐴 =
∂𝑥 ∂𝑦
(𝑥) − (1 − 𝑥2 )𝑑𝑦𝑑𝑥
𝐶 𝑅 0 0 First choose 𝐹1 = 0 and 𝐹2 = 𝑥. Then, as in 5.3.1 above,
� �� � � � 1 � 2𝑥

1
1 𝑑𝑥𝑑𝑦 = 𝑥𝑑𝑦
= = .
𝑅 𝐶
(1 − 𝑥2 )(2𝑥)𝑑𝑥 = 𝑥2 −
0
� � ��1 �� �
𝑥4 ��
2 �0 2

2. Find the work done by the force field 𝐹 (𝑥, 𝑦) = (𝑒𝑥 − 𝑦 3 )𝑖 + (cos 𝑦 + 𝑥3 )𝑗 on a particle and then choose 𝐹1 = −𝑦 and 𝐹2 = 0 to get
2 2
that travels once around the unit circle 𝑥 + 𝑦 = 1 in the counterclockwise direction.
𝑑𝑥𝑑𝑦 = − 𝑦𝑥𝑑𝑦
𝑅 𝐶
� � �

By adding up the two we get

2 𝑑𝑥𝑑𝑦 = 𝑥𝑑 − 𝑦𝑑𝑦 = (𝑥𝑑𝑦 − 𝑦𝑑𝑥)


𝑅 𝑐 𝑐 𝑐
�� � � �

Therefore, the area 𝐴(𝑅) of the region bounded by the curve C is given by: �

1
𝐴(𝑅) = 𝑑𝑥𝑑𝑦 = (𝑥𝑑𝑦 − 𝑦𝑑𝑥). (5.4)
𝑅 2 𝑐
� � �

For example, to find the area of an ellipse �

2 2
𝑥 𝑦
+ 2 = 1,
𝑎2 𝑏
we write 𝑥 = 𝑎 cos 𝑡, 𝑦 = 𝑏 sin 𝑡, 0 ≤ 𝑡 ≤ 2𝜋. Then 𝑥′ = −𝑎 sin 𝑡, 𝑦 ′ = 𝑏 cos 𝑡 Then area of
region bounded by the ellipse is: Figure 5.7: A cardioid 𝑟 = 𝑎(1 − cos 𝜃), where 0 ≤ 𝜃 ≤ 2𝜋 and 𝑎 is a positive constant.

1 1 2𝜋
𝐴(𝑅) = 1 2 𝑎2 2𝜋 2
2 2 𝐴(𝑅) = 𝑟 𝑑𝜃 =
(𝑥𝑑𝑦 − 𝑦𝑑𝑥) = (𝑥𝑦 ′ − 𝑦𝑥′ )𝑑𝑡
2
[𝑎(1 − cos 𝜃)] 𝑑𝜃 = (1 − 2 cos 𝜃 + 𝑐𝑜𝑠2 𝜃)𝑑𝜃
� �

𝑐 2 0
� � �

1 2𝜋
= 𝑎2 2𝜋 𝑎2 2𝜋
2 = cos 𝜃 + (cos 2𝜃 + 1)𝑑𝜃
(𝑎 cos 𝑡)(𝑏 cos 𝑡) − (𝑏 sin 𝑡)(−𝑎 sin 𝑡)𝑑𝑡

𝑑𝜃 − 𝑎2
2 0 0 4 0
� � 2𝜋 �

1 2 2
= (𝑎𝑏 cos2 𝑡 + 𝑎𝑏 sin2 𝑡)𝑑𝑡 𝑎 𝑎 𝜋 3𝑎2 𝜋
2 0 = 𝑥2𝜋 + 𝑥2𝜋 + 0 = 𝑎2 𝜋 + 𝑎2 =
2 4 2 2
� 2𝜋

1 2𝜋
= 3𝑎2 𝜋
2 0 2 0 Therefore 𝐴(𝑅) = 2
.
� �2𝜋
𝑎𝑏 ��
(𝑎𝑏)𝑑𝑡 = 𝑡� = 𝑎𝑏𝜋

2. Area of a plane region in polar coordinates.


Let 𝑥 = 𝑟 cos 𝜃 and 𝑦 = 𝑟 sin 𝜃, where (𝑟, 𝜃) is the polar coordinate of point (𝑥, 𝑦). Then
Graphs in Polar Coordinates
𝑑𝑥 = cos 𝜃𝑑𝑟 − 𝑟 sin 𝜃𝑑𝜃, 𝑑𝑦 = sin 𝜃𝑑𝑟 + 𝑟 cos 𝜃𝑑𝜃. Hence (5.4) becomes 1. Circles
1 A circle of radius 𝑎 that is centered at the origin consists of all point of the form 𝑃 (𝑎, 𝜃).
𝐴(𝑅) = (𝑥𝑑𝑦 − 𝑢𝑑𝑥)
2 𝑐
Thus such circles in polar coordinate has the equation 𝑟 = 𝑎.

1
= [𝑟 cos 𝜃(sin 𝜃𝑑𝑟 + 𝑟 cos 𝜃𝑑𝜃) − 𝑟 sin 𝜃(cos 𝜃𝑑𝑟 − 𝑟 sin 𝜃𝑑𝜃)]
2 𝑐 The equation of a circle that is centered on the 𝑥 − 𝑎𝑥𝑖𝑠 and passes through the origin has

1 an equation of the form 𝑟 = 2𝑎 cos 𝜃 or 𝑟 = −2𝑎 cos 𝜃.


= [(𝑟 cos 𝜃𝑑𝑟 − 𝑟 cos 𝜃 sin 𝜃𝑑𝑟) + (𝑟2 cos2 𝜃𝑑𝜃 + 𝑟2 sin2 𝜃𝑑𝜃)]
2 𝑐

� �
= (1/2) ∮_𝐶 𝑟² 𝑑𝜃.

Therefore, the area 𝐴(𝑅) of the region bounded by the curve C is given in polar form by:

    𝐴(𝑅) = (1/2) ∮_𝐶 𝑟² 𝑑𝜃.

Figure 5.8: Circles centered on 𝑥−axis.


For example, to find the area of the region bounded by the cardioid 𝑟 = 𝑎(1 − cos 𝜃), where
0 ≤ 𝜃 ≤ 2𝜋 and 𝑎 is a positive constant.

The equation of a circle that is centered on the 𝑦 − 𝑎𝑥𝑖𝑠 and passes through the origin has
an equation of the form 𝑟 = 2𝑎 sin 𝜃 or 𝑟 = −2𝑎 sin 𝜃.




Figure 5.9: Circles centered on 𝑦−axis.

Figure 5.11: Rose Curves.


2. Cardioid and Limacons
Equations of the form
5.3.1 Green’s Theorem for Multiply Connected Regions
𝑟 = 𝑎 + sin 𝜃, 𝑟 = 𝑎 − 𝑏 sin 𝜃𝑟 = 𝑎 + 𝑏 cos 𝜃 or 𝑟 = 𝑎 − 𝑏 cos 𝜃
Recall that, a region in R2 is called simply connected if it is connected and has no holes, and is
produce polar curves called limacons.
called multiply connected if it is connected but it has finitely many holes.
𝑎
Depending on the ratio 𝑐 = 𝑏
we get four categories.
Now consider the vector field 𝐹 = (𝐹1 , 𝐹2 ) which is continuously differentiable over the plane
region R, which is multiply connected as shown in Figure 5.3.1.
��

��������������� ��������������

Figure 5.10: Cardioid and Limacons. Figure 5.12: Dividing a region in to regions

3. Rose Curves. Equations of the form First divide R into simply connected regions 𝑅1 and 𝑅2 . Then

𝑟 = 𝑎 sin 𝑛𝜃 and 𝑟 = 𝑎 cos 𝑛𝜃 ∂𝐹2 ∂𝐹1 ∂𝐹2 ∂𝐹1 ∂𝐹2 ∂𝐹1


𝑑𝐴 = 𝑑𝐴 + 𝑑𝐴.
∂𝑥 ∂𝑦 ∂𝑥 ∂𝑦 ∂𝑥 ∂𝑦
− − −
𝑅 𝑅1 𝑅2
represents a flower shaped curves called roses.
�� � � �� � � �� � �

When we graph 𝑟 verses 𝜃 in the cartesian (𝑟, 𝜃) plane, we ignore the points where 𝑟 is = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦) + (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦),
𝐶1 𝐶2
� �

imaginary but plot positive and negative parts from the points where 𝑟2 is positive.

where the curves 𝐶1 and 𝐶2 are the boundaries of the regions 𝑅1 and 𝑅2 respectively. The and hence
orientation of the curves should be in such a way that when traveling along the curves the region = .
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦
𝐶 𝑥2 + 𝑦 2 𝐶𝑎 𝑥2 + 𝑦 2
� � � � � �

should be to the left.


Now let 𝑥 = 𝑎 cos 𝑡 and 𝑦 = 𝑎 sin 𝑡 for 0 ≤ 𝑡 ≤ 2𝜋 on 𝐶𝑎 implies 𝑑𝑥 = −𝑎 sin 𝑡𝑑𝑡 and
Example 5.3.2. Evaluate the integral 𝑑𝑦 = 𝑎 cos 𝑡𝑑𝑡.
2𝜋
(−𝑎 sin 𝑡)(−𝑎 sin 𝑡)𝑑𝑡 + (𝑎 cos 𝑡)(𝑎 cos 𝑡)𝑑𝑡
=
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦
𝐶 𝑥2 + 𝑦 2 𝐶 𝑥2 + 𝑦 2 0 (𝑎 cos 𝑡)2 + (𝑎 sin 𝑡)2
� � � � �

(𝑎 sin 𝑡 + 𝑎2 cos2 𝑡)𝑑𝑡


if C is a piecewise smooth simply closed curve oriented counterclockwise such that C incloses the = 𝑑𝑡 = 2𝜋
0 (𝑎2 cos2 𝑡 + 𝑎2 sin2 𝑡) 0
� 2𝜋 2 2 � 2𝜋

origin. Consider Figure 5.3.2.


for any small radius 𝑎 and hence

= 2𝜋.
−𝑦𝑑𝑥 + 𝑥𝑑𝑦
� 𝐶 𝑥2 + 𝑦 2
� � �

5.4 Surface Integrals


�� �
In the previous sections we have been working on integrals of vector fields over curves. Now we are
going to consider integrals of vector fields over surfaces. Let us start by discussing some facts
Figure 5.13: A curve that encloses the origin about surfaces.
In the case of line integrals we represented a curve in R3 parametrically as

Solution 𝑥 = 𝑥(𝑡), 𝑦 = 𝑦(𝑡), 𝑧 = 𝑧(𝑡),

−𝑦 𝑥
Let 𝐹 = (𝐹1 , 𝐹2 ) such that 𝐹1 = 𝑥2 +𝑦 2
and 𝐹1 = 𝑥2 +𝑦 2
. Since 𝐹1 and 𝐹2 are undefined at the for 𝑎 ≤ 𝑡 ≤ 𝑏. That is, a curve is given by coordinate functions of one variable, where as, a

origin, we can not apply Green’s Theorem. surface is defined by parametric functions of two variables

𝑥 = 𝑥(𝑢, 𝑣), 𝑦 = 𝑦(𝑢, 𝑣), 𝑧 = 𝑧(𝑢, 𝑣),


Thus construct a circle 𝐶𝑎 with sufficiently small radius a and oriented counterclockwise as C.
Thus for (𝑢, 𝑣) in some set in the 𝑢𝑣−plane and the variables 𝑢 and 𝑣 are called parameters.
∂𝐹1 ∂𝐹2
+ = 𝑑𝐴,
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦
𝑥2 + 𝑦 2 𝑥2 + 𝑦 2 ∂𝑦

∂𝑥 Example 5.4.1. Parametrization of some surfaces.
𝐶 −𝐶𝑎 𝑅
� � � � � � �� � �

by Green’s Theorem, where R is the region bounded by the curves −𝐶𝑎 and 𝐶. But
1. The parametric representation of a cylinder
∂𝐹1 ∂𝐹2 ∂𝐹1 ∂𝐹2
= which implies that 𝑑𝐴 = 0.
𝑦 2 − 𝑥2
= 2
∂𝑦 ∂𝑥 𝑥 + 𝑦2 ∂𝑦 ∂𝑥 is 𝑟(𝑢, 𝑣) = 𝑎 cos 𝑢𝑖 + 𝑎 sin 𝑢𝑗 + 𝑣𝑘.

𝑅
�� � �

𝑥2 + 𝑦 2 = 𝑎2 , −1 ≤ 𝑧 ≤ 1

This implies
2. The parametric representation of a sphere
+ =0
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦
𝐶 𝑥2 + 𝑦 2 −𝐶𝑎 𝑥2 + 𝑦 2
� � � � � �

𝑥2 + 𝑦 2 + 𝑧 2 = 𝑎 2 is 𝑟(𝑢, 𝑣) = 𝑎 cos 𝑣 cos 𝑢𝑖 + 𝑎 cos 𝑣 sin 𝑢𝑗 + 𝑎 sin 𝑣𝑘



3. The parametric representation of a cone is a tangent vector to the curve Ω𝑣 with coordinate functions

𝑧= 𝑥2 + 𝑦 2 , 0 ≤ 𝑧 ≤ 𝑇 is 𝑟(𝑢, 𝑣) = 𝑢 cos 𝑣𝑖 + 𝑢 sin 𝑣𝑗 + 𝑢𝑘 𝑥(𝑢0 , 𝑣), 𝑦(𝑢0 , 𝑣), 𝑧(𝑢0 , 𝑣).


where 0 ≤ 𝑢 ≤ 𝑇 and 0 ≤ 𝑣 ≤ 2𝜋. Assume that these two vectors are not zero. Then these two vectors lie in a plane tangent to the
surface Ω at the point 𝑃0 and hence the vectors
For a surface we write a position vector as
𝑁 (𝑃0 ) = 𝑇𝑢0 × 𝑇𝑣0
𝑟(𝑢, 𝑣) = 𝑥(𝑢, 𝑣)𝑖 + 𝑦(𝑢, 𝑣)𝑗 + 𝑧(𝑢, 𝑣)𝑘
is a normal vector to the tangent plane and hence the surface to at the point 𝑃0 . But
and 𝑟(𝑢, 𝑣) can be considered as a vector in R3 with initial point the origin and terminal point
𝑥(𝑢, 𝑣), 𝑦(𝑢, 𝑣), 𝑧(𝑢, 𝑣) which is on the surface. 𝑖 𝑗 𝑘
∂𝑥 ∂𝑦 ∂𝑧
� �

(𝑢0 , 𝑣0 ) ∂𝑢 (𝑢0 , 𝑣0 ) ∂𝑢
( ) � �

∂𝑧
� �
� �

∂𝑣
A surface with parametrization r is simple if it does not fold over and intersect itself. This means (𝑢0 , 𝑣0 ) ∂𝑣
𝑇𝑢0 × 𝑇𝑣0 = �� ∂𝑢 (𝑢0 , 𝑣0 )��
� ∂𝑥 �

𝑟(𝑢1 , 𝑣1 ) = 𝑟(𝑢2 , 𝑣2 ) can occur only when 𝑢1 = 𝑢2 and 𝑣1 = 𝑣2 . ∂𝑦 ∂𝑧 ∂𝑧 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑦 ∂𝑥


� ∂𝑣 (𝑢0 , 𝑣0 ) ∂𝑦 (𝑢0 , 𝑣0 )�

= 𝑖+ 𝑗+ 𝑘,
∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
− − −
� � � � � �

5.4.1 Normal Vector and Tangent plane to a Surface in which all the partial derivatives are evaluated at (𝑢0 , 𝑣0 ).
From the previous courses, recall that, the Jacobian of two functions 𝑓 and 𝑔 is defined to be
Recall that: if C is a curve with coordinate functions 𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡), then
∂𝑢 ∂𝑓 ∂𝑔 ∂𝑔 ∂𝑓
.
𝑇 = 𝑥′ (𝑡0 ) + 𝑦 ′ (𝑡0 )𝑗 + 𝑧 ′ (𝑡0 )𝑘

� �

∂𝑢 ∂𝑣
∂𝑓 �
∂(𝑓, 𝑔) �� ∂𝑓 ∂𝑣 �
=� �=

is a vector that is tangent to the curve at a point 𝑃0 = 𝑥(𝑡0 ), 𝑦(𝑡0 ), 𝑧(𝑡0 ) .


∂(𝑢, 𝑣) � ∂𝑔 ∂𝑔 � ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣

Then the normal vector


( )

∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
𝑁 (𝑃0 ) = 𝑗+ 𝑗+ 𝑘,
Let Ω be a surface in R3 with coordinate functions 𝑥(𝑢, 𝑣), 𝑦(𝑢, 𝑣), 𝑧(𝑢, 𝑣) and let 𝑃0 be the ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
point 𝑥(𝑢0 , 𝑣0 ), 𝑦(𝑢0 , 𝑣0 ), 𝑧(𝑢0 , 𝑣0 ) on the surface Ω. We want to find a normal vector N to where the partial derivatives are evaluated at (𝑢0 , 𝑣0 ).
the surface at 𝑃0 .
( )

For an arbitrary point (𝑢, 𝑣) on the surface, the normal line to the tangent plane is given by
Let Ω𝑢 be the curve with coordinate functions 𝑁 = 𝑟𝑢 × 𝑟𝑣 and we denote the corresponding unit vector in the direction of N by n and it is

𝑥(𝑢, 𝑣0 ), 𝑦(𝑢, 𝑣0 ), 𝑧(𝑢, 𝑣0 ). given by


1 1
n= 𝑁= 𝑟𝑢 × 𝑟𝑣 .
∥𝑁 ∥ ∥𝑟𝑢 × 𝑟𝑣 ∥
Then the tangent vector
( )

Example 5.4.2. Find the equation of the tangent plane to the surface given by
∂𝑥 ∂𝑦 ∂𝑧
𝑇𝑣0 = (𝑢, 𝑣0 )𝑖 + (𝑢, 𝑣0 )𝑗 + (𝑢, 𝑣0 )𝑘
∂𝑢 ∂𝑢 ∂𝑢
𝑟(𝑢, 𝑣) = 𝑢𝑖 + (𝑢 + 𝑣)𝑗 + (𝑢 + 𝑣 2 )𝑘
is a tangent vector to the curve Ω𝑢 at 𝑃0 . Similarly, the vector
∂𝑥 ∂𝑦 ∂𝑧 at a point(2, 4, 6).
𝑇𝑣0 = (𝑢0 , 𝑣))𝑖 + (𝑢0 , 𝑣)𝑗 + (𝑢0 , 𝑣)𝑘
∂𝑢 ∂𝑢 ∂𝑢

Solution 2. A cube is a piecewise smooth since all the six faces are smooth, but the eight sides do not
have tangents.
Here 𝑥(𝑢, 𝑣) = 𝑢, 𝑦(𝑢, 𝑣) = 𝑢 + 𝑣 and 𝑧(𝑢, 𝑣) = 𝑢 + 𝑣 2 . First let us find the normal vector
𝑁 (2, 4, 6) to the plane tangent to the surface at the given point which is Now we are in a position to define the surface integral of a vector field over a piecewise smooth
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦 surface.
𝑁 (2, 4, 6) = 𝑖+ 𝑗+ 𝑘 = 8𝑖 + 𝑘.
∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
Definition 5.4.3. Suppose S is a smooth surface parameterized by r(𝑢, 𝑣) with normal vector
Therefore, equation of the plane Π tangent to the given surface at the point (2, 4, 6) is
𝑁 (𝑢, 𝑣) = 𝑟𝑢 × 𝑟𝑣 . Let F be a continuous function on S. Then the surface integral of F over S
Π : 8𝑥 + 𝑧 = 22. is denoted by
𝐹 (𝑥, 𝑦, 𝑧)𝑑𝜎
Remark 5.4.1. If the surface Ω is a surface represented by the equation 𝑔(𝑥, 𝑦, 𝑧) = 0, then the 𝑆
��

unit normal vector is given by and is defined by


1 𝐹 (𝑥, 𝑦, 𝑧)𝑑𝜎 = 𝐹 (𝑟(𝑢, 𝑣))∥𝑁 (𝑢, 𝑣)∥𝑑𝑢𝑑𝑣
n=
𝑆 𝑅
∇𝑔.
�� ��

∥∇𝑔∥
and if F is a vector field then the surface integral of F over S
Example 5.4.3. If Ω is the sphere 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥2 + 𝑦 2 + 𝑧 2 − 𝑎2 = 0 and 𝑎 ∕= 0, then
1 1 𝐹 (𝑥, 𝑦, 𝑧)𝑑𝜎
𝑛= (2𝑥, 2𝑦, 2𝑧) 𝑆
��

∇𝑓 = √
∥∇𝑓 ∥ ∇𝑓.∇𝑓
is defined by
1
(2𝑥, 2𝑦, 2𝑧) 𝐹 (𝑥, 𝑦, 𝑧)𝑑𝜎 = 𝐹 (𝑟(𝑢, 𝑣)).𝑁 (𝑢, 𝑣)𝑑𝑢𝑑𝑣.
4(𝑥2 + 𝑦 2 𝑧 2 ) 𝑆 𝑅
�� ��

1 𝑥 𝑦 𝑧
=√

= (2𝑥, 2𝑦, 2𝑧) = ( , , ) Example 5.4.5. Evaluate


2𝑎 𝑎 𝑎 𝑎
(𝑥 + 𝑦)𝑑𝜎
Recall the following from calculus. 𝑆
��

Suppose Ω represents a surface in R3 with equation 𝑧 = 𝑔(𝑥, 𝑦) and let R be its projection on where S is the portion of the cylinder 𝑥2 + 𝑦 2 = 3 between the planes 𝑧 = 0 and 𝑧 = 6.
the 𝑥𝑦−plane. If g has continuous first partial derivatives on R, then the surface area of Ω is
Solution
2
∂𝑔 ∂𝑧
Area of Ω = + + 1 𝑑𝐴.
𝑅 ∂𝑥 ∂𝑦 The parametrization of the cylinder is
��� � � �2 �
��

But the integrant is the norm of the normal vector 𝑁 (𝑥, 𝑦) to the surface, that is, √ √
𝑟(𝑢, 𝑣) = 3 cos 𝑢𝑖 + 3 sin 𝑢𝑗 + 𝑣𝑘,
Area of Ω = ∣∣𝑁 (𝑥, 𝑦)∣∣𝑑𝐴.
𝑅 for 0 ≤ 𝑢 ≤ 2𝜋 and 0 ≤ 𝑣 ≤ 6.
��

√ √
Definition 5.4.2. A surface S is called a smooth surface if the unit normal vector n is continuous Then 𝑟𝑢 = − 3 sin 𝑢𝑖 + 3 cos 𝑢𝑗 and 𝑟𝑣 = 𝑘 and
on S and surface S is called piecewise smooth if it consists of finitely many smooth portions. 𝑖 𝑗
√ √
� �

Example 5.4.4. Examples of smooth and piecewise smooth surfaces.


� �
� 𝑘�

0 0
� √ � √

1. A sphere is a smooth surface, since at any point on the sphere, there is a continuous tangent
𝑟𝑢 × 𝑟𝑣 = ��− 3 sin 𝑢 3 cos 𝑢 0�� = 3 cos 𝑢𝑖 + 3 sin 𝑢𝑗
� �
� 1�

normal.

which implies ∥𝑟𝑢 × 𝑟𝑣 ∥ = 3. We denote the corresponding unit vector in the direction of N by n,
Therefore, 1 1
𝑛= 𝑁= 𝑟𝑢 × 𝑟𝑣 .
∥𝑁 ∥ ∥𝑟𝑢 × 𝑟𝑣 ∥
(𝑥 + 𝑦)𝑑𝜎 =
( )

If a surface S is represented by a the equation 𝑔(𝑥, 𝑦, 𝑧) = 0, then


( 3(cos 𝑢 + sin 𝑢))∥𝑟𝑢 × 𝑟𝑣 ∥𝑑𝐴
𝑆
�� �� √

1
= 3 (cos 𝑢 + sin 𝑢)𝑑𝑣𝑑𝑢 𝑛= ∇𝑔.
0 0
�𝑅2𝜋 � 6

∥∇𝑔∥
= 18 (cos 𝑢 + sin 𝑢)𝑑𝑢 Example 5.4.6. Let S be the portion of the surface 𝑧 = 1 − 𝑥2 − 𝑦 2 that lie above the 𝑥𝑦-plane,
0
and suppose that S is oriented upward(i.e. n is in the upward direction at all points of S).
� 2𝜋

= 18 sin 𝑢 − cos 𝑢 = 0. Find the flux Φ of the flow field 𝐹 (𝑥, 𝑦, 𝑧) = (𝑥, 𝑦, 𝑧) across S.
0
� �2𝜋

Hence, ∫∫_𝑆 (𝑥 + 𝑦) 𝑑𝜎 = 0.
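A quick numerical version of Example 5.4.5 (an added sketch, numpy assumed): parameterize the cylinder, multiply the integrand by ∥r_u × r_v∥ = √3, and integrate over u and v.

import numpy as np

u = np.linspace(0.0, 2*np.pi, 200001)
v_len = 6.0                                   # z runs from 0 to 6

x, y = np.sqrt(3)*np.cos(u), np.sqrt(3)*np.sin(u)
# ||r_u x r_v|| = sqrt(3); the integrand (x + y) does not depend on v,
# so the v-integration contributes a factor of 6
integral = np.trapz((x + y)*np.sqrt(3), u) * v_len

print(integral)                               # approximately 0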
Solution
∫∫

5.4.2 Applications of Surface Integrals Here S is described by 𝑔(𝑥, 𝑦, 𝑧) = 𝑧 − 1 + 𝑥2 + 𝑦 2 and 𝑁 = ∇𝑔 = (2𝑥, 2𝑦, 1).
Returning back to the definition of surface integrals:
Flux of A fluid Across a Surface
𝐹.𝑛𝑑𝐴 = 𝐹 (𝑟(𝑢, 𝑣)).𝑁 (𝑢, 𝑣)𝑑𝑢𝑑𝑣,
Suppose a fluid moves in some region of the space with velocity. The volume of fluid crossing a 𝑆 𝑅
�� ��

certain surface S per unit time is known as the flux across the surface S and the surface integral 𝑁 = 𝑛∥𝑁 ∥ and 𝑛 = (cos 𝛼, cos 𝛽, cos 𝛾), the direction cosines of N, 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ).
of a vector function F over a surface S describes the flux across S, when 𝐹 = 𝜌𝑣, 𝜌 the density Then
of fluid, v velocity of the flow.
𝐹.𝑛𝑑𝐴 = 𝐹1 cos 𝛼 + 𝐹2 cos 𝛽 + 𝐹3 cos 𝛾 𝑑𝐴
𝑆 𝑆
�� ��
( )

Hence the above surface integral is known as the flux integral and if 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ) and = 𝐹1 𝑑𝑦𝑑𝑧 + 𝐹2 𝑑𝑥𝑑𝑧 + 𝐹3 𝑑𝑥𝑑𝑦
𝑆
��

𝑁 = (𝑁1 , 𝑁2 , 𝑁3 ), then
( )

(This is similar to the formulation in line integrals.)


𝐹.𝑛𝑑𝐴 = (𝐹1 𝑁1 + 𝐹2 𝑁2 + 𝐹3 𝑁3 )𝑑𝑢𝑑𝑣.
𝑠 𝑅
�� ��

Similarly, for surface S in surface integrals we parameterize the surfaces. But since surfaces are Surface Area
two dimensional; S can be represented as
If Ω is a piecewise smooth surface, then the area of the surface Ω is given by
Υ(𝑢, 𝑣) = 𝑥(𝑢, 𝑣)𝑖 + 𝑦(𝑢, 𝑣)𝑗 + 𝑧(𝑢, 𝑣)𝑘, (𝑢, 𝑣) ∈ 𝑅,

where R is some region in uv-plane. Area of Ω = 𝑑𝐴.


Ω
��

A normal vector N of a surface S whose parametric form is But ∥𝑁 ∥ = ∥𝑟𝑢 × 𝑟𝑣 ∥ represents the area of a parallelogram with adjacent side vectors 𝑟𝑢 and

𝑟(𝑢, 𝑣) = 𝑥(𝑢, 𝑣)𝑖 + 𝑦(𝑢, 𝑣)𝑗 + 𝑧(𝑢, 𝑣)𝑘


𝑟𝑣 . Therefore, we can write 𝑑𝐴 as 𝑑𝐴 = ∥𝑟𝑢 × 𝑟𝑣 ∥𝑑𝑢𝑑𝑣. Hence

Area of Ω = 𝑑𝐴 = ∥𝑟𝑢 × 𝑟𝑣 ∥𝑑𝑢𝑑𝑣,


at the point P is Ω 𝑅
�� ��

𝑁 = 𝑟𝑢 × 𝑟𝑣 . where R is the projection on the 𝑢𝑣−plane of the surface Ω.



Example 5.4.7. Find the area of the surface of the torus given Figure 5.4.7. Mass and Center of Mass of a Shell

Consider a shell of negligible thickness in the shape of piecewise smooth surface Ω. Let 𝛿(𝑥, 𝑦, 𝑧)
be the density of the material of the shell at point (𝑥, 𝑦, 𝑧).
Let 𝑥(𝑢, 𝑣), 𝑦(𝑢, 𝑣) and 𝑧(𝑢, 𝑣) be the coordinate functions of Ω for (𝑢, 𝑣) ∈ 𝑅, where R is the
� � projection of the surface in the 𝑥𝑦−plane. Then the mass of Ω is given by


Mass of Ω = 𝛿(𝑥, 𝑦, 𝑧)𝑑𝜎
Ω

��

and the center of mass of the shell is (¯


𝑥, 𝑦¯, 𝑧¯), where
Figure 5.14: A torus
1 1 1
𝑥¯ = 𝑥𝛿(𝑥, 𝑦, 𝑧)𝜎, 𝑦¯ = 𝑦𝛿(𝑥, 𝑦, 𝑧)𝜎 and 𝑧¯ = 𝑧𝛿(𝑥, 𝑦, 𝑧)𝜎,
𝑚 Ω 𝑚 Ω 𝑚 Ω
�� �� ��

Here 𝛾(𝑢, 𝑣) = (𝑎 + 𝑏 cos 𝑣) cos 𝑢𝑖 + (𝑎 + 𝑏 cos 𝑣) sin 𝑢𝑗 + 𝑏 sin 𝑣𝑘.


where 𝑚 is the mass of the shell.
Thus 𝑟𝑢 = −(𝑎 + 𝑏 cos 𝑣) sin 𝑢𝑖 + (𝑎 + 𝑏 cos 𝑣) cos 𝑢𝑗 + 0𝑘,
𝑟𝑣 = −𝑏 sin 𝑣 cos 𝑢𝑖 − 𝑏 sin 𝑣 sin 𝑢𝑗 + 𝑏 cos 𝑣𝑘 and and hence
If the surface is given by 𝑧 = 𝑓 (𝑥, 𝑦) for (𝑥, 𝑦) ∈ 𝑅, then the mass is given by
𝑖 𝑗
� �

∂𝑓 ∂𝑓
� �

𝑚= 𝛿(𝑥, 𝑦, 𝑧) 1 + + 𝑑𝑦𝑑𝑥.
� 𝑘 �

Ω ∂𝑥 ∂𝑥
� � � � �2 � �2
��

−𝑏 sin 𝑣 sin 𝑢
𝑟𝑢 × 𝑟𝑣 = ��−(𝑎 + 𝑏 cos 𝑣) sin 𝑢 (𝑎 + 𝑏 cos 𝑣) cos 𝑢 0 ��

= 𝑏(𝑎 + 𝑏 cos 𝑣)(cos 𝑢 cos 𝑣𝑖 + sin 𝑢 cos 𝑣𝑗 + sin 𝑣𝑘). Example 5.4.8. Find the center of mass of the sphere Ω, 𝑥2 + 𝑦 2 + 𝑧 2 = 𝑎2 , in the first octant,
� �
� −𝑏 sin 𝑣 cos 𝑢 𝑏 cos 𝑣 �

if it has constant density 𝜇0 .


Which implies ∥𝑟𝑢 × 𝑟𝑣 ∥ = 𝑏(𝑎 + 𝑏𝑏 cos 𝑣) and hence

𝐴(𝑆) = 𝑏(𝑎 + 𝑏 cos 𝑣)𝑑𝑢𝑑𝑣
∥𝑟𝑢 × 𝑟𝑣 ∥𝑑𝑢𝑑𝑣 =

� � � 2𝜋 � 2𝜋

= 𝑏𝑎𝑑𝑢𝑑𝑣 + 𝑏2 cos 𝑣𝑑𝑢𝑑𝑣.


0 0 0 0
�𝑅 2𝜋 � 2𝜋 � 02𝜋 � 02𝜋

= 4𝜋 2 𝑎𝑏 + 𝑏2 𝑜𝑑𝑣
0
� 2𝜋

� �
= 4𝜋 2 𝑎𝑏

Then �
Φ= 𝐹.𝑛𝑑𝐴 = (2𝑥2 + 2𝑦 2 + 𝑧)𝑑𝐴. �
𝑆 𝑅
�� ��

But since 𝑧 = 1 − 𝑥2 − 𝑦 2 , we have Figure 5.15: A sphere of radius 𝑎 in the first octant.

Φ = (𝑥2 + 𝑦 2 + 1)𝑑𝐴
𝑅
� �

Solution
= (𝛾 2 + 1)𝛾𝑑𝛾𝑑𝜃
0 0
� 2𝜋 � 1

3 3𝜋 The mass 𝑚 of the sphere is


= 𝑑𝜃 =
0 4 2 𝑚= 𝜇0 𝑑𝜎.
� 2𝜋 � �

Ω
��

In spherical coordinates we know that the equation of a sphere of radius 𝑎 is given by, 𝜌 = 𝑎. A smooth surface is said to be orientable if the positive normal direction, given at an arbitrary
Now we change the cartesian coordinates in to spherical coordinates to get point 𝑃0 of S, can be continued in a unique and continuous way to the entire surface.
A smooth surface is said to be piecewise orientable if we can orient each smooth piece of the
𝑥 = 𝑎 cos 𝜃 sin 𝜙, 𝑦 = 𝑎 sin 𝜃 sin 𝜙 and 𝑧 = 𝑎 cos 𝜙
surface S in such a manner that along each curve 𝐶 ∗ which is a common boundary of two pieces
𝜋 𝜋 𝑆1 and 𝑆2 the positive direction of 𝐶 ∗ relative to 𝑆1 is opposite to the positive direction of 𝐶 ∗
for 0 ≤ 𝜃 ≤ 2
and 2
≤ 𝜙 ≤ 𝜋.
Therefore, the parametrization of the sphere in the first octant is : relative to 𝑆2 .

𝑟(𝜃, 𝜙) = 𝑎 cos 𝜃 sin 𝜙𝑖 + 𝑎 sin 𝜃 sin 𝜙𝑗 + 𝑎 cos 𝜙𝑘 �


� ��
𝜋
2
for 0 ≤ 𝜃 ≤ and 0 ≤ 𝜙 ≤ 𝜋2 .
Then 𝑟𝜃 = −𝑎 sin 𝜃 sin 𝜙𝑖 + 𝑎 cos 𝜃 sin 𝜙𝑗 and 𝑟𝜙 = 𝑎 cos 𝜃 cos 𝜙𝑖 + 𝑎 sin 𝜃 cos 𝜙𝑗 − 𝑎 sin 𝜙𝑘. �
Therefore,
𝑖 𝑗
� �
� �

��
� �
� 𝑘 �

��������������������
� �

�����������������

𝑟𝜃 × 𝑟𝜙 = �� 𝑎 sin 𝜃 sin 𝜙 𝑎 cos 𝜃 sin 𝜙 0 �� .

This implies
� �
�𝑎 cos 𝜃 cos 𝜙 𝑎 sin 𝜃 cos 𝜙 −𝑎 sin 𝜙�

2 2 2 2 2
Figure 5.16: Smooth orientable and pieceorientable surfaces.
𝑟𝜃 × 𝑟𝜙 = −𝑎 sin 𝜙 cos 𝜃𝑖 − 𝑎 sin 𝜙 sin 𝜃𝑗 − 𝑎 sin 𝜙 cos 𝜙𝑘

and ∥𝑟𝜃 × 𝑟𝜙 ∥ = 𝑎2 sin 𝜙.


There are also non-orientable surfaces. Möbius strip [no inward and no outward directions once
Therefore 𝜋 𝜋
2 2 2
𝑎𝜋 in once out word.]
𝑚= 𝜇0 𝑎2 sin 𝜙𝑑𝜃𝑑𝜙 = 𝜇0 𝑎2 sin 𝜙𝑑𝜃𝑑𝜙 = 𝜇0 .
Ω 0 0 2 Consider a boundary surface of a solid region D in 3-space. Such surfaces are called closed.
�� � �

Then let us find the coordinates of center of mass which is given by


If a closed surface is orientable or piecewise orientable, then there are only two possible orienta-
1 1 1 tions: inward (to ward the solid) and outward (away from the solid).
𝑥= 𝑥𝜇0 𝑑𝜎, ,𝑦 = 𝑦𝜇0 𝑑𝜎 and 𝑧= 𝑧𝜇0 𝑑𝜎.
𝑚 Ω 𝑚 Ω 𝑚 Ω
�� �� ��

Let 𝐹 (𝑥, 𝑦, 𝑧) = 𝐹1 (𝑥, 𝑦, 𝑧)𝑖 + 𝐹2 (𝑥, 𝑦, 𝑧)𝑗 + 𝐹3 (𝑥, 𝑦, 𝑧)𝑘 be vector field defined on a solid D.
𝜋 𝜋 𝜋 𝜋
1 2 2 2 2 𝑎 2𝐹1 2𝐹2 2𝐹3
3 2 2 Then 𝑑𝑖𝑣𝐹 = + +
𝑥= 𝜇0 𝑎 cos 𝜃 sin 𝜙𝑑𝜃𝑑𝜙 = 2𝑎 cos 𝜃 sin 𝜙𝑑𝜃𝑑𝜙 = 2𝑥 2𝑦 2𝑧
𝑚 0 0 0 0 2
� � � �

𝑎 Theorem 5.5.1 (Divergence Theorem of Gauss). Let D be a solid in R3 with surface S oriented
and in a similar fashion we can find find 𝑦 = 2
and 𝑧 = 𝑎2 . Therefore, the center of mass of the
outward. If 𝐹 = 𝐹1 𝑖 + 𝐹2 𝑗 + 𝐹3 𝑘, where 𝐹1 , 𝐹2 and 𝐹3 have continuous first and second partial
portion of the sphere is
derivatives on some open set containing D, then
(𝑥, 𝑦, 𝑧) = , , .
2 2 2
𝐹.𝑛𝑑𝐴, = 𝑑𝑖𝑣𝐹 𝑑𝑣,
(𝑎 𝑎 𝑎)

𝑆 𝐷
� � ���

that is,
5.5 Divergence and Stock’s Theorems
∂𝐹1 ∂𝐹2 ∂𝐹3
( + + )𝑑𝑥𝑑𝑦𝑑𝑧 = (𝐹1 𝑑𝑦𝑑𝑧 + 𝐹2 𝑑𝑧𝑑𝑥 + 𝐹3 𝑑𝑥𝑑𝑦).
𝐷 ∂𝑥 ∂𝑦 ∂𝑧 𝑆
If a surface S is smooth and P is any point in S we can choose a unit normal vector n of S at P.
��� ��

Example 5.5.1. Let S be the sphere 𝑥2 + 𝑦 2 + 𝑧 2 = 𝑎2 oriented outward. Find the flux of the
Then we can take the direction of n as the positive normal direction of S at P( two possibilities).
vector function 𝐹 (𝑥, 𝑦, 𝑧) = 𝑧𝑘 across S.
Solution

Here $\operatorname{div} F = \frac{\partial z}{\partial z} = 1$. If D is the spherical solid enclosed by S, then by the Divergence Theorem the flux $\Phi$ across S is
$$\Phi = \iint_S F\cdot n\,dA = \iiint_D dV = \text{volume of } D = \frac{4\pi a^3}{3}.$$

Example 5.5.2. Let S be the surface of the solid enclosed by the circular cylinder $x^2+y^2=9$ and the planes $z=0$ and $z=2$, oriented outward. Use the Divergence Theorem to find the flux $\Phi$ of the vector field
$$F(x,y,z) = x^3\mathbf{i} + y^3\mathbf{j} + z^2\mathbf{k}$$
across S.

Solution

We have $\operatorname{div} F = 3x^2 + 3y^2 + 2z$. Thus if D is the cylindrical solid enclosed by S, we have
$$\Phi = \iint_S F\cdot n\,dA = \iiint_D \operatorname{div} F\,dV = \iiint_D (3x^2 + 3y^2 + 2z)\,dV.$$
Let $x = r\cos\theta$, $y = r\sin\theta$ and $z = z$ with $0 \le \theta \le 2\pi$, $0 \le r \le 3$ and $0 \le z \le 2$. Then
$$\Phi = \int_0^{2\pi}\!\!\int_0^3\!\!\int_0^2 (3r^2 + 2z)\,r\,dz\,dr\,d\theta = \int_0^{2\pi}\!\!\int_0^3\!\!\int_0^2 (3r^3 + 2rz)\,dz\,dr\,d\theta = \int_0^{2\pi}\!\!\int_0^3 (6r^3 + 4r)\,dr\,d\theta = 279\pi.$$
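The triple integral above is quick to confirm symbolically; the following sketch (not from the notes) evaluates it in cylindrical coordinates with sympy.

import sympy as sp

r, theta, z = sp.symbols('r theta z', nonnegative=True)
divF = 3*r**2 + 2*z                     # div F = 3x^2 + 3y^2 + 2z with x^2 + y^2 = r^2
flux = sp.integrate(divF * r, (z, 0, 2), (r, 0, 3), (theta, 0, 2*sp.pi))
print(flux)                             # 279*pi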
Physical interpretation of the Divergence of a Vector Field

The Divergence Theorem leads us to the following physical interpretation of the divergence of a vector field F. Suppose D is a small spherical region centered at the point $P_0$ and that its surface S is oriented outward. Let $V(D)$ denote its volume and $\Phi(D)$ the flux of F across S. If $\operatorname{div} F$ is continuous on D, then it will not vary much from $\operatorname{div} F(P_0)$ over the small region D. Hence
$$\Phi(D) = \iint_S F\cdot n\,dA = \iiint_D \operatorname{div} F\,dV \cong \operatorname{div} F(P_0)\iiint_D dV = \operatorname{div} F(P_0)\,V(D).$$
This implies
$$\operatorname{div} F(P_0) \sim \frac{\Phi(D)}{V(D)}.$$
The ratio $\Phi(D)/V(D)$ is called the flux density of F over D. If the radius of the sphere approaches zero (i.e. if $V(D)\to 0$), then the approximate value becomes exact. Hence
$$\operatorname{div} F(P_0) = \lim_{V(D)\to 0}\frac{\Phi(D)}{V(D)} \quad \text{or} \quad \operatorname{div} F(P_0) = \lim_{V(D)\to 0}\frac{1}{V(D)}\iint_S F\cdot n\,dA.$$
This limit is called the flux density of F at the point $P_0$ and is sometimes taken as the definition of the divergence.
In an incompressible fluid:

∙ Points $P_0$ at which $\operatorname{div} F(P_0) > 0$ are called sources (because $\Phi(D) > 0$, outflow).
∙ Points $P_0$ at which $\operatorname{div} F(P_0) < 0$ are called sinks (because $\Phi(D) < 0$, inflow).
∙ Fluid enters the flow at a source and drains out at a sink.

If an incompressible fluid is without sources or sinks, we must have $\operatorname{div} F(P) = 0$ for all points P; in hydrodynamics this is called the continuity equation for incompressible fluids.
Up to this point we were looking at applications of $\operatorname{div} F$ in 3-space. We now turn to $\operatorname{curl} F$ in 3-space, which helps us generalize Green's Theorem to a 3-dimensional setting.

Figure 5.17: Oriented Curves.

Consider an oriented surface S with boundary C. If
$$F(x,y,z) = F_1(x,y,z)\mathbf{i} + F_2(x,y,z)\mathbf{j} + F_3(x,y,z)\mathbf{k},$$
then
$$\operatorname{curl} F = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_1 & F_2 & F_3 \end{vmatrix} = \left(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\right)\mathbf{i} + \left(\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}\right)\mathbf{j} + \left(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\right)\mathbf{k}.$$

Theorem 5.5.2 (Stokes' Theorem). Let S be a piecewise smooth orientable surface that is bounded by a simple, closed, piecewise smooth curve C with positive orientation. If the components of $F = (F_1, F_2, F_3)$ are continuous and have continuous first partial derivatives on some open set containing S, and if T is the unit tangent vector of C, then
$$\oint_C F\cdot T\,dS = \iint_S (\operatorname{curl} F)\cdot n\,dA.$$
Recall that $T = \frac{dr}{dS}$, which implies $dr = T\,dS$. Hence the above formula takes the form
$$\oint_C F\cdot dr = \iint_S (\operatorname{curl} F)\cdot n\,dA.$$

Example 5.5.3. Let S be the portion of the paraboloid
$$z = 4 - x^2 - y^2$$
for which $z \ge 0$, and let the vector field
$$F(x,y,z) = 2z\mathbf{i} + 3x\mathbf{j} + 5y\mathbf{k}$$
be defined on S. Verify Stokes' Theorem, if S is oriented upward.

Figure 5.18: The paraboloid $z = 4 - x^2 - y^2$ for $z \ge 0$.

Solution

Here C is the circle $x^2 + y^2 = 4$, oriented counterclockwise since S is oriented upward. Hence C can be parameterized as $r(t) = (2\cos t, 2\sin t)$, $z = 0$, for $0 \le t \le 2\pi$, and we have $dr = (-2\sin t, 2\cos t)\,dt$. Therefore
$$\oint_C F\cdot dr = \oint_C (2z\,dx + 3x\,dy + 5y\,dz) = \int_0^{2\pi} \big(2\cdot 0\,dx + 3(2\cos t)(2\cos t\,dt) + 5(2\sin t)\cdot 0\big) = \int_0^{2\pi} 12\cos^2 t\,dt = 12\left[\tfrac{1}{2}t + \tfrac{1}{4}\sin 2t\right]_0^{2\pi} = 12\pi.$$
On the other hand,
$$\operatorname{curl} F = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ 2z & 3x & 5y \end{vmatrix} = 5\mathbf{i} + 2\mathbf{j} + 3\mathbf{k} = (5, 2, 3).$$
Since $z = 4 - x^2 - y^2$, we take $g(x,y,z) = x^2 + y^2 + z - 4$ and $N = \nabla g = (2x, 2y, 1)$. Then
$$\iint_S (\operatorname{curl} F)\cdot n\,dA = \iint_R (5,2,3)\cdot(2x,2y,1)\,dA = \iint_R (10x + 4y + 3)\,dA = \int_0^{2\pi}\!\!\int_0^2 (10r^2\cos\theta + 4r^2\sin\theta + 3r)\,dr\,d\theta$$
$$= \int_0^{2\pi}\left(\frac{10}{3}\cdot 8\cos\theta + \frac{4}{3}\cdot 8\sin\theta + \frac{3}{2}\cdot 4\right) d\theta = \left[\frac{80}{3}\sin\theta - \frac{32}{3}\cos\theta + 6\theta\right]_0^{2\pi} = 12\pi,$$
which agrees with the line integral value.
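As a numerical cross-check of the verification above (not part of the original notes), the line integral $\oint_C F\cdot dr$ can be approximated directly from the parametrization $r(t) = (2\cos t, 2\sin t, 0)$:

import numpy as np

n = 200000
t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
x, y, z = 2*np.cos(t), 2*np.sin(t), 0*t
dxdt, dydt, dzdt = -2*np.sin(t), 2*np.cos(t), 0*t
integrand = 2*z*dxdt + 3*x*dydt + 5*y*dzdt     # F . dr/dt with F = (2z, 3x, 5y)
print(integrand.sum() * (2*np.pi/n), 12*np.pi) # both approximately 37.699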

Remark 5.5.3. 1. If 𝑆1 and 𝑆2 have the same boundary C which is oriented positively, then
� for any vector function F that satisfy the hypotheses in Stoke’s Theorem, we have:
� �
(𝑐𝑢𝑟𝑙𝐹 ).𝑛𝑑𝐴 = (𝑐𝑢𝑟𝑙𝐹 ).𝑛𝑑𝐴.
𝑆1 𝑆2
�� ��

Figure 5.18: The paraboloid 𝑧 = 4 − 𝑥2 − 𝑦 2 for 𝑧 ≥ 0.

2. If 𝐹 = (𝐹1 , 𝐹2 ) is a vector function that is continuously differentiable in a domain in the


Solution 𝑥𝑦−plane containing a simply connected domain S whose boundary C is a piecewise smooth
simple closed curve, then
Here C is the circle 𝑥2 + 𝑦 2 = 4 and is oriented counterclockwise, since S is oriented upward.
∂𝐹2 ∂𝐹1
(𝑐𝑢𝑟𝑙𝐹 ).𝑛 = (𝑐𝑢𝑟𝐹 ).𝑘 = .
∂𝑥 ∂𝑦
Hence C can be parameterized as 𝑟(𝑡) = (2 cos 𝑡, 2 sin 𝑡), 𝑧 = 0, for (0 ≤ 𝑡 ≤ 2𝜋) and we have −
Figure 5.19: Surfaces with the same boundary.

Hence from Stokes' Theorem we have
$$\iint_S \left(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\right) dA = \oint_C (F_1\,dx + F_2\,dy),$$
which is the result of Green's Theorem.

5.6 Exercises

Part III

Complex Analysis
Chapter 6

COMPLEX ANALYTIC FUNCTIONS

6.1 Complex Numbers

In this section we revise the set of complex numbers, on which we are going to work in the coming chapters.
A complex number $z$ is a symbol of the form $x + yi$ or $x + iy$, where $x$ and $y$ are real numbers and $i^2 = -1$. Let $a + bi$ and $c + di$ be two complex numbers. The four basic arithmetic operations are defined as follows.

1. Equality: $a + bi = c + di$ if and only if $a = c$ and $b = d$.
2. Addition: $(a + bi) + (c + di) = (a + c) + (b + d)i$.
3. Multiplication: $(a + bi)(c + di) = (ac - bd) + (ad + bc)i$.
4. Division: Let $z = a + bi$ and $w = c + di$ be complex numbers with $z \ne 0$. Then
$$\frac{1}{z} = \frac{a}{a^2 + b^2} - \frac{b}{a^2 + b^2}i,$$
and hence $\frac{w}{z} = w\cdot\frac{1}{z}$.

Example 6.1.1. Suppose we want to write
$$\frac{3+i}{2-4i}$$
in the form $a + bi$. We have
$$\frac{3+i}{2-4i} = (3+i)\cdot\frac{1}{2-4i} \quad \text{and} \quad \frac{1}{2-4i} = \frac{2}{2^2+(-4)^2} + \frac{4}{2^2+(-4)^2}i = \frac{2}{20} + \frac{4}{20}i = \frac{1}{10} + \frac{1}{5}i.$$
Therefore
$$\frac{3+i}{2-4i} = \left(\frac{1}{10} + \frac{1}{5}i\right)(3+i) = \frac{1}{10} + \frac{7}{10}i.$$
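Python's built-in complex type gives a quick way to confirm arithmetic like the division above; the lines below are an illustrative check, not part of the original notes.

z = (3 + 1j) / (2 - 4j)
print(z, z.real, z.imag)      # (0.1+0.7j)  0.1  0.7, i.e. 1/10 + 7/10 i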
For a complex number $z = a + bi$, the number $a$ is called the real part of $z$, denoted by $Re(z)$, and $b$ is called the imaginary part of $z$, denoted by $Im(z)$.

Remark 6.1.1. Some basic points about complex numbers.

1. The real and imaginary parts of any complex number are real numbers.
2. Any real number $a$ can be considered as a complex number $a + 0i$. Therefore, the set of complex numbers is an extension of the set of real numbers.
3. The set of complex numbers is denoted by $\mathbb{C}$.
4. If $x$, $y$ and $z$ are complex numbers, then:
   4.1. $x + y = y + x$ (Addition is commutative.)
   4.2. $xy = yx$ (Multiplication is commutative.)
   4.3. $x + (y + z) = (x + y) + z$ (Associative law for addition.)
   4.4. $x(yz) = (xy)z$ (Associative law for multiplication.)
   4.5. $x(y + z) = xy + xz$ (Distributive law.)
   4.6. $x + 0 = 0 + x = x$ (0 is an identity element for addition.)
   4.7. $x\cdot 1 = 1\cdot x = x$ (1 is an identity element for multiplication.)

Any complex number $z = a + bi$ can be represented by the point $(a,b)$ in the cartesian coordinate plane. In this case the coordinate plane is called the complex plane and the horizontal and vertical axes are called the real axis and the imaginary axis respectively.

Remark 6.1.2. The representation of a complex number $z = a + bi$ by the point $(a,b)$ gives a one-to-one correspondence between the set of complex numbers $\mathbb{C}$ and the set of ordered pairs of real numbers $\mathbb{R}\times\mathbb{R}$.
Figure 6.1: The complex number $a+bi$ as a point in the cartesian coordinate plane.

Definition 6.1.3. Let $z = a + bi$ be a complex number.

1. The magnitude (modulus) of $z$ is the real number $\operatorname{mod}(z) = |z| = \sqrt{a^2 + b^2}$.
2. The point $(a,b)$ has polar coordinates $(r,\theta)$, where $r = |z|$ and $\theta = \arctan\left(\frac{b}{a}\right)$. Then $\theta$ is called an argument of $z$, denoted by $\arg(z)$.
3. If $(r,\theta)$ is a polar coordinate of $(a,b)$, then
$$z = a + bi = r\cos\theta + ir\sin\theta = r(\cos\theta + i\sin\theta),$$
and using Euler's formula we can write $r(\cos\theta + i\sin\theta) = re^{i\theta}$. The expression $z = re^{i\theta}$ is called the polar form of $z$.

Figure 6.2: Polar coordinates of a complex number $z = a + bi$.

Example 6.1.2. Let $z = -1 + i$. Then $|z| = r = \sqrt{(-1)^2 + 1^2} = \sqrt{2}$ and $\arg(z) = \frac{3\pi}{4}$ (since $z$ lies in the second quadrant). Therefore the polar form of $z$ is
$$z = \sqrt{2}\,e^{i\frac{3\pi}{4}},$$
and in the polar coordinates $(r,\theta)$,
$$z = \sqrt{2}\left(\cos\frac{3\pi}{4} + i\sin\frac{3\pi}{4}\right).$$
Definition 6.1.4. Let $z = a + bi$ be a complex number. The conjugate of $z$ is the complex number $\bar{z} = a - bi$.
On the complex plane, the conjugate of a complex number $z = a + bi$ is the reflection of $z$ in the real axis.

Figure 6.3: A complex number and its conjugate in the complex plane.

Remark 6.1.5. Let $z$ and $w$ be complex numbers. Then

1. $\bar{\bar{z}} = z$, and $\bar{z} = z$ if and only if $z$ is a real number.
2. $\overline{z \pm w} = \bar{z} \pm \bar{w}$, $\overline{zw} = \bar{z}\bar{w}$ and $\overline{z/w} = \bar{z}/\bar{w}$ if $w \ne 0$.
3. $|\bar{z}| = |z|$ and $|z|^2 = z\bar{z}$.
4. $Re(z) = \frac{1}{2}(z + \bar{z})$ and $Im(z) = \frac{1}{2i}(z - \bar{z})$.

Disks, Open Sets and Closed Sets in the Complex Plane

Let $z = x + iy$ and $z_0 = x_0 + iy_0$ be complex numbers and $r$ be a positive real number.
1. The equation $|z - z_0| = r$ holds if and only if
$$(x - x_0)^2 + (y - y_0)^2 = r^2,$$
which is a circle with center $(x_0, y_0)$ and radius $r$.
2. The set $\{z \in \mathbb{C} : |z - z_0| < r\}$ is an open disk of radius $r$ about $z_0$; it contains all points enclosed by the circle but does not contain the boundary.
3. The set $\{z \in \mathbb{C} : |z - z_0| \le r\}$ is a closed disk about $z_0$; it contains all points enclosed by the circle together with the boundary points on the circle.
4. Let S be a set of complex numbers and $w$ be a complex number.
   4.1 $w$ is an interior point of S if there is some open disk about $w$ which is contained in S.
   4.2 $w$ is a boundary point of S if every open disk about $w$ contains at least one point of S and at least one point not in S.
   4.3 The set S is an open set if every point of S is an interior point of S.
   4.4 The set S is a closed set if S contains all its boundary points.

Figure 6.4: Interior and boundary points of a set in the complex plane.

6.2 Complex Functions, Differential Calculus and Analyticity

In the subsequent sections we are going to consider functions from a subset of the set of complex numbers to the set of complex numbers.

Definition 6.2.1. A function w of a complex variable z is a rule that assigns a unique value w(z) to each point z in some set D in the complex plane.

Figure 6.5: A complex function.

If $w$ is a complex function and $z = x + iy$, then we can always write
$$w(z) = u(x,y) + iv(x,y),$$
where $u$ and $v$ are real-valued functions of $x$ and $y$, namely $u(x,y) = Re(w(z))$ and $v(x,y) = Im(w(z))$. That is, the real and imaginary parts of $w(z)$ are functions of $x$ and $y$.

Example 6.2.1. Let $w$ be the complex function defined by $w(z) = z^2 = (x+iy)^2 = (x^2 - y^2) + i2xy$. Hence $Re(w(z)) = u(x,y) = x^2 - y^2$ and $Im(w(z)) = v(x,y) = 2xy$.

Example 6.2.2. Let $f(z) = \frac{1}{z}$, for $z \ne 0$. Then
$$f(z) = \frac{1}{x + iy} = \frac{x}{x^2+y^2} - i\,\frac{y}{x^2+y^2} = u(x,y) + iv(x,y),$$
so $Re(f(z)) = u(x,y) = \frac{x}{x^2+y^2}$ and $Im(f(z)) = v(x,y) = \frac{-y}{x^2+y^2}$.

Let $f : \mathbb{C} \to \mathbb{C}$ be a complex valued function. Then clearly $f$ maps $\mathbb{R}^2$ into $\mathbb{R}^2$, and hence all the concepts of limits and derivatives defined for vector functions of two variables also apply here, with the notation modified in terms of complex numbers.
6.2.1 Limit

Let $z_0$ be an interior point of the domain of definition of a function $f : \mathbb{C} \to \mathbb{C}$. We say that the limit of $f(z)$ as $z$ approaches $z_0$ is L, and write
$$\lim_{z\to z_0} f(z) = L,$$
if to each $\epsilon > 0$ (no matter how small) there corresponds a $\delta > 0$ such that $|f(z) - L| < \epsilon$ for all $z$ satisfying $0 < |z - z_0| < \delta$.

Figure 6.6: Limit of a complex function at a given point.

In the above definition, $z = x + iy$ and $f(z) = u(x,y) + iv(x,y)$. Moreover $|z - z_0|$ means the modulus of the complex number $z - z_0$, and $|z - z_0| < \delta$ describes an open disk centered at $z_0$.
Recall that, in the calculus of real variables, the limit of a sum (product, quotient) of two functions is the sum (product, quotient) of the limits whenever the limits are defined and the limit of the denominator is nonzero. The same is true for complex functions, which is summarized below.

Remark 6.2.2. Let $f$ and $g$ be complex functions and $z_0$ and $c$ be complex numbers such that $\lim_{z\to z_0} f(z) = L$ and $\lim_{z\to z_0} g(z) = M$.

1. $\lim_{z\to z_0} (f \pm g)(z) = L \pm M$.
2. $\lim_{z\to z_0} (f\cdot g)(z) = LM$.
3. If $M \ne 0$, then $\lim_{z\to z_0} \left(\frac{f}{g}\right)(z) = \frac{L}{M}$.
4. $\lim_{z\to z_0} (cf)(z) = cL$.

Definition 6.2.3. For a complex function $f$, if
$$\lim_{z\to z_0} f(z) = f(z_0),$$
then we say that $f$ is continuous at $z_0$; a function is continuous in a set if it is continuous at each point of the set.

6.2.2 Derivatives

Let $f$ be a complex function. The derivative of $f$ at the point $z_0$, denoted by $f'(z_0)$, is defined as
$$f'(z_0) = \lim_{z\to z_0} \frac{f(z) - f(z_0)}{z - z_0} = \lim_{\triangle z\to 0} \frac{f(z_0 + \triangle z) - f(z_0)}{\triangle z},$$
if the limit exists and is a complex number.
Here as well, the limit value must be unique and independent of the way $z$ approaches $z_0$.

Example 6.2.3. Find the derivative of each of the following functions if it exists.

1. If $f(z) = z^2$, then
$$f'(z) = \lim_{\triangle z\to 0} \frac{f(z + \triangle z) - f(z)}{\triangle z} = \lim_{\triangle z\to 0} \frac{(z + \triangle z)^2 - z^2}{\triangle z} = \lim_{\triangle z\to 0} \frac{z^2 + 2z\triangle z + \triangle z^2 - z^2}{\triangle z} = \lim_{\triangle z\to 0} (2z + \triangle z) = 2z.$$
Therefore $(z^2)' = 2z$, as in the real case.

2. $f(z) = \bar{z}$ (the complex conjugate function) is not differentiable anywhere in $\mathbb{C}$.
To see this, let $z = x + iy$ be any complex number. Then
$$\frac{f(z + \triangle z) - f(z)}{\triangle z} = \frac{\overline{(z + \triangle z)} - \bar{z}}{\triangle z} = \frac{\overline{\triangle z}}{\triangle z} = \frac{\triangle x - i\triangle y}{\triangle x + i\triangle y}.$$
Now if $\triangle z$ approaches 0 in the horizontal direction, then $\triangle y = 0$. Hence
$$\lim_{\triangle z\to 0} \frac{f(z + \triangle z) - f(z)}{\triangle z} = \lim_{\triangle x\to 0} \frac{\triangle x}{\triangle x} = 1.$$
On the other hand, if we approach 0 in the vertical direction, we set $\triangle x = 0$. In this case
$$\lim_{\triangle z\to 0} \frac{f(z + \triangle z) - f(z)}{\triangle z} = \lim_{\triangle y\to 0} \frac{-i\triangle y}{i\triangle y} = -1.$$
Hence
$$\lim_{\triangle z\to 0} \frac{f(z + \triangle z) - f(z)}{\triangle z}$$
does not exist for any $z \in \mathbb{C}$. That is, $f(z) = \bar{z}$ is not differentiable anywhere in the complex plane.

Remark 6.2.4. By induction it can be shown that $(z^n)' = nz^{n-1}$, for all $n \in \mathbb{N}$.

As in the case of real functions, we use the definition of the derivative for complex functions very rarely, since we have the following rules of differentiation.

Rules of Differentiation for Complex Functions

Let $f$ and $g$ be complex functions and $c$ be a complex number.

1. Sum (Difference) Rule: $(f \pm g)'(z) = f'(z) \pm g'(z)$.
2. Constant Multiple Rule: $(cf)'(z) = cf'(z)$.
3. Product Rule: $(f\cdot g)'(z) = f'(z)g(z) + f(z)g'(z)$.
4. Quotient Rule: $\left(\frac{f}{g}\right)'(z) = \frac{f'(z)g(z) - f(z)g'(z)}{(g(z))^2}$.
5. The complex version of the Chain Rule: $(f\circ g)'(z) = f'(g(z))g'(z)$.

Example 6.2.4. Let $f(z) = \frac{1}{z}$. Then
$$f'(z) = \frac{(1)'z - 1\cdot(z)'}{z^2} = \frac{-1}{z^2}.$$
Therefore $f$ is differentiable everywhere in $\mathbb{C}$ except at $z = 0$.

Suppose a complex function $f$ is differentiable at $z_0$. Consider the equation
$$f(z) - f(z_0) = \frac{f(z) - f(z_0)}{z - z_0}(z - z_0).$$
Then
$$\lim_{z\to z_0} \big(f(z) - f(z_0)\big) = \lim_{z\to z_0} \frac{f(z) - f(z_0)}{z - z_0}\cdot\lim_{z\to z_0}(z - z_0) = f'(z_0)\cdot 0 = 0.$$
Therefore $\lim_{z\to z_0} f(z) = f(z_0)$, and hence we have the following theorem.

Theorem 6.2.5. If $f$ is a differentiable complex function at $z_0$, then $f$ is continuous at $z_0$.

6.3 The Cauchy - Riemann Equation

Since differentiability of a complex function (together with analyticity) plays a crucial role in the study of complex variables, we need to answer the question: when is a complex function differentiable? The answer is given in this section.

Definition 6.3.1. Let $f$ be a complex function. Then

1. $f$ is said to be analytic in a domain D if $f(z)$ is defined and differentiable at all points of D.
2. $f$ is said to be analytic at a point $z_0 \in D$ if it is analytic in some neighborhood of $z_0$.
3. $f$ is said to be (simply) an analytic function if it is analytic in some domain (an open connected subset of $\mathbb{C}$).

6.3.1 Test for Analyticity

Recall that, if $f$ is a complex function and $z = x + iy$, then we can always write
$$f(z) = u(x,y) + iv(x,y),$$
where $u$ and $v$ are real-valued functions of $x$ and $y$.
Consider a complex function $f(z) = u(x,y) + iv(x,y)$ with $z = x + iy$. If $f$ is analytic in some domain D (and hence differentiable in D), then the partial derivatives exist. For $z_0 = x_0 + iy_0$ and $z = x_0 + iy$ (approach along the vertical direction) we have
$$\lim_{z\to z_0} \frac{f(z) - f(z_0)}{z - z_0} = \lim_{y\to y_0} \frac{\big(u(x_0,y) + iv(x_0,y)\big) - \big(u(x_0,y_0) + iv(x_0,y_0)\big)}{i(y - y_0)} = \frac{1}{i}\lim_{y\to y_0} \frac{u(x_0,y) - u(x_0,y_0)}{y - y_0} + \lim_{y\to y_0} \frac{v(x_0,y) - v(x_0,y_0)}{y - y_0}$$
$$= \frac{1}{i}\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y} = v_y - iu_y = \frac{\partial v}{\partial y} - i\frac{\partial u}{\partial y}.$$
Similarly, if we set $\triangle y = 0$ and let $\triangle x \to 0$, that is, if $z = x + iy_0$ and $z_0 = x_0 + iy_0$, then we have
$$\lim_{z\to z_0} \frac{f(z) - f(z_0)}{z - z_0} = \lim_{x\to x_0} \frac{u(x,y_0) - u(x_0,y_0)}{x - x_0} + i\lim_{x\to x_0} \frac{v(x,y_0) - v(x_0,y_0)}{x - x_0} = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x}.$$
Since $f$ is differentiable at $z_0$, the two limits must be equal. That is, we must have
$$\frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} - i\frac{\partial u}{\partial y},$$
and this implies
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \quad \text{and} \quad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \qquad (6.1)$$
The equations (6.1) are called the Cauchy-Riemann equations (and are only the necessary condition for analyticity of $f$ at $z_0$). Hence we have proved the first part of the following theorem.

Theorem 6.3.2 (Necessary and Sufficient Conditions for Analyticity). Let $f(z) = u(x,y) + iv(x,y)$ be a function that is defined throughout some neighborhood of a point $z_0 = x_0 + iy_0$.

1. (Necessary Condition) If $f$ is differentiable at $z_0$, then the Cauchy-Riemann equations are satisfied at $z_0$.
2. (Sufficient Condition) If the Cauchy-Riemann equations are satisfied at $z_0$ and if $u$ and $v$ are continuously differentiable (as real valued functions of two variables) in some neighborhood of $z_0$, then $f$ is analytic at $z_0$.

If $f$ is differentiable at $z = x + iy$, then due to the Cauchy-Riemann equations, $f'(z)$ can be given by any one of the following four equivalent expressions:
$$f'(z) = u_x(x,y) + iv_x(x,y) = v_y(x,y) - iu_y(x,y) = u_x(x,y) - iu_y(x,y) = v_y(x,y) + iv_x(x,y).$$

Example 6.3.1. Consider $f(z) = |z|^2 = z\bar{z}$. Then $f(z) = (x^2 + y^2) + 0i$, and hence $u(x,y) = x^2 + y^2$ and $v = 0$.
Since $u_x = 2x$, $u_y = 2y$, $v_x = 0$ and $v_y = 0$, all of $u, v, u_x, u_y, v_x, v_y$ are continuous in $\mathbb{R}^2$, and hence $u$ and $v$ are continuously differentiable everywhere in $\mathbb{R}^2$.
But $u_x = v_y$ only if $x = 0$, that is, on the $y$-axis, and $v_x = -u_y$ only if $y = 0$, that is, on the $x$-axis. Thus the Cauchy-Riemann equations hold only at the origin, and hence $f(z) = |z|^2$ is differentiable only at $z = 0$ and is analytic nowhere.

Example 6.3.2. Let $f(z) = z^2 - 8z + 3$. If $z = x + iy$, then
$$f(z) = f(x + iy) = (x^2 - y^2) + 2xyi - 8x - 8yi + 3 = (x^2 - 8x - y^2 + 3) + (2xy - 8y)i,$$
and hence $u(x,y) = x^2 - 8x - y^2 + 3$ and $v(x,y) = 2xy - 8y$. Then $u_x = 2x - 8$, $u_y = -2y$, $v_x = 2y$ and $v_y = 2x - 8$, and we also have
$$u_x = 2x - 8 = v_y \quad \text{and} \quad v_x = 2y = -u_y$$
for all $(x,y) \in \mathbb{R}^2$; that is, the Cauchy-Riemann equations are satisfied everywhere in $\mathbb{R}^2$. Moreover $u_x, u_y, v_x, v_y$ are continuous in $\mathbb{R}^2$, so $u$ and $v$ are continuously differentiable everywhere in $\mathbb{R}^2$.
Therefore $f$ is differentiable for all $z$ and $f'(x + iy) = u_x(x,y) + iv_x(x,y) = (2x - 8) + 2yi$.
Remark 6.3.3. Let $f = u + iv$ be a differentiable complex function on an open disk D such that $f'(z) = 0$ on D. Suppose that $u$ and $v$ are continuous with continuous first and second derivatives and satisfy the Cauchy-Riemann equations on D. Then
$$0 = f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} - i\frac{\partial u}{\partial y}.$$
This implies $u_x = v_x = 0$ and $u_y = v_y = 0$, and hence $u$ and $v$ are constant functions. Therefore $f$ is a constant function on D.

If a function $f(z) = u(x,y) + iv(x,y)$ is analytic in some domain D, then clearly
$$u_x = v_y \quad \text{and} \quad u_y = -v_x$$
on D, and the partial derivatives of the component functions of all orders exist and are continuous in D. Taking derivatives with respect to $x$ and $y$ respectively, we have $u_{xx} = v_{yx}$ and $u_{xy} = v_{yy}$, and also $u_{yy} = -v_{xy}$ and $u_{yx} = -v_{xx}$.
Hence $v_{yx} = v_{xy}$ implies $u_{xx} + u_{yy} = 0$ in D, and $u_{xy} = u_{yx}$ implies $v_{yy} + v_{xx} = 0$ in D.
Definition 6.3.4. A real-valued function $u(x,y)$ of two variables which satisfies Laplace's equation,
$$\nabla^2 u = u_{xx} + u_{yy} = 0,$$
and whose first and second order partial derivatives are continuous (such functions are called $C^2$ functions), is called a harmonic function.

Thus we have proved the following theorem.

Theorem 6.3.5 (Harmonic Functions). If $f(z) = u(x,y) + iv(x,y)$ is analytic in a domain D, then $u$ and $v$ are harmonic in D. That is, they are $C^2$ functions and satisfy Laplace's equation:
$$\nabla^2 u = u_{xx} + u_{yy} = 0, \qquad \nabla^2 v = v_{xx} + v_{yy} = 0.$$
Since $f$ is analytic, $u$ and $v$ are related by the Cauchy-Riemann equations, and to indicate this relationship such functions are called conjugate harmonic functions.

Example 6.3.3. Show that $u = x^2 - y^2 - y$ is harmonic in $\mathbb{C}$ and find a conjugate harmonic function $v$ of $u$.

Solution

Clearly $\nabla^2 u = u_{xx} + u_{yy} = 2 - 2 = 0$, and hence $u$ is harmonic. To find the conjugate harmonic function $v$, first we have $u_x = 2x$ and $u_y = -2y - 1$, and by the Cauchy-Riemann equations $v$ must satisfy $v_y = u_x = 2x$ and $v_x = -u_y = 2y + 1$.
Integrating the first of these with respect to $y$ gives $v = 2xy + h(x)$, where $h(x)$ is a function of $x$ only; differentiating $v$ with respect to $x$ gives $v_x = 2y + h'(x) = 2y + 1$. This implies $h'(x) = 1$ and hence $h(x) = x + c$ for some constant $c$.
Therefore $v(x,y) = 2xy + x + c$, and hence
$$f(z) = u(x,y) + iv(x,y) = (x^2 - y^2 - y) + i(2xy + x + c) = (x^2 - y^2 + i2xy) + i(x + iy) + ic = z^2 + iz + c_1,$$
where $c_1 = ic$.
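A two-line sympy check (not part of the notes) confirms both that $u$ is harmonic and that the $v$ found above satisfies the Cauchy-Riemann equations with it:

import sympy as sp

x, y = sp.symbols('x y', real=True)
u, v = x**2 - y**2 - y, 2*x*y + x
print(sp.diff(u, x, 2) + sp.diff(u, y, 2))   # 0  (u is harmonic)
print(sp.diff(u, x) - sp.diff(v, y),         # 0  (u_x = v_y)
      sp.diff(u, y) + sp.diff(v, x))         # 0  (u_y = -v_x)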
6.4 Elementary Functions

6.4.1 Exponential Functions

For a complex number $z = x + iy$, the complex exponential function $e^z$ is defined by
$$e^z = e^{x+iy} = e^x\cdot e^{iy} = e^x(\cos y + i\sin y),$$
where $e^{iy} = \cos y + i\sin y$ is Euler's formula.
We also have $|e^{iy}| = |\cos y + i\sin y| = \sqrt{\cos^2 y + \sin^2 y} = 1$, and hence $|e^z| = |e^x e^{iy}| = e^x|e^{iy}| = e^x$, for all $z = x + iy$.

Example 6.4.1. $|e^{-2+4i}| = e^{-2}$ and $|e^{3-5i}| = e^3$.

Example 6.4.2. If $e^z = 2i$, then $e^x\cos y + ie^x\sin y = 0 + 2i$. This implies $e^x\cos y = 0$ and $e^x\sin y = 2$. Squaring these equations and adding the results gives
$$e^{2x}(\cos^2 y + \sin^2 y) = 4,$$
which implies $e^{2x} = 4$ and then $x = \ln 2$; moreover
$$\frac{e^x\cos y}{e^x\sin y} = \cot y = 0,$$
which gives $y = 2n\pi + \frac{\pi}{2}$, $n \in \mathbb{Z}$. Therefore, the solutions of $e^z = 2i$ are
$$z = \ln 2 + i\left(2n\pi + \frac{\pi}{2}\right), \quad n \in \mathbb{Z}.$$

Remark 6.4.1. For a complex number $z = x + iy$, we have $e^z \ne 0$ for all $z \in \mathbb{C}$, since $e^x \ne 0$ for all (finite) $x$ and $\cos y$ and $\sin y$ do not vanish simultaneously for any value of $y$.

For $z = x + iy$, let $f(z) = e^z$. Then $f(z) = e^x(\cos y + i\sin y)$, and the functions $u(x,y) = e^x\cos y$ and $v(x,y) = e^x\sin y$ are continuous with continuous first partial derivatives.
We also have $u_x = e^x\cos y = v_y$ and $u_y = -e^x\sin y = -v_x$, so $u$ and $v$ satisfy the Cauchy-Riemann equations. Therefore, the complex exponential function $f(z) = e^z$ is differentiable for all $z$, and
$$(e^z)' = u_x + iv_x = e^x\cos y + ie^x\sin y = e^z.$$
6.4.2 Trigonometric and Hyperbolic Functions

From Euler's formula we have $e^{i\theta} = \cos\theta + i\sin\theta$ and $e^{-i\theta} = \cos\theta - i\sin\theta$. Adding these two equations gives $e^{i\theta} + e^{-i\theta} = 2\cos\theta$, which implies
$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2},$$
and subtracting the second from the first gives $e^{i\theta} - e^{-i\theta} = 2i\sin\theta$, which implies
$$\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}.$$
Using these two formulae we can now define the complex trigonometric functions as follows. For any complex number $z$,
$$\cos z = \frac{e^{iz} + e^{-iz}}{2} \quad \text{and} \quad \sin z = \frac{e^{iz} - e^{-iz}}{2i}. \qquad (6.2)$$
All other trigonometric functions can be derived from these two basic definitions. For example
$$\tan z = -i\,\frac{e^{iz} - e^{-iz}}{e^{iz} + e^{-iz}} \quad \text{and} \quad \sec z = \frac{2}{e^{iz} + e^{-iz}}.$$
Recall that
$$\sinh x = \frac{e^x - e^{-x}}{2} \quad \text{and} \quad \cosh x = \frac{e^x + e^{-x}}{2}$$
for any $x \in \mathbb{R}$. Similarly, for complex numbers $z$ we define
$$\sinh z = \frac{e^z - e^{-z}}{2} \quad \text{and} \quad \cosh z = \frac{e^z + e^{-z}}{2}. \qquad (6.3)$$
From (6.2) and (6.3) it follows that
$$\cos(iz) = \frac{e^{i(iz)} + e^{-i(iz)}}{2} = \frac{e^{-z} + e^{z}}{2} = \cosh z$$
and
$$\sin(iz) = \frac{e^{i(iz)} - e^{-i(iz)}}{2i} = \frac{e^{-z} - e^{z}}{2i} = i\,\frac{e^{z} - e^{-z}}{2} = i\sinh z.$$
Therefore, we have proved the relations
$$\cos(iz) = \cosh z \quad \text{and} \quad \sin(iz) = i\sinh z.$$
Similarly we can show that
$$\cosh(iz) = \cos z \quad \text{and} \quad \sinh(iz) = i\sin z.$$
Another basic relation of the complex trigonometric functions is derived as follows:
$$\sin^2 z + \cos^2 z = \left(\frac{e^{iz} - e^{-iz}}{2i}\right)^2 + \left(\frac{e^{iz} + e^{-iz}}{2}\right)^2 = \frac{-(e^{2iz} - 2 + e^{-2iz}) + (e^{2iz} + 2 + e^{-2iz})}{4} = \frac{2 + 2}{4} = 1.$$
Therefore $\sin^2 z + \cos^2 z = 1$.

For a complex number $z$ show that

i) $\sin(-z) = -\sin z$
ii) $\cos(-z) = \cos z$
iii) $\cos(z_1 + z_2) = \cos z_1\cos z_2 - \sin z_1\sin z_2$
iv) $\sin(z_1 + z_2) = \sin z_1\cos z_2 + \sin z_2\cos z_1$
v) $\cos(z + 2\pi) = \cos z$ and $\sin(z + 2\pi) = \sin z$
vi) $\cosh^2 z - \sinh^2 z = 1$.

Example 6.4.3. Let $z = x + iy$ and $f(z) = \sin z$. Then $f(z) = \sin(x + iy) = \sin x\cosh y + i\cos x\sinh y$. This implies $u(x,y) = \sin x\cosh y$ and $v(x,y) = \cos x\sinh y$, and we also have $u_x = \cos x\cosh y$, $u_y = \sin x\sinh y$, $v_x = -\sin x\sinh y$ and $v_y = \cos x\cosh y$.
Since $u, v, u_x, u_y, v_x, v_y$ are all continuous everywhere in $\mathbb{R}^2$, $u_x = v_y$ and $u_y = -v_x$, the function $f$ is differentiable and $f'(z) = \cos x\cosh y - i\sin x\sinh y$. But
$$\cos x\cosh y - i\sin x\sinh y = \cos x\cosh y - \sin x(i\sinh y) = \cos x\cos(iy) - \sin x\sin(iy) = \cos(x + iy) = \cos z.$$
Therefore $(\sin z)' = \cos z$.

Example 6.4.4. Let $f(z) = e^{\sin z}$. Then by the chain rule we get $f'(z) = e^{\sin z}\cos z$.
6.4.3 Polar Form and Multi-Valuedness

The polar form of a complex number $z$ is
$$z = re^{i\theta},$$
where $r = |z|$ and $\theta = \arctan\frac{y}{x} = \arg z$. However, the angle $\theta = \arg z$ (for $z \ne 0$) can be determined only to within an arbitrary integer multiple of $2\pi$. The angle $\theta$ with $-\pi < \theta \le \pi$ is called the principal argument of $z$, and
$$\theta = \arg z + 2k\pi, \quad k \in \mathbb{Z}.$$
The expression $z^k$ is single valued only if the exponent $k$ is an integer. If $k$ is a rational number $\frac{m}{n}$ (in its reduced form), then the map
$$f(z) = z^k$$
is $n$-valued (since there are exactly $n$ $n$th roots of a complex number $z$).

Example 6.4.5. Let $z = 1 + i$. Then $r = \sqrt{1+1} = \sqrt{2}$ and $\arg z = \tan^{-1}\left(\frac{1}{1}\right) = \frac{\pi}{4}$. Therefore
$$z^{1/3} = (re^{i\theta})^{1/3} = 2^{1/6}e^{i\pi/12}\,F_k,$$
where $F_k = e^{ik\left(\frac{2\pi}{3}\right)}$ for $k = 0, 1, 2$; that is, $F_0 = e^0 = 1$, $F_1 = e^{i2\pi/3}$ and $F_2 = e^{i4\pi/3}$, which correspond to three points on the unit circle. Therefore,
$$z^{1/3} = 2^{1/6}e^{i\pi/12}, \quad 2^{1/6}e^{i\frac{9\pi}{12}}, \quad 2^{1/6}e^{i\frac{17\pi}{12}}.$$
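The three roots can be confirmed numerically with cmath (an illustrative sketch, not from the notes): each candidate root is cubed and compared with $1 + i$.

import cmath

z = 1 + 1j
r, theta = cmath.polar(z)
roots = [r**(1/3) * cmath.exp(1j*(theta + 2*cmath.pi*k)/3) for k in range(3)]
for w in roots:
    print(w, w**3)            # each w**3 is approximately (1+1j)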
6.4.4 The Logarithmic Functions

Recall that in the calculus of real functions, the natural logarithm is the inverse of the exponential function:
$$y = \ln x \quad \text{if and only if} \quad x = e^y,$$
for $x > 0$. Hence the real logarithm is a solution of the equation $x = e^y$.
Here we want to develop the natural logarithm of a complex number; that is, for a complex number $z \ne 0$ we want to find a solution $w$ of $z = e^w$.
Let $z = re^{i\theta}$ in its polar form and $w = a + bi$. Then $z = re^{i\theta} = e^{a+bi} = e^a e^{ib}$, and we also have $r = |z| = e^a$, which implies $a = \ln r$.
From the equation $re^{i\theta} = e^a e^{ib}$ we get $e^{i(b-\theta)} = 1 = e^{2k\pi i}$ for $k \in \mathbb{Z}$, which implies $i(b - \theta) = 2k\pi i$, and hence $b = \theta + 2k\pi$ for $k \in \mathbb{Z}$. Therefore, for $z \ne 0$ there are infinitely many numbers
$$w = \ln r + i(\theta + 2k\pi), \quad k \in \mathbb{Z},$$
such that $z = e^w$.
Now we are in a position to define the logarithm of a nonzero complex number $z$ as
$$\log(z) = \ln|z| + i(\arg(z) + 2k\pi), \quad k \in \mathbb{Z},$$
which is infinitely many valued.

Example 6.4.6. Compute $\log(1+i)$.
Let $z = 1 + i$. Then $r = \sqrt{1+1} = \sqrt{2}$ and $\arg(z) = \arctan\left(\frac{1}{1}\right) = \frac{\pi}{4}$. Therefore
$$\log(1+i) = \ln\sqrt{2} + i\left(\frac{\pi}{4} + 2k\pi\right), \quad k \in \mathbb{Z}.$$

Let $f$ be a complex function which is differentiable at $z$. If $f$ is expressed in polar form as $f(z) = u(r,\theta) + iv(r,\theta)$, the Cauchy-Riemann equations can be derived in polar coordinates (using the definition along constant $\theta$ and along constant $r$, or using the change of variables $x = r\cos\theta$ and $y = r\sin\theta$).
Using the change of variables $x = r\cos\theta$, $y = r\sin\theta$ and the chain rule, along constant $\theta$ we have
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial r}\cdot\frac{\partial r}{\partial x} = \frac{1}{\cos\theta}\frac{\partial u}{\partial r}, \qquad (6.4)$$
while along constant $r$,
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial \theta}\cdot\frac{\partial \theta}{\partial x} = -\frac{1}{r\sin\theta}\frac{\partial u}{\partial \theta}. \qquad (6.5)$$
Similarly, along constant $\theta$,
$$\frac{\partial v}{\partial y} = \frac{\partial v}{\partial r}\cdot\frac{\partial r}{\partial y} = \frac{1}{\sin\theta}\frac{\partial v}{\partial r}, \qquad (6.6)$$
and along constant $r$,
$$\frac{\partial v}{\partial y} = \frac{\partial v}{\partial \theta}\cdot\frac{\partial \theta}{\partial y} = \frac{1}{r\cos\theta}\frac{\partial v}{\partial \theta}. \qquad (6.7)$$
From the relation $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and from (6.4) and (6.7) we have
$$\frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial \theta},$$
and from (6.5) and (6.6) we have
$$\frac{\partial v}{\partial r} = -\frac{1}{r}\frac{\partial u}{\partial \theta}.$$
These are the Cauchy-Riemann equations in polar form, and thus
$$f'(z) = e^{-i\theta}(u_r + iv_r) = \frac{e^{-i\theta}}{r}(v_\theta - iu_\theta) = e^{-i\theta}\left(u_r - \frac{i}{r}u_\theta\right) = e^{-i\theta}\left(\frac{1}{r}v_\theta + iv_r\right).$$

Example 6.4.7. Let $f(z) = \log z = \ln r + i\theta$, where $\theta = \arg(z)$ is the principal argument (i.e. $0 < r < \infty$ and $-\pi < \theta < \pi$). Find $f'(z)$ in terms of $z$.
Here $u(r,\theta) = \ln r$, $v(r,\theta) = \theta$, and $u_r = \frac{1}{r}$, $u_\theta = 0$, $v_r = 0$, $v_\theta = 1$.
Since $u, v, u_r, v_r, u_\theta, v_\theta$ are all continuous in the region where $\log z$ is defined and the Cauchy-Riemann equations are satisfied there, $\log z$ is analytic everywhere in its domain. Hence
$$f'(z) = (\log z)' = e^{-i\theta}(u_r + iv_r) = e^{-i\theta}\left(\frac{1}{r} + i\cdot 0\right) = \frac{1}{re^{i\theta}} = \frac{1}{z},$$
as in the real case.

6.5 Exercises
Chapter 7

COMPLEX INTEGRAL CALCULUS

7.1 Complex Integration

Recall that there is a one-to-one correspondence between the set of complex numbers $\mathbb{C}$ and the set of points in the Euclidean real plane $\mathbb{R}\times\mathbb{R}$. Hence the natural generalization of the Riemann integral
$$\int_a^b f(x)\,dx$$
of a real valued function $f$ on the real $x$-axis is the line integral in $\mathbb{R}^2$ or in $\mathbb{R}^3$. Following this fact, we define the integral of a complex valued function of a complex variable as a line integral of the function along a given oriented curve C in the complex plane; i.e. the complex integral of a complex function $f$ on a curve C is given by
$$I = \int_C f(z)\,dz.$$
Here we assume that C is an oriented curve in the complex plane which is piecewise smooth and simple.
In the complex plane, a curve C can be parametrically represented as
$$z(t) = x(t) + iy(t).$$
The direction in which $t$ increases is called the positive sense of C. We can now state:

Definition 7.1.1. Let C be a smooth curve, represented by $z = z(t)$, where $a \le t \le b$, and let $f(z)$ be a continuous complex function on C. Then the integral of $f$ over C, denoted by $\int_C f(z)\,dz$, is defined by
$$\int_C f(z)\,dz = \int_a^b f(z(t))\,\dot{z}(t)\,dt, \quad \text{where } \dot{z}(t) = \frac{dz}{dt}.$$

Now the question is how we can evaluate this integral. One possible way is to write the line integral as one or more real line integrals. To see this, let $f(z) = u + iv$ and $z = x + iy$. Then $dz = dx + i\,dy$, and hence
$$\int_C f(z)\,dz = \int_C (u + iv)(dx + i\,dy) = \int_C (u\,dx - v\,dy) + i\int_C (v\,dx + u\,dy).$$

Example 7.1.1. Evaluate
$$\oint_C \frac{dz}{z},$$
where C is the unit circle around the origin.

Solution

Here C is parameterized by $z(t) = \cos t + i\sin t = e^{it}$, $0 \le t \le 2\pi$. Then
$$\oint_C \frac{dz}{z} = \int_0^{2\pi} e^{-it}\cdot ie^{it}\,dt = i\int_0^{2\pi} dt = 2\pi i.$$
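The same integral can be approximated directly from the parametrization, which is a useful sanity check when working these examples by hand (an illustrative sketch, not from the notes):

import numpy as np

n = 200000
t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
z = np.exp(1j*t)                     # z(t) on the unit circle
dzdt = 1j*np.exp(1j*t)               # z'(t)
integral = np.sum((1.0/z) * dzdt) * (2*np.pi/n)
print(integral)                      # approximately 0 + 6.2832j = 2*pi*i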

𝐼= 𝑓 (𝑧)𝑑𝑧. Example 7.1.2. Evaluate 𝐼 = 𝐶 𝑧 2 𝑑𝑧, where C is the parabolic arc given in the figure below.
𝐶


Here we assume that C is an oriented curve in the complex plane, wich is piecewise smooth and
simple. �
������

In the complex plane, a curve C be perimetrically represented as:

� �

𝑧(𝑡) = 𝑥(𝑡) + 𝑖𝑦(𝑡).

��
The direction in which t is increasing is called the positive sense of C. we can now state:

Definition 7.1.1. Let C be a smooth curve, represented by 𝑧 = 𝑧(𝑡), where 𝑎 ≤ 𝑡 ≤ 𝑏. Let 𝑓 (𝑧) Figure 7.1: The parabola 𝑥 = 4 − 𝑦 2 .
be a continuous complex function on C.
7.1 Complex Integration: 168 7.1 Complex Integration: 169

Solution Definition 7.1.2. Let C be a piecewise smooth curve such that 𝐶 = 𝐶1 ⊕ ... ⊕ 𝐶𝑛 and 𝑓 (𝑧) be
a continuous complex function on C. Then we define
First 𝑧 2 = (𝑥 + 𝑖𝑦)2 = (𝑥2 − 𝑦 2 ) + 𝑖(2𝑥.𝑦) which implies 𝑢(𝑥, 𝑦) = 𝑥2 − 𝑦 2 , 𝑣 = 2𝑥𝑦. Then
𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧.
𝐼= 𝑧 2 𝑑𝑧 = 𝐶 𝑖=1 𝐶𝑖
� 𝑛 �

(𝑥 − 𝑦 2 )𝑑𝑥 − (2𝑥𝑦)𝑑𝑦 + 𝑖 2𝑥𝑦𝑑𝑥 + (𝑥2 − 𝑦 2 )𝑑𝑦 .


𝐶 𝐶 𝐶
� � � �
( 2 ) ( )

Example 7.1.4. Let C be a curve consisting of portion of a parabola 𝑦 = 𝑥2 in the 𝑥𝑦−plane


Parameterizing C according to 𝑡 = 𝑦, we have 𝑥 = 4 − 𝑡2 and 𝑑𝑦 = 𝑑𝑡, 𝑑𝑥 = −2𝑡𝑑𝑡, −2 ≤ 𝑡 ≤ 2.
from (0, 0) to (2, 4) and a horizontal line from (2, 4) to (4, 4). If 𝑓 (𝑧) = 𝐼𝑚(𝑧), then evaluate
Therefore,
𝑓 (𝑧)𝑑𝑧.
𝐶

−2 −2
𝐼= (4−𝑡2 )2 −𝑡2 )(−2𝑡𝑑𝑡)−(2(4−𝑡2 )𝑡)𝑑𝑡 +𝑖 2(4−𝑡2 )+(−2𝑡𝑑𝑡)+((4−𝑡2 )2 −𝑡2 ))𝑑𝑡
2 +2 Solution
� � � � � �
(

−2 −2
= [(16 − 9𝑡2 + 𝑡4 )(−2𝑡) − (8𝑡 − 2𝑡3 )]𝑑𝑡 + 𝑖 [((−16𝑡2 + 4𝑡4 ) + (16 − 9𝑡2 + 𝑡4 )]𝑑𝑡
+2 +2
First we write 𝐶 = 𝐶1 ⊕ 𝐶2 , where 𝐶1 is the portion of the parabola and 𝐶2 is the line segment.
� �

−2 −2
25 16 16
Parameterize 𝐶1 by 𝑧(𝑡) = 𝑡 + 𝑖𝑡2 for 0 ≤ 𝑡 ≤ 2 and on 𝐶1 , 𝑑𝑧 = (1 + 2𝑡𝑖)𝑑𝑡 and 𝑓 (𝑧(𝑡)) =
5 3 4 2
= 𝐼𝑚(𝑧(𝑡)) = 𝑡2 . Therefore,
3 3 3
−2𝑡 +20𝑡 −40𝑡 𝑑𝑡+𝑖 5𝑡 −25𝑡 +16 𝑑𝑡 = 0+𝑖[−64+ ×16−64] = 0+𝑖 = 𝑖.
+2 +2
� �

2 2
( ) ( )

Example 7.1.3. Evaluate 𝑡3 8


𝐼𝑚(𝑧(𝑡))𝑑𝑧 = 𝑡2 (1 + 2𝑡𝑖)𝑑𝑡 = 𝑡2 𝑑𝑡 + 𝑖 2𝑡3 𝑑𝑡 = +𝑖 = + 8𝑖.
𝐶1 0 0 0 3
� � 2 � � � ��2

(𝑧 − 𝑎)𝑛 𝑑𝑧,
𝐶
� 𝑡4 ��
2 �0 3

where 𝑎 is any given complex number, n is any integer and C is a circle centered at 𝑎 and with Parameterize 𝐶2 by 𝑧(𝑡) = 𝑡 + 2𝑖 for 2 ≤ 𝑡 ≤ 4 and on 𝐶2 , 𝑑𝑧 = 𝑑𝑡 and 𝑓 (𝑧(𝑡)) = 𝐼𝑚(𝑧(𝑡)) = 2.
radius 𝑟. Therefore,
4
𝐼𝑚(𝑧)𝑑𝑧 =
𝐶2 2
� �
�4

Solution
2𝑑𝑡 = 2𝑡�2 = 4.

Hence
8 32
𝑖𝑡 𝑖𝑡 (𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦) = + 8𝑖 + 8 = + 8𝑖.
Here the curve is parameterized by 𝑧 − 𝑎 = 𝑟𝑒 for 0 ≤ 𝑡 ≤ 2𝜋 which implies 𝑑𝑧 = 𝑖𝑟𝑒 𝑑𝑡.
𝐶 3 3
� � �

Therefore, Remark 7.1.3. As in the line integrals we have the following generalizations.
2𝜋 2𝜋 2𝜋
Let f and g be continuous complex functions on a piecewise smooth curve C and 𝑘 be a constant.
(𝑧 − 𝑎)𝑛 𝑑𝑧 = 𝑟𝑒𝑖𝑡 .𝑖𝑟𝑒𝑖𝑡 𝑑𝑡 = 𝑖𝑟𝑛+1 𝑒𝑖(𝑛+1)𝑡 𝑑𝑡 = 𝑟𝑛+1 𝑒𝑖(𝑛+1)𝑡 (𝑖𝑑𝑡).
𝐶 0 0 0 Then
� � � �
( )𝑛

But
2𝜋 𝑟𝑛+1 1.
𝑛+1 𝑖(𝑛+1)𝑡 𝑛+1
𝑒𝑖(𝑛+1)𝑡 0
= 0 if 𝑛 ∕= −1
𝑟 𝑒 (𝑖𝑑𝑡) = 0 𝑖(0)𝑡
0 𝑟 𝑒 (𝑘𝑓 )(𝑧)𝑑𝑧.𝑑𝑟 = 𝑘 𝑓 (𝑧)𝑑𝑧.

𝑖 0
� [ ]2𝜋

𝑑𝑡 = 2𝜋𝑖 if = −1.
𝐶 𝐶
� �

Therefore
∫ 2𝜋

2.
𝑛 2𝜋𝑖 if 𝑛 = −1
(𝑧 − 𝑎) 𝑑𝑧 = (𝑓 + 𝑔)(𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧 + 𝑔(𝑧)𝑑𝑧.
𝐶

𝐶 𝐶 𝐶

0 if 𝑛 ∕= −1.
� � �

In the previous examples, we have been integrating over smooth curves. Let C be a piecewise 3. If C’ has an opposite orientation to that of C, then
smooth curve. That is, C is a curve made up of smooth curves 𝐶1 , 𝐶2 , ..., 𝐶𝑛 such that the
𝑓 (𝑧)𝑑𝑧 = − 𝑓 (𝑧)𝑑𝑧.
𝐶 𝐶′
� �

terminal point of 𝐶𝑖 is the initial point of 𝐶𝑖+1 and in this case we write 𝐶 = 𝐶1 ⊕ ... ⊕ 𝐶𝑛 .
7.2 Cauchy’s Integral Theorem. 170 7.2 Cauchy’s Integral Theorem. 171

7.2 Cauchy’s Integral Theorem. where C is the unit circle centered at the origin and traverse counterclockwise. Then
1
𝑓 (𝑧) =
Let C be a piecewise-smooth simple closed curve in the complex plane (and hence in R2 ). Then (𝑧 − 2)(𝑧 − 3)
C encloses some simply connected region R. Let 𝑓 (𝑧) = 𝑢(𝑥, 𝑦) + 𝑖𝑣(𝑥, 𝑦) be continuous in a is analytic everywhere except at 𝑧 = 2 and 𝑧 = 3. But 𝑧 = 2 and 𝑧 = 3 are out of the region
simply connected domain D containing the curve C. Then enclosed in C. Hence f is analytic in the region enclosed by C. Then by Cauchy’s Theorem
𝑑𝑧
𝑓 (𝑧)𝑑𝑧 = (𝑢𝑑𝑥 − 𝑣𝑑𝑦) + 𝑖 (𝑣𝑑𝑥 + 𝑢𝑑𝑦) (7.1) 2
= 0.
𝐶 𝐶 𝐶

𝐶 𝑧 − 5𝑧 + 6
� � �

Now assume that 𝑓 is analytic and that 𝑓 ′ (𝑧) is continuous in D so that u and v are continuously Let 𝐶1 and 𝐶2 be closed paths in the complex plane with 𝐶2 is in the interior of 𝐶1 . Suppose
differentiable. Then by Green’s Theorem on u and v we can write (7.1) as: that a complex function 𝑓 is analytic in an open set containing both paths and all points between
them. Now let 𝐿 be the line segment as shown in the Figure 7.4. Then the region D is a simply
∂(−𝑣) ∂𝑢 ∂𝑢 ∂𝑣
𝑓 (𝑧)𝑑𝑧 = 𝑑𝐴 + 𝑖 𝑑𝐴
connected region bounded by the curve 𝐶, where 𝐶 = 𝐶1 ⊕ 𝐶2′ ⊕ 𝐿 ⊕ 𝐿′ , where 𝐿′ is the line
∂𝑥 ∂𝑦 ∂𝑥 ∂𝑦
− −
𝐶 𝑅 𝑅 segment which is oriented opposite to that of L and 𝐶2′ is the curve 𝐶2 but in opposite orientation.
� �� � � �� � �

∂𝑣 ∂𝑢 ∂𝑢 ∂𝑣 � �
= 𝑑𝐴 + 𝑖 𝑑𝐴.
∂𝑥 ∂𝑦 ∂𝑥 ∂𝑦
− − −
𝑅 𝑅
�� � � �� � �

But since f is analytic in D, by Cauchy-Riemann equations we have that �� �� �


� �
� �
∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 �
= and �
∂𝑥 ∂𝑦 ∂𝑦 ∂𝑥
=− .
�� ��
� �
This implies
∂𝑢 ∂𝑣
= 0 and = 0 in D.
−∂𝑣 ∂𝑢
∂𝑥 ∂𝑦 ∂𝑥 ∂𝑦
− −
Figure 7.2: Multiply and Simply Connected Regions.
Therefore
𝑓 (𝑧)𝑑𝑧 = 0𝑑𝐴 + 0𝑑𝐴 = 0 Then. since 𝑓 is analytic in D , by Cauchy’s Theorem, we have
𝐶 𝑅 𝑅
� �� ��

and hence we have proved the following theorem (called Cauchy’s Theorem.) 𝑓 (𝑧)𝑑𝑧 = 0.
𝐶

Theorem 7.2.1 (Cauchy’s Theorem). If 𝑓 (𝑧) is analytic in a simply connected domain D,then But since 𝐶 is a piecewise smooth curve, we have

𝑓 (𝑧)𝑑𝑧 = 0, 𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧 + 𝑓 (𝑧)𝑑𝑧 + 𝑓 (𝑧)𝑑𝑧 + 𝑓 (𝑧)𝑑𝑧


𝐶 𝐶1 𝐶2 𝐿 𝐿′
𝐶
� � � � � �

and
for every piecewise smooth simple closed curve C in D.
𝑓 (𝑧)𝑑𝑧 = − 𝑓 (𝑧)𝑑𝑧 and 𝑓 (𝑧)𝑑𝑧 = − 𝑓 (𝑧)𝑑𝑧
𝐿′ 𝐿 𝐶2′ 𝐶2
� � � �

Remark 7.2.2. In the Cauchy’s Theorem above, the continuity of 𝑓 ′ (𝑧) is omitted. This is done
Therefore, we get
intentionally because as we can show it later, if 𝑓 is analytic at a point 𝑧0 , then the derivatives
′ 𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧 = 0
of all order of 𝑓 at 𝑧0 exists. and hence 𝑓 (𝑧) is continuous
𝑓 (𝑧)𝑑𝑧 −
𝐶 𝐶1 𝐶2
� � �

and hence
Example 7.2.1. Consider the integral
𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧.
𝑑𝑧 𝑑𝑧 𝐶1 𝐶2
� �

2
= , Therefore, we have proved the following theorem.
� �

𝐶 𝑧 − 5𝑧 + 6 𝐶 (𝑧 − 2)(𝑧 − 3)
7.2 Cauchy’s Integral Theorem. 172 7.3 Cauchy’s Integral Formula and The Derivative of Analytic Functions. 173

Theorem 7.2.3 (The Deformation Theorem). Let 𝐶1 and 𝐶2 be closed paths in the complex path with radius 𝑟 and centered at 𝑎.
plane with 𝐶2 is in the interior of 𝐶1 . Suppose that a complex function 𝑓 is analytic in an open Then
𝑑𝑧 𝑑𝑧
set containing both paths and all points between them. Then = .
𝑐 𝐶1
� �

𝑧−𝑎 𝑧−𝑎
𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧. Set 𝑧 − 𝑎 = 𝑟𝑒𝑖𝜃 . Then 𝑑𝑧 = 𝑟𝑖𝑒𝑖𝜃 𝑑𝜃 and hence
𝐶1 𝐶2
� �

𝑟𝑖𝑒𝑖𝜃
𝑖𝜃
𝑑𝜃 = 𝑖 𝑑𝜃 = 𝑖
Remark 7.2.4. If f is analytic in a simply connected domain D, then the integral 𝑓 (𝑧)𝑑𝑧is
𝑑𝜃 = 2𝜋𝑖 ∕= 0.
𝑐 𝐶1 𝑟𝑒 𝐶1 0
� � � 2𝜋

independent of path in D. That is, if 𝐶1 and 𝐶2 are open curves with the same initial and terminal

points, then 7.3 Cauchy’s Integral Formula and The Derivative of An-
𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧.
𝐶1 𝐶2
� �

alytic Functions.
Hence we can deform 𝐶1 into 𝐶2 without changing the value of the integral.
However, if f is not analytic in D, then Cauchy’s Theorem does not hold true in general. In the last example of the previous section we have seen that

Example 7.2.2. Consider the integral 𝑑𝑧


= 2𝜋𝑖,
𝑑𝑧

𝐶 𝑧 −𝑎
𝑐 where C is any piecewise smooth, simple closed curve oriented counterclockwise and containing

𝑧−𝑎
where C is any piecewise smooth simple closed curve, oriented counterclockwise and containing 𝑎 in the interior. During the evaluation of the integral we used the idea of path deformation and
𝑎 inside. Since a circle 𝐶1 with center 𝑎 and radius r.

Now let 𝑓 (𝑧) be analytic in a simply- connected domain D containing C inside. Then

𝑓 (𝑧) 𝑓 (𝑧)
𝐼= 𝑑𝑧 = 𝑑𝑧,
𝐶 𝐶1

� �

𝑧−𝑎 𝑧−𝑎
��
� where 𝐶1 is a sufficiently small circle with radius r and centered at 𝑎.


Since this last integral is independent of 𝑟, provided 𝐶1 stays inside we will let 𝑟 → 0. Hence
𝑓 (𝑧) 𝑓 (𝑎)
𝐼= 𝑑𝑧 = 𝑧𝑎 + 𝑑𝑧 = 𝑓 (𝑎)2𝜋𝑖 + 𝑑𝑧.
𝑓 (𝑧) − 𝑓 (𝑎) 𝑓 (𝑧) − 𝑓 (𝑎)
𝐶1 𝐶1
� � � � � � � �

𝐶1 𝑧 − 𝑎 𝐶1 𝑧 − 𝑎 𝑧−𝑎 𝑧−𝑎

At this final step letting 𝑟 → 0 we have ∣𝑓 (𝑧) − 𝑓 (𝑎)∣ → 0.(The deviation of integrand goes to
Figure 7.3: A curve C containing 𝑎 inside.
zero).
1 Thus
𝑓 (𝑧) = 𝑓 (𝑧)
𝑧−𝑎 𝐼= 𝑑𝑧 = 𝑓 (𝑎)2𝜋𝑖.
is analytic in the region bounded by C except in some neighborhood of 𝑧 = 𝑎, we can conclude 𝐶

𝑧−𝑎
that 𝑓 is analytic in every domain not containing 𝑎 inside. Definition 7.3.1. A Complex function 𝑔 is said to be singular at a point, say 𝑧 = 𝑧0 , if it is not
Thus, because of path deformation, we can assume without loss of generality that 𝐶1 is a circular analytic at that point.

We have proved the following theorem


Theorem 7.3.2 (Cauchy Integral Formula). Let $f(z)$ be analytic in a simply connected domain D and let C be a piecewise smooth simple closed curve in D oriented counterclockwise. Then
$$\oint_C \frac{f(z)}{z - a}\,dz = 2\pi i\,f(a)$$
for all $a$ in D enclosed by C. This implies
$$f(a) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - a}\,dz.$$

Example 7.3.1. Evaluate
$$\oint_C \frac{z^3 - 6}{2z - i}\,dz,$$
where C is any closed simple piecewise smooth curve containing $a = \frac{i}{2}$ in its interior. We write
$$\oint_C \frac{z^3 - 6}{2z - i}\,dz = \oint_C \frac{\frac{1}{2}(z^3 - 6)}{z - \frac{1}{2}i}\,dz = \oint_C \frac{f(z)}{z - \frac{1}{2}i}\,dz,$$
where $f(z) = \frac{1}{2}(z^3 - 6)$ is analytic everywhere in $\mathbb{C}$. Thus
$$\oint_C \frac{z^3 - 6}{2z - i}\,dz = 2\pi i\,f\!\left(\frac{1}{2}i\right) = 2\pi i\left(\frac{1}{2}\left(\frac{1}{2}i\right)^3 - 3\right) = 2\pi i\left(-\frac{1}{16}i - 3\right) = \frac{\pi}{8} - 6\pi i.$$

Example 7.3.2. Evaluate
$$\oint_C \frac{z^2 + 1}{z^2 - 1}\,dz,$$
where C is the unit circle centered at $z = 1$. Here
$$\frac{z^2 + 1}{z^2 - 1} = \frac{z^2 + 1}{(z-1)(z+1)} = \frac{z^2 + 1}{z + 1}\cdot\frac{1}{z - 1} = \frac{f(z)}{z - 1},$$
where $f(z) = \frac{z^2 + 1}{z + 1}$. Therefore
$$\oint_C \frac{z^2 + 1}{z^2 - 1}\,dz = 2\pi i\,f(1) = 2\pi i\left[\frac{z^2 + 1}{z + 1}\right]_{z = 1} = 2\pi i\times\frac{2}{2} = 2\pi i.$$
If instead C is a unit circle containing $-1$ in its interior, then we write
$$\frac{z^2 + 1}{z^2 - 1} = \frac{z^2 + 1}{(z+1)(z-1)} = \frac{z^2 + 1}{z - 1}\cdot\frac{1}{z + 1} = \frac{f(z)}{z + 1},$$
where $f(z) = \frac{z^2 + 1}{z - 1}$. Hence
$$\oint_C \frac{z^2 + 1}{z^2 - 1}\,dz = 2\pi i\,f(-1) = 2\pi i\left[\frac{z^2 + 1}{z - 1}\right]_{z = -1} = -2\pi i.$$
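The Cauchy-formula value $2\pi i$ in the first case can also be recovered by brute-force integration over the circle $|z-1|=1$; the numerical sketch below is illustrative only and not part of the original notes.

import numpy as np

n = 400000
t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
z = 1 + np.exp(1j*t)                  # the unit circle centered at z = 1
dzdt = 1j*np.exp(1j*t)
f = (z**2 + 1) / (z**2 - 1)
print(np.sum(f * dzdt) * (2*np.pi/n)) # approximately 0 + 6.2832j = 2*pi*i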
A very striking result in complex analysis is that if $f$ is analytic in a domain D, then it has derivatives of all orders in D. We have the following theorem, which can be used both to find higher order derivatives of an analytic function at a given point and to evaluate integrals.

Theorem 7.3.3 (Cauchy Integral Formula for Higher Derivatives). Let $f(z)$ be analytic in a simply connected domain D and let C be a piecewise smooth simple closed curve in D oriented counterclockwise. Then for all $a$ in D enclosed by C,
$$f^{(n)}(a) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z - a)^{n+1}}\,dz$$
for any nonnegative integer $n$.

Example 7.3.3. By the Cauchy Integral Formula for Higher Derivatives we have
$$\oint_C \frac{\sin z}{(z - \pi i)^2}\,dz = 2\pi i\,(\sin z)'\Big|_{z = \pi i} = 2\pi i\cos(\pi i) = 2\pi i\cosh\pi$$
for any simple closed path C containing $\pi i$ in its interior and oriented counterclockwise.

Example 7.3.4. Evaluate
$$\oint_C \frac{dz}{z^2(z - 2)(z - 4)},$$
where C is the rectangle in Figure 7.4.

Figure 7.4: A rectangular region for Example 7.3.4.

Solution

Expanding the integrand
$$\frac{1}{z^2(z - 2)(z - 4)}$$
in partial fractions, we have
$$\frac{1}{z^2(z-2)(z-4)} = \frac{A}{z} + \frac{B}{z^2} + \frac{C}{z-2} + \frac{D}{z-4},$$
which gives $A = \frac{3}{32}$, $B = \frac{1}{8}$, $C = -\frac{1}{8}$, $D = \frac{1}{32}$. Therefore
$$\oint_C \frac{dz}{z^2(z-2)(z-4)} = \frac{3}{32}\oint_C \frac{dz}{z} + \frac{1}{8}\oint_C \frac{dz}{z^2} - \frac{1}{8}\oint_C \frac{dz}{z-2} + \frac{1}{32}\oint_C \frac{dz}{z-4}.$$
But $\frac{1}{8}\oint_C \frac{dz}{z^2} = 0$ since the exponent is 2 (see Example 7.1.3), and $\frac{1}{32}\oint_C \frac{dz}{z-4} = 0$ since $\frac{1}{z-4}$ is analytic inside C. Hence
$$\oint_C \frac{dz}{z^2(z-2)(z-4)} = \frac{3}{32}\times 2\pi i + 0 - \frac{1}{8}\times 2\pi i + 0 = -\frac{\pi}{16}i.$$

7.4 Cauchy's Theorem for Multiply Connected Domains

Now let us extend Cauchy's theorem to multiply connected regions.
Suppose $f$ is analytic on $C_1$ and $C_2$ and in the annular domain D bounded by $C_1$ and $C_2$, oriented counterclockwise and clockwise respectively, and $a$ is in the interior of the domain as shown in Figure 7.5.

Figure 7.5: Annulus.

Now let L be the line segment shown in Figure 7.6. Then the region D is a simply connected region bounded by the curve $C = C_1 \oplus C_2 \oplus L \oplus L'$, where $L'$ is the line segment oriented opposite to L.
Then, since $f$ is analytic in D and $a$ is in D, by the Cauchy Integral Formula we have
$$f(a) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - a}\,dz.$$
But C is a piecewise smooth curve, so
$$\oint_C \frac{f(z)}{z - a}\,dz = \int_{C_1} \frac{f(z)}{z - a}\,dz + \int_{C_2} \frac{f(z)}{z - a}\,dz + \int_L \frac{f(z)}{z - a}\,dz + \int_{L'} \frac{f(z)}{z - a}\,dz,$$
and
$$\int_{L'} \frac{f(z)}{z - a}\,dz = -\int_L \frac{f(z)}{z - a}\,dz.$$

Figure 7.6: Multiply and Simply Connected Regions.

Therefore we get
$$\oint_C \frac{f(z)}{z - a}\,dz = \int_{C_1} \frac{f(z)}{z - a}\,dz + \int_{C_2} \frac{f(z)}{z - a}\,dz,$$
and hence
$$f(a) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - a}\,dz = \frac{1}{2\pi i}\left(\int_{C_1} \frac{f(z)}{z - a}\,dz + \int_{C_2} \frac{f(z)}{z - a}\,dz\right).$$
In general, if D is bounded by $C_1, C_2, C_3, \ldots, C_m$, where $C_1$ is oriented counterclockwise, all the others are oriented clockwise, each $C_i$ is a closed simple path, and $a$ is in the interior of D, then
$$\int_{C_1} \frac{f(z)}{z - a}\,dz + \int_{C_2} \frac{f(z)}{z - a}\,dz + \ldots + \int_{C_m} \frac{f(z)}{z - a}\,dz = 2\pi i\,f(a).$$

Theorem 7.4.1. Let C be a closed path and $C_1, \ldots, C_n$ be closed paths enclosed by C. Assume that no two of $C, C_1, \ldots, C_n$ intersect and that no point interior to any $C_i$ is interior to any other $C_k$. Let $f$ be analytic on an open set containing C and each $C_i$ and all the points that are both interior to C and exterior to each $C_i$. Then
$$\oint_C f(z)\,dz = \sum_{i=1}^n \oint_{C_i} f(z)\,dz.$$

Example 7.4.1. Evaluate
$$\oint_C \frac{dz}{z(z - 1)},$$
where C is the circle $|z| = 3$ traversed counterclockwise.

Solution

Let $C_1$ and $C_2$ be the circles shown in Figure 7.7.
Figure 7.7: The curves in Example 7.4.1.

Therefore,
$$\oint_C \frac{dz}{z(z-1)} = \oint_{C_1} \frac{dz}{z(z-1)} + \oint_{C_2} \frac{dz}{z(z-1)} = 2\pi i\,f_1(1) + 2\pi i\,f_2(0),$$
where $f_1(z) = \frac{1}{z}$ and $f_2(z) = \frac{1}{z-1}$. Therefore
$$\oint_C \frac{dz}{z(z-1)} = 2\pi i\times 1 + 2\pi i\times(-1) = 0.$$

Example 7.4.2. Evaluate
$$\oint_C \frac{z + 1}{z(z-2)(z-4)^3}\,dz,$$
where C is the counterclockwise circle $|z - 3| = 2$.

Solution

Here the integrand has singularities at $z = 0, 2$ and $4$, of which $2$ and $4$ lie inside C. It is easiest to deform C into two closed curves and to evaluate each of the resulting integrals using the generalized Cauchy formula:
$$\oint_C \frac{z + 1}{z(z-2)(z-4)^3}\,dz = \oint_{C_1} \frac{\dfrac{z+1}{z(z-4)^3}}{z - 2}\,dz + \oint_{C_2} \frac{\dfrac{z+1}{z(z-2)}}{(z - 4)^3}\,dz = 2\pi i\left[\frac{z+1}{z(z-4)^3}\right]_{z=2} + \frac{2\pi i}{2!}\,\frac{d^2}{dz^2}\left[\frac{z+1}{z(z-2)}\right]_{z=4}$$
$$= -\frac{3\pi i}{8} + \frac{23\pi i}{64} = -\frac{\pi i}{64}.$$

7.5 Fundamental Theorem of Complex Integral Calculus

Suppose $f$ is a continuous complex function on an open set D and F is a function defined on D with the property that $F'(z) = f(z)$ for all $z \in D$. Any function $F(z)$ satisfying $F'(z) = f(z)$ is called an antiderivative or a primitive of $f(z)$.
If $z_0$ is any fixed point in D, then the integral
$$\int_L f(\zeta)\,d\zeta, \quad \text{denoted by} \quad \int_{z_0}^z f(\zeta)\,d\zeta,$$
where L is the line segment with initial point $z_0$ and terminal point $z$, is path independent and hence defines a single-valued function of $z$, provided that $f$ is analytic in the domain D. Thus we can define
$$G(z) = \int_{z_0}^z f(\zeta)\,d\zeta,$$
and it follows that $G'(z) = f(z)$.
If $F(z)$ is any particular primitive of $f(z)$, then
$$G(z) = \int_{z_0}^z f(\zeta)\,d\zeta = F(z) + k,$$
where $k$ is an arbitrary constant. Suppose the path is parameterized by $z(t)$ for $a \le t \le b$, and write $F(z) = U(x,y) + iV(x,y)$. Then we get
$$\int_L f(z)\,dz = \int_a^b f(z(t))\,z'(t)\,dt = \int_a^b F'(z(t))\,z'(t)\,dt = \int_a^b \frac{d}{dt}F(z(t))\,dt = \int_a^b \frac{d}{dt}U(x(t),y(t))\,dt + i\int_a^b \frac{d}{dt}V(x(t),y(t))\,dt$$
$$= U(x(b),y(b)) + iV(x(b),y(b)) - U(x(a),y(a)) - iV(x(a),y(a)) = F(z) - F(z_0).$$
Hence we have the following theorem.

Theorem 7.5.1 (Fundamental Theorem of Complex Integral Calculus). Let $f(z)$ be analytic in a simply connected domain D and let $z_0$ be any fixed point in D. Then

(i) The function
$$G(z) = \int_{z_0}^z f(\zeta)\,d\zeta$$
is analytic in D and $G'(z) = f(z)$.

(ii) if 𝐹 (𝑧) is any primitive of 𝑓 (𝑧), then This page is left blank intensionally.

𝑓 (𝜁)𝑑𝜁 = 𝐹 (𝑧) − 𝐹 (𝑧0 ).


𝑧0
� 𝑧

Example 7.5.1. Evaluate:

1. the integral
3
sin 𝑧𝑑𝑧
2𝑖

2. the integral
−𝑖
𝑑𝑧
1+𝑖 𝑧

Solution

1. Since sin 𝑧 is analytic on the segment joining the points 2𝑖 and3, we have

sin 𝑧𝑑𝑧 = − cos 𝑧 2𝑖 = − cos 3 + cos 2𝑖 = cosh 2 − cos 3


2𝑖
� 3
[ ]3

1
2. Since 𝑧
is analytic every where except at 𝑧 = 0, it is analytic on the line segment joining
−𝑖 and 1 + 𝑖. Hence
−𝑖
𝑑𝑧
= log 𝑧
1+𝑖 𝑧
= ln 𝑟 + 𝑖𝜃 √
1+𝑖 𝑟= 2, 3𝜋 2
� � �−𝑖 � �𝑟=1,𝜃=𝜋/2

3𝜋 𝜋 √
= (ln 1 + 𝑖
2 4
) − (ln 2 + 𝑖 )

ln 2 5𝜋 ln 2 3𝜋
+𝑖
2 4 2 4
=− =− −𝑖 .

Therefore we have −𝑖
𝑑𝑧 ln 2 3𝜋
𝑧 2 4
=− −𝑖 .
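Both values are easy to confirm by integrating numerically along the straight segments; the sketch below (not from the notes) checks the first one, $\int_{2i}^{3}\sin z\,dz = \cosh 2 - \cos 3$.

import numpy as np

n = 200000
t = np.linspace(0.0, 1.0, n)
z = 2j + t*(3 - 2j)                          # the segment from 2i to 3
w = np.sin(z) * (3 - 2j)                     # f(z(t)) z'(t)
val = np.sum((w[:-1] + w[1:]) / 2) / (n - 1) # trapezoid rule on [0, 1]
print(val, np.cosh(2) - np.cos(3))           # both approximately 4.7522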
7.6 Exercises
Chapter 8

TAYLOR AND LAURENT SERIES

8.1 Sequences and Series of Complex Numbers

For all the discussions in this chapter, the reader is assumed to be familiar with real sequences and series, which were discussed in previous courses.
A sequence $\{z_n\}$ whose terms are complex numbers is a complex sequence. A complex sequence is a complex valued function whose domain is a subset of the set of integers which is bounded from below (or $\mathbb{N}$). That is, a sequence $z_n$ is a function $f : \mathbb{N} \to \mathbb{C}$, with $f(1) = z_1, \ldots, f(n) = z_n$.

Remark 8.1.1. Suppose that $\{z_n\}$ is a complex sequence with $z_n = x_n + iy_n$. Then
$$\lim_{n\to\infty} z_n = l, \quad l = a + bi,$$
if and only if
$$\lim_{n\to\infty} x_n = a \quad \text{and} \quad \lim_{n\to\infty} y_n = b.$$
Therefore the limit and convergence properties of complex sequences are exactly similar (if not identical) to those of real sequences, so we omit the discussion here.
Any sum of the form
$$\sum_{n=1}^{\infty} c_n = c_1 + c_2 + \ldots,$$
where the terms $c_n$ are complex numbers, is called a complex series.
As in the case of real series, a complex series $\sum_{n=1}^{\infty} c_n$ is convergent if and only if the sequence of its partial sums $s_n = \sum_{k=1}^n c_k$ is a Cauchy sequence; that is, if and only if to each $\epsilon > 0$ there corresponds an integer $N(\epsilon)$ such that $|s_m - s_n| < \epsilon$ for all integers $m$ and $n$ greater than $N(\epsilon)$.
However, since this criterion is difficult to apply, in practice we use an array of standard convergence theorems (comparison, integral test, ratio test and others) which are easier to apply.

Theorem 8.1.2. Consider a complex series $\sum_{k=1}^{\infty} c_k = \sum_{k=1}^{\infty} (a_k + ib_k)$.

1. The series $\sum_{k=1}^{\infty} c_k$ converges to $u + iv$ if and only if $\sum_{k=1}^{\infty} a_k = u$ and $\sum_{k=1}^{\infty} b_k = v$.
2. We say that the series $\sum_{k=1}^{\infty} c_k$ converges absolutely if the series $\sum_{k=1}^{\infty} |c_k|$ converges; and if the series $\sum_{k=1}^{\infty} c_k$ converges absolutely, then it is convergent.

All absolute convergence tests that apply for real series (i.e. comparison test, ratio test and root test) hold also for complex series, with the necessary notational adjustments.

Example 8.1.1. Determine the convergence or divergence of the series
$$1.\ \sum_{n=0}^{\infty} \frac{(1+i)^n}{n!} \qquad 2.\ \sum_{k=2}^{\infty} e^{-(2+3i)k} \qquad 3.\ \sum_{n=1}^{\infty} \frac{(-1)^n + i}{n}.$$

Solution

1. Here $c_n = \frac{(1+i)^n}{n!}$. By the ratio test, if $\lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| < 1$ the series converges, and if the limit is greater than 1 the series diverges. But
$$\left|\frac{c_{n+1}}{c_n}\right| = \left|\frac{(1+i)^{n+1}}{(n+1)!}\cdot\frac{n!}{(1+i)^n}\right| = \left|\frac{1+i}{n+1}\right| = \frac{\sqrt{2}}{n+1},$$
and evaluating the limit we get $\lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| = 0 < 1$. Therefore the series $\sum_{n=0}^{\infty} \frac{(1+i)^n}{n!}$ converges.

2. Here $c_k = e^{-(2+3i)k}$. By the root test, if $\lim_{k\to\infty} |(c_k)^{1/k}| < 1$ the series converges, and if the limit is greater than 1 it diverges. But
$$\big|(c_k)^{1/k}\big| = \big|(e^{-(2+3i)k})^{1/k}\big| = \big|e^{-(2+3i)}\big| = \big|e^{-2}\big|\cdot\big|e^{-3i}\big| = e^{-2} < 1.$$
Hence the series converges.

3. In this case we have $\frac{(-1)^n + i}{n} = \frac{(-1)^n}{n} + i\,\frac{1}{n}$, and we know that $\sum_{n=1}^{\infty} \frac{1}{n}$ is divergent. Hence the series $\sum_{n=1}^{\infty} \frac{(-1)^n + i}{n}$ is divergent.
8.2 Complex Taylor Series

A series of the form
$$\sum_{n=0}^{\infty} c_n(z - a)^n = c_0 + c_1(z - a) + c_2(z - a)^2 + \ldots, \qquad (8.1)$$
where the terms are complex numbers, is known as a power series in powers of $z - a$. In the power series (8.1):

∙ the $c_n$'s are the coefficients (real or complex constants),
∙ $z$ is the (complex) variable,
∙ $a$ is the center (a real or complex constant) of the series.

Remark 8.2.1. The power series (8.1) always converges at $z = a$. If the series converges at some $z_1 \ne a$, then it converges for all $z$ with $|z - a| < |z_1 - a|$.
If we apply the ratio test to (8.1) we get
$$\left|\frac{c_{n+1}(z-a)^{n+1}}{c_n(z-a)^n}\right| = \left|\frac{c_{n+1}}{c_n}\right||z - a|.$$
If $\lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| = L$, the power series (8.1) converges in the disk $|z - a| < \frac{1}{L}$ and diverges in the set $|z - a| > \frac{1}{L}$. If $L = \infty$, then the series converges only at $z = a$, and if $L = 0$, then it converges for all $z$.
We have proved the following theorem.

Theorem 8.2.2. Given a power series $\sum_{n=0}^{\infty} c_n(z - a)^n$, there is a number R such that:

1. $\sum_{n=0}^{\infty} c_n(z - a)^n$ converges if $|z - a| < R$, and
2. $\sum_{n=0}^{\infty} c_n(z - a)^n$ diverges if $|z - a| > R$.

The number R is called the radius of convergence.

Example 8.2.1. For the series:

1. $\sum_{n=0}^{\infty} z^n$, the radius of convergence is $R = 1$.
2. $\sum_{n=1}^{\infty} \frac{1}{n} z^n$, the radius of convergence is $R = 1$.
3. $\sum_{n=0}^{\infty} \frac{1}{n!} z^n$, the radius of convergence is $R = \infty$.
4. $\sum_{n=0}^{\infty} n^n z^n$, the radius of convergence is $R = 0$.

In (8.1), if $a = 0$ and the radius of convergence is $R > 0$, the function
$$f(z) = \sum_{n=0}^{\infty} c_n z^n, \quad |z| < R,$$
is a power series representation of $f$.

Remark 8.2.3. If the power series
$$\sum_{n=0}^{\infty} a_n z^n \quad \text{and} \quad \sum_{k=0}^{\infty} b_k z^k$$
both converge for $|z| < R$ to the same value for all $z$ with $|z| < R$, then the two series are identical; that is, $a_n = b_n$ for all $n = 0, 1, \ldots$. Thus if a complex function $f$ has a power series representation with any center $a$, then the representation
$$f(z) = \sum_{n=0}^{\infty} c_n(z - a)^n$$
is unique.

Theorem 8.2.4. If the power series
$$f(z) = \sum_{n=0}^{\infty} c_n(z - a)^n$$
converges for all $z$ in $D = \{z \in \mathbb{C} : |z - a| < R\}$, where $R > 0$ is the radius of convergence, then

1. by termwise differentiation, $f'(z) = \sum_{n=1}^{\infty} nc_n(z - a)^{n-1}$ for all $z$ in D, and the series for $f'(z)$ has the same radius of convergence as that of $f(z)$;
2. if C is any path in D, then by termwise integration we have
$$\int_C f(z)\,dz = \sum_{n=0}^{\infty} c_n\int_C (z - a)^n\,dz,$$
and the termwise-integrated series has the same radius of convergence as that of $f(z)$.
Recall that, unlike real valued functions, a complex function $f$ that is analytic in some domain $D$ admits derivatives of all orders in $D$. If $a$ is in the interior of $D$, the series
\[
\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(z-a)^n
\]
is well defined and is known as the Taylor series (or Taylor expansion) of $f$ about the point $a$; if $a = 0$, then the Taylor series is also known as the Maclaurin series.

Theorem 8.2.5 (Taylor Expansion). If the disk $|z-a| < R$ lies entirely within $D$, then
\[
f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(z-a)^n,
\]
and, by Cauchy's integral formula,
\[
f^{(n)}(a) = \frac{n!}{2\pi i} \int_C \frac{f(\zeta)\,d\zeta}{(\zeta-a)^{n+1}},
\]
where $C$ is a curve contained in $D$ that contains $a$ in its interior.

Thus the Taylor series of an analytic function $f$ is
\[
f(z) = \sum_{n=0}^{\infty} b_n (z-a)^n, \qquad \text{where} \quad b_n = \frac{1}{2\pi i} \int_C \frac{f(\zeta)}{(\zeta-a)^{n+1}}\,d\zeta.
\]
(Here $b_n$ is called the $n^{th}$ Taylor coefficient of $f$ at $a$.)
That is, every analytic function can be represented by a Taylor series.

Example 8.2.2 (Geometric series). Let $f(z) = \frac{1}{1-z}$. Then $f^{(n)}(z) = \frac{n!}{(1-z)^{n+1}}$ and $f^{(n)}(0) = n!$. Therefore,
\[
\frac{1}{1-z} = \sum_{n=0}^{\infty} z^n = 1 + z + z^2 + \ldots \quad \text{for } |z| < 1.
\]
Differentiating this gives us
\[
\frac{1}{(1-z)^2} = \sum_{n=1}^{\infty} n z^{n-1} = 1 + 2z + 3z^2 + \ldots \quad \text{for } |z| < 1.
\]
Differentiating this gives us
\[
\frac{2}{(1-z)^3} = \sum_{n=2}^{\infty} n(n-1) z^{n-2} \quad \text{for } |z| < 1.
\]
On the other hand, if we replace $z$ by $-z$ in $\frac{1}{1-z} = \sum_{n=0}^{\infty} z^n$ we obtain
\[
\frac{1}{1+z} = \sum_{n=0}^{\infty} (-1)^n z^n \quad \text{for } |z| < 1,
\]
and differentiating this yields
\[
\frac{-1}{(1+z)^2} = \sum_{n=1}^{\infty} n(-1)^n z^{n-1} \quad \text{for } |z| < 1.
\]

Example 8.2.3 (Exponential function). Let $f(z) = e^z$. Then $f^{(n)}(z) = e^z$. Therefore,
\[
e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!} = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \ldots
\]
Replacing $z$ by $z^2$ we get
\[
e^{z^2} = \sum_{n=0}^{\infty} \frac{z^{2n}}{n!}.
\]

Example 8.2.4 (Trigonometric and hyperbolic functions). Since $\cos z = \frac{e^{iz}+e^{-iz}}{2}$, $\sin z = \frac{e^{iz}-e^{-iz}}{2i}$, $\cosh z = \cos(iz)$ and $\sinh z = -i\sin(iz)$, we have
\[
\cos z = \sum_{n=0}^{\infty} (-1)^n \frac{z^{2n}}{(2n)!}, \qquad
\sin z = \sum_{n=0}^{\infty} (-1)^n \frac{z^{2n+1}}{(2n+1)!}, \qquad
\cosh z = \sum_{n=0}^{\infty} \frac{z^{2n}}{(2n)!}
\]
and
\[
\sinh z = \sum_{n=0}^{\infty} \frac{z^{2n+1}}{(2n+1)!}.
\]

Example 8.2.5. Consider $g(z) = \frac{1}{1+z^2}$. Then $g$ is analytic everywhere except at $z = \pm i$. The Maclaurin series of $g$ is
\[
\frac{1}{1+z^2} = \frac{1}{1-(-z^2)} = \sum_{n=0}^{\infty} (-z^2)^n = \sum_{n=0}^{\infty} (-1)^n z^{2n},
\]
which converges for $|z| < 1$.
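The expansions in Examples 8.2.3 to 8.2.5 can be checked with a computer algebra system. The following sketch is not part of the original notes; it simply asks sympy for the first few Maclaurin terms.

```python
# Illustrative check (not from the notes) of Examples 8.2.3 - 8.2.5.
import sympy as sp

z = sp.symbols('z')

print(sp.exp(z).series(z, 0, 5))        # 1 + z + z**2/2 + z**3/6 + z**4/24 + O(z**5)
print(sp.cos(z).series(z, 0, 7))        # 1 - z**2/2 + z**4/24 - z**6/720 + O(z**7)
print((1 / (1 + z**2)).series(z, 0, 7)) # 1 - z**2 + z**4 - z**6 + O(z**7)
```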
Example 8.2.6 (Undetermined coefficients). Find the Maclaurin series of
\[
f(z) = \frac{e^z}{\cos z}.
\]
Clearly $f(z)$ is analytic in a neighborhood of $0$ (since $\cos z \neq 0$ for $|z| < \frac{\pi}{2}$). Hence $f$ has a Taylor series representation at $z = 0$, say
\[
f(z) = \sum_{n=0}^{\infty} a_n z^n \quad \text{for } |z| < \frac{\pi}{2}.
\]
But computing the successive derivatives to find the coefficients $a_n$ gets tedious. Instead, write
\[
\frac{e^z}{\cos z} = a_0 + a_1 z + a_2 z^2 + \ldots,
\]
which implies $e^z = (a_0 + a_1 z + a_2 z^2 + \ldots)\cos z$. Thus
\[
1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \ldots = (a_0 + a_1 z + a_2 z^2 + \ldots)\Bigl(1 - \frac{z^2}{2} + \frac{z^4}{24} - \ldots\Bigr),
\]
which implies
\[
1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \ldots = a_0 + a_1 z + \Bigl(a_2 - \frac{a_0}{2}\Bigr)z^2 + \Bigl(a_3 - \frac{a_1}{2}\Bigr)z^3 + \ldots
\]
Equating coefficients we get $a_0 = 1$, $a_1 = 1$, $a_2 - \frac{1}{2} = \frac{1}{2} \Rightarrow a_2 = 1$, $a_3 = \frac{1}{6} + \frac{1}{2} = \frac{2}{3}, \ldots$ Hence
\[
\frac{e^z}{\cos z} = 1 + z + z^2 + \frac{2}{3}z^3 + \ldots \quad \text{for } |z| < \frac{\pi}{2}.
\]

In the above example we used the product of two power series term by term. This kind of manipulation is generalized in the following theorem.

Theorem 8.2.6 (Termwise Product of Power Series). If
\[
\sum_{n=0}^{\infty} a_n (z-a)^n
\]
converges to $f(z)$ in $|z-a| < R_1$ and
\[
\sum_{n=0}^{\infty} b_n (z-a)^n
\]
converges to $g(z)$ in $|z-a| < R_2$, then the termwise product
\[
\Bigl(\sum_{n=0}^{\infty} a_n (z-a)^n\Bigr)\Bigl(\sum_{n=0}^{\infty} b_n (z-a)^n\Bigr)
= \sum_{n=0}^{\infty} (a_0 b_n + a_1 b_{n-1} + \ldots + a_n b_0)(z-a)^n
\]
converges to the product $f(z)g(z)$ in $|z-a| < \min\{R_1, R_2\}$.

Example 8.2.7. Find the Maclaurin series of
\[
\frac{1}{(1-z)(1+2z)}.
\]
Here
\[
\frac{1}{(1-z)(1+2z)} = \Bigl(\frac{1}{1-z}\Bigr)\Bigl(\frac{1}{1+2z}\Bigr),
\]
and
\[
\frac{1}{1-z} = \sum_{n=0}^{\infty} z^n = 1 + z + z^2 + \ldots \quad \text{in } |z| < 1,
\]
\[
\frac{1}{1+2z} = \sum_{n=0}^{\infty} (-2z)^n = \sum_{n=0}^{\infty} (-1)^n 2^n z^n = 1 - 2z + 4z^2 - 8z^3 + \ldots \quad \text{in } |z| < \tfrac{1}{2}.
\]
Then the product is
\[
\frac{1}{(1-z)(1+2z)} = \Bigl(\sum_{n=0}^{\infty} z^n\Bigr)\Bigl(\sum_{n=0}^{\infty} (-1)^n (2z)^n\Bigr)
= \sum_{n=0}^{\infty} \bigl(1\cdot(-1)^n 2^n + 1\cdot(-1)^{n-1} 2^{n-1} + \ldots + 1\bigr) z^n
= 1 - z + 3z^2 + \ldots
\]
in $|z| < \tfrac{1}{2}$.

Remark 8.2.7. From the Fundamental Theorem of Complex Integration we know that, if $f$ is analytic in an open disk $D$ about $a$, then there exists a function $F$ such that $F'(z) = f(z)$ for all $z$ in $D$. We can construct such an antiderivative $F$ of $f$ from the power series expansion of $f$ about $a$ in $D$: if $f(z) = \sum_{n=0}^{\infty} c_n (z-a)^n$, then $F(z) = \sum_{n=0}^{\infty} \frac{c_n}{n+1} (z-a)^{n+1}$ is an antiderivative of $f$.
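Both series obtained above by termwise multiplication can be cross-checked symbolically. The following sketch is not part of the original notes.

```python
# Illustrative check (not from the notes) of Examples 8.2.6 and 8.2.7.
import sympy as sp

z = sp.symbols('z')

# e^z / cos z = 1 + z + z^2 + (2/3) z^3 + ...   (valid for |z| < pi/2)
print((sp.exp(z) / sp.cos(z)).series(z, 0, 4))
# -> 1 + z + z**2 + 2*z**3/3 + O(z**4)

# 1 / ((1 - z)(1 + 2z)) = 1 - z + 3 z^2 - 5 z^3 + ...   (valid for |z| < 1/2)
print((1 / ((1 - z) * (1 + 2*z))).series(z, 0, 4))
# -> 1 - z + 3*z**2 - 5*z**3 + O(z**4)
```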
8.3 Laurent Series

In the previous part we have seen that if a function $f$ is analytic in some domain $D$ containing a point $a$ in its interior, then $f$ admits the Taylor series representation at $a$ and this representation is unique. However, if $f$ is not analytic at $z = a$, then $f$ does not have a Taylor series representation about $a$. Hence we need the following result.

Theorem 8.3.1 (Laurent's Theorem). Let $D$ be the closed region between, and including, concentric circles $C_1$ and $C_0$ with their centers at $z = a$.

Figure 8.1: Annulus.

If $f$ is analytic in $D$, then it admits a Laurent series representation given by
\[
f(z) = \sum_{n=-\infty}^{\infty} b_n (z-a)^n
\]
in $D$, with the coefficients $b_n$'s calculated from
\[
b_n = \frac{1}{2\pi i} \int_C \frac{f(w)}{(w-a)^{n+1}}\,dw, \qquad (8.2)
\]
where $C$ is a piecewise smooth simple closed counterclockwise curve in $D$.

Remark 8.3.2. Note that:

1. If $f$ is analytic on and inside $C_1$, then for every $n < 0$, $n \in \mathbb{Z}$, the integrand $\frac{f(w)}{(w-a)^{n+1}} = f(w)(w-a)^{-n-1}$ is also analytic on and inside $C_1$ (since $-n-1 \ge 0$). Hence $b_n = 0$ for all $n < 0$ in (8.2), and we recover the Taylor series expansion.

2. The Laurent series representation depends on the choice of the annulus $D$ with the same center $a$. Therefore, a Laurent series is not in general a unique representation (unlike the Taylor series of analytic functions).

But do we really need to evaluate (8.2) to get the coefficients $b_n$'s? No; in practice (8.2) is rarely calculated directly. This can be seen in the next examples.

Example 8.3.1. Expand
\[
f(z) = \frac{1}{z+i}
\]
about $z = 0$.

(a) Since $f$ is analytic everywhere except at $z = -i$, the Taylor series expansion at $z = 0$ is
\[
\frac{1}{z+i} = \frac{1}{i\bigl(1 + \frac{z}{i}\bigr)} = \frac{-i}{1 - iz} = -i\sum_{n=0}^{\infty} (iz)^n, \qquad |z| < 1,
\]
\[
= -i\bigl[1 + iz + (iz)^2 + \ldots\bigr] = -i\bigl[1 + iz - z^2 + \ldots\bigr] = -i + z + iz^2 - \ldots
\]
for $|z| < 1$.

(b) However, in the annulus $1 < |z| < \infty$ we can expand it using a Laurent series. This time we write
\[
\frac{1}{z+i} = \frac{1}{z\bigl(1 + \frac{i}{z}\bigr)} = \frac{1}{z}\cdot\frac{1}{1 + i/z}
\]
to extract the singularity of $f$. Since we are expanding in the annulus $1 < |z| < \infty$ about $z = 0$, the factor $\frac{1}{z}$ is already in the required form.

Now put $t = \frac{i}{z}$. Then $|t| = \bigl|\frac{i}{z}\bigr| = \frac{1}{|z|} < 1$, since $|z| > 1$. Thus we can use the Taylor series expansion on
\[
\frac{1}{1 + i/z} = \frac{1}{1+t} = \sum_{n=0}^{\infty} (-1)^n t^n \quad \text{for } |t| < 1,
\]
\[
= \sum_{n=0}^{\infty} (-1)^n \Bigl(\frac{i}{z}\Bigr)^n = 1 - \frac{i}{z} - \frac{1}{z^2} + \ldots
\]
in $1 < |z| < \infty$. Therefore
\[
f(z) = \frac{1}{z+i} = \frac{1}{z}\cdot\frac{1}{1 + i/z} = \frac{1}{z}\Bigl(1 - \frac{i}{z} - \frac{1}{z^2} + \ldots\Bigr)
= \frac{1}{z} - \frac{i}{z^2} - \frac{1}{z^3} - \ldots
= \sum_{n=-\infty}^{-1} i^{\,n+1} z^n.
\]
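The outer expansion in part (b) can be reproduced symbolically by substituting $w = 1/z$ and expanding about $w = 0$. The following sketch is not part of the original notes.

```python
# Illustrative check (not from the notes) of Example 8.3.1(b): expand 1/(z+i)
# for |z| > 1 via the substitution w = 1/z.
import sympy as sp

z, w = sp.symbols('z w')

f = 1 / (z + sp.I)
outer = f.subs(z, 1/w).series(w, 0, 5).removeO()   # valid near w = 0, i.e. for large |z|
print(sp.expand(outer.subs(w, 1/z)))
# matches 1/z - I/z**2 - 1/z**3 + I/z**4 (up to term ordering)
```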
Example 8.3.2. Derive the Laurent expansion of
\[
f(z) = \frac{1}{\sin z}
\]
about $z = \pi$, in the annulus $0 < |z-\pi| < \pi$.

Since $f(z)$ has a singularity at $z = \pi$, it does not admit a Taylor expansion about $z = \pi$. Now let $t = z - \pi$. Then $z = t + \pi$ and we have
\[
\frac{1}{\sin z} = \frac{1}{\sin(t+\pi)} = -\frac{1}{\sin t} = -\frac{1}{t}\cdot\frac{t}{\sin t}.
\]
Here, since
\[
\sin t = t - \frac{t^3}{3!} + \frac{t^5}{5!} - \ldots = t\Bigl(1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \ldots\Bigr),
\]
the function
\[
\frac{1}{\sin t} = \frac{1}{t\bigl(1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \ldots\bigr)}
\]
has a singularity at $t = 0$. The factor $\frac{1}{t}$ carries the singularity of $\frac{1}{\sin t}$, hence we factored it out. But
\[
\frac{t}{\sin t} = \frac{1}{1 - \frac{t^2}{3!} + \frac{t^4}{5!} - \ldots}
\]
is not singular at $t = 0$. That is, $\frac{t}{\sin t}$ is analytic at $t = 0$, hence admits a Taylor series expansion in some neighborhood of $t = 0$. Therefore, for $0 \le |t| < \pi$, the Taylor series will be of the form, say,
\[
\frac{t}{\sin t} = a_0 + a_1 t + a_2 t^2 + \ldots = \sum_{n=0}^{\infty} a_n t^n.
\]
(Though $-\frac{1}{t}$ is singular at $t = 0$, it is already a one-term Laurent series about $t = 0$. Thus $\frac{t}{\sin t}$ has been "desingularized" at $t = 0$ by the factor $t$ in the numerator.)

Now, to find the coefficients $a_0, a_1, \ldots$, we use the method of undetermined coefficients:
\[
t = \frac{t}{\sin t}\cdot\sin t = (a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + \ldots)\Bigl(t - \frac{t^3}{3!} + \frac{t^5}{5!} - \ldots\Bigr)
= a_0 t + a_1 t^2 + \Bigl(a_2 - \frac{a_0}{3!}\Bigr)t^3 + \Bigl(a_3 - \frac{a_1}{3!}\Bigr)t^4 + \ldots
\]
This implies $a_0 = 1$, $a_1 = 0$, $a_2 = \frac{1}{6}$, $a_3 = 0$, and hence
\[
\frac{t}{\sin t} = 1 + \frac{1}{6}t^2 + \frac{7}{360}t^4 + \ldots
\]
Thus
\[
\frac{1}{\sin z} = -\frac{1}{z-\pi}\Bigl(1 + \frac{1}{6}(z-\pi)^2 + \frac{7}{360}(z-\pi)^4 + \ldots\Bigr)
= -\frac{1}{z-\pi} - \frac{1}{6}(z-\pi) - \frac{7}{360}(z-\pi)^3 - \ldots
\]
is the desired Laurent series in the annulus $0 < |z-\pi| < \pi$.

Remark 8.3.3. In the Laurent series expansion of a function $f$ about $a$ we have two parts:
\[
f(z) = \sum_{n=-\infty}^{\infty} c_n (z-a)^n
= \sum_{n=-\infty}^{-1} c_n (z-a)^n + \sum_{n=0}^{\infty} c_n (z-a)^n
= \sum_{m=1}^{\infty} b_m (z-a)^{-m} + \sum_{n=0}^{\infty} c_n (z-a)^n
= \sum_{m=1}^{\infty} \frac{b_m}{(z-a)^m} + \sum_{n=0}^{\infty} c_n (z-a)^n,
\]
where $b_m = c_{-n}$ and $m = -n$. In this last expression the sum
\[
\sum_{n=0}^{\infty} c_n (z-a)^n
\]
is the Taylor-series part (it is the whole series when the function is analytic), and the sum
\[
\sum_{m=1}^{\infty} \frac{b_m}{(z-a)^m}
\]
is known as the principal part of the Laurent series of $f(z)$.

Exercise 8.3.4. Expand $f(z) = e^{1/z}$ about $z = 0$.
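The expansion in Example 8.3.2 can be checked with sympy after the substitution $t = z - \pi$, using $\sin(t+\pi) = -\sin t$. The sketch below is not part of the original notes.

```python
# Illustrative check (not from the notes) of Example 8.3.2: expand -1/sin(t)
# about t = 0, where t = z - pi.
import sympy as sp

t = sp.symbols('t')

print((-1 / sp.sin(t)).series(t, 0, 5))
# -> -1/t - t/6 - 7*t**3/360 + O(t**5)
# Replacing t by z - pi gives 1/sin z = -1/(z-pi) - (z-pi)/6 - 7(z-pi)^3/360 - ...
```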
8.4 Exercises

Chapter 9

INTEGRATION BY THE METHOD OF RESIDUE.

9.1 Zeros and Classification of Singularities.

Recall that a complex function $f$ is said to be singular at a point $z = a$ if $f$ is not analytic at $a$, that is, if $f$ is not differentiable throughout any open disk about $a$.

There are different kinds of singularities. Suppose $z = a$ is a singular point of $f$, that is, $f$ is not analytic at $a$. If $f$ is analytic in an annulus $0 < |z-a| < p$ for some $p > 0$, then $z = a$ is called an isolated singularity of $f$; otherwise (i.e., if no such $p > 0$ exists) it is known as a non-isolated singular point.

Example 9.1.1. Let $f(z) = \frac{1}{\sin z}$. Then $z = 0, \pm\pi, \pm 2\pi, \ldots$ are all singular points of $f$. Moreover, all of them are isolated singularities. For instance, to show this at $z = \pi$, consider the annulus $0 < |z-\pi| < \pi$. Clearly $f$ is analytic on this annulus. To see this, with $t = z - \pi$ we have
\[
\frac{1}{\sin z} = \frac{1}{\sin(t+\pi)} = -\frac{1}{\sin t} = -\frac{1}{t}\cdot\frac{t}{\sin t}
\]
for $z \neq \pi$ (hence $t \neq 0$), and $\frac{t}{\sin t}$ is analytic in the annulus.

Example 9.1.2. The function
\[
g(z) = \frac{1}{\sin\bigl(\frac{1}{z}\bigr)}
\]
is singular at $z = \frac{1}{k\pi}$, $k = \pm 1, \pm 2, \ldots$, and at $z = 0$. The singular points $z = \frac{1}{k\pi}$, $k \in \mathbb{Z}\setminus\{0\}$, are isolated (as we can find a $\rho > 0$ such that $g$ is analytic in $0 < |z - \frac{1}{k\pi}| < \rho$), while the point $z = 0$ is not isolated, because every annulus $0 < |z| < \rho$ inevitably contains at least one singular point (in fact, infinitely many of them) no matter how small we choose $\rho > 0$. (Since $\frac{1}{k\pi} \to 0$ as $k \to \infty$, the point $0$ is the limit point, or accumulation point, of the singularities.)

Assume that $f$ has an isolated singularity at $z = a$. That is, there exists $\rho > 0$ such that in $0 < |z-a| < \rho$ the function $f$ has a Laurent series of the form
\[
f(z) = \sum_{m=1}^{\infty} \frac{b_m}{(z-a)^m} + \sum_{n=0}^{\infty} c_n (z-a)^n = \sum_{n=-\infty}^{\infty} c_n (z-a)^n,
\]
where $m = -n$ and $b_m = c_{-n}$. Recall that the sum
\[
\sum_{m=1}^{\infty} \frac{b_m}{(z-a)^m}
\]
is the principal part of the Laurent series of $f$.

1. If $b_m = 0$ for every $m$ in the principal part, then $z = a$ is called a removable singularity.

2. If the principal part terminates at some point, say after $N < \infty$ terms, then the singular point $z = a$ of $f$ is known as an $N^{th}$-order pole. Here $a$ is the pole and $N$ is its order. If $N = 1$, then the pole $z = a$ is called a simple pole, and the Laurent series takes the form
\[
f(z) = \frac{b_1}{z-a} + c_0 + c_1(z-a) + c_2(z-a)^2 + \ldots
\]

3. If the principal part has infinitely many terms, then $f$ is said to have an isolated essential singularity at $z = a$.

Example 9.1.3. Let
\[
f(z) = \frac{\cos z}{z^2}.
\]
Then $f$ is differentiable at all $z \neq 0$ and not defined at $z = 0$. Using the Maclaurin series of $\cos z$, the Laurent expansion of $f(z)$ around zero is
\[
\frac{\cos z}{z^2} = \frac{1}{z^2}\sum_{n=0}^{\infty} \frac{(-1)^n z^{2n}}{(2n)!} = \frac{1}{z^2} - \frac{1}{2} + \frac{z^2}{24} - \ldots
\]
The highest power of $\frac{1}{z}$ in the expansion is $2$, so $f$ has a pole of order $2$ at $0$.

For
\[
f(z) = \frac{\cos z}{z^2},
\]
consider $\lim_{z\to 0} z^2 f(z) = 1 \neq 0$. We have the following theorem relating a pole of order $m$ at $z = a$ to the limit $\lim_{z\to a}(z-a)^m f(z)$.

Theorem 9.1.1. Let $f$ be differentiable in $0 < |z-a| < p$. Then $f$ has a pole of order $m$ at $a$ if and only if $\lim_{z\to a} (z-a)^m f(z)$ is a nonzero (finite) number.

Now, the question is: how do we determine the order of a pole? We use the notion of a zero of a function.

Suppose that a function $f$ is analytic at $z = a$. We say that $f$ has a zero at $a \in D$ if $f(a) = 0$. A zero $a$ of $f$ is said to have order $m$ if $f(a) = f'(a) = f''(a) = \ldots = f^{(m-1)}(a) = 0$ and $f^{(m)}(a) \neq 0$.

Example 9.1.4. Let $f(z) = z^3$. Then $f'(z) = 3z^2$, $f''(z) = 6z$ and $f'''(z) = 6$. Here $f(0) = f'(0) = f''(0) = 0$ and $f'''(0) = 6 \neq 0$, and hence $f$ has a zero, $z = 0$, of order $3$.

Remark 9.1.2. If $f$ is analytic at $a$ and $f(a) \neq 0$, we say for convenience that $f$ has a zero, $z = a$, of order $0$, or a zeroth-order zero at $z = a$.

Now we state a theorem which helps us to determine the different kinds of singularities of a function.

Theorem 9.1.3. Let $p$ and $q$ be analytic functions at $z = a$, having zeros of order $P$ and $Q$ respectively at $a$. Then

1. $f(z) = \frac{1}{p(z)}$ has a pole of order $P$ at $z = a$;

2. $f(z) = \frac{p(z)}{q(z)}$ has a pole of order $N = Q - P$ at $z = a$ if $Q - P > 0$, and $f$ is analytic at $a$ if $Q \le P$.

Example 9.1.5. Find and classify all the singularities of
\[
f(z) = \frac{(\pi - z)(z^4 - 3z^2)}{\sin^2 z}.
\]

Solution

Let $p(z) = (\pi - z)(z^4 - 3z^2) = (\pi - z)(z^2 - 3)z^2$ and $q(z) = \sin^2 z$. Then $p$ has a $1^{st}$ order zero at $z = \pi$ and at $z = \pm\sqrt{3}$, and a $2^{nd}$ order zero at $z = 0$; and $q$ has a $2^{nd}$ order zero at $z = 0$ and also a $2^{nd}$ order zero at each $z = n\pi$, $n \neq 0$.

Hence, by the above theorem, $f$ is analytic at $z = 0$, has a pole of order $1$ at $z = \pi$, and has a pole of order $2$ at all $z = n\pi$, $n \neq 0, 1$.

Suppose that $f$ has a pole at $z = a$, say of second order. Then from the Laurent series we have
\[
f(z) = \frac{b_2}{(z-a)^2} + \frac{b_1}{z-a} + g(z), \qquad \text{where } g(z) = c_0 + c_1(z-a) + c_2(z-a)^2 + \ldots \text{ and } b_2 = c_{-2} \neq 0,
\]
for $0 < |z-a| < \rho$ for some $\rho > 0$. Multiplying both sides by $(z-a)^2$, we get
\[
(z-a)^2 f(z) = b_2 + b_1(z-a) + (z-a)^2 g(z).
\]
Since $g(z)$ is analytic at $a$, it is continuous at $a$. Hence
\[
\lim_{z\to a} (z-a)^2 f(z) = b_2 + (b_1 \times 0) + (0 \times g(a)) = b_2.
\]
This implies $|(z-a)^2 f(z)| = |z-a|^2 |f(z)| \to |b_2| \neq 0$, which implies
\[
\lim_{z\to a} |f(z)| = \lim_{z\to a} \frac{|b_2|}{|z-a|^2} = \infty.
\]
Hence we have the following theorem for the general case.

Theorem 9.1.4 (Behavior of a function near its pole). If $f(z)$ has a pole at $z = a$, then
\[
\lim_{z\to a} |f(z)| = \infty.
\]
However, if $f$ has an essential singularity at $z = a$, the above theorem does not hold in general.
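The classifications in Examples 9.1.3 and 9.1.5 can be checked with sympy, using the Laurent expansion for the former and the limit criterion of Theorem 9.1.1 for the latter. The sketch below is not part of the original notes; the sample points $0$, $\pi$ and $2\pi$ are chosen only for illustration.

```python
# Illustrative check (not from the notes) of Examples 9.1.3 and 9.1.5.
import sympy as sp

z = sp.symbols('z')

# Example 9.1.3: the lowest power in the expansion of cos(z)/z**2 is z**(-2).
print((sp.cos(z) / z**2).series(z, 0, 4))      # z**(-2) - 1/2 + z**2/24 + O(z**4)

# Example 9.1.5: (z - a)^m f(z) has a nonzero finite limit iff the pole has order m.
f = (sp.pi - z) * (z**4 - 3*z**2) / sp.sin(z)**2
print(sp.limit(f, z, 0))                           # finite -> analytic at 0
print(sp.limit((z - sp.pi) * f, z, sp.pi))         # nonzero finite -> simple pole at pi
print(sp.limit((z - 2*sp.pi)**2 * f, z, 2*sp.pi))  # nonzero finite -> pole of order 2 at 2*pi
```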
9.2 The Residue Theorem

Consider a complex function $f$ which has two isolated singular points, say $a$ and $b$, inside a simple closed path $C$ (called a contour), and which is analytic elsewhere inside and on $C$.

Figure 9.1: The Contour C.

Then the integral
\[
I = \int_C f(z)\,dz
\]
can, by Cauchy's theorem, be decomposed into
\[
I = \int_{C_1} f(z)\,dz + \int_{C_2} f(z)\,dz,
\]
where $C_1$ and $C_2$ are the curves as in Figure 9.2. Since $a$ and $b$ are singular points of $f$, the Laurent series expansions of $f$ will be
\[
f(z) = \sum_{n=-\infty}^{\infty} c^{(1)}_n (z-a)^n \ \text{ in } 0 < |z-a| < p_1
\quad\text{and}\quad
f(z) = \sum_{n=-\infty}^{\infty} c^{(2)}_n (z-b)^n \ \text{ in } 0 < |z-b| < p_2
\]
for some $p_1 > 0$, $p_2 > 0$.

Assume that $C_1$ lies in the annulus $0 < |z-a| < p_1$ and $C_2$ lies in the annulus $0 < |z-b| < p_2$. Then
\[
I = \int_C f(z)\,dz = \int_{C_1} f(z)\,dz + \int_{C_2} f(z)\,dz
= \int_{C_1} \sum_{n=-\infty}^{\infty} c^{(1)}_n (z-a)^n\,dz + \int_{C_2} \sum_{n=-\infty}^{\infty} c^{(2)}_n (z-b)^n\,dz
\]
\[
= \sum_{n=-\infty}^{\infty} c^{(1)}_n \int_{C_1} (z-a)^n\,dz + \sum_{n=-\infty}^{\infty} c^{(2)}_n \int_{C_2} (z-b)^n\,dz
= 2\pi i \, c^{(1)}_{-1} + 2\pi i \, c^{(2)}_{-1} = 2\pi i\bigl(c^{(1)}_{-1} + c^{(2)}_{-1}\bigr),
\]
since $\int_{C_j} (z-z_0)^n\,dz = 0$ for every integer $n \neq -1$ and equals $2\pi i$ for $n = -1$.

In the above integral, the surviving coefficients $c^{(1)}_{-1}$ and $c^{(2)}_{-1}$ are called the residues of $f(z)$ at $a$ and $b$ respectively. The residue of a function $f$ at $a$ is denoted by $\mathrm{Res}(f, a)$. We can generalize the above result in the following theorem.

Theorem 9.2.1 (Residue Theorem). Let $C$ be a piecewise smooth simple closed curve oriented counterclockwise and let $f(z)$ be analytic inside and on $C$ except at finitely many isolated singular points $a_1, a_2, \ldots, a_k$ in the interior of $C$. If $c^{(j)}_{-1}$ denotes the residue of $f$ at $a_j$, then
\[
I = \int_C f(z)\,dz = 2\pi i \sum_{j=1}^{k} c^{(j)}_{-1}.
\]
That is, the integral $I$ is equal to $2\pi i$ times the sum of the residues of $f$ inside $C$.

Example 9.2.1. Calculate the residue and evaluate
\[
I = \int_C z^3 \cos\frac{1}{z}\,dz,
\]
where $C$ is the circle $|z| = 1$ oriented counterclockwise.

Solution

The only singular point is $z = 0$, and the Laurent series about $z = 0$ is
\[
z^3 \cos\frac{1}{z} = z^3 \sum_{n=0}^{\infty} (-1)^n \frac{(1/z)^{2n}}{(2n)!}
= z^3\Bigl(1 - \frac{1}{2!\,z^2} + \frac{1}{4!\,z^4} - \frac{1}{6!\,z^6} + \ldots\Bigr)
= z^3 - \frac{z}{2!} + \frac{1}{4!\,z} - \frac{1}{6!\,z^3} + \ldots
\]
for $0 < |z| < \infty$. Thus the residue is $c_{-1} = \frac{1}{4!} = \frac{1}{24}$, and hence
\[
I = \int_C z^3 \cos\frac{1}{z}\,dz = 2\pi i \times \frac{1}{24} = \frac{\pi}{12} i.
\]

Question: How do we calculate the residue in the general case?

To start with, suppose $f(z)$ has a simple (first-order) pole at $z = a$, so that
\[
f(z) = \frac{c_{-1}}{z-a} + c_0 + c_1(z-a) + c_2(z-a)^2 + \ldots
\]
in the annulus $0 < |z-a| < p$ for some $p > 0$. Then
\[
(z-a)f(z) = c_{-1} + c_0(z-a) + c_1(z-a)^2 + c_2(z-a)^3 + \ldots
\]
and $\lim_{z\to a}\bigl[(z-a)f(z)\bigr] = c_{-1}$, which is the residue of $f$ at $z = a$.

Next suppose that $f$ has an $N^{th}$ order pole at $z = a$, that is,
\[
f(z) = \frac{c_{-N}}{(z-a)^N} + \frac{c_{-N+1}}{(z-a)^{N-1}} + \ldots + c_0 + c_1(z-a) + \ldots
\]
and
\[
(z-a)^N f(z) = c_{-N} + c_{-N+1}(z-a) + \ldots + c_0(z-a)^N + c_1(z-a)^{N+1} + \ldots \qquad (9.1)
\]
However, now
\[
\lim_{z\to a}\bigl[(z-a)^N f(z)\bigr] = c_{-N}
\]
is unfortunately not the residue $c_{-1}$ of $f$.

To find the residue $c_{-1}$ of $f$ at $z = a$, observe that the right hand side of (9.1) is the Taylor series expansion of $g(z) = (z-a)^N f(z)$ at $z = a$, and the coefficient $c_{-N+j}$ of $(z-a)^j$ is the $j^{th}$ derivative of $g(z)$ at $z = a$ divided by $j!$. That is,
\[
c_{-N+j} = \frac{1}{j!}\,\frac{d^j}{dz^j} g(a).
\]
When $j = N-1$, we have $-N+j = -1$ and $c_{-N+j}$ becomes $c_{-1}$, the residue of $f$ at $z = a$. Therefore,
\[
c_{-1} = \frac{1}{(N-1)!} \lim_{z\to a} g^{(N-1)}(z),
\]
where $g(z) = (z-a)^N f(z)$. This holds true only if the singularity at $z = a$ is not essential.

Theorem 9.2.2 (Residue at a Pole of Order $m$). Let $f$ be a function having a pole of order $m$ at $z = a$. Then
\[
\mathrm{Res}(f, a) = \frac{1}{(m-1)!} \lim_{z\to a} \frac{d^{m-1}}{dz^{m-1}}\bigl[(z-a)^m f(z)\bigr].
\]

Example 9.2.2. Evaluate all residues of $f(z) = \frac{1}{(z+2)(z-1)^3}$.

Solution

The denominator of $f$ has a first-order zero at $z = -2$ and a third-order zero at $z = 1$. Since the numerator $1$ has no zeros, $f$ has a first-order pole at $z = -2$ and a third-order pole at $z = 1$ ($N = 3$). Thus
\[
\mathrm{Res}_{z=-2} f = \frac{1}{0!}\lim_{z\to -2} (z+2)\cdot\frac{1}{(z+2)(z-1)^3} = \lim_{z\to -2} \frac{1}{(z-1)^3} = -\frac{1}{3^3} = -\frac{1}{27}
\]
and
\[
\mathrm{Res}_{z=1} f = \frac{1}{(3-1)!}\lim_{z\to 1} \frac{d^2}{dz^2}\Bigl[(z-1)^3\cdot\frac{1}{(z+2)(z-1)^3}\Bigr]
= \frac{1}{2!}\lim_{z\to 1} \Bigl(\frac{1}{z+2}\Bigr)''
= \frac{1}{2}\lim_{z\to 1} \frac{(-1)(-2)}{(z+2)^3} = \lim_{z\to 1} \frac{1}{(z+2)^3} = \frac{1}{3^3} = \frac{1}{27}.
\]
Thus
\[
\int_C f(z)\,dz = \int_C \frac{dz}{(z+2)(z-1)^3} = 2\pi i\Bigl(-\frac{1}{27} + \frac{1}{27}\Bigr) = 0
\]
for a counterclockwise $C$ containing both $-2$ and $1$ in its interior.

If $C$ contains only, say, $z = -2$, then
\[
\int_C f(z)\,dz = 2\pi i\cdot\Bigl(-\frac{1}{27}\Bigr) = -\frac{2\pi}{27}i.
\]
If $C$ contains neither of them, then $f$ is analytic inside and on $C$, and hence $\int_C f(z)\,dz = 0$.

Example 9.2.3. Let $f(z) = \frac{z e^{\pi z}}{(z-2)^2(z^2+4)}$. Evaluate the integral $I$ of $f$ over the ellipse $9x^2 + y^2 = 9$, counterclockwise.

Solution

Since the denominator $(z-2)^2(z^2+4)$ has a zero at $z = 2$ of order $2$ and zeros at $z = \pm 2i$ each of order $1$, and the numerator $z e^{\pi z}$ has no zeros at these points, $f$ has a pole of order $2$ at $z = 2$ and poles of order $1$ at $z = 2i$ and $z = -2i$. But since $z = 2$ is not inside the ellipse $C$, it has no relevance for the integration. Hence we consider only $z = -2i$ and $z = 2i$. Their respective residues are (using $e^{\pm 2\pi i} = 1$)
\[
\mathrm{Res}_{z=-2i} f = \lim_{z\to -2i} (z+2i)\cdot\frac{z e^{\pi z}}{(z-2)^2(z-2i)(z+2i)}
= \frac{(-2i)e^{-2\pi i}}{(-2i-2)^2(-4i)} = \frac{-2i}{(8i)(-4i)} = \frac{-2i}{32} = -\frac{i}{16}
\]
and
\[
\mathrm{Res}_{z=2i} f = \lim_{z\to 2i} (z-2i)\cdot\frac{z e^{\pi z}}{(z-2)^2(z-2i)(z+2i)}
= \frac{(2i)e^{2\pi i}}{(2i-2)^2(4i)} = \frac{2i}{(-8i)(4i)} = \frac{2i}{32} = \frac{i}{16}.
\]
Therefore,
\[
\int_C f(z)\,dz = 2\pi i\Bigl(-\frac{i}{16} + \frac{i}{16}\Bigr) = 0.
\]
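Before moving on, the residues computed in Examples 9.2.1 to 9.2.3 can be cross-checked symbolically. The sketch below is not part of the original notes; for the essential singularity in Example 9.2.1 the residue is read off from the expansion in $w = 1/z$ rather than from sympy's residue routine.

```python
# Illustrative check (not from the notes) of Examples 9.2.1 - 9.2.3.
import sympy as sp

z, w = sp.symbols('z w')

# Example 9.2.1: residue of z^3 cos(1/z) at 0 = coefficient of w in cos(w)/w^3.
expansion = (z**3 * sp.cos(1/z)).subs(z, 1/w).series(w, 0, 3).removeO()
print(expansion.coeff(w, 1))                          # -> 1/24

# Example 9.2.2: residues of 1/((z+2)(z-1)^3) at -2 and 1.
f1 = 1 / ((z + 2) * (z - 1)**3)
print(sp.residue(f1, z, -2), sp.residue(f1, z, 1))    # -> -1/27  1/27

# Example 9.2.3: residues of z e^{pi z} / ((z-2)^2 (z^2+4)) at -2i and 2i.
f2 = z * sp.exp(sp.pi * z) / ((z - 2)**2 * (z**2 + 4))
print(sp.simplify(sp.residue(f2, z, -2*sp.I)), sp.simplify(sp.residue(f2, z, 2*sp.I)))
# -> -I/16  I/16
```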
9.3 Evaluation of Real Integrals.

Consider the class of real integrals of the general form
\[
I = \int_0^{2\pi} F(\cos\theta, \sin\theta)\,d\theta,
\]
where $F$ is a rational function of $\cos\theta$ and $\sin\theta$.

Example 9.3.1. The functions
\[
F_1(\cos\theta, \sin\theta) = \frac{2 - \cos^2\theta\,\sin\theta}{1 + \cos\theta}
\quad\text{and}\quad
F_2(\cos\theta, \sin\theta) = \frac{\sin a\theta}{(1 + \cos b\theta)^2}, \quad a, b \in \mathbb{R},
\]
are rational functions of $\cos\theta$ and $\sin\theta$.

To evaluate integrals of the above form, use the change of variable $z = e^{i\theta}$. This change of variable transforms the real integral into a closed-path complex integral: if $\theta_1 = 0$ then $z_1 = 1$, and if $\theta_2 = 2\pi$ then $z_2 = e^{2\pi i} = 1$, so as $\theta$ runs from $0$ to $2\pi$, $z$ traverses the unit circle $C$ once counterclockwise.
Here $dz = i e^{i\theta}\,d\theta$, which implies $d\theta = \frac{dz}{iz}$, and with this change of variable we get
\[
\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} = \frac{z + z^{-1}}{2} = \frac{z^2+1}{2z}
\quad\text{and}\quad
\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i} = \frac{z - z^{-1}}{2i} = \frac{z^2-1}{2iz}.
\]
Hence
\[
I = \int_0^{2\pi} F(\cos\theta, \sin\theta)\,d\theta = \oint_C F\Bigl(\frac{z^2+1}{2z}, \frac{z^2-1}{2iz}\Bigr)\frac{dz}{iz}.
\]
Then we can use the Residue Theorem to evaluate the final integral.

Example 9.3.2. Evaluate
\[
I = \int_0^{2\pi} \frac{d\theta}{2 - \sin\theta}.
\]

Solution

Let $z = e^{i\theta}$. Then $d\theta = \frac{dz}{iz}$. As $\theta$ goes from $0$ to $2\pi$, $z$ traverses a complete circle of radius $r = 1$. Thus
\[
I = \int_0^{2\pi} \frac{d\theta}{2 - \sin\theta}
= \oint_C \frac{1}{2 - \frac{z^2-1}{2iz}}\cdot\frac{dz}{iz}
= \oint_C \frac{-2\,dz}{z^2 - 4iz - 1}
= \oint_C \frac{-2\,dz}{(z - z_1)(z - z_2)},
\]
where $z_1 = (2 + \sqrt{3})i$ and $z_2 = (2 - \sqrt{3})i$ are the roots of $z^2 - 4iz - 1 = 0$.
Since $z_1$ lies outside of $C$, its contribution to the integral is zero. Thus we have
\[
I = \oint_C \frac{-2\,dz}{(z-z_1)(z-z_2)}
= 2\pi i \times \mathrm{Res}_{z=z_2}\Bigl[\frac{-2}{(z-z_1)(z-z_2)}\Bigr]
= 2\pi i \times \lim_{z\to z_2}\Bigl[(z-z_2)\cdot\frac{-2}{(z-z_1)(z-z_2)}\Bigr]
\]
\[
= 2\pi i \times \lim_{z\to z_2}\frac{-2}{z-z_1}
= 2\pi i \times \frac{-2}{z_2 - z_1}
= 2\pi i \times \frac{-2}{-2\sqrt{3}\,i}
= \frac{2\pi}{\sqrt{3}} = \frac{2\sqrt{3}}{3}\pi.
\]
\[
\therefore \int_0^{2\pi} \frac{d\theta}{2 - \sin\theta} = \frac{2}{\sqrt{3}}\pi.
\]

Exercise. Evaluate, using the Residue Theorem,
\[
I = \int_0^{\pi} \frac{\cos\theta}{17 - 8\cos\theta}\,d\theta = \frac{1}{2}\int_0^{2\pi} \frac{\cos\theta}{17 - 8\cos\theta}\,d\theta.
\qquad \text{Ans. } \frac{\pi}{60}.
\]
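A quick numerical evaluation of both trigonometric integrals confirms the values above. The sketch below is not part of the original notes; it evaluates the real integrals directly with sympy's numerical quadrature.

```python
# Illustrative numerical check (not from the notes) of Example 9.3.2 and the exercise.
import sympy as sp

theta = sp.symbols('theta')

I1 = sp.Integral(1 / (2 - sp.sin(theta)), (theta, 0, 2*sp.pi)).evalf()
print(I1, (2*sp.pi/sp.sqrt(3)).evalf())      # both approximately 3.6276

I2 = sp.Integral(sp.cos(theta) / (17 - 8*sp.cos(theta)), (theta, 0, sp.pi)).evalf()
print(I2, (sp.pi/60).evalf())                # both approximately 0.05236
```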
9.3.1 Improper Integrals

Consider real integrals of the type
\[
\int_{-\infty}^{\infty} f(x)\,dx.
\]
Clearly the improper integral can be written as
\[
\int_{-\infty}^{\infty} f(x)\,dx = \lim_{A\to\infty} \int_{-A}^{0} f(x)\,dx + \lim_{B\to\infty} \int_{0}^{B} f(x)\,dx.
\]
If both limits exist, then the improper integral is said to be convergent and can be expressed in the form
\[
\int_{-\infty}^{\infty} f(x)\,dx = \lim_{R\to\infty} \int_{-R}^{R} f(x)\,dx. \qquad (*)
\]
Now assume that $f(x) = \frac{p(x)}{q(x)}$ with $q(x) \neq 0$ for all $x \in \mathbb{R}$ and $\deg q(x) - \deg p(x) \ge 2$. Then clearly the integral is convergent and we can use the expression in $(*)$ without any further remark.

Consider the closed contour $C$ consisting of the segment $[-R, R]$ on the real axis together with the upper semicircle $C_R$ of radius $R$, and let $f(z)$ be the complex function obtained from $f(x)$ by replacing $x$ with $z$. Then
\[
\oint_C f(z)\,dz = \int_{-R}^{R} f(x)\,dx + \int_{C_R} f(z)\,dz = 2\pi i \sum \mathrm{Res}\, f(z)
\]
\[
\Rightarrow \int_{-R}^{R} f(x)\,dx = 2\pi i \sum \mathrm{Res}\, f(z) - \int_{C_R} f(z)\,dz,
\]
where the sum is over the singularities of $f$ enclosed by $C$.

Now, as $R \to \infty$, we consider the integral $\int_{C_R} f(z)\,dz$. Using the substitution $z = Re^{i\theta}$, $C_R$ is described by $|z| = R$, and as $z$ ranges along $C_R$, $\theta$ varies from $0$ to $\pi$. Since $\deg q - \deg p \ge 2$,
\[
|f(z)| < \frac{k}{|z|^2} \quad \text{for } |z| = R > R_0 \text{ sufficiently large, with } k \text{ a constant,}
\]
\[
\Rightarrow \Bigl|\int_{C_R} f(z)\,dz\Bigr| \le \int_{C_R} |f(z)|\,|dz| < \frac{k}{R^2}\,\pi R = \frac{k\pi}{R} \quad \text{for } R > R_0.
\]
Therefore
\[
\int_{-\infty}^{\infty} f(x)\,dx = \lim_{R\to\infty} \int_{-R}^{R} f(x)\,dx
= \lim_{R\to\infty}\Bigl[2\pi i \sum \mathrm{Res}\, f(z) - \int_{C_R} f(z)\,dz\Bigr]
= 2\pi i \sum \mathrm{Res}\, f(z) - 0.
\]
\[
\therefore \int_{-\infty}^{\infty} f(x)\,dx = 2\pi i \sum \mathrm{Res}\, f(z),
\]
where the sum is over all the residues of $f$ in the upper half plane.

Examples. 1. Evaluate
\[
\int_0^{\infty} \frac{\cos ax}{x^2+1}\,dx, \qquad a > 0.
\]

Solution

\[
I = \int_0^{\infty} \frac{\cos ax}{x^2+1}\,dx = \frac{1}{2}\int_{-\infty}^{\infty} \frac{\cos ax}{x^2+1}\,dx.
\]
Now consider the function $f(z) = \frac{e^{iaz}}{z^2+1}$. Clearly $f$ is analytic everywhere except at $z = \pm i$, and at these two points $f$ has simple poles. Thus, taking the pole $z = i$ in the upper half plane,
\[
\oint_C \frac{e^{iaz}}{z^2+1}\,dz = 2\pi i\,\mathrm{Res}_{z=i} f(z) = 2\pi i\,\frac{e^{ia\cdot i}}{2i} = \pi e^{-a}.
\]
That is,
\[
\pi e^{-a} = \oint_C \frac{e^{iaz}}{z^2+1}\,dz
= \lim_{R\to\infty}\int_{-R}^{R} \frac{e^{iax}}{x^2+1}\,dx + \lim_{R\to\infty}\int_{C_R} \frac{e^{iaz}}{z^2+1}\,dz
\]
\[
= \lim_{R\to\infty}\int_{-R}^{R} \frac{\cos ax + i\sin ax}{x^2+1}\,dx + \lim_{R\to\infty}\int_{C_R} \frac{e^{iaz}}{z^2+1}\,dz
= \int_{-\infty}^{\infty} \frac{\cos ax}{x^2+1}\,dx + i\int_{-\infty}^{\infty} \frac{\sin ax}{x^2+1}\,dx,
\]
since on $C_R$ (where $\mathrm{Im}\,z \ge 0$ and $a > 0$)
\[
\Bigl|\frac{e^{iaz}}{z^2+1}\Bigr| \le \frac{1}{|z^2+1|} \le \frac{1}{R^2-1} \to 0.
\]
\[
\therefore \int_{-\infty}^{\infty} \frac{\cos ax}{x^2+1}\,dx = \pi e^{-a}
\quad\text{and}\quad
\int_{-\infty}^{\infty} \frac{\sin ax}{x^2+1}\,dx = 0.
\]
\[
\Rightarrow I = \int_0^{\infty} \frac{\cos ax}{x^2+1}\,dx = \frac{1}{2}\int_{-\infty}^{\infty} \frac{\cos ax}{x^2+1}\,dx = \frac{\pi}{2} e^{-a}.
\]

Exercise! Evaluate
\[
\int_0^{\infty} \frac{x^{1/3}}{(x+1)^2}\,dx
\]
using the Residue Theorem.
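The residue computation that drives Example 1 above can also be verified symbolically. The sketch below is not part of the original notes; it checks that $2\pi i$ times the residue of $e^{iaz}/(z^2+1)$ at $z = i$ equals $\pi e^{-a}$.

```python
# Illustrative check (not from the notes) of the residue used in Example 1.
import sympy as sp

z = sp.symbols('z')
a = sp.symbols('a', positive=True)

r = sp.residue(sp.exp(sp.I * a * z) / (z**2 + 1), z, sp.I)
print(sp.simplify(r), sp.simplify(2 * sp.pi * sp.I * r))
# -> -I*exp(-a)/2    pi*exp(-a)
```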