T. Muthukumar
tmk@iitk.ac.in
Contents

Notations

1 Introduction
1.1 Multi-Index Notations
1.2 Classification of PDE

2 Introduction Continued...
2.1 Solution of PDE
2.2 Well-posedness of PDE

8 The Laplacian
8.1 Properties of Laplacian
8.2 Boundary Conditions
8.3 Harmonic Functions

10 Sturm-Liouville Problems
10.1 Eigen Value Problems
10.2 Sturm-Liouville Problems

11 Spectral Results

14 Fourier Series
14.1 Periodic Functions
14.2 Fourier Coefficients and Fourier Series

Appendices
Bibliography
Index
Notations
Symbols
D^α := (∂^{α1}/∂x1^{α1}) ... (∂^{αn}/∂xn^{αn}), where α = (α1, ..., αn). In particular, for α = (1, 1, ..., 1),
D = ∇ = D^{(1,1,...,1)} = (∂/∂x1, ∂/∂x2, ..., ∂/∂xn).
Function Spaces
Cc∞(X) is the class of all infinitely differentiable functions on X with compact support
General Conventions
∇x, ∆x or Dx² When a PDE involves both the space variable x and the time variable t, quantities like ∇, ∆, D², etc. are always taken with respect to the space variable x only. This is a standard convention. Sometimes a suffix, as in ∇x or ∆x, is used to indicate that the operation is taken w.r.t. x.
Introduction
(iii) Studying the properties of a solution. Most often the solution of a PDE does not have a nice formula or representation. How much information about the solution can one extract without any representation of the solution? In this text, one encounters such a situation while studying harmonic functions.
u'(x) := lim_{h→0} (u(x + h) − u(x))/h
provided the limit exists. Now, let Ω be an open subset of R^n. The directional derivative of u : Ω → R, at x ∈ Ω and in the direction of a given vector ξ ∈ R^n, is defined as

∂u/∂ξ(x) := lim_{t→0} (u(x + tξ) − u(x))/t,

provided the limit exists. Let the n-tuple ei := (0, 0, ..., 1, 0, ..., 0), where 1 is in the i-th place, denote the standard basis vectors of R^n. The i-th partial derivative of u at x is the directional derivative of u, at x ∈ Ω and along the direction ei, and is denoted as ∂u/∂xi(x) = u_{xi}(x).
D^α := (∂^{α1}/∂x1^{α1}) ... (∂^{αn}/∂xn^{αn}) = ∂^{|α|}/(∂x1^{α1} ... ∂xn^{αn}).
One adopts the convention that, among the similar components of α, the
order in which differentiation is performed is irrelevant. This is not a re-
strictive convention because the independence of order of differentiation is
valid for smooth1 functions. For instance, if α = (1, 1, 2) then one adopts the
convention that
∂⁴/(∂x1 ∂x2 ∂x3²) = ∂⁴/(∂x2 ∂x1 ∂x3²).
¹Smooth, usually, refers to as much differentiability as required.
∂u/∂ξ(x) = ∇u(x) · ξ.
The normal derivative is the directional derivative along the normal direction
ν(x), at x, with respect to the surface in which x lies. The divergence of a
vector function u = (u1 , . . . , un ), denoted as div(u), is defined as div(u) :=
∇ · u. The k = 2 case is

D²u(x) =
[ ∂²u(x)/∂x1²      ...  ∂²u(x)/∂x1∂xn ]
[ ∂²u(x)/∂x2∂x1    ...  ∂²u(x)/∂x2∂xn ]
[      ...         ...       ...       ]
[ ∂²u(x)/∂xn∂x1    ...  ∂²u(x)/∂xn²   ]  (n×n)
The matrix D2 u is called the Hessian matrix. Observe that the Hessian
matrix is symmetric due to the independence hypothesis of the order in
which partial derivatives are taken. The Laplace operator, denoted as ∆, is defined as the trace of the Hessian, i.e., ∆ := Σ_{i=1}^n ∂²/∂xi². Note that ∆ = ∇ · ∇. Further, for a k-times differentiable function u, the n^k-tensor D^k u(x) := {D^α u(x) | |α| = k} may be viewed as a map from R^n to R^{n^k}.
Thus, the magnitude of D^k u(x) is

|D^k u(x)| := ( Σ_{|α|=k} |D^α u(x)|² )^{1/2}.

In particular, |∇u(x)| = ( Σ_{i=1}^n u_{xi}²(x) )^{1/2}, or |∇u(x)|² = ∇u(x) · ∇u(x), and |D²u(x)| = ( Σ_{i,j=1}^n u_{xi xj}²(x) )^{1/2}.
(ii) A k-th order PDE is semilinear if F is linear only in the highest (k-th) order, i.e., F has the form

Σ_{|α|=k} a_α(x) D^α u(x) + f(D^{k−1}u(x), ..., Du(x), u(x), x) = 0.
Observe that, for a semilinear PDE, f is never linear in u and its deriva-
tives, otherwise it reduces to being linear. For a quasilinear PDE, aα (with
|α| = k), cannot be independent of u or its derivatives, otherwise it reduces
to being semilinear or linear.
Example 1.2. (i) xuy − yux = u is linear.
(v) ux + uy − u2 = 0 is semilinear.
(x) ux uy − u = 0 is nonlinear.
Lecture 2
Introduction Continued...
On integrating both sides w.r.t. x, we obtain uy(x, y) = F(y), for an arbitrary integrable function F : R → R. Now, integrating both sides w.r.t. y, u(x, y) = f(y) + g(x) for an arbitrary g : R → R and an f ∈ C¹(R). But the u obtained above is not a solution of uyx(x, y) = 0 if g is not differentiable. Since we assume the mixed derivatives to be the same, we need to assume f, g ∈ C¹(R) for the solution to exist.
Example 2.4. Consider the first order equation ux(x, y) = uy(x, y) in R². At first glance, the PDE does not seem simple to solve. But, by a change of coordinates, it can be rewritten in a simpler form. Choose the coordinates w = x + y and z = x − y; by the chain rule, ux = uw + uz and uy = uw − uz. In the new coordinates, the PDE becomes uz(w, z) = 0, which is of the form considered in Example 2.1. Therefore, its solution is u(w, z) = f(w) for an arbitrary f : R → R and, hence, u(x, y) = f(x + y). But now an arbitrary f cannot be a solution; we impose that f ∈ C¹(R).
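A quick symbolic check of this example is possible; a sketch using sympy, where f is an arbitrary placeholder function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')  # an arbitrary C^1 function of one variable

# u(x, y) = f(x + y); by the chain rule u_x and u_y are both f'(x + y).
u = f(x + y)
residual = sp.simplify(sp.diff(u, x) - sp.diff(u, y))
print(residual)  # 0
```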
The family of solutions obtained in the above examples may not be the only family that solves the given PDE. The following example illustrates a situation where three different families of solutions exist (more may exist too) for the same PDE.
Example 2.5. Consider the second order PDE ut (x, t) = uxx (x, t).
(i) Note that u(x, t) = c is a solution of the PDE, for any constant c ∈ R.
This is a family of solutions indexed by c ∈ R.
(ii) The function u : R² → R defined as u(x, t) = x²/2 + t + c, for any constant c ∈ R, is also a family of solutions of the PDE, because ut = 1, ux = x and uxx = 1. This family is not covered in the first case.
(iii) The function u(x, t) = e^{c(x+ct)} is also a family of solutions of the PDE, for each c ∈ R, because ut = c²u, ux = cu and uxx = c²u. This family is not covered in the previous two cases.
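The three families above can be checked against the heat equation symbolically; a sketch with sympy:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')

# The three families of Example 2.5: constants, x^2/2 + t + c, and e^{c(x + ct)}.
families = [c, x**2/2 + t + c, sp.exp(c*(x + c*t))]
residuals = [sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2)) for u in families]
print(residuals)  # [0, 0, 0]: each family satisfies u_t = u_xx
```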
(c) and the solution depends continuously on the data given (stability).
Any PDE not meeting the above criteria is said to be ill-posed. If the PDE
(with boundary/initial conditions) is viewed as a map then the well-posedness
of the PDE is expressed in terms of the surjectivity, injectivity and continuity
of the “inverse” map. The existence and uniqueness conditions depend on the notion of solution under consideration. There are three notions of solution, viz., classical solutions, weak solutions and strong solutions. This textbook, for the most part, stays in the classical setting. Further, the stability condition means that a small “change” in the data reflects a small “change” in the solution. The change is measured using a metric or “distance” on the function spaces of data and solutions, respectively. Though in this text we study only well-posed problems, ill-posed problems are also of interest.
The following example illustrates the idea of continuous dependence of
solution on data in the uniform metric on the space of continuous functions.
has the trivial solution u(x, t) = 0. Consider the IVP with a small change in data,

utt(x, t) = uxx(x, t) in R × (0, ∞)
u(x, 0) = 0
ut(x, 0) = ε sin(x/ε),

which has the unique² solution u_ε(x, t) = ε² sin(x/ε) sin(t/ε). The change in the solution of the IVP, measured in the uniform metric, is sup_{(x,t)} |u_ε(x, t) − u(x, t)| = ε². Thus, a small change in data induces a small enough change in the solution under the uniform metric³.
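One can verify symbolically that u_ε solves the wave equation while being uniformly of size ε²; a sketch with sympy:

```python
import sympy as sp

x, t = sp.symbols('x t')
eps = sp.Symbol('epsilon', positive=True)

# u_eps solves u_tt = u_xx, and |u_eps| <= eps^2 everywhere, so a data
# perturbation of size eps yields a solution perturbation of size eps^2.
u = eps**2 * sp.sin(x/eps) * sp.sin(t/eps)
residual = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2))
print(residual)  # 0
```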
Example 2.7 (Ill-posed). The IVP

utt(x, t) = −uxx(x, t) in R × (0, ∞)
u(x, 0) = ut(x, 0) = 0

has the trivial solution u(x, t) = 0. Consider the IVP with a small change in data,

utt(x, t) = −uxx(x, t) in R × (0, ∞)
u(x, 0) = 0
ut(x, 0) = ε sin(x/ε),

which has the unique solution u_ε(x, t) = ε² sin(x/ε) sinh(t/ε). The solution of the IVP is not stable: the data change is small, but, for any fixed t > 0, sup_x |u_ε(x, t)| = ε² sinh(t/ε) → ∞ as ε → 0.
²This claim will be proved in later chapters.
³The space R × (0, ∞) is not compact and the metric is not complete. The example is only to explain the notion of stability at an elementary level.
Lecture 3
First Order PDE
The aim of this chapter is to find the general solution and to solve the Cauchy
problem associated with the first order PDE of the form
F(∇u(x), u(x), x) = 0, x ∈ R^n.
where c is the diffusive coefficient of the substance and d is the rate of decay
of the substance. The function u(x, t) denotes the concentration/density of
the substance at (x, t). Note that the case of no diffusion (c = 0) is a linear
first order equation which will be studied in this section.
Example 3.1 (One Dimension with no decay). The one space dimension trans-
port equation is
speed b and in the same direction as the substance O. For B, the substance
O would appear stationary while for A, the fixed observer, the substance O
would appear to travel with speed b. What is the equation of the transport
of the “stationary” substance O from the viewpoint of the moving observer
B? The answer to this question lies in identifying the coordinate system
for B relative to A. Fix a point x at time t = 0. After time t, the point
x remains as x for the fixed observer A, while for the moving observer B,
the point x is now x − bt. Therefore, the coordinate system for B is (w, z)
where w = x − bt and z = t. Let v(w, z) describe the motion of O from B’s
perspective. Since B sees O as stationary, the PDE describing the motion
of O is vz (w, z) = 0. Therefore, v(w, z) = g(w), for some arbitrary function
g (sufficiently differentiable), is the solution from B’s perspective. To solve
the problem from A’s perspective, note that
ut = vw wt + vz zt = −bvw + vz and
ux = vw wx + vz zx = vw .
Therefore, ut + bux = −bvw + vz + bvw = vz and, hence, u(x, t) = v(w, z) =
g(w) = g(x − bt) (cf. Fig 3.1). The choice of g is based on our restriction
to be in a classical solution set-up. Note that, for any choice of g, we have
g(x) = u(x, 0). The line x − bt = x0 , for some constant x0 , in the xt-plane
tracks the flow of the substance placed at x0 at time t = 0 (cf. Fig 3.2). Also,
observe that 0 = ut + bux = (ux , ut ) · (b, 1) is precisely that the directional
derivative along the vector (b, 1) is zero. This means that u is constant if
we move along the direction (b, 1). Thus, the value of u(x, t) on the line
x − bt = x0 is constant.
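The constancy of u along the lines x − bt = x0 is easy to check numerically; a sketch (g is a hypothetical smooth profile):

```python
import numpy as np

b = 2.0
g = lambda s: np.exp(-s**2)      # a hypothetical smooth initial profile g(x) = u(x, 0)
u = lambda x, t: g(x - b*t)      # solution of u_t + b u_x = 0

# u is constant along each characteristic line x - b*t = x0:
x0 = 0.7
ts = np.linspace(0.0, 5.0, 11)
vals = u(x0 + b*ts, ts)          # evaluate u along the line x = x0 + b*t
print(np.allclose(vals, g(x0)))  # True
```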
Example 3.2 (Transport equation in first quadrant). The transport equation
is
ut (x, t) + bux (x, t) = 0; (x, t) ∈ (0, ∞) × (0, ∞)
with b ∈ R. As before, we obtain u(x, t) = g(x − bt) where g(x) = u(x, 0).
This problem is uniquely solvable in the given region only for b < 0. For b > 0, g defined on the x-axis is not adequate to solve the problem; the problem is not well-posed! The given data is enough only to solve for u in the region {(x, t) ∈ (0, ∞) × (0, ∞) | x > bt} when b > 0. To compute u in {(x, t) ∈ (0, ∞) × (0, ∞) | x < bt} we need to prescribe data on the t-axis, i.e., at (0, t).
Example 3.3 (Transport equation in semi-infinite strip). The transport equa-
tion is
ut (x, t) + bux (x, t) = 0; (x, t) ∈ (0, L) × (0, ∞)
with b ∈ R. As before, we obtain u(x, t) = g(x − bt) where g(x) = u(x, 0). If b > 0 then the problem is well-posed when the data is given on the x-axis and the t-axis. If b < 0 then the problem is well-posed when the data is given on the x-axis and on the line x = L, i.e., at (L, t).
Solving for u(x, y) in the above equation is equivalent to finding the surface S ≡ {(x, y, u(x, y))} generated by u in R³. If u is a solution of (3.2.1) then, at each (x, y) in the domain of u,

(A(x, y, u), B(x, y, u), C(x, y, u)) · (∇u(x, y), −1) = 0.

But (∇u(x, y), −1) is normal to S at the point (x, y) (cf. Appendix B).
Hence, the coefficients (A(x, y, u), B(x, y, u), C(x, y, u)) are perpendicular to
the normal and, therefore, (A(x, y, u), B(x, y, u), C(x, y, u)) lie on the tangent
plane to S at (x, y, u(x, y)).
coefficient vector field V (x, y) = (A(x, y, u), B(x, y, u), C(x, y, u)) of (3.2.1).
Let s denote the parametrization of the characteristic curves w.r.t V . For
convenience, let z(s) := u(x(s), y(s)). Then the characteristic curves can be
found by solving the system of ODEs
dx/ds = A(x(s), y(s), z(s)), dy/ds = B(x(s), y(s), z(s)), dz/ds = C(x(s), y(s), z(s)).   (3.2.2)
The three ODEs obtained are called the characteristic equations. The union of these characteristic (integral) curves gives us the integral surface S, in the sense that every point on the integral surface belongs to exactly one characteristic.
Example 3.4 (Linear Transport Equation). The linear transport equation is
already solved earlier using elementary method. Let us solve the same using
the method of characteristics. Consider the linear transport equation in two
variable,
ut + bux = 0, x ∈ R and t ∈ (0, ∞),
where the constant b ∈ R is given. Thus, the given vector field V (x, t) =
(b, 1, 0). The characteristic equations are
dx/ds = b, dt/ds = 1, and dz/ds = 0.
Solving the three ODEs, we get
Note that solving the system of ODEs requires some initial condition. We have already observed that the solution of the transport equation depends on the value of u at time t = 0, i.e., the value of u on the curve (x, 0) in the xt-plane. The problem of finding a function u solving a first order PDE such that u is known on a curve Γ in the xy-plane is called the Cauchy problem.
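The characteristic system of Example 3.4 decouples, so each ODE can be integrated separately; a sketch using sympy, where x0 and z0 label a hypothetical initial point:

```python
import sympy as sp

s, b, x0, z0 = sp.symbols('s b x0 z0')
x, t, z = sp.Function('x'), sp.Function('t'), sp.Function('z')

# Characteristic ODEs of u_t + b u_x = 0, with vector field V = (b, 1, 0).
xs = sp.dsolve(sp.Eq(x(s).diff(s), b), x(s), ics={x(0): x0}).rhs
ts = sp.dsolve(sp.Eq(t(s).diff(s), 1), t(s), ics={t(0): 0}).rhs
zs = sp.dsolve(sp.Eq(z(s).diff(s), 0), z(s), ics={z(0): z0}).rhs

print(xs, ts, zs)              # b*s + x0, s, z0
print(sp.simplify(xs - b*ts))  # x0: along each characteristic, x - b*t = x0
```

Since z is constant along each characteristic, u is constant on the lines x − bt = x0, recovering u(x, t) = g(x − bt).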
Example 3.5. Let g : R → R be a given (smooth enough) function. Consider the linear transport equation

ut + bux = 0, x ∈ R and t ∈ (0, ∞)
u(x, 0) = g(x), x ∈ R.   (3.2.3)
x(r, 0) = c1 (r) = r
Lecture 4
Method of Characteristics: Continued...
We parametrize the data curve Γ by the variable r, i.e., Γ = {(γ1(r), γ2(r))} = {(br, r)}. The characteristic equations are:
x(r, 0) = c1 (r) = br
Therefore,
(i) Can the three solutions be used to define a function z = u(x, y)?
(ii) If yes to (i), then is the solution z = u(x, y) unique for the Cauchy problem? The answer is yes because two integral surfaces intersecting in Γ must contain the same characteristics (beyond the scope of this course).
Differentiating the data condition u(γ1(r), γ2(r)) = g(r) yields g'(r) = ux(γ1(r), γ2(r))γ1'(r) + uy(γ1(r), γ2(r))γ2'(r). Since u is a solution, at (x0, y0, z0), of the algebraic system

[ A(x0, y0, z0)  B(x0, y0, z0) ] [ ux(x0, y0) ]   [ C(x0, y0, z0) ]
[ γ1'(r)         γ2'(r)        ] [ uy(x0, y0) ] = [ g'(r)         ],

a necessary condition is that

rank [ A(x0, y0, z0)  B(x0, y0, z0)  C(x0, y0, z0) ]
     [ γ1'(r)         γ2'(r)         g'(r)         ]  = 1.
If the above rank condition is satisfied, then the data curve Γ is said to be characteristic at (x0, y0, z0). Thus, we have the following possibilities:
(a) J(r, 0) ≠ 0 for all r ∈ I. Note that, when J(r, 0) ≠ 0, the rank condition is not satisfied¹ and, hence, the data curve Γ does not have any characteristic points (Γ is nowhere parallel to a characteristic direction). Then, in a neighbourhood of Γ, there exists a unique solution u = u(x, y) of the Cauchy problem, given by the system of ODEs.
(b) J(r0 , 0) = 0, for some r0 ∈ I, and Γ is characteristic at the point P0 =
(γ1 (r0 ), γ2 (r0 ), g(r0 )). Then a C 1 solution may exist in a neighborhood
of P0 .
(c) J(r0 , 0) = 0 for some r0 ∈ I and Γ is not characteristic at P0 . There are
no C 1 solutions in a neighborhood of P0 . There may exist less regular
solutions.
(d) If Γ is a characteristic then there exist infinitely many C¹ solutions in a neighborhood of Γ.
Example 4.2. Consider the Burgers’ equation given as
ut + uux = 0 x ∈ R and t ∈ (0, ∞)
u(x, 0) = g(x) x ∈ R.
We parametrize the curve Γ by the variable r, i.e., Γ = {(γ1(r), γ2(r))} = {(r, 0)}. The characteristic equations are:
dx(r, s)/ds = z, dt(r, s)/ds = 1, and dz(r, s)/ds = 0.
¹because the rank is 2 in this case
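Integrating this system with z(r, 0) = g(r) shows u is constant along straight characteristics, so the solution satisfies u = g(x − ut) implicitly. A numerical sketch resolving this relation by fixed-point iteration (g is a hypothetical smooth datum; the iteration converges for small t, before characteristics cross):

```python
import numpy as np

g = lambda s: 1.0 / (1.0 + s**2)   # hypothetical smooth initial datum u(x, 0) = g(x)

def burgers_u(x, t, iters=60):
    """Solve the implicit relation u = g(x - u*t) by fixed-point iteration
    (valid for t small enough that characteristics have not crossed)."""
    u = g(x)                        # initial guess: the value carried from t = 0
    for _ in range(iters):
        u = g(x - u*t)
    return u

u_val = burgers_u(1.0, 0.5)
print(abs(u_val - g(1.0 - 0.5*u_val)) < 1e-12)  # True: u = g(x - u t) holds
```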
Therefore, x(r, s) = g(r)s + r, t(r, s) = s and z(r, s) = g(r).
Note that the data curve is the parabola x = t²/4. We parametrize Γ by the variable r, i.e., Γ = {(γ1(r), γ2(r))} = {(r², 2r)}. The characteristic equations are:
Therefore,
x(r, s) = s²/2 + rs + r², t(r, s) = s + 2r and z(r, s) = s + r.
Let us compute J(r, s) = 2r + s − 2(s + r) = −s and J(r, 0) = 0 for all r.
The rank of the matrix, at (r, 0),

[ r   1  1 ]
[ 2r  2  1 ]

is 2 ≠ 1
and, hence, Γ does not have any characteristic points. We know in this case
we cannot have a C 1 solution but might have less regular solutions.
Let us solve for r and s in terms of x, t and z. Thus, we get s = ∓2√(x − t²/4) and r = t/2 ± √(x − t²/4). Therefore, u(x, t) = t/2 ± √(x − t²/4) are two solutions in the region x > t²/4 and are not differentiable on Γ.
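Both branches can be checked against the underlying equation (the characteristic system here has dz/ds = 1, i.e., the inhomogeneous Burgers' equation ut + uux = 1); a symbolic sketch:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# The two branches u = t/2 +- sqrt(x - t^2/4), defined where x > t^2/4.
residuals = []
for sign in (1, -1):
    u = t/2 + sign*sp.sqrt(x - t**2/4)
    residuals.append(sp.simplify(sp.diff(u, t) + u*sp.diff(u, x) - 1))
print(residuals)  # [0, 0]: both branches satisfy u_t + u u_x = 1
```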
Example 4.4. Consider the Burgers’ equation given as
ut + uux = 1, x ∈ R and t ∈ (0, ∞)
u(t²/2, t) = t, t > 0.
Note that the data curve is the parabola x = t²/2. We parametrize Γ by the variable r, i.e., Γ = {(γ1(r), γ2(r))} = {(r²/2, r)}. The characteristic equations are:
dx(r, s)/ds = z, dt(r, s)/ds = 1, and dz(r, s)/ds = 1
with initial conditions x(r, 0) = r²/2, t(r, 0) = r, and z(r, 0) = r.
Solving the ODE corresponding to z, we get z(r, s) = s + c3 (r) with initial
conditions z(r, 0) = c3 (r) = r. Thus, z(r, s) = s + r. Using this in the ODE
of x, we get
dx(r, s)/ds = s + r.

Solving the ODEs, we get

x(r, s) = s²/2 + rs + c1(r), t(r, s) = s + c2(r)
with initial conditions
x(r, 0) = c1(r) = r²/2 and t(r, 0) = c2(r) = r.
Therefore,
x(r, s) = (s² + r²)/2 + rs, t(r, s) = s + r and z(r, s) = s + r.
Let us compute J(r, s) = r + s − (s + r) = 0 for all r and s. Further, Γ is a characteristic (at all points) because the rank of the matrix, at (r, 0),

[ r  1  1 ]
[ r  1  1 ]

is 1.
Lecture 5
Classification of Second Order PDE

A general second order PDE is of the form F(D²u(x), Du(x), u(x), x) = 0,
for each x ∈ Ω ⊂ Rn and u : Ω → R is the unknown. A Cauchy problem is:
given the knowledge of u on a smooth hypersurface Γ ⊂ Ω, can one find the
solution u of the PDE? The knowledge of u on Γ is said to be the Cauchy
data.
What should be the minimum required Cauchy data for the Cauchy problem to be solved? Viewing the Cauchy problem as analogous to the initial value problem for an ODE, recall that there is a unique solution to the second order ODE
y''(x) + P(x)y'(x) + Q(x)y(x) = 0, x ∈ I
y(x0) = y0
y'(x0) = y0'.
5.1 Semilinear
Consider the Cauchy problem for the second order semilinear PDE in two
variables (x, y) ∈ Ω ⊂ R2 ,
A(x, y)uxx + 2B(x, y)uxy + C(x, y)uyy = D, (x, y) ∈ Ω
u(x, y) = g(x, y), (x, y) ∈ Γ
ux(x, y) = h1(x, y), (x, y) ∈ Γ      (5.1.1)
uy(x, y) = h2(x, y), (x, y) ∈ Γ.
Note that the geometry hidden in the above definition is very similar
to that we encountered in first order equation. Since ν = (−γ20 , γ10 ) is the
¹twice differentiable
normal to Γ at each point, the above definition says that the curve is non-
characteristic if
Σ_{i,j=1}² Aij νi νj = A(γ2')² − 2Bγ1'γ2' + C(γ1')² ≠ 0.
(ii) For any given c ∈ R, consider uy −cuxx = 0. We have already noted that
the equation is parabolic and, hence, should admit one characteristic
curve. The characteristic curve is given by the equation
dy/dx = (B ± √(B² − AC))/A = 0.
Thus, y = a constant is the equation of the characteristic curves, i.e., any horizontal line in R² is a characteristic curve.
(iii) We have already noted that the equation uxx + uyy = 0 is elliptic and,
hence, will have no real characteristics.
(iv) The equation uxx + xuyy = 0 is of mixed type. In the region x > 0, the characteristic curves are y ∓ 2x^{3/2}/3 = a constant.
Lecture 6
Classification of SOPDE:
Continued
6.1 Quasilinear
The notion of classification of second order semilinear PDE can be generalised to quasilinear PDE, with coefficients of the form A(x, u(x), Du(x)), to non-linear PDE and to systems of PDE. However, in these cases the classification may also depend on the solution u. The solutions of the characteristic equation for a quasilinear equation depend on the solution considered.

Example 6.1. Consider the quasilinear PDE uxx − uuyy = 0. The discriminant is d = u. The eigenvalues are 1, −u(x). It is hyperbolic on {u > 0}, elliptic on {u < 0} and parabolic on {u = 0}.
Example 6.2. Consider the quasilinear PDE
(c2 − u2x )uxx − 2ux uy uxy + (c2 − u2y )uyy = 0
where c > 0. Then d = B 2 − AC = c2 (u2x + u2y − c2 ) = c2 (|∇u|2 − c2 ). It is
hyperbolic if |∇u| > c, parabolic if |∇u| = c and elliptic if |∇u| < c.
can have other types of boundary conditions, in addition to the initial (or
Cauchy) condition
y(x0 ) = y0 and y 0 (x0 ) = y00 ,
such as, if I = (a, b) then
where k > 0, is not well-posed. (Hint: compute the explicit solution using separation of variables. Note that, as k → ∞, the Cauchy data tends to zero uniformly, but the solution does not converge to zero for any y ≠ 0. Therefore, a small change from zero Cauchy data (with corresponding solution being zero) may induce a big change in the solution.)
This issue of ill-posedness of the Cauchy problem is very special to second order elliptic equations. In general, the Cauchy problem for a hyperbolic equation is well-posed, as long as the hyperbolicity is valid in the full neighbourhood of the data curve.
Example 6.4. Consider the Cauchy problem for the second order hyperbolic equation

y²uxx − yuyy + (1/2)uy = 0, y > 0
u(x, 0) = f(x)
uy(x, 0) = g(x).
and uy (x, 0) = 0. Thus, the Cauchy problem has no solution unless g(x) = 0.
If g ≡ 0 then the solution is
u(x, y) = F(x + (2/3)y^{3/2}) − F(x − (2/3)y^{3/2}) + f(x − (2/3)y^{3/2})
Lecture 7
Classification of SOPDE: Continued
Definition 7.1.1. For any PDE of the form (7.1.1) we define its discrimi-
nant as B 2 − AC.
ux = uw wx + uz zx ,
uy = uw wy + uz zy ,
uxx = uww wx2 + 2uwz wx zx + uzz zx2 + uw wxx + uz zxx
uyy = uww wy2 + 2uwz wy zy + uzz zy2 + uw wyy + uz zyy
uxy = uww wx wy + uwz (wx zy + wy zx ) + uzz zx zy + uw wxy + uz zxy
b2 − ac = (B 2 − AC)J 2 .
Therefore, we need to find w and z such that, along the slopes of the characteristic curves,

dy/dx = (B ± √(B² − AC))/A = −wx/wy.
This means that, using the parametrisation of the characteristic curves, wx γ1'(r) + wy γ2'(r) = 0, i.e., (d/dr)w(γ1(r), γ2(r)) = 0, and similarly for z. Thus, w and z are chosen such that they are constant on the characteristic curves.
The characteristic curves are found by solving

dy/dx = (B ± √(B² − AC))/A,

and the coordinates are then chosen such that, along the characteristic curves, w(x, y) = a constant and z(x, y) = a constant. Note that wx zy − wy zx = (2wy zy/A)√(B² − AC) ≠ 0.
Example 7.1. For a non-zero constant c ∈ R, let us reduce the PDE uxx − c²uyy = 0 to canonical form. Note that A = 1, B = 0, C = −c², B² − AC = c² and the equation is hyperbolic. The characteristic curves are given by the equation

dy/dx = (B ± √(B² − AC))/A = ±c.

Solving, we get y ∓ cx = a constant. Thus, w = y + cx and z = y − cx. Now writing the PDE in the (w, z) coordinates,

0 = uxx − c²uyy = −4c²uwz, i.e., uwz = 0.
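The canonical form uwz = 0 integrates to u = φ(w) + ψ(z); translating back, u(x, y) = φ(y + cx) + ψ(y − cx) should solve the original PDE. A symbolic sketch, with φ, ψ as arbitrary placeholder functions:

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
phi, psi = sp.Function('phi'), sp.Function('psi')

# General solution recovered from the canonical form u_wz = 0,
# rewritten in the original coordinates w = y + c*x, z = y - c*x.
u = phi(y + c*x) + psi(y - c*x)
residual = sp.simplify(sp.diff(u, x, 2) - c**2*sp.diff(u, y, 2))
print(residual)  # 0
```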
Example 7.2. Let us reduce the PDE uxx − x²yuyy = 0, given in the region {(x, y) | x ∈ R, x ≠ 0, y > 0}, to its canonical form. Note that A = 1, B = 0, C = −x²y and B² − AC = x²y > 0, so the PDE is hyperbolic in the given region. The characteristic curves solve dy/dx = ±x√y, i.e., 2√y ∓ x²/2 = a constant. Choose w = x²/2 + 2√y and z = x²/2 − 2√y. Then
ux = uw wx + uz zx = x(uw + uz)
uy = uw wy + uz zy = (1/√y)(uw − uz)
uxx = uww wx² + 2uwz wx zx + uzz zx² + uw wxx + uz zxx
    = x²(uww + 2uwz + uzz) + uw + uz
uyy = uww wy² + 2uwz wy zy + uzz zy² + uw wyy + uz zyy
    = (1/y)(uww − 2uwz + uzz) − (1/(2y√y))(uw − uz)
−x²yuyy = −x²(uww − 2uwz + uzz) + (x²/(2√y))(uw − uz)
Example 7.3. Let us reduce the PDE e^{2x}uxx + 2e^{x+y}uxy + e^{2y}uyy = 0 to its canonical form. Note that A = e^{2x}, B = e^{x+y}, C = e^{2y} and B² − AC = 0. The PDE is parabolic. The characteristic curves are given by the equation
dy/dx = B/A = e^y/e^x.
Solving, we get e^{−y} − e^{−x} = a constant. Thus, w = e^{−y} − e^{−x}. Now, we choose z such that wx zy − wy zx ≠ 0. For instance, z = x is one such choice.
Then

ux = e^{−x}uw + uz
uy = −e^{−y}uw
uxx = e^{−2x}uww + 2e^{−x}uwz + uzz − e^{−x}uw
uyy = e^{−2y}uww + e^{−y}uw
uxy = −e^{−y}(e^{−x}uww + uwz).
Substituting into the given PDE, we get

e^x e^{−y}uzz = (e^{−y} − e^{−x})uw.

Replacing x, y in terms of w, z gives

uzz = (w/(1 + we^z))uw.
In the elliptic case, B 2 − AC < 0, we have no real characteristics. Thus,
we choose w, z to be the real and imaginary part of the solution of the
characteristic equation.
Example 7.4. Let us reduce the PDE x²uxx + y²uyy = 0, given in the region {(x, y) ∈ R² | x > 0, y > 0}, to its canonical form. Note that A = x², B = 0, C = y² and B² − AC = −x²y² < 0. The PDE is elliptic. Solving the
characteristic equation
dy/dx = ±iy/x,
we get ln x ± i ln y = c. Let w = ln x and z = ln y. Then
ux = uw /x
uy = uz /y
uxx = −uw /x2 + uww /x2
uyy = −uz /y 2 + uzz /y 2
uww + uzz = uw + uz .
Example 7.5. Let us reduce the PDE uxx + 2uxy + 5uyy = xux to its canonical
form. Note that A = 1, B = 1, C = 5 and B 2 − AC = −4 < 0. The PDE is
elliptic. The characteristic equation is
dy/dx = 1 ± 2i.

Solving, we get x − y ± 2ix = a constant. Let w = x − y and z = 2x. Then
ux = uw + 2uz
uy = −uw
uxx = uww + 4uwz + 4uzz
uyy = uww
uxy = −(uww + 2uwz )
Lecture 8
The Laplacian
(b) the wave equation ∂²/∂t² − ∆,
(c) and the Schrödinger equation i∂/∂t + ∆.
∆ := (1/r)(∂/∂r)(r ∂/∂r) + (1/r²)(∂²/∂θ²),
Let x = r cos θ and y = r sin θ. Then ∂x/∂r = cos θ, ∂y/∂r = sin θ and ∂u/∂r = cos θ (∂u/∂x) + sin θ (∂u/∂y). Also,

∂²u/∂r² = cos²θ (∂²u/∂x²) + sin²θ (∂²u/∂y²) + 2 cos θ sin θ (∂²u/∂x∂y).

Similarly, ∂x/∂θ = −r sin θ, ∂y/∂θ = r cos θ, ∂u/∂θ = r cos θ (∂u/∂y) − r sin θ (∂u/∂x) and

(1/r²)(∂²u/∂θ²) = sin²θ (∂²u/∂x²) + cos²θ (∂²u/∂y²) − 2 cos θ sin θ (∂²u/∂x∂y) − (1/r)(∂u/∂r).

Therefore, ∂²u/∂r² + (1/r²)(∂²u/∂θ²) = ∂²u/∂x² + ∂²u/∂y² − (1/r)(∂u/∂r) and, hence,

∆u = ∂²u/∂r² + (1/r²)(∂²u/∂θ²) + (1/r)(∂u/∂r).
Further, in three dimensions, the Laplacian in cylindrical coordinates is given as

∆ := (1/r)(∂/∂r)(r ∂/∂r) + (1/r²)(∂²/∂θ²) + ∂²/∂z²

and in spherical coordinates as

∆ := (1/r²)(∂/∂r)(r² ∂/∂r) + (1/(r² sin φ))(∂/∂φ)(sin φ ∂/∂φ) + (1/(r² sin²φ))(∂²/∂θ²),

where r ∈ [0, ∞), φ ∈ [0, π] (zenith angle or inclination) and θ ∈ [0, 2π) (azimuth angle).
For a radial function u(x) = v(r), with r = |x|,

∆u(x) = d²v(r)/dr² + ((n − 1)/r)(dv(r)/dr).

Proof. Note that

∂r/∂xi = ∂|x|/∂xi = ∂√(x1² + ... + xn²)/∂xi = (1/2)(x1² + ... + xn²)^{−1/2}(2xi) = xi/r.
Thus,

∆u(x) = Σ_{i=1}^n (∂/∂xi)(∂u(x)/∂xi) = Σ_{i=1}^n (∂/∂xi)((dv(r)/dr)(xi/r))
      = Σ_{i=1}^n xi (∂/∂xi)((1/r)(dv(r)/dr)) + (n/r)(dv(r)/dr)
      = Σ_{i=1}^n (xi²/r)(d/dr)((1/r)(dv(r)/dr)) + (n/r)(dv(r)/dr)
      = Σ_{i=1}^n (xi²/r)((1/r)(d²v(r)/dr²) − (1/r²)(dv(r)/dr)) + (n/r)(dv(r)/dr)
      = (r²/r)((1/r)(d²v(r)/dr²) − (1/r²)(dv(r)/dr)) + (n/r)(dv(r)/dr)
      = d²v(r)/dr² − (1/r)(dv(r)/dr) + (n/r)(dv(r)/dr)
      = d²v(r)/dr² + ((n − 1)/r)(dv(r)/dr).

Hence the result is proved.
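The lemma can be spot-checked for n = 3 with a concrete radial function; a sketch with sympy, where v(r) = e^{−r²} is a hypothetical choice:

```python
import sympy as sp

x1, x2, x3, r = sp.symbols('x1 x2 x3 r', positive=True)

v = sp.exp(-r**2)                     # hypothetical radial profile v(r)
R = sp.sqrt(x1**2 + x2**2 + x3**2)    # r = |x| in R^3
u = v.subs(r, R)                      # u(x) = v(|x|)

lap_cart = sum(sp.diff(u, xi, 2) for xi in (x1, x2, x3))
lap_radial = sp.diff(v, r, 2) + (3 - 1)/r*sp.diff(v, r)   # v'' + (n-1)/r v', n = 3

residual = sp.simplify(lap_cart - lap_radial.subs(r, R))
print(residual)  # 0: the Cartesian and radial forms agree
```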
More generally, the Laplacian in Rn may be written in polar coordinates
as
∆ := ∂²/∂r² + ((n − 1)/r)(∂/∂r) + (1/r²)∆_{S^{n−1}},
where ∆Sn−1 is a second order differential operator in angular variables only.
The angular part of Laplacian is called the Laplace-Beltrami operator acting
on Sn−1 (unit sphere of Rn ) with Riemannian metric induced by the standard
Euclidean metric in Rn .
(ii) (Neumann condition) ∇u · ν = g, where ν(x) is the unit outward normal at x ∈ ∂Ω;
The second equality is called the compatibility condition. Thus, for an inho-
mogeneous Laplace equation with Neumann boundary condition, the given
data f, g must necessarily satisfy the compatibility condition. Otherwise, the
Neumann problem does not make any sense.
The aim of this chapter is to solve, for any open bounded subset Ω ⊂ R^n,

−∆u(x) = f(x) in Ω

with one of the above inhomogeneous boundary conditions on ∂Ω,

and w is a solution of

−∆w(x) = f(x) in Ω

with the corresponding homogeneous boundary condition on ∂Ω.
Lecture 9
Properties of Harmonic Functions
hence it should be attained at some point x* ∈ ∂Ω, on the boundary. For all x ∈ Ω,

The above inequality is true for all ε > 0. Thus, u(x) ≤ max_{y∈∂Ω} u(y), for all x ∈ Ω. Therefore, max_Ω u ≤ max_{∂Ω} u and, hence, we have equality.
(b) (Stability) |u1 (x) − u2 (x)| ≤ maxy∈∂Ω |g1 (y) − g2 (y)| for all x ∈ Ω.
Proof. The fact that there is at most one solution to the Dirichlet problem follows from Theorem 9.0.2. Let w = u1 − u2. Then w is harmonic.
(a) Note that w = g1 − g2 ≥ 0 on ∂Ω. Since g1(x0) > g2(x0) for some x0 ∈ ∂Ω, w is not identically zero and, by the strong maximum principle, w(x) > 0 for all x ∈ Ω. This proves the comparison result.
We remark that the uniqueness result is not true for unbounded domains.
Example 9.1. Let u ∈ C²(Ω) ∩ C(Ω̄) be a solution of the Dirichlet problem

∆u(x) = 0, x ∈ Ω
u(x) = g(x), x ∈ ∂Ω.    (9.0.1)
Note that Ω is the unit ball in R3 with a sharp inward cusp, called Lebesgue
spine, at the origin (0, 0, 0).
Example 9.4. There are domains with inward cusps for which the classical
problem is solvable. For instance, consider
for any positive integer k. The proof of this fact involves the theory of
capacities, beyond the scope of this course.
Lecture 10
Sturm-Liouville Problems
p'(x) = p(x)P(x).

Hence, p(x) = e^{∫P(x)dx}. Thus, by setting q(x) = p(x)Q(x) and r(x) = p(x)R(x), we have the other form.
The function y(x) and the scalar λ are the unknown quantities. The pair of boundary conditions given above is called separated; the boundary conditions correspond to the endpoints a and b, respectively. Note that c1 and c2 cannot both be zero simultaneously, and similarly for d1 and d2.
Definition 10.2.1. The Sturm-Liouville problem with separated boundary
conditions is said to be regular if:
(a) p, p0 , q, r : [a, b] → R are continuous functions
We say the S-L problem is singular if either the interval (a, b) is unbounded or one (or both) of the regularity conditions given above fails.
We say the S-L problem is periodic if p(a) = p(b) and the separated
boundary conditions are replaced with the periodic boundary condition y(a) =
y(b) and y 0 (a) = y 0 (b).
(a)
−y 00 (x) = λy(x) x ∈ (0, a)
y(0) = y(a) = 0.
We have chosen c1 = d1 = 1 and c2 = d2 = 0. Also, q ≡ 0 and p ≡ r ≡ 1.
(b)
−y 00 (x) = λy(x) x ∈ (0, a)
y 0 (0) = y 0 (a) = 0.
We have chosen c1 = d1 = 0 and c2 = d2 = 1. Also, q ≡ 0 and p ≡ r ≡ 1.
(c)
−y 00 (x) = λy(x) x ∈ (0, a)
y 0 (0) = 0
cy(a) + y 0 (a) = 0,
(d)
−(x²y'(x))' = λy(x), x ∈ (1, a)
y(1) = 0
y(a) = 0,
where p(x) = r(x) = x, q(x) = −n2 /x. This equation is not regular
because p(0) = r(0) = 0 and q is not continuous in the closed interval
[0, a], since q(x) → −∞ as x → 0. Note that there is no boundary
condition corresponding to 0.
Lecture 11
Spectral Results
We shall now state without proof the spectral theorem for regular S-L prob-
lem. Our aim, in this course, is to check the validity of the theorem through
some examples.
Theorem 11.0.1. For a regular S-L problem, there exists an increasing se-
quence of eigenvalues 0 < λ1 < λ2 < λ3 < . . . < λk < . . . with λk → ∞, as
k → ∞.
Exercise 4. Let Wk = W_{λk} be the eigenspace corresponding to λk. Show that for a regular S-L problem Wk is one dimensional, i.e., corresponding to each λk, there cannot be two or more linearly independent eigenvectors.
Example 11.1. Consider the boundary value problem,
y 00 + λy = 0 x ∈ (0, a)
y(0) = y(a) = 0.
This is a second order ODE with constant coefficients. Its characteristic equation is m² + λ = 0. Solving for m, we get m = ±√(−λ). Note that λ can be either zero, positive or negative. If λ = 0, then y'' = 0 and the general solution is y(x) = αx + β, for some constants α and β. Since y(0) = y(a) = 0 and a ≠ 0, we get α = β = 0. Thus, we have no non-trivial solution corresponding to λ = 0.

If λ < 0, then ω = −λ > 0. Hence y(x) = αe^{√ω x} + βe^{−√ω x}. Using the boundary conditions y(0) = y(a) = 0, we get α = β = 0, and hence we have no non-trivial solution corresponding to negative λ's.

If λ > 0, then m = ±i√λ and y(x) = α cos(√λ x) + β sin(√λ x). Using the boundary condition y(0) = 0, we get α = 0 and y(x) = β sin(√λ x).
Using y(a) = 0 (and noting that β = 0 yields the trivial solution), we must have sin(√λ a) = 0. Thus, λ = (kπ/a)² for each non-zero k ∈ N (since λ > 0). Hence, for each k ∈ N, there is a solution (y_k, λ_k) with

y_k(x) = sin(kπx/a)

and λ_k = (kπ/a)².
(i) We have a discrete set of λ's such that 0 < λ₁ < λ₂ < λ₃ < . . . and λ_k → ∞ as k → ∞.
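These eigenpairs can be sanity-checked numerically. The sketch below (Python with only the standard library; the interval length a = 1.5 and the sample points are arbitrary choices) verifies the boundary conditions and the equation −y″ = λy through a central difference quotient.

```python
import math

a = 1.5          # interval length (arbitrary choice)
h = 1e-5         # step for the central difference quotient

def y(k, x):
    # candidate eigenfunction y_k(x) = sin(k*pi*x/a)
    return math.sin(k * math.pi * x / a)

def residual(k, x):
    # |-y''(x) - lambda_k y(x)|, with y'' estimated by a central difference
    lam = (k * math.pi / a) ** 2
    ypp = (y(k, x - h) - 2 * y(k, x) + y(k, x + h)) / h ** 2
    return abs(-ypp - lam * y(k, x))

# boundary conditions y(0) = y(a) = 0
max_bc = max(abs(y(k, 0.0)) + abs(y(k, a)) for k in range(1, 5))
# ODE residual at a few interior points
max_res = max(residual(k, x) for k in range(1, 5) for x in (0.3, 0.7, 1.1))
```

Both quantities come out at the level of discretization error, consistent with (y_k, λ_k) being genuine eigenpairs.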
If λ > 0, then m = ±i√λ and y(x) = α cos(√λ x) + β sin(√λ x). Using the boundary conditions, we get

α cos(−√λ π) + β sin(−√λ π) = α cos(√λ π) + β sin(√λ π)

and

−α sin(−√λ π) + β cos(−√λ π) = −α sin(√λ π) + β cos(√λ π).

Thus, β sin(√λ π) = α sin(√λ π) = 0.
For a non-trivial solution, we must have sin(√λ π) = 0. Thus, λ = k² for each non-zero k ∈ N (since λ > 0).
Hence, for each k ∈ N ∪ {0}, there is a solution (y_k, λ_k) with y_k(x) = α_k cos(kx) + β_k sin(kx) and λ_k = k².
Lecture 12
Singular Sturm-Liouville
Problem
The end points x = ±1 are regular singular points. The coefficients P(x) = −2x/(1 − x²) and R(x) = λ/(1 − x²) are analytic at x = 0, with radius of convergence 1.
a_{k+2} = (k(k+1) − λ) a_k / ((k+2)(k+1)).
Thus, the constants a0 and a1 can be fixed arbitrarily and the remaining
constants are defined as per the above relation. For instance, if a1 = 0, we
get the non-trivial solution of the Legendre equation as
y₁ = a₀ + Σ_{k=1}^∞ a_{2k} x^{2k},

provided the series converges. Note from the recurrence relation that if a coefficient is zero at some stage, then every alternate coefficient thereafter is zero. Thus, there are two possibilities of convergence here:
Suppose the series does not terminate, say, for instance, in y₁. Then a_{2k} ≠ 0 for all k. Consider the ratio

lim_{k→∞} |a_{2(k+1)} x^{2(k+1)} / (a_{2k} x^{2k})| = lim_{k→∞} (2k(2k+1) − λ) x² / ((2k+2)(2k+1)) = lim_{k→∞} 2k x² / (2k+2) = x².

The term involving λ tends to zero. Therefore, by the ratio test, y₁ converges for x² < 1 and diverges for x² > 1. It can also be shown that the series diverges when x² = 1 (beyond the scope of this course).
Since the Legendre equation is a singular S-L problem, we look for solutions y such that y and its derivative y′ are continuous in the closed interval [−1, 1].
Thus, the only possible such solutions are terminating series, that is, polynomials.
Note that, for k ≥ 2,

a_{k+2} = (k(k+1) − λ) a_k / ((k+2)(k+1)).
Hence, for any n ≥ 2, if λ = n(n+1), then a_{n+2} = 0 and hence every alternate coefficient thereafter is zero. Also, if λ = 1(1+1) = 2, then a₃ = 0, and if λ = 0(0+1) = 0, then a₂ = 0. Thus, for each n ∈ N ∪ {0} with λ_n = n(n+1), one of the solutions y₁ or y₂ is a polynomial. Thus, for each n ∈ N ∪ {0}, we have the eigenvalue λ_n = n(n+1) and the Legendre polynomial P_n of degree n, which is a solution to the Legendre equation.
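The termination of the series at λ = n(n+1) can be checked directly from the recurrence. A small sketch (Python, standard library; exact rational arithmetic via fractions is a convenience choice, and the truncation at 12 terms is arbitrary):

```python
from fractions import Fraction

def legendre_series_coeffs(n, terms=12):
    # coefficients generated by a_{k+2} = (k(k+1) - lam) a_k / ((k+2)(k+1)),
    # with lam = n(n+1) and the starting parity matching n
    lam = n * (n + 1)
    a = [Fraction(0)] * terms
    a[n % 2] = Fraction(1)        # a_0 = 1 for even n, a_1 = 1 for odd n
    for k in range(terms - 2):
        a[k + 2] = Fraction(k * (k + 1) - lam, (k + 2) * (k + 1)) * a[k]
    return a

coeffs2 = legendre_series_coeffs(2)   # terminates: 1 - 3x^2 (a multiple of P_2)
coeffs3 = legendre_series_coeffs(3)   # terminates: x - (5/3)x^3 (a multiple of P_3)
```

The resulting polynomials are scalar multiples of P₂ and P₃, in line with the discussion above.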
As before, since this is a singular S-L problem, we look for solutions y such that y and its derivative y′ are continuous in the closed interval [0, a]. We shall assume that the eigenvalues are all real (a fact that itself needs proof). Thus, λ may be zero, positive or negative.
When λ = 0, the given ODE reduces to the Cauchy–Euler form

−(x y′(x))′ + (n²/x) y(x) = 0,

or equivalently,

x² y″(x) + x y′(x) − n² y(x) = 0.
The above second order ODE with variable coefficients can be converted to an ODE with constant coefficients by the substitution x = e^s (or s = ln x). Then, by the chain rule,

y′ = dy/dx = (dy/ds)(ds/dx) = e^{−s} dy/ds
and

y″ = e^{−s} d/ds (e^{−s} dy/ds) = e^{−2s} (d²y/ds² − dy/ds).
Therefore,

y″(s) − n² y(s) = 0,
where y is now a function of the new variable s. For n = 0, the general solution is y(s) = αs + β, for some arbitrary constants α and β. Thus, y(x) = α ln x + β. The requirement that both y and y′ are continuous on [0, a] forces α = 0.
Thus, y(x) = β. But y(a) = 0 and hence β = 0, yielding the trivial solution.
Now let n > 0 be a positive integer. Then the general solution is y(s) = αe^{ns} + βe^{−ns}. Consequently, y(x) = αxⁿ + βx^{−n}. Since y and y′ have to be continuous on [0, a], β = 0. Thus, y(x) = αxⁿ. Now, using the boundary condition y(a) = 0, we get α = 0, yielding the trivial solution. Therefore, λ = 0 is not an eigenvalue for any n = 0, 1, 2, . . ..
When λ > 0, the given ODE reduces to

x² y″(x) + x y′(x) + (λx² − n²) y(x) = 0.

Using the change of variable s = √λ x, we get y′(x) = √λ y′(s) and y″(x) = λ y″(s). Then the given ODE is transformed into the Bessel equation

s² y″(s) + s y′(s) + (s² − n²) y(s) = 0.
Using the power series form of the solution, we know that the general solution of the Bessel equation is

y(s) = α J_n(s) + β Y_n(s),

where J_n and Y_n are the Bessel functions of the first and second kind, respectively. Therefore, y(x) = α J_n(√λ x) + β Y_n(√λ x). The continuity assumptions on y and y′ force β = 0, because Y_n(√λ x) is discontinuous at x = 0. Thus, y(x) = α J_n(√λ x). Using the boundary condition y(a) = 0, we get

J_n(√λ a) = 0.
Theorem 12.0.2. For each non-negative integer n, Jn has infinitely many
positive zeroes.
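The theorem can be illustrated numerically. A minimal sketch (Python, standard library; the series truncation at 40 terms and the bracketing interval [2, 3] are assumptions adequate for the first zero of J₀):

```python
import math

def J(n, x, terms=40):
    # Bessel function of the first kind via its power series:
    # J_n(x) = sum_{m>=0} (-1)^m (x/2)^{2m+n} / (m! (m+n)!)
    s = 0.0
    for m in range(terms):
        s += (-1) ** m * (x / 2) ** (2 * m + n) \
             / (math.factorial(m) * math.factorial(m + n))
    return s

def bisect(f, lo, hi, iters=80):
    # bisection on a sign change of f over [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# J_0 changes sign between 2 and 3, so its first positive zero lies there
first_zero_J0 = bisect(lambda x: J(0, x), 2.0, 3.0)
```

The value found agrees with the tabulated first positive zero of J₀, approximately 2.404825557695773.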
Lecture 13
Orthogonality of Eigen Functions
Observe that for a regular S-L problem the differential operator can be written as

L = −(1/r(x)) d/dx (p(x) d/dx) − q(x)/r(x).

Let V denote the set of all solutions of (10.2.1). Necessarily, 0 ∈ V and V ⊂ C²(a, b). We define the inner product ⟨·, ·⟩ : V × V → R on V as

⟨f, g⟩ := ∫_a^b r(x) f(x) g(x) dx.
Theorem 13.0.2. With respect to the inner product defined above on V, the eigenfunctions corresponding to distinct eigenvalues of the S-L problem are orthogonal.
Thus,

(λ_i − λ_j)⟨y_i, y_j⟩ = p(b)[y_j′(b) y_i(b) − y_i′(b) y_j(b)] − p(a)[y_j′(a) y_i(a) − y_i′(a) y_j(a)].

For a regular S-L problem, the boundary condition corresponding to the endpoint b is the system of equations

d₁ y_i(b) + d₂ y_i′(b) = 0
d₁ y_j(b) + d₂ y_j′(b) = 0

with d₁² + d₂² ≠ 0. Since this homogeneous system has the non-trivial solution (d₁, d₂), the determinant of its coefficient matrix must vanish: y_i(b) y_j′(b) − y_j(b) y_i′(b) = 0. A similar argument is valid for the boundary condition corresponding to a. Thus, (λ_i − λ_j)⟨y_i, y_j⟩ = 0. But λ_i − λ_j ≠ 0, hence ⟨y_i, y_j⟩ = 0.
For a periodic S-L problem, p(a) = p(b), y_k(a) = y_k(b) and y_k′(a) = y_k′(b) for k = i, j. Then the RHS vanishes and ⟨y_i, y_j⟩ = 0.
For singular S-L problems in which either p(a) = 0 or p(b) = 0 (or both), the RHS again vanishes. This is because if p(a) = 0, we drop the boundary condition corresponding to the endpoint a.
y″ + λy = 0, x ∈ (0, a),
y(0) = y(a) = 0

to be (y_k, λ_k), where

y_k(x) = sin(kπx/a)

and λ_k = (kπ/a)², for each k ∈ N. For m, n ∈ N with m ≠ n, we need to check that y_m and y_n are orthogonal. Since r ≡ 1, we consider

⟨y_m, y_n⟩ = ∫_0^a sin(mπx/a) sin(nπx/a) dx.
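This integral can be checked by numerical quadrature. The sketch below (Python, standard library; a = 2 and the Simpson rule with 2000 panels are arbitrary choices) confirms that distinct eigenfunctions are orthogonal while ⟨y_k, y_k⟩ = a/2:

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a = 2.0   # interval length (arbitrary choice)

def inner(m, n_):
    # <y_m, y_n> with weight r = 1
    return simpson(lambda x: math.sin(m * math.pi * x / a)
                   * math.sin(n_ * math.pi * x / a), 0.0, a)

off_diag = max(abs(inner(m, n_))
               for m in range(1, 5) for n_ in range(1, 5) if m != n_)
diag = [inner(k, k) for k in range(1, 5)]   # each equals a/2
```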
Exercise 5. Show that, for any n ≥ 0 and m a positive integer,

(i) ∫_{−π}^{π} cos nt cos mt dt = π for m = n, and 0 for m ≠ n;

(ii) ∫_{−π}^{π} sin nt sin mt dt = π for m = n, and 0 for m ≠ n;

(iii) ∫_{−π}^{π} sin nt cos mt dt = 0.
Consequently, show that cos(kt)/√π and sin(kt)/√π are of unit length.
and

cos((n − m)t) = cos nt cos mt + sin nt sin mt. (13.0.2)
(ii) Subtract (13.0.1) from (13.0.2) and use arguments similar to those above.
(iii) The arguments are the same, using the identities (13.0.1) and (13.0.2) corresponding to sin.
where z_{n,i} is the i-th positive zero of the Bessel function J_n (of order n).
Lecture 14
Fourier Series
Hence,

a₀ = (1/2π) ∫_{−π}^{π} f(t) dt.

A similar argument, after multiplying by sin kt, gives the formula for b_k. Thus, we have derived, for all k ∈ N,

a_k = (1/π) ∫_{−π}^{π} f(t) cos kt dt,
b_k = (1/π) ∫_{−π}^{π} f(t) sin kt dt,
a₀ = (1/2π) ∫_{−π}^{π} f(t) dt.
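These formulae can be applied numerically. A sketch (Python, standard library; the test function f(t) = t and Simpson quadrature are choices made here, anticipating Example 14.4):

```python
import math

def simpson(f, lo, hi, n=4000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

f = lambda t: t   # a sample 2*pi-periodic function, given on (-pi, pi)

a0 = simpson(f, -math.pi, math.pi) / (2 * math.pi)
ak = [simpson(lambda t, k=k: f(t) * math.cos(k * t), -math.pi, math.pi)
      / math.pi for k in range(1, 5)]
bk = [simpson(lambda t, k=k: f(t) * math.sin(k * t), -math.pi, math.pi)
      / math.pi for k in range(1, 5)]
# for f(t) = t one expects a0 = 0, a_k = 0 and b_k = 2(-1)^{k+1}/k
```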
where

a_k = (2/T) ∫_0^T f(t) cos(2πkt/T) dt, (14.2.2a)
b_k = (2/T) ∫_0^T f(t) sin(2πkt/T) dt, (14.2.2b)
a₀ = (1/T) ∫_0^T f(t) dt. (14.2.2c)
The above discussion motivates us to give the following definition.
Definition 14.2.1. If f : R → R is any T-periodic integrable function, then we define the Fourier coefficients of f, a₀, a_k and b_k, for all k ∈ N, by (14.2.2), and the Fourier series of f is given by

f(t) ≈ a₀ + Σ_{k=1}^∞ [a_k cos(2πkt/T) + b_k sin(2πkt/T)]. (14.2.3)
Note the use of the "≈" symbol in (14.2.3). This is because the following issues arise once we have the definition of the Fourier series of f, viz.,

(a) Will the Fourier series of f always converge?
(b) If it converges, will it converge to f?
(c) If so, is the convergence point-wise or uniform? (Recall that our derivation of the formulae for the Fourier coefficients assumed uniform convergence of the series.)

Answering these questions in all generality is beyond the scope of this course. However, we shall state some results in the next section that will get us into working mode. We end this section with some simple examples on computing Fourier coefficients of functions.
Example 14.2. Consider the constant function f ≡ c on (−π, π). Then

a₀ = (1/2π) ∫_{−π}^{π} c dt = c.

For each k ∈ N,

a_k = (1/π) ∫_{−π}^{π} c cos kt dt = 0

and

b_k = (1/π) ∫_{−π}^{π} c sin kt dt = 0.
Example 14.3. Consider the trigonometric function f(t) = sin t on (−π, π). Then

a₀ = (1/2π) ∫_{−π}^{π} sin t dt = 0.

For each k ∈ N,

a_k = (1/π) ∫_{−π}^{π} sin t cos kt dt = 0

and

b_k = (1/π) ∫_{−π}^{π} sin t sin kt dt = 0 for k ≠ 1, and 1 for k = 1.

Similarly, for f(t) = cos t on (−π, π), all Fourier coefficients are zero except a₁ = 1.
Example 14.4. Consider the function f(t) = t on (−π, π). Then

a₀ = (1/2π) ∫_{−π}^{π} t dt = 0.

For each k ∈ N,

a_k = (1/π) ∫_{−π}^{π} t cos kt dt = −(1/kπ) ∫_{−π}^{π} sin kt dt + (1/kπ)(π sin kπ − (−π) sin(−kπ)) = 0,

and, integrating by parts in the same way, b_k = (1/π) ∫_{−π}^{π} t sin kt dt = 2(−1)^{k+1}/k.

Now consider the same function, f(t) = t, but on the interval (0, π), extended π-periodically. For each k ∈ N,

a_k = (2/π) ∫_0^π t cos 2kt dt = −(1/kπ) ∫_0^π sin 2kt dt + (1/kπ)(π sin 2kπ − 0)
= (1/(2k²π))(cos 2kπ − cos 0) = 0

and

b_k = (2/π) ∫_0^π t sin 2kt dt = (1/kπ) ∫_0^π cos 2kt dt − (1/kπ)(π cos 2kπ − 0)
= (1/(2k²π))(sin 2kπ − sin 0) − 1/k = −1/k.
Note the difference in the Fourier expansion of the same function when the periodicity changes.
Exercise 9. Find the Fourier coefficients and Fourier series of the function

f(t) = 0 for t ∈ (−π, 0], and f(t) = t for t ∈ (0, π).
Proof. Observe that |a_k| and |b_k| are bounded sequences, since

max{|a_k|, |b_k|} ≤ (1/π) ∫_{−π}^{π} |f(t)| dt < +∞.

By the uniform continuity of f on [−π, π], the maximum will tend to zero as k → ∞. Hence |b_k| → 0. Exactly similar arguments hold for a_k.
Lecture 15
Fourier Series: Continued...
Then,

a₀ = (1/2π) ∫_0^π c dt = c/2.

For each k ∈ N,

a_k = (1/π) ∫_0^π c cos kt dt = 0

and

b_k = (1/π) ∫_0^π c sin kt dt = (c/π)(1/k)(−cos kπ + cos 0) = c(1 + (−1)^{k+1})/(kπ).

Therefore,

f(t) ≈ c/2 + Σ_{k=1}^∞ [c(1 + (−1)^{k+1})/(kπ)] sin kt.
The point t₀ = 0 is a non-smooth point of the function f. Note that the right limit of f at t₀ = 0 is c and the left limit is 0, and that the Fourier series of f at t₀ = 0 converges to c/2, the average of c and 0.
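This behaviour at the jump is easy to observe from partial sums. A sketch (Python, standard library; c = 1 and the truncation levels are arbitrary choices):

```python
import math

c = 1.0

def partial_sum(t, N):
    # S_N(t) = c/2 + sum_{k<=N} c(1 + (-1)^{k+1})/(k pi) sin(kt)
    s = c / 2
    for k in range(1, N + 1):
        s += c * (1 + (-1) ** (k + 1)) / (k * math.pi) * math.sin(k * t)
    return s

at_jump = partial_sum(0.0, 500)          # every sine vanishes: exactly c/2
inside = partial_sum(math.pi / 2, 2001)  # approaches f(pi/2) = c
```

At the jump every sine term vanishes, so each partial sum already equals c/2; at an interior point the sums slowly approach c.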
where the convergence is uniform. Use the integral formulae from Exercise 6 to show that, for all k ∈ Z,

c_k = (1/2π) ∫_{−π}^{π} f(t) e^{−ıkt} dt.

Proof. Fix k. To find the coefficient c_k, multiply both sides of (15.2.1) by e^{−ıkt} and integrate from −π to π.
Using the real Fourier coefficients, one can write down the complex Fourier coefficients via the relations

c₀ = a₀/2, c_k = (a_k − ıb_k)/2 and c_{−k} = (a_k + ıb_k)/2,

and if one can compute the complex Fourier series of a periodic function f directly, then one can write down the real Fourier coefficients using the formulae

a₀ = 2c₀, a_k = c_k + c_{−k} and b_k = ı(c_k − c_{−k}).
Exercise 12. Find the complex Fourier coefficients (directly) of the function f(t) = t for t ∈ (−π, π], extended to R periodically with period 2π. Use the complex Fourier coefficients to find the real Fourier coefficients of f. One obtains

a₀ = 0, a_k = c_k + c_{−k} = 0 and b_k = ı(c_k − c_{−k}) = 2(−1)^{k+1}/k.
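The passage between complex and real coefficients can be verified numerically for this very function. A sketch (Python, standard library; Simpson quadrature on (−π, π) is an implementation choice):

```python
import math, cmath

def csimpson(f, lo, hi, n=4000):
    # Simpson rule for complex-valued integrands (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def c_coeff(k):
    # c_k = (1/2pi) * integral over (-pi, pi) of t e^{-i k t} dt
    return csimpson(lambda t: t * cmath.exp(-1j * k * t),
                    -math.pi, math.pi) / (2 * math.pi)

ak = [(c_coeff(k) + c_coeff(-k)).real for k in range(1, 4)]
bk = [(1j * (c_coeff(k) - c_coeff(-k))).real for k in range(1, 4)]
# expected: a_k = 0 and b_k = 2(-1)^{k+1}/k, as in Exercise 12
```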
15.3 Orthogonality
Let V be the class of all 2π-periodic real-valued continuous functions on R.
Exercise 13. Show that V is a vector space over R.
We introduce an inner product on V. For any two elements f, g ∈ V, we define

⟨f, g⟩ := ∫_{−π}^{π} f(t) g(t) dt.

Consider the functions

e₀(t) = 1/√(2π), e_k(t) = cos(kt)/√π and f_k(t) = sin(kt)/√π.
Example 15.4. e₀, e_k and f_k are all of unit length. Moreover, ⟨e₀, e_k⟩ = 0 and ⟨e₀, f_k⟩ = 0; ⟨e_m, e_n⟩ = 0 and ⟨f_m, f_n⟩ = 0 for m ≠ n; and ⟨e_m, f_n⟩ = 0 for all m, n. Check and compare these properties with those of the standard basis vectors of Rⁿ!
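These orthonormality claims can be confirmed by quadrature. A sketch (Python, standard library; the indices checked are a small arbitrary sample):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def ip(f, g):
    # the inner product <f, g> on 2*pi-periodic functions
    return simpson(lambda t: f(t) * g(t), -math.pi, math.pi)

e0 = lambda t: 1 / math.sqrt(2 * math.pi)
def e(k): return lambda t: math.cos(k * t) / math.sqrt(math.pi)
def f_(k): return lambda t: math.sin(k * t) / math.sqrt(math.pi)

norms = ([ip(e0, e0)] + [ip(e(k), e(k)) for k in (1, 2)]
         + [ip(f_(k), f_(k)) for k in (1, 2)])       # all should be 1
cross = [ip(e0, e(1)), ip(e0, f_(1)), ip(e(1), e(2)),
         ip(f_(1), f_(2)), ip(e(1), f_(1)), ip(e(1), f_(2))]  # all 0
```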
In this new formulation, we can rewrite the formulae for the Fourier coefficients as

a₀ = (1/√(2π)) ⟨f, e₀⟩, a_k = (1/√π) ⟨f, e_k⟩ and b_k = (1/√π) ⟨f, f_k⟩.
Example 15.5. All constant functions are even. For all k ∈ N, sin kt is an odd function and cos kt is an even function.

Exercise 15. Show that any odd function is orthogonal to any even function.

The Fourier series of an odd or even function will contain only sine or cosine parts, respectively. The reason is that, if f is odd,
f(t) = Σ_{k=1}^∞ b_k sin(πkt/T), where (15.4.1)

b_k = (1/T) ⟨f_o, sin(πkt/T)⟩ = (1/T) ∫_{−T}^{T} f_o(t) sin(πkt/T) dt
= (1/T) [ ∫_{−T}^{0} −f(−t) sin(πkt/T) dt + ∫_0^T f(t) sin(πkt/T) dt ]
= (1/T) [ ∫_0^T f(t) sin(πkt/T) dt + ∫_0^T f(t) sin(πkt/T) dt ]
= (2/T) ∫_0^T f(t) sin(πkt/T) dt.
Example 15.6. Let us consider the function f(t) = t on (0, π). To compute the Fourier sine series of f, we extend f to (−π, π) as the odd function f_o(t) = t on (−π, π). For each k ∈ N,

b_k = (2/π) ∫_0^π t sin kt dt = (2/kπ) ∫_0^π cos kt dt − (2/kπ)(π cos kπ − 0)
= (2/(k²π))(sin kπ − sin 0) + (2/k)(−1)^{k+1} = 2(−1)^{k+1}/k.
where

a_k = (2/T) ∫_0^T f(t) cos(πkt/T) dt

and

a₀ = (1/T) ∫_0^T f(t) dt.
Example 15.7. Let us consider the function f(t) = t on (0, π). To compute the Fourier cosine series of f, we extend f to (−π, π) as the even function f_e(t) = |t| on (−π, π). Then

a₀ = (1/π) ∫_0^π t dt = π/2.

For each k ∈ N,

a_k = (2/π) ∫_0^π t cos kt dt = −(2/kπ) ∫_0^π sin kt dt + (2/kπ)(π sin kπ − 0)
= (2/(k²π))(cos kπ − cos 0) = 2[(−1)^k − 1]/(k²π).

Therefore, the Fourier cosine series expansion of f(t) = t on (0, π) is

t ≈ π/2 + 2 Σ_{k=1}^∞ [((−1)^k − 1)/(k²π)] cos kt.
Compare the result with the Fourier series of the function f (t) = |t| on
(−π, π).
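Since the coefficients decay like 1/k², this cosine series converges uniformly, and partial sums reconstruct t on (0, π). A sketch (Python, standard library; the truncation N = 4000 and the sample points are arbitrary choices):

```python
import math

def cosine_series(t, N):
    # partial sum of: t ~ pi/2 + 2 sum ((-1)^k - 1)/(k^2 pi) cos(kt)
    s = math.pi / 2
    for k in range(1, N + 1):
        s += 2 * ((-1) ** k - 1) / (k ** 2 * math.pi) * math.cos(k * t)
    return s

# the 1/k^2 decay gives uniform convergence on all of (0, pi)
samples = (0.5, 1.0, 2.0, 3.0)
max_err = max(abs(cosine_series(t, 4000) - t) for t in samples)
```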
where

a(ξ) = (1/π) ∫_{−∞}^{∞} f(t) cos ξt dt,
b(ξ) = (1/π) ∫_{−∞}^{∞} f(t) sin ξt dt.
Lecture 16
Standing Waves: Separation of Variable
u(0, t) = u(L, t) = 0.

We are also given the initial position u(x, 0) = g(x) (at time t = 0) and the initial velocity of the string at time t = 0, u_t(x, 0) = h(x). Given g, h : [0, L] → R such that g(0) = g(L) = 0 and h(0) = h(L), we need to solve the initial value
problem
u_tt(x, t) − c² u_xx(x, t) = 0 in (0, L) × (0, ∞)
u(x, 0) = g(x) in [0, L]
u_t(x, 0) = h(x) in [0, L] (16.0.1)
u(0, t) = φ(t) in (0, ∞)
u(L, t) = ψ(t) in (0, ∞),

and

g(L) = ψ(0), g″(L) = ψ″(0), h(L) = ψ′(0).
Let φ = ψ ≡ 0. Let us seek solutions u(x, t) whose variables can be separated: u(x, t) = v(x)w(t). Differentiating and substituting in the wave equation, we get

v(x) w″(t) = c² v″(x) w(t),

and hence

w″(t)/(c² w(t)) = v″(x)/v(x).

Since the LHS is a function of t and the RHS is a function of x, they must equal a constant, say λ. Thus,

v″(x)/v(x) = w″(t)/(c² w(t)) = λ.
Using the boundary condition u(0, t) = u(L, t) = 0, we get

v(0) w(t) = v(L) w(t) = 0,

such that α = (c₁ + c₂)/2 and β = (c₁ − c₂)/2. Using the boundary condition v(0) = 0, we get c₁ = 0 and hence

v(x) = c₂ sinh(√λ x).

Now using v(L) = 0, we have c₂ sinh(√λ L) = 0. Thus, c₂ = 0 and v ≡ 0. We have seen this cannot be a solution.
Finally, if λ < 0, set ω = √(−λ). We need to solve the simple harmonic oscillator problem

v″(x) + ω² v(x) = 0, x ∈ (0, L),
v(0) = v(L) = 0.
for some constants b_k and λ_k = −(kπ/L)². It now remains to solve for w for each of these λ_k. For each k ∈ N, we solve for w_k in the ODE

Note that the solution is expressed as a series, which raises the question of its convergence. Another concern is whether all solutions of (16.0.1) have this form. We ignore these two concerns at the moment.
Since we know the initial position of the string as the graph of g, we get

g(x) = u(x, 0) = Σ_{k=1}^∞ a_k sin(kπx/L).
This expression is again troubling and raises the question: can an arbitrary function g be expressed as an infinite sum of trigonometric functions? Answering this question led to the study of "Fourier series". Let us, as usual, ignore this concern for the time being. Can we then find the constants a_k with knowledge of g? Multiplying both sides of the expression for g by sin(lπx/L) and integrating from 0 to L, we get

∫_0^L g(x) sin(lπx/L) dx = ∫_0^L [ Σ_{k=1}^∞ a_k sin(kπx/L) sin(lπx/L) ] dx
= Σ_{k=1}^∞ a_k ∫_0^L sin(kπx/L) sin(lπx/L) dx.
Therefore,

a_k = (2/L) ∫_0^L g(x) sin(kπx/L) dx,

and hence

b_k = (2/(kcπ)) ∫_0^L h(x) sin(kπx/L) dx.
Proof. We begin by looking for a solution u(x, y) whose variables are separated, i.e., u(x, y) = v(x)w(y). Substituting this form of u in the Laplace equation, we get

v″(x) w(y) + v(x) w″(y) = 0.

Hence

v″(x)/v(x) = −w″(y)/w(y).

Since the LHS is a function of x and the RHS is a function of y, they must equal a constant, say λ. Thus,

v″(x)/v(x) = −w″(y)/w(y) = λ.
such that α = (c₁ + c₂)/2 and β = (c₁ − c₂)/2. Using the boundary condition v(0) = 0, we get c₁ = 0 and hence

v(x) = c₂ sinh(√λ x).

Now using v(a) = 0, we have c₂ sinh(√λ a) = 0. Thus, c₂ = 0 and v ≡ 0. We have seen this cannot be a solution.
If λ < 0, set ω = √(−λ). We need to solve

v″(x) + ω² v(x) = 0, x ∈ (0, a), (16.1.1)
v(0) = v(a) = 0.
The constants δ_k are obtained by using the boundary condition u(x, b) = h(x), which yields

h(x) = u(x, b) = Σ_{k=1}^∞ δ_k sinh(kπb/a) sin(kπx/a).

Since h(0) = h(a) = 0, the function h admits a Fourier sine series. Thus δ_k sinh(kπb/a) is the k-th Fourier sine coefficient of h, i.e.,

δ_k = [sinh(kπb/a)]^{−1} (2/a) ∫_0^a h(x) sin(kπx/a) dx.
Proof. Given the nature of the domain, we shall use the Laplace operator in polar coordinates,

∆ := (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ²,
where U(r, θ) = u(r cos θ, r sin θ) and G : [0, 2π) → R is G(θ) = g(cos θ, sin θ). Note that both U and G are 2π-periodic w.r.t. θ. We will look for a solution U(r, θ) whose variables can be separated, i.e., U(r, θ) = v(r)w(θ), with both v and w non-zero. Substituting it in the polar form of the Laplacian, we get

(w/r) d/dr (r dv/dr) + (v/r²) d²w/dθ² = 0,

and hence

−(r/v) d/dr (r dv/dr) = (1/w) d²w/dθ².

Since the LHS is a function of r and the RHS is a function of θ, they must equal a constant, say λ. We need to solve the eigenvalue problem

w″(θ) − λ w(θ) = 0, θ ∈ R,
w(θ + 2π) = w(θ) for all θ.
To find the constants, we must use U(R, θ) = G(θ). If G ∈ C¹[0, 2π], then G admits a Fourier series expansion. Therefore,

G(θ) = a₀/2 + Σ_{k=1}^∞ [R^k a_k cos(kθ) + R^k b_k sin(kθ)],

where

a_k = (1/(R^k π)) ∫_{−π}^{π} G(θ) cos(kθ) dθ,
b_k = (1/(R^k π)) ∫_{−π}^{π} G(θ) sin(kθ) dθ.
Using this in the formula for U and the uniform convergence of the Fourier series, we get

U(r, θ) = (1/π) ∫_{−π}^{π} G(η) [ 1/2 + Σ_{k=1}^∞ (r/R)^k (cos kη cos kθ + sin kη sin kθ) ] dη
= (1/π) ∫_{−π}^{π} G(η) [ 1/2 + Σ_{k=1}^∞ (r/R)^k cos k(η − θ) ] dη.
Now,

Σ_{k=1}^∞ (r/R)^k cos k(η − θ) = Re [ Σ_{k=1}^∞ ((r/R) e^{i(η−θ)})^k ] = Re [ 1 / (1 − (r/R) e^{i(η−θ)}) ] − 1
= (R² − rR cos(η − θ)) / (R² + r² − 2rR cos(η − θ)) − 1
= (rR cos(η − θ) − r²) / (R² + r² − 2rR cos(η − θ)).

Substituting this in U(r, θ), we get

U(r, θ) = (R² − r²)/(2π) ∫_{−π}^{π} G(η) / (R² + r² − 2rR cos(η − θ)) dη.
Note that the formula derived above for U(r, θ) can be rewritten in Cartesian coordinates and will have the form

u(x) = (R² − |x|²)/(2πR) ∫_{S_R(0)} g(y)/|x − y|² dσ_y.

This is easily seen by setting y = R(cos η, sin η); then dσ_y = R dη and |x − y|² = R² + r² − 2rR cos(η − θ). This is called the Poisson formula.
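The Poisson formula can be tested against a function whose harmonicity is known. The sketch below (Python, standard library) takes boundary data from the harmonic polynomial x₁² − x₂² (an arbitrary choice) and checks that the Poisson integral reproduces its interior values:

```python
import math

R = 2.0
harmonic = lambda y1, y2: y1 * y1 - y2 * y2   # a known harmonic function

def poisson(x1, x2, n=4000):
    # U(r, theta) = (R^2 - r^2)/(2 pi) * integral of G(eta)/d^2 d(eta),
    # where d^2 = R^2 + r^2 - 2 r R cos(eta - theta); quadrature by the
    # trapezoid rule, which is highly accurate for periodic integrands
    r2 = x1 * x1 + x2 * x2
    total = 0.0
    for i in range(n):
        eta = 2 * math.pi * i / n
        y1, y2 = R * math.cos(eta), R * math.sin(eta)
        d2 = (x1 - y1) ** 2 + (x2 - y2) ** 2
        total += harmonic(y1, y2) / d2
    total *= 2 * math.pi / n
    return (R * R - r2) / (2 * math.pi) * total

err = max(abs(poisson(x1, x2) - harmonic(x1, x2))
          for (x1, x2) in ((0.0, 0.0), (0.5, 0.3), (-1.0, 0.7)))
```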
More generally, the unique solution to the Dirichlet problem on a ball of radius R centred at x₀ in Rⁿ is given by the Poisson formula

u(x) = (R² − |x − x₀|²)/(ωₙ R) ∫_{S_R(x₀)} g(y)/|x − y|ⁿ dσ_y.
w′(φ) = −sin φ (dw/dx) and w″(φ) = sin²φ (d²w/dx²) − cos φ (dw/dx),
where c is a constant.

Proof. We begin with the ansatz that u(x, t) = v(x)w(t) (variables separated). Substituting u in separated form in the equation, we get

v(x) w′(t) = c² v″(x) w(t),

and, hence,

w′(t)/(c² w(t)) = v″(x)/v(x).

Since the LHS, a function of t, and the RHS, a function of x, are equal, they must equal some constant, say λ. Thus,

w′(t)/(c² w(t)) = v″(x)/v(x) = λ.
where c is a constant.

Proof. Note that u(θ, t) is 2π-periodic in the θ-variable, i.e., u(θ + 2π, t) = u(θ, t) for all θ ∈ R and t ≥ 0. We begin with the ansatz u(θ, t) = v(θ)w(t), with variables separated. Substituting for u in the equation, we get

w′(t)/(c² w(t)) = v″(θ)/v(θ) = λ.
For each k ∈ N ∪ {0}, the pair (λ_k, v_k), where λ_k = −k², is a solution to the eigenvalue problem.
We now use the initial temperature on the circle to find the constants. Since u(θ, 0) = g(θ),

g(θ) = u(θ, 0) = a₀/2 + Σ_{k=1}^∞ [a_k cos(kθ) + b_k sin(kθ)].
u_t(x, t) − c² ∆u(x, t) = f(x, t) in Ω × (0, T)
u(x, t) = 0 on ∂Ω × (0, T) (17.1.1)
u(x, 0) = 0 in Ω.

As a first step, for each s ∈ (0, ∞), consider w(x, t; s) =: w^s(x, t) as the solution of the homogeneous (auxiliary) problem

w^s_t(x, t) − c² ∆w^s(x, t) = 0 in Ω × (s, T)
w^s(x, t) = 0 on ∂Ω × (s, T)
w^s(x, s) = f(x, s) on Ω × {s}.

Equivalently, in the shifted time variable r = t − s, the function w(x, r) := w^s(x, t) solves

w_t(x, r) − c² ∆w(x, r) = 0 in Ω × (0, T − s)
w(x, r) = 0 on ∂Ω × (0, T − s)
w(x, 0) = f(x, s) on Ω.

Then

u(x, t) := ∫_0^t w^s(x, t) ds = ∫_0^t w(x, t − s) ds

solves (17.1.1).
To see this, compute

u_t(x, t) = ∂/∂t ∫_0^t w(x, t − s) ds
= ∫_0^t w_t(x, t − s) ds + (dt/dt) w(x, t − t) − (d(0)/dt) w(x, t − 0)
= ∫_0^t w_t(x, t − s) ds + w(x, 0)
= ∫_0^t w_t(x, t − s) ds + f(x, t).

Similarly,

∆u(x, t) = ∫_0^t ∆w(x, t − s) ds.

Thus,

u_t − c² ∆u = f(x, t) + ∫_0^t [w_t(x, t − s) − c² ∆w(x, t − s)] ds = f(x, t).
Lecture 18
Travelling Waves

Consider the wave equation u_tt = c² u_xx on R × (0, ∞), describing the vibration of an infinite string. We have already seen in Chapter 5 that the equation is hyperbolic and has the two families of characteristics x ± ct = constant. Introduce the new coordinates w = x + ct, z = x − ct and set u(w, z) = u(x, t). Thus, we have the following relations, using the chain rule:

u_x = u_w w_x + u_z z_x = u_w + u_z
u_t = u_w w_t + u_z z_t = c(u_w − u_z)
u_xx = u_ww + 2u_zw + u_zz
u_tt = c²(u_ww − 2u_zw + u_zz).
Now that we have derived the general form of the solution of the wave equation, we return to the physical system of a vibrating infinite string. The initial shape (position at time t = 0) of the string is given as u(x, 0) = g(x), where the graph of g on R² describes the shape of the string. Since we need one more datum to identify the arbitrary functions, we also prescribe the initial velocity of the string, u_t(x, 0) = h(x).
Another interesting property that follows from the general solution is that for any four points A, B, C and D forming a rectangle bounded by characteristic curves in R × R⁺, we have u(A) + u(C) = u(B) + u(D), because u(A) = F(α) + G(β), u(C) = F(γ) + G(δ), u(B) = F(α) + G(δ) and u(D) = F(γ) + G(β).
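The parallelogram identity is a one-line computation once u is written in the characteristic coordinates. A sketch (Python, standard library; the profiles F, G and the corner coordinates are arbitrary choices):

```python
import math

c = 1.3
F = lambda s: math.sin(s) + s * s    # arbitrary smooth profiles
G = lambda s: math.exp(-s * s)

def u_wz(w, z):
    # u = F(x + ct) + G(x - ct) in characteristic coordinates
    # w = x + ct, z = x - ct
    return F(w) + G(z)

# two characteristics from each family cut out a rectangle A, B, C, D
w1, w2, z1, z2 = 0.2, 1.7, -0.9, 0.6
uA, uB = u_wz(w1, z1), u_wz(w1, z2)
uC, uD = u_wz(w2, z2), u_wz(w2, z1)
lhs, rhs = uA + uC, uB + uD          # the parallelogram identity
```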
Theorem 18.0.1. Given g ∈ C²(R) and h ∈ C¹(R), there is a unique C² solution u of the Cauchy initial value problem (IVP) for the wave equation,
u_tt(x, t) − c² u_xx(x, t) = 0 in R × (0, ∞)
u(x, 0) = g(x) in R (18.0.1)
u_t(x, 0) = h(x) in R,
2F′(x) = g′(x) + h(x)/c. Similarly, 2G′(x) = g′(x) − h(x)/c. Integrating both these equations (assuming they are integrable and that each function is the integral of its derivative), we get

F(x) = (1/2) [ g(x) + (1/c) ∫_0^x h(y) dy ] + c₁

and

G(x) = (1/2) [ g(x) − (1/c) ∫_0^x h(y) dy ] + c₂.
Since F (x) + G(x) = g(x), we get c1 + c2 = 0. Therefore, the solution to the
wave equation is given by (18.0.2).
Aliter. Let us derive d'Alembert's formula in an alternate way. Note that the wave equation can be factored as

(∂/∂t + c ∂/∂x)(∂/∂t − c ∂/∂x) u = u_tt − c² u_xx = 0.

We set v(x, t) = (∂/∂t − c ∂/∂x) u(x, t), and hence

v_t(x, t) + c v_x(x, t) = 0 in R × (0, ∞).

Notice that the first order PDE obtained above is a homogeneous linear transport equation (cf. (??)), which we have already solved. Hence, for some smooth function φ,

v(x, t) = φ(x − ct),

where φ(x) := v(x, 0). Using v in the original equation, we get the inhomogeneous transport equation

u_t(x, t) − c u_x(x, t) = φ(x − ct).

Recall the formula for the inhomogeneous transport equation (cf. (??)):

u(x, t) = g(x − at) + ∫_0^t φ(x − a(t − s), s) ds.
Since u(x, 0) = g(x) and a = −c, in our case the solution reduces to

u(x, t) = g(x + ct) + ∫_0^t φ(x + c(t − s) − cs) ds
= g(x + ct) + ∫_0^t φ(x + ct − 2cs) ds
= g(x + ct) − (1/2c) ∫_{x+ct}^{x−ct} φ(y) dy
= g(x + ct) + (1/2c) ∫_{x−ct}^{x+ct} φ(y) dy.
But φ(x) = v(x, 0) = u_t(x, 0) − c u_x(x, 0) = h(x) − c g′(x), and substituting this in the formula for u, we get

u(x, t) = g(x + ct) + (1/2c) ∫_{x−ct}^{x+ct} (h(y) − c g′(y)) dy
= g(x + ct) + (1/2)(g(x − ct) − g(x + ct)) + (1/2c) ∫_{x−ct}^{x+ct} h(y) dy
= (1/2)(g(x − ct) + g(x + ct)) + (1/2c) ∫_{x−ct}^{x+ct} h(y) dy.
Appendix A
Divergence Theorem
(i) ∫_Ω (v ∆u + ∇v · ∇u) dx = ∫_{∂Ω} v (∂u/∂ν) dσ, where ∂u/∂ν := ∇u · ν;

(ii) ∫_Ω (v ∆u − u ∆v) dx = ∫_{∂Ω} [ v (∂u/∂ν) − u (∂v/∂ν) ] dσ.
Hint. Apply the divergence theorem to V = v∇u to get the first formula. To get the second formula, apply the divergence theorem to both V = v∇u and V = u∇v and subtract one from the other. (J. L. Lagrange might have discovered the divergence theorem, before Gauss, in 1762.)
Appendix B
Normal Vector of a Surface
Appendix C
Duhamel’s Principle
Notice that x₀ e^{−at} is a solution of the homogeneous ODE. Thus, the solution x(t) can be given as

x(t) = S(t) x₀ + ∫_0^t S(t − s) f(s) ds,

where S(t) is the solution operator of the linear equation, given by S(t) = e^{−at}.
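Duhamel's formula for this first order ODE can be compared against a closed-form solution. A sketch (Python, standard library; the forcing f(t) = cos t and the parameter values are arbitrary choices):

```python
import math

a, x0 = 1.7, 0.8                    # arbitrary parameter choices
f = lambda t: math.cos(t)           # arbitrary forcing term
S = lambda t: math.exp(-a * t)      # solution operator of x' + a x = 0

def simpson(g, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

def duhamel(t):
    # x(t) = S(t) x0 + integral_0^t S(t - s) f(s) ds
    return S(t) * x0 + simpson(lambda s: S(t - s) * f(s), 0.0, t)

def exact(t):
    # closed form for x' + a x = cos t, x(0) = x0
    part = (a * math.cos(t) + math.sin(t)) / (1 + a * a)
    return part + (x0 - a / (1 + a * a)) * math.exp(-a * t)

err = max(abs(duhamel(t) - exact(t)) for t in (0.5, 1.0, 2.0))
```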
Consider the second order inhomogeneous ODE

x″(t) + a² x(t) = f(t) in (0, ∞)
x(0) = x₀ (C.0.2)
x′(0) = x₁.
Set x′(t) = a y(t). Then

y′(t) = f(t)/a − a x(t),

and the second order ODE can be rewritten as a system of first order ODEs with the initial condition X₀ := X(0) = (x₀, x₁/a). We introduce the matrix exponential e^{At} = Σ_{n=0}^∞ (At)ⁿ/n!. Then, multiplying both sides by the integrating factor e^{At}, we get

[e^{At} X(t)]′ = e^{At} F(t)

and

X(t) = e^{−At} X₀ + ∫_0^t e^{A(s−t)} F(s) ds.
Notice that e^{−At} X₀ is a solution of the homogeneous system. Thus, the solution X(t) can be given as

X(t) = S(t) X₀ + ∫_0^t S(t − s) F(s) ds,

where S(t) is the solution operator of the linear system, given by S(t) = e^{−At}.