

Notes on Differential Equations


Alejandro Cantarero
These are my personal notes on topics that commonly show up on the Applied Differential Equations (ADEs) qualifying exam given by the Mathematics Department at UCLA. Various sections include a subsection called Qual Problems. This is a list of problems related to the section that have been completed and may be illustrative, as examples are generally not included in the notes. Solutions to these problems can be found in the solutions set.
Contents

1 General Definitions and Key Theorems
2 Ordinary Differential Equations
  2.1 Methods for Solving Ordinary Differential Equations
    2.1.1 Integrating Factor
  2.2 Boundary Value Problems
  2.3 Phase Plane Analysis
    2.3.1 Identifying Stationary Points
    2.3.2 Classifying Stationary Points
    2.3.3 Sketching the Phase Space
  2.4 Lyapunov's Second Method
  2.5 Qual Problems
3 The Fourier Transform
  3.1 Uses of the Fourier Transform
  3.2 Qual Problems
4 Method of Characteristics
  4.1 Semilinear Equations
  4.2 Quasilinear Equations
  4.3 General Nonlinear Equations
  4.4 Qual Problems
5 Elliptic Partial Differential Equations
  5.1 Laplace's Equation and Coordinate Systems
  5.2 Fundamental Solutions
    5.2.1 The Green's Function
    5.2.2 Constructing Green's Functions by the Method of Reflections
  5.3 Properties of Harmonic Functions
  5.4 Maximum Principles
  5.5 Eigenvalues and Eigenfunctions of the Laplacian
    5.5.1 Constructing Solutions to the Poisson Equation
  5.6 Energy Methods
  5.7 Weak Formulations
    5.7.1 Examples
    5.7.2 Well-Posedness of the Weak Form
  5.8 Qual Problems
6 Parabolic Equations
  6.1 The Heat Equation
    6.1.1 Useful Information
  6.2 Solution Techniques
    6.2.1 Laplace Transform
  6.3 Maximum and Comparison Principles
  6.4 Energy Estimates
  6.5 Qual Problems
7 Wave Equations
  7.1 Solution Techniques for the Wave Equation
    7.1.1 D'Alembert's Formula
    7.1.2 Weak Solutions and the Parallelogram Rule
    7.1.3 The Nonhomogeneous Equation
    7.1.4 Two Dimensions and Method of Descent
    7.1.5 Three Dimensions and Kirchhoff's Formula
  7.2 Eigenfunction Expansions
  7.3 Energy Methods
  7.4 Hyperbolic Systems
  7.5 Qual Problems
8 Calculus of Variations
  8.1 Optimization with Constraints
  8.2 Qual Problems
9 References
1 General Definitions and Key Theorems

Definition 1.1. (Multi-index Notation) A multi-index is a vector $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)$ of order $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$. Given a multi-index $\alpha$, we make the following definitions:

1. $D^\alpha = \partial_{x_1}^{\alpha_1} \partial_{x_2}^{\alpha_2} \cdots \partial_{x_n}^{\alpha_n}$

2. $a^\alpha = a_1^{\alpha_1} a_2^{\alpha_2} \cdots a_n^{\alpha_n}$ for any vector $a$.

Theorem 1.1. (Divergence Theorem)

$$\int_\Omega \operatorname{div} v \, dx = \int_{\partial\Omega} v \cdot n \, dS \qquad (1.1)$$

where $n$ is the exterior unit normal.

Theorem 1.2. (Integration by Parts)

$$\int_\Omega u_{x_i} v \, dx = -\int_\Omega u \, v_{x_i} \, dx + \int_{\partial\Omega} u v \, \nu_i \, dS \qquad (1.2)$$

where $\nu_i$ is the $i$-th component of the exterior unit normal.

Theorem 1.3. (Poincaré's Inequality) Let $1 \le p < \infty$ and let $\Omega$ be a bounded open set. Then there exists $c_{p,\Omega} > 0$ such that for all $v \in W^{1,p}_0(\Omega)$,

$$\|v\|_{L^p(\Omega)} \le c_{p,\Omega} \, \|\nabla v\|_{L^p(\Omega)}. \qquad (1.3)$$

Theorem 1.4. (Gronwall's Inequality) If $y(t) \ge 0$ and $\frac{dy}{dt} \le \phi y$ with $\phi(t) \ge 0$ and $\phi \in L^1((0, T))$, then

$$y(t) \le y(0) \exp\left( \int_0^t \phi(s)\, ds \right). \qquad (1.4)$$
2 Ordinary Differential Equations

2.1 Methods for Solving Ordinary Differential Equations

In this section, we recall some common methods used for solving ODEs. Separable equations can be solved using separation of variables (which we will not discuss here due to its simplicity). Another simple method for solving ODEs is to recall the equivalent integral formulation of the equation:

$$y' = f(t, y), \quad y(t_0) = y_0 \qquad \Longleftrightarrow \qquad y(t) = y_0 + \int_{t_0}^t f(s, y(s))\, ds.$$
2.1.1 Integrating Factor

We use an integrating factor to solve ODEs of the form

$$\frac{dy}{dx} + p(x) y(x) = q(x). \qquad (2.1)$$

Now we let

$$v(x) = \int p(x)\, dx. \qquad (2.2)$$

The integrating factor is then given by $e^{v(x)}$. Multiplying both sides of the equation by the integrating factor, we find

$$\frac{d}{dx}\left( e^{v(x)} y(x) \right) = e^{v(x)} \left( \frac{dy}{dx} + p(x) y(x) \right) = e^{v(x)} q(x). \qquad (2.3)$$

Now, integrating both sides, we find that

$$e^{v(x)} y(x) = \int e^{v(x)} q(x)\, dx \qquad (2.4)$$

$$y(x) = e^{-v(x)} \int e^{v(x)} q(x)\, dx. \qquad (2.5)$$
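For example, to solve $y' + 2y = e^x$ we take $v(x) = \int 2\, dx = 2x$, so the integrating factor is $e^{2x}$. Then $\frac{d}{dx}\left( e^{2x} y \right) = e^{3x}$, and integrating gives $e^{2x} y = \tfrac{1}{3} e^{3x} + C$, i.e. $y(x) = \tfrac{1}{3} e^x + C e^{-2x}$.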
2.2 Boundary Value Problems

Problems related to two-point boundary value problems frequently occur on past exams. We will study these problems from the perspective of a (somewhat) general theory of linear operators.

We start by considering a general $n$th order linear differential operator

$$Lu = p_0 u^{(n)} + p_1 u^{(n-1)} + \cdots + p_n u \qquad (2.6)$$

with $p_0 \ne 0$ on the interval $[a, b]$. We also need to enforce boundary conditions, $B_j u = 0$, $j = 1, \dots, n$, which we summarize as $Bu = 0$. In this section, we wish to consider the eigenvalue problem

$$Lu = \lambda u, \qquad Bu = 0. \qquad (2.7)$$

If $L$ is a linear operator, recall the following definition.

Definition 2.1. (Adjoint) The adjoint, $L^*$, of a linear operator $L$ is defined by

$$(Lu, v) = (u, L^* v) \qquad (2.8)$$

for $u, v \in H$, a Hilbert space.

Note here that we are not being particularly general (or careful), but rather simply summarizing enough information to understand the concepts that arise in material related to the ADEs exam. Also of importance is the following definition:

Definition 2.2. (Self-Adjoint) A linear operator $L$ is called self-adjoint if $L = L^*$.

We are particularly interested in self-adjoint operators due to the following theorem.

Theorem 2.1. Let the eigenvalue problem (2.7) be self-adjoint. Then the eigenvalues are real and are at most countable with no finite cluster point. Further, eigenfunctions corresponding to distinct eigenvalues are orthogonal.

In fact, when the eigenvalue problem is self-adjoint, the eigenfunctions of $L$ form an orthogonal basis for the appropriate space, and hence we can use them to construct solutions to (2.7), demonstrating the existence of solutions.

Occasionally the operator may not be self-adjoint in the appropriate $L^p$ space; however, it may be possible to construct (or choose) a different space where we can find a set of orthogonal basis functions. One common solution is to try a weighted $L^p$ space, defined by the inner product

$$(u, v)_\sigma = \int_a^b u v \, \sigma \, dx. \qquad (2.9)$$

In fact, in some situations we can actually find the weighting function $\sigma$ by trying to force the differential operator to be self-adjoint in the weighted inner product:

$$(Lu, v)_\sigma = \int_a^b Lu \, (\sigma v)\, dx \qquad (2.10)$$

$$= \int_a^b \left( p_0 u^{(n)} + p_1 u^{(n-1)} + \cdots + p_n u \right) (\sigma v)\, dx \qquad (2.11)$$

$$= \int_a^b p_0 u^{(n)} (\sigma v) + p_1 u^{(n-1)} (\sigma v) + \cdots + p_n u (\sigma v)\, dx. \qquad (2.12)$$

Now we integrate by parts to remove all the derivatives from the $u$ functions and place them onto the function $\sigma v$. If we expand out these terms and then factor out the $u$ function and a $v$, we will be left with a series of conditions on $\sigma$ and its derivatives needed to make $(Lu, v)_\sigma = (u, Lv)_\sigma$.

Now, as mentioned earlier, once we have a space where the differential operator is self-adjoint, we have the existence of a solution. Uniqueness can be demonstrated if we can show that the operator has a trivial nullspace. To demonstrate this, we proceed by looking for a contradiction and start by taking the inner product of the operator applied to a function in the nullspace with that function, hence $(Lu, u) = 0$. Working from here, we simply show that we must have $u = 0$.

It is also important to recall here that we can obtain some information on the eigenvalues of the differential operator. In particular,

$$\lambda_{\min} = \inf_{u \ne 0} \frac{(Lu, u)}{(u, u)} \qquad (2.13)$$

$$\lambda_{\max} = \sup_{u \ne 0} \frac{(Lu, u)}{(u, u)} \qquad (2.14)$$
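As a quick illustration of the weighting trick, consider $Lu = u'' + u'$ on $[0, 1]$ with $u(0) = u(1) = 0$. Taking the weight $\sigma(x) = e^x$ gives $\sigma L u = e^x u'' + e^x u' = (e^x u')'$, so that

$$(Lu, v)_\sigma = \int_0^1 (e^x u')' v \, dx = -\int_0^1 e^x u' v' \, dx,$$

which is symmetric in $u$ and $v$ (the boundary terms vanish). Hence $L$ is self-adjoint in the weighted inner product $(u, v)_\sigma = \int_0^1 u v \, e^x \, dx$.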
2.3 Phase Plane Analysis

In this section we cover the basics of phase plane analysis for systems of ODEs. We look specifically at systems of two equations (or, equivalently, second order ODEs). The process for analyzing such problems can be broken down into three main steps:

1. Identify the stationary points.
2. Classify the stationary points.
3. Sketch the phase portrait.

Alternatively, we may see the terminology critical points or equilibrium points. We will now discuss each of these topics in more detail, considering a two-dimensional autonomous system,

$$y' = g(y). \qquad (2.15)$$
2.3.1 Identifying Stationary Points

The stationary points are simply those points $a \in \mathbb{R}^2$ such that

$$g(a) = 0. \qquad (2.16)$$

Finding these points is a very straightforward exercise.
2.3.2 Classifying Stationary Points

In order to classify the stationary points we have found, we start by Taylor expanding the system about the stationary point, $a$, keeping only the linear terms,

$$y' = Ay + h(y). \qquad (2.17)$$

The behaviour of the stationary point is characterized by the eigenvalues of the matrix $A$. Once we have the eigenvalues, we can write down the matrix $B = T^{-1} A T$. For real eigenvalues $\lambda, \mu$, we have the following four cases for the matrix $B$ (in all cases we assume $\lambda \ne 0 \ne \mu$):

Improper Node

$$B = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} \qquad (2.18)$$

with either $\lambda < \mu < 0$ or $0 < \mu < \lambda$.

Proper Node

$$B = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} \qquad (2.19)$$

Saddle Point

$$B = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} \qquad (2.20)$$

with $\lambda < 0 < \mu$.

Improper Node (with one-dimensional eigenspace)

$$B = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \qquad (2.21)$$

If the eigenvalues are complex, of the form $\alpha \pm i\beta$, we write $B$ in the following forms (assuming $\beta \ne 0$):

Spiral

$$B = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix} \qquad (2.22)$$

with $\alpha \ne 0$.

Center

$$B = \begin{pmatrix} 0 & \beta \\ -\beta & 0 \end{pmatrix} \qquad (2.23)$$
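For example, consider the linear system $y' = Ay$ with

$$A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}.$$

The characteristic polynomial is $\lambda^2 + 3\lambda + 2 = (\lambda + 1)(\lambda + 2)$, so the eigenvalues are $-1$ and $-2$, with eigenvectors $(1, -1)^t$ and $(1, -2)^t$. Both eigenvalues are real, distinct, and negative, so the origin is a stable improper node; solution curves approach the origin tangent to the eigenvector $(1, -1)^t$ of the slower eigenvalue $-1$.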
2.3.3 Sketching the Phase Space

The general behaviour of each type of stationary point is illustrated in the phase portraits.

[Figure: phase portraits of each type of stationary point: improper node, proper node, saddle point, improper node with one eigenvector, spiral, and center.]

Note that the sign of the eigenvalues indicates whether the stationary point is stable or unstable (for nodes), or which manifold of a saddle point is the stable manifold and which is the unstable one. Negative eigenvalues indicate stability, and positive eigenvalues indicate that a manifold or node is unstable. The structure is centered on the stationary point and oriented by the eigenvectors. In the case of improper nodes, the magnitude of the eigenvalues indicates which eigenvector the solution curves are attracted to (or repelled from) faster.

We illustrate two examples, an improper node and a saddle. For these pictures, we take points $(x, y)$. In the case of the improper node, the stationary point is at $(2, 1)$; the first eigenvector is $(1, 0)^t$ with eigenvalue $5$, the second $(1, 1)^t$ with eigenvalue $1$. For the saddle point, suppose the critical point is at $(1, 1)$ with eigenvectors $(1, 1)^t$ with eigenvalue $1$ and $(1, 2)^t$ with eigenvalue $-1$.

[Figure: example phase portraits for the improper node (left) and the saddle point (right).]

Recall that uniqueness of solutions in the phase space tells us that solution trajectories cannot cross. This is an important observation that can be used to locate invariant regions in the phase space.
2.4 Lyapunov's Second Method

Here we consider the use of Lyapunov's second method for the stability analysis of autonomous systems of ODEs. So, we will consider the problem

$$y' = f(y) \qquad (2.24)$$

in a region $D$ containing the origin, where the origin is an isolated critical point of the system. We then define the Lyapunov function $V(y)$, where $V : \mathbb{R}^n \to \mathbb{R}$ is continuous. Now we have the following important definition.

Definition 2.3. The derivative of $V$ with respect to the system $y' = f(y)$ is given by

$$V'(y) = \nabla V(y) \cdot f(y). \qquad (2.25)$$

Theorem 2.2. If there exists a scalar function $V(y)$ that is positive definite and for which $V'(y)$ is negative definite on some region containing the origin, then the zero solution of $y' = f(y)$ is asymptotically stable.

We also have the following important result.

Theorem 2.3. If there exists a scalar function $V(y)$, $V(0) = 0$, such that $V'(y)$ is either positive or negative definite on some region containing the origin, and if in every neighborhood $N$ of the origin there exists at least one point $a \ne 0$ such that $V(a)$ has the same sign as $V'$, then the zero solution of $y' = f(y)$ is unstable.

Finally, the most powerful result is given by the following theorem.

Theorem 2.4. Let there exist a scalar function $V(y)$ such that

(i) $V(y)$ is positive definite and $V(y) \to \infty$ as $|y| \to \infty$;
(ii) $V'(y) \le 0$ on $\mathbb{R}^n$;
(iii) the origin is the only invariant subset of $\{ y : V'(y) = 0 \}$.

Then the zero solution of $y' = f(y)$ is globally asymptotically stable.
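For example, consider the system $y_1' = -y_1^3 + y_2$, $y_2' = -y_1 - y_2^3$ and take $V(y) = y_1^2 + y_2^2$. Then

$$V'(y) = 2y_1(-y_1^3 + y_2) + 2y_2(-y_1 - y_2^3) = -2(y_1^4 + y_2^4),$$

so $V$ is positive definite, $V'$ is negative definite, and $V(y) \to \infty$ as $|y| \to \infty$. By Theorem 2.2 (or Theorem 2.4), the zero solution is (globally) asymptotically stable.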
2.5 Qual Problems
Boundary Value Problems
Fall 2005, #2
Fall 2000, #5
Fall 1999, #7
Phase Plane Analysis
Winter 2003, #1
Spring 2002, #5
Fall 2000, #7
Lyapunov's Method
Winter 2005, #7
3 The Fourier Transform

Definition 3.1. (Fourier Transform) For $f \in L^1(\mathbb{R}^n)$, we define its Fourier transform by

$$\hat{f}(\xi) = \int e^{-2\pi i x \cdot \xi} f(x)\, dx. \qquad (3.1)$$

Definition 3.2. (Fourier Inversion) The inverse Fourier transform is given by

$$\check{f}(x) = \int e^{2\pi i x \cdot \xi} f(\xi)\, d\xi. \qquad (3.2)$$

Note that $(\hat{f})^\vee = f$ if $f \in \mathcal{S}$, where $\mathcal{S}$ is the Schwartz class.

Several key properties of the Fourier transform include:

1. For $f, g \in L^1$,

$$\widehat{f * g} = \hat{f}\, \hat{g}. \qquad (3.3)$$

2. For $f \in L^1$ and $f_a(x) = f(x + a)$,

$$\hat{f}_a(\xi) = e^{2\pi i a \cdot \xi}\, \hat{f}(\xi). \qquad (3.4)$$

3. $$\widehat{\partial^\alpha f}(\xi) = (2\pi i \xi)^\alpha \hat{f}(\xi). \qquad (3.5)$$

4. $$\widehat{x^n f(x)}(\xi) = \left( \frac{i}{2\pi} \right)^{n} \frac{d^n}{d\xi^n} \hat{f}(\xi). \qquad (3.6)$$

Theorem 3.1. (Plancherel)

$$\|f\|_{L^2}^2 = \|\hat{f}\|_{L^2}^2. \qquad (3.7)$$

Definition 3.3. (Fourier Transform on $\mathbb{T}^n$)

$$\hat{f}(k) = \int_{\mathbb{T}^n} e^{-2\pi i k \cdot x} f(x)\, dx \qquad (3.8)$$

$$f(x) = \sum_{k \in \mathbb{Z}^n} e^{2\pi i k \cdot x} \hat{f}(k). \qquad (3.9)$$

If our periodic domain is not simply the unit interval, $[0, 1]^n$, we have to make a small change to the above definitions. Consider $\mathbb{T}^n_a = [0, a]^n$; then

$$\hat{f}(k) = \int_{\mathbb{T}^n_a} e^{-2\pi i k \cdot x / a} f(x)\, dx \qquad (3.10)$$

$$f(x) = \frac{1}{a^n} \sum_{k \in \mathbb{Z}^n} e^{2\pi i k \cdot x / a} \hat{f}(k). \qquad (3.11)$$
3.1 Uses of the Fourier Transform

In this section we briefly describe some of the applications of the Fourier transform as they appear in problems on past qualifying exams.

Uniqueness of Solutions. We can use the Fourier transform to convert a PDE in space into an algebraic equation, or a PDE in time and space into an ODE, allowing us to very easily obtain a solution in Fourier space. Taking the inverse transform may prove quite difficult; however, if we can verify that the solution in Fourier space is a function in $L^2$, then we know that its inverse transform exists (and is unique). This can prove to be a good method for demonstrating uniqueness of solutions to certain problems.
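For example, applying the Fourier transform in $x$ to the heat equation $u_t = \Delta u$ with $u(x, 0) = g(x)$ gives $\hat{u}_t = -4\pi^2 |\xi|^2 \hat{u}$, so $\hat{u}(\xi, t) = \hat{g}(\xi)\, e^{-4\pi^2 |\xi|^2 t}$. If $g \in L^2$, then $\hat{u}(\cdot, t) \in L^2$ for every $t \ge 0$, so by Plancherel's theorem the solution exists and is unique in $L^2$.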
3.2 Qual Problems
Winter 2005, #6
Spring 2002, #4
4 Method of Characteristics

In this section, we outline the use of the method of characteristics for solving first order equations of the form

$$F(Du, u, x) = 0, \qquad x \in U \subset \mathbb{R}^n \qquad (4.1)$$

where $U$ is an open subset of $\mathbb{R}^n$. We wish to solve this PDE subject to

$$u = g \quad \text{on } \Gamma \qquad (4.2)$$

where $\Gamma$ will be defined as a parameterization of our initial condition. We will now give some specific examples of classes of first-order PDEs that can be solved via this method in two dimensions.
4.1 Semilinear Equations

In two dimensions, a semilinear equation has the form

$$a(x, y) u_x + b(x, y) u_y = c(x, y, u) \qquad (4.3)$$

where $a$, $b$, and $c$ are all continuous and $\Gamma$ is parameterized by $(f(s), g(s), h(s))$. The characteristic equations are then given by

$$\frac{dx}{dt} = a(x, y), \qquad \frac{dy}{dt} = b(x, y), \qquad \frac{dz}{dt} = c(x, y, z) \qquad (4.4)$$

$$x(s, 0) = f(s), \qquad y(s, 0) = g(s), \qquad z(s, 0) = h(s). \qquad (4.5)$$

Note that after solving this system of ODEs, we will then need to solve for $s$ and $t$ in terms of $x$ and $y$ in order to find a solution to the original PDE. In order for this to be possible, we need the Jacobian determinant to be non-zero,

$$J \equiv \det \begin{pmatrix} x_s & y_s \\ x_t & y_t \end{pmatrix} \ne 0. \qquad (4.6)$$

In particular, if the initial curve satisfies the above condition, then we know that a solution exists for some time. If $J = 0$ at some later time, the solution may develop a singularity. Also, if the equation for $dz/dt$ is non-linear, the solution may cease to exist even if $J$ is non-zero.
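For example, consider $u_x + u_y = u$ with data $u(x, 0) = h(x)$, so $\Gamma$ is parameterized by $(s, 0, h(s))$. The characteristic equations give $x = s + t$, $y = t$, $z = h(s) e^t$, and the Jacobian condition holds since $J = x_s y_t - y_s x_t = 1 \ne 0$. Solving for $s = x - y$ and $t = y$ yields $u(x, y) = h(x - y)\, e^y$, which satisfies both the PDE and the initial condition.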
4.2 Quasilinear Equations

Quasilinear equations in two dimensions have the form

$$a(x, y, u) u_x + b(x, y, u) u_y = c(x, y, u) \qquad (4.7)$$

again with $a$, $b$, and $c$ continuous and $\Gamma$ parameterized by $(f(s), g(s), h(s))$. In this case, the equations for $dx/dt$ and $dy/dt$ will not decouple from the equation for $dz/dt$. In order to obtain a solution, we need to solve the coupled system formed by the equations that do not decouple, for instance by finding eigenvalues and eigenvectors when that system is linear.
4.3 General Nonlinear Equations

In two dimensions, we rewrite the general nonlinear equation $F(u_x, u_y, u, x, y) = 0$ as

$$F(p, q, z, x, y) = 0 \qquad (4.8)$$

and we assume that $F_p^2 + F_q^2 \ne 0$ (which ensures that we actually have a first order equation at any point $(x_0, y_0, z_0)$). For these problems, the characteristic equations are given by

$$\frac{dx}{dt} = F_p, \qquad \frac{dy}{dt} = F_q, \qquad \frac{dz}{dt} = p F_p + q F_q \qquad (4.9)$$

$$\frac{dp}{dt} = -F_x - F_z\, p, \qquad \frac{dq}{dt} = -F_y - F_z\, q.$$

Here we note that we only have initial conditions for the equations for $x$, $y$ and $z$, coming from $\Gamma$, which we again parameterize by $(f(s), g(s), h(s))$. So, we need two additional functions, $\phi(s)$ and $\psi(s)$, that provide initial conditions for our equations for $p$ and $q$. We can find these by plugging the initial condition into the original equation and into the ODE for $z$:

$$F(\phi(s), \psi(s), h(s), f(s), g(s)) = 0 \qquad (4.10)$$

$$h'(s) = \phi(s) f'(s) + \psi(s) g'(s). \qquad (4.11)$$
4.4 Qual Problems
A list of past qual problems that can be solved using the method of characteristics.
Winter 2005, #2
Fall 2004, #7
Winter 2003, #5
5 Elliptic Partial Differential Equations

A general elliptic operator for $x \in \mathbb{R}^n$ has the form

$$L = \sum_{i,j=1}^{n} a_{ij}(x)\, \partial_{x_i} \partial_{x_j} + \sum_{i=1}^{n} b_i(x)\, \partial_{x_i} + c(x) \qquad (5.1)$$

which satisfies the ellipticity condition

$$\sum_{i,j=1}^{n} a_{ij}(x)\, \xi_i \xi_j \ge \theta |\xi|^2 \qquad (5.2)$$

for $\xi \in \mathbb{R}^n$ and $\theta > 0$. An operator that satisfies the ellipticity condition is called uniformly elliptic.
5.1 Laplace's Equation and Coordinate Systems

Sometimes it is easier to solve Laplace's equation in something other than Cartesian coordinates. In particular, in two dimensions, we can write Laplace's equation in polar coordinates as

$$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} = 0 \qquad (5.3)$$

with appropriate boundary conditions. Note that if we have additional information, such as the fact that the solution has radial symmetry, then we know derivatives with respect to $\theta$ will vanish, leaving us with

$$0 = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r} \frac{\partial u}{\partial r} \qquad (5.4)$$

$$= \frac{1}{r} \frac{\partial}{\partial r}\left( r \frac{\partial u}{\partial r} \right). \qquad (5.5)$$
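For example, integrating (5.5) directly gives the general radially symmetric harmonic function in the plane: $r \frac{\partial u}{\partial r} = c_1$, so $u(r) = c_1 \log r + c_2$, which (up to constants) recovers the two-dimensional fundamental solution of Section 5.2.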
5.2 Fundamental Solutions

Consider the problem

$$Lu = \delta. \qquad (5.6)$$

A solution $u = K$ of (5.6) is called a fundamental solution of $L$. Note that $K$ is a distribution solution to the PDE. Once we have obtained a fundamental solution of the operator $L$, we can then find a distribution solution to the PDE

$$Lu = f \qquad (5.7)$$

by computing

$$u(x) = \int K(x - y) f(y)\, dy. \qquad (5.8)$$

For the Laplace operator, the fundamental solution is given by

$$K(x) = \begin{cases} \dfrac{1}{2\pi} \log r & \text{if } n = 2 \\[4pt] \dfrac{1}{(2 - n)\,\omega_n}\, r^{2-n} & \text{if } n \ge 3 \end{cases} \qquad (5.9)$$

where $\omega_n$ is the measure of the $(n-1)$-dimensional sphere in $\mathbb{R}^n$ and $r = |x|$. We then have that a distribution solution of Laplace's (or Poisson's) equation would be given by

$$u(x) = \int_{\mathbb{R}^n} K(x - y)\, \Delta u(y)\, dy. \qquad (5.10)$$

If we are on a smooth, bounded domain, $\Omega \subset \mathbb{R}^n$, then we have that

$$u(x) = \int_\Omega K(x - y)\, \Delta u(y)\, dy + \int_{\partial\Omega} \left( u(y) \frac{\partial K(x - y)}{\partial \nu_y} - K(x - y) \frac{\partial u(y)}{\partial \nu} \right) dS_y. \qquad (5.11)$$
5.2.1 The Green's Function

The Green's function for the Dirichlet problem on $\Omega$ is given by

$$G(x, \xi) = K(x - \xi) + \omega_\xi(x) \qquad (5.12)$$

where $\omega_\xi(x)$ is harmonic in $\Omega$ and satisfies $\omega_\xi(x) = -K(x - \xi)$ for all $x \in \partial\Omega$. Note that $G(x, \xi) = 0$ for all $x \in \partial\Omega$. A similar argument can be gone through to obtain a function satisfying the Neumann properties, but we will not do that here. The Green's function is also a fundamental solution in $\Omega$, and so we have that

$$u(\xi) = \int_\Omega G(x, \xi)\, \Delta u\, dx + \int_{\partial\Omega} \left( u(x) \frac{\partial G(x, \xi)}{\partial \nu_x} - G(x, \xi) \frac{\partial u(x)}{\partial \nu} \right) dS_x \qquad (5.13)$$

which simplifies to

$$u(\xi) = \int_\Omega G(x, \xi)\, \Delta u\, dx + \int_{\partial\Omega} u(x) \frac{\partial G(x, \xi)}{\partial \nu_x}\, dS_x. \qquad (5.14)$$

From here, we can define the Poisson kernel

$$H(x, \xi) = \frac{\partial G(x, \xi)}{\partial \nu_x}. \qquad (5.15)$$

Now for the problem $\Delta u = 0$ in $\Omega$ with $u = g$ on $\partial\Omega$, we have that

$$u(\xi) = \int_{\partial\Omega} H(x, \xi)\, g(x)\, dS_x. \qquad (5.16)$$
5.2.2 Constructing Green's Functions by the Method of Reflections

The method of reflections allows us to construct Green's functions for a couple of simple domains. First we consider the half-space, $\Omega = \mathbb{R}^n_+ = \{(x_1, \dots, x_n) \in \mathbb{R}^n : x_n > 0\}$. In this case, the Green's function is given by

$$G(x, \xi) = K(x - \xi) - K(x - \xi^*) \qquad (5.17)$$

where $\xi^* = (\xi_1, \dots, \xi_{n-1}, -\xi_n)$ for $\xi \in \mathbb{R}^n_+$. We can also construct the Green's function on a ball, $\Omega = B_a(0) = \{x \in \mathbb{R}^n : |x| < a\}$, where we define our reflection as follows: for $\xi \ne 0$ let $\xi^* = a^2 \xi / |\xi|^2$. Now the Green's function is given as

$$G(x, \xi) = K(x - \xi) - K\!\left( \frac{|\xi|}{a} (x - \xi^*) \right) \qquad (5.18)$$

where

$$K\!\left( \frac{|\xi|}{a} (x - \xi^*) \right) = \begin{cases} \dfrac{1}{2\pi} \log\left( \dfrac{|\xi|}{a} |x - \xi^*| \right) & \text{if } n = 2 \\[6pt] \left( \dfrac{a}{|\xi|} \right)^{n-2} K(x - \xi^*) & \text{if } n > 2. \end{cases} \qquad (5.19)$$
5.3 Properties of Harmonic Functions

Here we list some very useful properties of solutions to Laplace's (and sometimes Poisson's) equation. First we note a consequence of Green's identities:

$$\int_\Omega \Delta u\, dx = \int_{\partial\Omega} \frac{\partial u}{\partial \nu}\, dS. \qquad (5.20)$$

The following theorem gives us that the value of a harmonic function, $u$, at the center of a ball is the average of the function on the surface of the sphere.

Theorem 5.1. (Gauss Mean Value Theorem) If $u \in C^2(\Omega)$ is harmonic in $\Omega$, let $\xi \in \Omega$ and $r > 0$ so that $\overline{B_r(\xi)} = \{x : |x - \xi| \le r\} \subset \Omega$. Then

$$u(\xi) = \frac{1}{\omega_n} \int_{|x|=1} u(\xi + r x)\, dS_x \qquad (5.21)$$

where $\omega_n$ is the measure of the $(n-1)$-dimensional sphere in $\mathbb{R}^n$.

Theorem 5.2. (Smoothness) If $u \in C^2(\Omega)$ is harmonic, then $u \in C^\infty(\Omega)$.

Theorem 5.3. (Harnack Inequality) Let $u \in C^2(\Omega)$ be harmonic and nonnegative, and suppose $\Omega_1 \subset\subset \Omega$ is a bounded subdomain. Then there exists a constant $c_1$ (depending only on $\Omega_1$) such that

$$\sup_{x \in \Omega_1} u(x) \le c_1 \inf_{x \in \Omega_1} u(x). \qquad (5.22)$$

Theorem 5.4. (Liouville's Theorem) A bounded harmonic function defined on all of $\mathbb{R}^n$ must be a constant.

Theorem 5.5. (Analyticity) If $u \in C^2(\Omega)$ is harmonic, then $u$ is real analytic in $\Omega$.
5.4 Maximum Principles

We now consider elliptic operators satisfying

$$Lu \ge 0. \qquad (5.23)$$

Theorem 5.6. (Weak Elliptic Maximum Principle) Let $u \in C^2(\Omega) \cap C(\overline{\Omega})$ and $Lu \ge 0$ with $c(x) \equiv 0$ for $x \in \Omega$. Then

$$\max_{x \in \overline{\Omega}} u(x) = \max_{x \in \partial\Omega} u(x). \qquad (5.24)$$

Corollary 5.1. Let $u \in C^2(\Omega) \cap C(\overline{\Omega})$ and $Lu \ge 0$ with $c(x) \le 0$ for $x \in \Omega$; then

$$\max_{x \in \overline{\Omega}} u(x) \le \max_{x \in \partial\Omega} u^+(x) \qquad (5.25)$$

where $u^+(x) = \max(u(x), 0)$.

The main importance of these statements is that we can use them to prove uniqueness for the Dirichlet problem. We can obtain even stronger statements if we assume that the boundary of $\Omega$ is $C^2$. In particular:

Theorem 5.7. (Strong Elliptic Maximum Principle) Suppose $M = \sup\{u(x) : x \in \Omega\} < \infty$.

(a) If $u(x_0) = M$ for some $x_0 \in \Omega$, then $u$ is constant in $\Omega$.

(b) If $u(x)$ is not constant in $\Omega$, but $u(x_0) = M$ for some $x_0 \in \partial\Omega$ and $\frac{\partial u}{\partial \nu}$ exists at $x_0$, then

$$\frac{\partial u}{\partial \nu}(x_0) > 0. \qquad (5.26)$$

Theorem 5.8. If $\Omega$ is bounded and has a $C^2$ boundary, $L$ is a uniformly elliptic operator, and $u \in C^2(\Omega)$ satisfies $Lu \ge 0$, then the strong maximum principle holds if:

(a) $c(x) \equiv 0$;
(b) $c(x) \le 0$ and $M \ge 0$;
(c) $c(x)$ arbitrary and $M = 0$.
5.5 Eigenvalues and Eigenfunctions of the Laplacian

The (Dirichlet) eigenvalue problem for the Laplacian is given as

$$\begin{cases} \Delta u + \lambda u = 0 & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega \end{cases} \qquad (5.27)$$

where $\Omega$ is a bounded domain and $\lambda \in \mathbb{C}$. Note here that the convention is not to consider $\Delta u = \lambda u$, as the eigenvalue problem given above will produce positive eigenvalues, while the more standard problem would result in negative eigenvalues. We now give several useful properties of the eigenvalues of (5.27) (without proof):

1. The set of eigenvalues is countable.
2. The eigenvalues are all positive and tend towards infinity.
3. The eigenspaces associated with each eigenvalue are finite-dimensional.
4. The first (principal) eigenvalue is simple and its eigenfunction does not change sign in $\Omega$.
5. Eigenfunctions corresponding to distinct eigenvalues are orthogonal.
6. The eigenfunctions form a basis for $L^2(\Omega)$.

These properties provide enough information for us to make the following (important) observation about the eigenvalues of (5.27):

$$0 < \lambda_1 < \lambda_2 \le \lambda_3 \le \cdots. \qquad (5.28)$$

Note that here, as with the boundary value problems for ODEs, we can find a variational characterization of the eigenvalues for this problem. In particular, let us start from

$$\lambda_1 = \inf_{u \in X} \frac{(Lu, u)}{(u, u)}, \qquad (5.29)$$

which will allow us to handle any type of boundary condition. For the standard Dirichlet boundary condition, note that we find

$$\lambda_1 = \inf_{u \in X} \frac{\int_\Omega |\nabla u|^2}{\int_\Omega u^2}. \qquad (5.30)$$
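For example, on $\Omega = (0, \pi) \subset \mathbb{R}$ the Dirichlet eigenfunctions are $\phi_n(x) = \sin(nx)$ with $\lambda_n = n^2$, $n = 1, 2, \dots$, since $\phi_n'' + n^2 \phi_n = 0$ and $\phi_n(0) = \phi_n(\pi) = 0$. Note that $\lambda_1 = 1$ is simple and $\phi_1 = \sin x$ does not change sign, as promised by property 4. On the square $(0, \pi)^2$ the eigenfunctions are $\sin(mx)\sin(ny)$ with eigenvalues $m^2 + n^2$.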
5.5.1 Constructing Solutions to the Poisson Equation

We can also use the eigenfunctions of the operator to construct solutions to the Poisson equation. For example, consider the problem

$$\begin{cases} -\Delta u = f & \text{in } \Omega \\ u = 0 & \text{on } \partial\Omega. \end{cases} \qquad (5.31)$$

Assuming that $f$ is sufficiently regular, we can easily construct solutions to this problem from the eigenfunctions of the Laplacian. First, we start by noting that we can expand $f$ in terms of the eigenfunctions,

$$f = \sum_{i=1}^{\infty} a_i \phi_i \quad \text{with} \quad a_i = \int_\Omega f \phi_i. \qquad (5.32)$$

Assuming a solution of the form $u = \sum_{i=1}^{\infty} c_i \phi_i$, we plug this into the PDE and find that the solution is given by

$$u = \sum_{i=1}^{\infty} \frac{a_i}{\lambda_i}\, \phi_i. \qquad (5.33)$$
5.6 Energy Methods

Energy methods for elliptic equations are slightly different from what we see for hyperbolic and parabolic equations, since there is no time dependence. However, as in those other cases, we can still use them to prove uniqueness of solutions in a very simple and direct way. For elliptic problems, we simply multiply the differential operator by the solution and then integrate over the domain. For specific examples, see the solutions to problems listed under energy methods at the end of the section on elliptic problems.
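As a quick illustration, suppose $\Delta u = 0$ in $\Omega$ with $u = 0$ on $\partial\Omega$. Multiplying by $u$ and integrating,

$$0 = \int_\Omega u\, \Delta u\, dx = -\int_\Omega |\nabla u|^2\, dx,$$

since the boundary term vanishes. Hence $\nabla u \equiv 0$, and the boundary condition forces $u \equiv 0$. Applying this to the difference of two solutions of the Dirichlet problem for Poisson's equation gives uniqueness.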
5.7 Weak Formulations

In addition to finding classical solutions to elliptic problems, we can also look for weak solutions. In this section, we outline the process of finding weak solutions to general second-order elliptic PDEs as well as discuss the well-posedness of the weak form. We note that this material is taken from the section on finite element methods in the notes prepared for the numerical analysis exam and is included here as a reminder (and, of course, for completeness).

We now discuss the weak formulation for second-order elliptic operators. This generalizes in an obvious way to higher-order problems. Let $\Omega$ be a domain in $\mathbb{R}^d$ and let $L$ be a differential operator of the form

$$Lu = -\nabla \cdot (\sigma \nabla u) + \beta \cdot \nabla u + \mu u \qquad (5.34)$$

where $\sigma$, $\beta$ and $\mu$ are functions defined over $\Omega$ taking values in $\mathbb{R}^{d,d}$, $\mathbb{R}^d$ and $\mathbb{R}$, respectively. Let $f$ be a function, $f : \Omega \to \mathbb{R}$. We wish to find a function $u : \Omega \to \mathbb{R}$ such that

$$\begin{cases} Lu = f & \text{in } \Omega, \\ Bu = g & \text{on } \partial\Omega \end{cases} \qquad (5.35)$$

where $B$ is an operator that describes the boundary conditions for the problem. We now derive some weak formulations of this problem.
5.7.1 Examples

To find a weak formulation of the problem, we multiply the equation by a test function $v$, chosen in an appropriate space, and then integrate over $\Omega$. Integrating by parts, we will then obtain a weak formulation of the problem.

Dirichlet Boundary Conditions. We start with homogeneous Dirichlet boundary conditions. In this case, the equation (5.35) becomes

$$\begin{cases} Lu = f & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega. \end{cases} \qquad (5.36)$$

We choose our space of test functions to be $H^1_0(\Omega)$ since we have the condition $u = 0$ on $\partial\Omega$. Now we have

$$\int_\Omega -\nabla \cdot (\sigma \nabla u)\, v + v\, (\beta \cdot \nabla u) + \mu u v = \int_\Omega f v \qquad (5.37)$$

$$\int_\Omega \nabla v \cdot \sigma \nabla u + v\, (\beta \cdot \nabla u) + \mu u v - \int_{\partial\Omega} (n \cdot \sigma \nabla u)\, v = \int_\Omega f v, \qquad (5.38)$$

where the boundary term vanishes since $v = 0$ on $\partial\Omega$. Let us define the following bilinear form

$$a(u, v) = \int_\Omega \nabla v \cdot \sigma \nabla u + v\, (\beta \cdot \nabla u) + \mu u v. \qquad (5.39)$$

The weak formulation is then given by

$$\begin{cases} \text{Seek } u \in H^1_0(\Omega) \text{ such that} \\ a(u, v) = \int_\Omega f v, \quad \forall v \in H^1_0(\Omega). \end{cases} \qquad (5.40)$$

In the case where the Dirichlet boundary conditions are not homogeneous, say we have $u = g$ on $\partial\Omega$ where $g : \partial\Omega \to \mathbb{R}$, we have to handle the equation slightly differently. We first assume $g$ is sufficiently smooth so that there exists a lifting $u_g$ of $g$ in $H^1(\Omega)$. That is, $u_g \in H^1(\Omega)$ and $u_g = g$ on $\partial\Omega$. Then, we can write $u = u_g + \phi$ where $\phi \in H^1_0(\Omega)$. Following the same method as above, we obtain the weak form

$$\begin{cases} \text{Seek } u \in H^1(\Omega) \text{ such that} \\ u = u_g + \phi, \quad \phi \in H^1_0(\Omega), \\ a(\phi, v) = \int_\Omega f v - a(u_g, v), \quad \forall v \in H^1_0(\Omega). \end{cases} \qquad (5.41)$$

Neumann Boundary Conditions. Now we consider boundary conditions of the form $n \cdot \sigma \nabla u = g$ on $\partial\Omega$ where $g : \partial\Omega \to \mathbb{R}$. Now we take our test functions in $H^1(\Omega)$ since we are no longer looking for functions that are zero on $\partial\Omega$. Deriving the weak form, we find

$$\int_\Omega \left( -\nabla \cdot (\sigma \nabla u) + \beta \cdot \nabla u + \mu u \right) v = \int_\Omega f v \qquad (5.42)$$

$$\int_\Omega \nabla v \cdot \sigma \nabla u + v\, (\beta \cdot \nabla u) + \mu u v - \int_{\partial\Omega} (n \cdot \sigma \nabla u)\, v = \int_\Omega f v \qquad (5.43)$$

$$\int_\Omega \nabla v \cdot \sigma \nabla u + v\, (\beta \cdot \nabla u) + \mu u v = \int_\Omega f v + \int_{\partial\Omega} g v \qquad (5.44)$$

giving

$$\begin{cases} \text{Seek } u \in H^1(\Omega) \text{ such that} \\ a(u, v) = \int_\Omega f v + \int_{\partial\Omega} g v, \quad \forall v \in H^1(\Omega). \end{cases} \qquad (5.45)$$

Mixed Dirichlet-Neumann Boundary Conditions. Suppose we have a partition of our boundary, $\partial\Omega = \partial\Omega_D \cup \partial\Omega_N$, where we impose a Dirichlet boundary condition on $\partial\Omega_D$ and a Neumann condition on $\partial\Omega_N$,

$$\begin{cases} u = 0 & \text{on } \partial\Omega_D \\ n \cdot \sigma \nabla u = g & \text{on } \partial\Omega_N \end{cases} \qquad (5.46)$$

for a function $g : \partial\Omega_N \to \mathbb{R}$. The appropriate space for the test functions is now given by

$$H^1_{\partial\Omega_D}(\Omega) = \{ u \in H^1(\Omega) : u = 0 \text{ on } \partial\Omega_D \}. \qquad (5.47)$$

When finding the weak form, we now split the integral on $\partial\Omega$ into two parts on the different portions of the boundary, yielding the following weak form

$$\begin{cases} \text{Seek } u \in H^1_{\partial\Omega_D}(\Omega) \text{ such that} \\ a(u, v) = \int_\Omega f v + \int_{\partial\Omega_N} g v, \quad \forall v \in H^1_{\partial\Omega_D}(\Omega). \end{cases} \qquad (5.48)$$

Robin Boundary Conditions. A Robin boundary condition has the form $\gamma u + n \cdot \sigma \nabla u = g$ on $\partial\Omega$ where $\gamma, g : \partial\Omega \to \mathbb{R}$. If we substitute this condition into the boundary integral arising from (5.37), we find the weak formulation

$$\begin{cases} \text{Seek } u \in H^1(\Omega) \text{ such that} \\ a(u, v) + \int_{\partial\Omega} \gamma u v = \int_\Omega f v + \int_{\partial\Omega} g v, \quad \forall v \in H^1(\Omega). \end{cases} \qquad (5.49)$$

General Form. The generic form of these problems is given by

$$\begin{cases} \text{Seek } u \in V \text{ such that} \\ a(u, v) = f(v), \quad \forall v \in V, \end{cases} \qquad (5.50)$$

where $V$ is a Hilbert space satisfying

$$H^1_0(\Omega) \subset V \subset H^1(\Omega). \qquad (5.51)$$

We have a bilinear form $a(\cdot, \cdot)$ on $V \times V$ and a linear form $f(\cdot)$ defined on $V$.
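For example, for the model problem $-\Delta u = f$ in $\Omega$ with $u = 0$ on $\partial\Omega$ (i.e. $\sigma = I$, $\beta = 0$, $\mu = 0$), the weak formulation reads: seek $u \in H^1_0(\Omega)$ such that $\int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v$ for all $v \in H^1_0(\Omega)$, with bilinear form $a(u, v) = \int_\Omega \nabla u \cdot \nabla v$.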
5.7.2 Well-Posedness of the Weak Form

Here we consider the well-posedness of the problem (5.50). We begin with some definitions.

Definition 5.1. ($H^1$-Norm) For $u \in H^1(\Omega)$, its norm is given by

$$\|u\|_{H^1(\Omega)} = \left( \int_\Omega u^2 + |\nabla u|^2 \right)^{1/2}. \qquad (5.52)$$

We specifically note that from this definition we can conclude that $\|u\|_{L^2(\Omega)} \le \|u\|_{H^1(\Omega)}$. This will be useful in proving the following properties:

Definition 5.2. (Coercivity) The bilinear form $a(\cdot, \cdot)$ is coercive if there exists $\alpha > 0$ such that

$$a(u, u) \ge \alpha \|u\|_V^2, \quad \forall u \in V. \qquad (5.53)$$

Definition 5.3. (Continuity of the bilinear form) $a(\cdot, \cdot)$ is continuous if there exists $C > 0$ such that

$$|a(u, v)| \le C \|u\|_V \|v\|_V, \quad \forall u, v \in V. \qquad (5.54)$$

Definition 5.4. (Continuity of the linear form) $f(\cdot)$ is continuous if there exists $c > 0$ such that

$$|f(v)| \le c \|v\|_V, \quad \forall v \in V. \qquad (5.55)$$

In order to demonstrate the well-posedness of the weak variational form, we need to verify the conditions in the Lax-Milgram Lemma, i.e. that the bilinear form is continuous (bounded) and coercive and that the linear form is continuous (bounded).

Lemma 5.1. (Lax-Milgram) Let $a$ be a coercive, bounded bilinear form on a Hilbert space $V$ and $f$ be a bounded linear form on $V$. Then there exists a unique $u \in V$ such that

$$a(u, v) = f(v) \quad \forall v \in V. \qquad (5.56)$$

Further, for all $f \in V'$, we have

$$\|u\|_V \le \frac{1}{\alpha} \|f\|_{V'} \qquad (5.57)$$

where $\alpha$ is the constant from the coercivity bound.

We now give several inequalities that are useful for finding the bounds needed to verify the Lax-Milgram Lemma.

Theorem 5.9. Let $\Omega$ be a bounded connected open set in $\mathbb{R}^n$ with sufficiently regular boundary. Then for $u \in H^1(\Omega)$ such that $\int_\Omega u(x)\, dx = 0$, we have

$$\|u\|^2_{L^2(\Omega)} \le P(\Omega)\, \|\nabla u\|^2_{L^2(\Omega)}. \qquad (5.58)$$

Corollary 5.2. $|u|_{H^1(\Omega)} = \|\nabla u\|_{L^2(\Omega)}$ is a norm equivalent to the norm $\|u\|_{H^1(\Omega)}$ on the subspace $V_0$ (closed in $H^1(\Omega)$) defined by

$$V_0 = \left\{ u \in H^1(\Omega) : \int_\Omega u(x)\, dx = 0 \right\}. \qquad (5.59)$$

Corollary 5.3. Let $\Omega$ be a bounded connected open set in $\mathbb{R}^n$ with sufficiently regular boundary $\Gamma$. Suppose $\Gamma = \Gamma_1 \cup \Gamma_2$ with length (area) of $\Gamma_2 > 0$. Let

$$V_2 = \{ u \in H^1(\Omega) : u|_{\Gamma_2} = 0 \}. \qquad (5.60)$$

Then $V_2$ is a closed subspace of $H^1(\Omega)$ and $|u|_{H^1(\Omega)} = \|\nabla u\|_{L^2(\Omega)}$ is a norm equivalent to the norm $\|u\|_{H^1(\Omega)}$ on the subspace $V_2$.

Corollary 5.4. Let $\Omega$ be an open and bounded domain with Lipschitz-continuous boundary $\Gamma = \partial\Omega$. Then there is a positive constant $C$ such that

$$\| u|_\Gamma \|_{L^2(\Gamma)} \le C \|u\|_{H^1(\Omega)}. \qquad (5.61)$$
5.8 Qual Problems
Eigenvalue Problems
Winter 2004, #1
Fall 2003, #2
Winter 2003, #2
Spring 2002, #8
Spring 2001, #4
Maximum Principle
Spring 2006, #6
Spring 2002, #8(b)
Energy Methods
Fall 2002, #8
Fall 2000, #1(a)
Weak Solutions
Fall 2000, #1(b)
6 Parabolic Equations
In this section we discuss relevant techniques for parabolic equations that show up on old qualifying exams.
Since the heat equation is the most common, we will begin there.
6.1 The Heat Equation

We begin with the pure initial value problem for the heat equation, given by

$$\begin{cases} u_t = \Delta u & \text{for } t > 0, \ x \in \mathbb{R}^n \\ u(x, 0) = g(x) & \text{for } x \in \mathbb{R}^n. \end{cases} \qquad (6.1)$$

We can construct solutions to this problem from the heat kernel (which we can find via the Fourier transform or as the fundamental solution to the heat equation, but which we will simply state here),

$$K(x, t) = \frac{1}{(4\pi t)^{n/2}}\, e^{-\frac{|x|^2}{4t}}. \qquad (6.2)$$

Now, assuming that our initial condition is bounded and continuous, we have the following solution to (6.1):

$$u(x, t) = K(x, t) * g = \int_{\mathbb{R}^n} K(x - y, t)\, g(y)\, dy. \qquad (6.3)$$
6.1.1 Useful Information

For lack of a better title, we call this section Useful Information. Useful information the first: integrating Gaussians. Recall that

$$\int_{\mathbb{R}} a\, e^{-(x+b)^2/c^2}\, dx = a c \sqrt{\pi}. \qquad (6.4)$$

We compute this by first making the substitution $y = x + b$ and then $z = \frac{y}{c}$, and recalling that

$$\int_{\mathbb{R}} e^{-z^2}\, dz = \sqrt{\pi}. \qquad (6.5)$$
6.2 Solution Techniques

We have several options for solving parabolic equations, which we summarize below based on the spatial domain.

- Separation of Variables / Eigenfunction Expansions: bounded domains.
- Fourier Transform: problems on $\mathbb{R}^n$ or bounded, periodic domains.
- Laplace Transform: problems on any domain.

In 7.2, we discuss solving wave equations using eigenfunction expansions. The technique for parabolic equations is similar, and we will not repeat it here.
6.2.1 Laplace Transform

We begin with the definition of the Laplace transform.

Definition 6.1. (Laplace Transform) Let $u \in L^1(\mathbb{R}_+)$. Then its Laplace transform is given by

$$\mathcal{L}(u) = \bar{u}(s) = \int_0^\infty e^{-st} u(t)\, dt \qquad (6.6)$$

with $s \ge 0$.

Note that when we take the Laplace transform of a PDE, we remove the time component and are left with the spatial components. In particular, the time relationships become algebraic, so we are left with an ODE to solve in space. We can then compute the inverse transform to go back to time and obtain a solution to our PDE. Here we note a few important properties of Laplace transforms, related to their application to PDEs.

1. $$\mathcal{L}(u_t) = s\bar{u} - u(x, 0) \qquad (6.7)$$

2. $$\mathcal{L}(u_x) = \partial_x \bar{u} \qquad (6.8)$$

Computing inverse Laplace transforms is difficult and will not be discussed here due to time constraints.
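For example, for the heat equation $u_t = u_{xx}$ on $x > 0$ with $u(x, 0) = 0$ and boundary data $u(0, t) = 1$, transforming in $t$ gives $s\bar{u} = \bar{u}_{xx}$ with $\bar{u}(0, s) = 1/s$. The bounded solution is $\bar{u}(x, s) = e^{-x\sqrt{s}}/s$, and the standard table entry for this transform gives $u(x, t) = \operatorname{erfc}\left( \frac{x}{2\sqrt{t}} \right)$.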
6.3 Maximum and Comparison Principles

Like elliptic equations, parabolic equations also satisfy a maximum principle. As before, we let $L$ be the second order elliptic operator given by

$$L = \sum_{i,j=1}^{n} a_{ij}(x, t) \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^{n} b_i(x, t) \frac{\partial}{\partial x_i} + c(x, t) \qquad (6.9)$$

where $L$ satisfies the ellipticity condition

$$\sum_{i,j=1}^{n} a_{ij}(x, t)\, \xi_i \xi_j \ge \theta |\xi|^2, \quad \text{for } (x, t) \in \Omega \times (0, T) = U_T \qquad (6.10)$$

with $\theta > 0$ a constant and $a_{ij}$, $b_i$, and $c$ bounded functions on $U_T$. Parabolic equations have the form

$$u_t = Lu + f. \qquad (6.11)$$

We specifically consider equations satisfying the property

$$Lu \ge u_t \quad \text{in } U_T. \qquad (6.12)$$

Let us define the parabolic boundary as $\Gamma_T = (\partial\Omega \times [0, T]) \cup (\overline{\Omega} \times \{0\})$. Then we have:

Theorem 6.1. (Weak Parabolic Maximum Principle) Let $u \in C^{2;1}(U_T) \cap C(\overline{U}_T)$ be such that it satisfies (6.12) with $c(x, t) \le 0$. Then

$$\max_{(x,t) \in \overline{U}_T} u(x, t) \le \max_{(x,t) \in \Gamma_T} u^+(x, t). \qquad (6.13)$$

Parabolic equations also satisfy a strong maximum principle, similar to that for elliptic equations.

Theorem 6.2. (Strong Parabolic Maximum Principle) Let $M = \sup\{u(x, t) : (x, t) \in U_T\} < \infty$. If the bounded domain $\Omega$ has $C^2$-boundary and $u \in C^{2;1}(U_T)$ satisfies (6.12) and one of the following cases holds:

(i) $c(x, t) \equiv 0$,
(ii) $c(x, t) \le 0$ and $M \ge 0$,
(iii) $c(x, t)$ arbitrary and $M = 0$,

then

(a) If $u(x_0, t_0) = M$ for some $(x_0, t_0) \in U_T$, then $u(x, t) = M$ for all $x \in \Omega$ and $0 < t \le t_0$.

(b) If $u(x_0, t_0) = M$ for some $x_0 \in \partial\Omega$ and $0 < t_0 < T$, but $u(x, t) < M$ for all $x \in \Omega$, $0 < t < t_0$, then

$$\frac{\partial u}{\partial \nu}(x_0, t_0) > 0 \qquad (6.14)$$

provided that the derivative exists.

From the strong maximum principle, we can also obtain some comparison principles, which depend on the type of boundary condition. We will state these theorems for the heat equation ($L = \Delta$), though similar relations hold for more general parabolic equations.

Theorem 6.3. (Dirichlet Comparison) Suppose $\Omega$ is a bounded domain, $f \in C^1$, and $u, v \in C^{2;1}(U_T) \cap C(\overline{U}_T)$ satisfy

$$u_t - \Delta u - f(u) \le v_t - \Delta v - f(v) \quad \text{in } U_T$$
$$u(x, 0) \le v(x, 0) \quad \text{for } x \in \Omega$$
$$u(x, t) \le v(x, t) \quad \text{for } x \in \partial\Omega, \ 0 < t < T.$$

Then $u(x, t) \le v(x, t)$ for all $(x, t) \in U_T$.

Theorem 6.4. (Neumann Comparison) Suppose $\Omega$ is a bounded domain with $C^2$-boundary, $f : \mathbb{R} \to \mathbb{R}$ is $C^1$, and $u, v \in C^{2;1}(U_T) \cap C^1(\overline{U}_T)$ satisfy

$$u_t - \Delta u - f(u) \le v_t - \Delta v - f(v) \quad \text{in } U_T$$
$$u(x, 0) < v(x, 0) \quad \text{for } x \in \Omega$$
$$\frac{\partial u}{\partial \nu}(x, t) \le \frac{\partial v}{\partial \nu}(x, t) \quad \text{for } x \in \partial\Omega, \ 0 < t < T.$$

Then $u(x, t) < v(x, t)$ for all $(x, t) \in U_T$.
6.4 Energy Estimates

As with elliptic problems, we can use energy estimates (or the maximum principle) to demonstrate the uniqueness of solutions to parabolic equations. For parabolic equations, the energy has the form

$$E(t) = \frac{1}{2} \int_\Omega u^2\, dx \qquad (6.15)$$

where $u$ is the solution to the PDE. For energy arguments, we start by differentiating the energy with respect to time, finding that

$$E'(t) = \int_\Omega u\, u_t\, dx. \qquad (6.16)$$

We can now use the PDE that $u$ satisfies to substitute in for $u_t$, and then manipulate the energy (primarily through integration by parts) to demonstrate that we can bound the energy in the form $E'(t) \le \phi(t) E(t)$ where $\phi(t) \ge 0$. Then, using Gronwall's inequality, Theorem 1.4, we can get a bound for $E(t)$ involving $E(0)$, which is equal to zero, and uniqueness will follow (see the example in the solution set).
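For example, if $w$ solves $w_t = \Delta w$ in $\Omega$ with $w = 0$ on $\partial\Omega$ and $w(x, 0) = 0$, then

$$E'(t) = \int_\Omega w\, w_t\, dx = \int_\Omega w\, \Delta w\, dx = -\int_\Omega |\nabla w|^2\, dx \le 0,$$

so $E(t) \le E(0) = 0$ and $w \equiv 0$. Applying this to the difference of two solutions of the initial-boundary value problem gives uniqueness.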
6.5 Qual Problems
Eigenfunction Expansions
Spring 2006, #3
Winter 2003, #3(a)
Fall 2002, #6
Maximum/Comparison Principles
Winter 2003, #3(b)
Energy Estimates
Fall 2000, #2
Laplace Transform
Fall 2000, #6
7 Wave Equations

The standard homogeneous wave equation is given by

$$u_{tt} - c^2 \Delta u = 0 \qquad (7.1)$$

with appropriate initial and boundary conditions. We will now go through a few specific cases and their solution methods.
7.1 Solution Techniques for the Wave Equation

7.1.1 D'Alembert's Formula

We begin with the one-dimensional wave equation on all of space, given by

$$\begin{cases} u_{tt} - c^2 u_{xx} = 0 \\ u(x, 0) = g(x), \quad u_t(x, 0) = h(x). \end{cases} \qquad (7.2)$$

The solution to this problem is given by d'Alembert's formula,

$$u(x, t) = \frac{1}{2}\left( g(x - ct) + g(x + ct) \right) + \frac{1}{2c} \int_{x - ct}^{x + ct} h(\xi)\, d\xi. \qquad (7.3)$$
7.1.2 Weak Solutions and the Parallelogram Rule

We can also construct weak solutions to the one-dimensional wave equation (on either all of space or on a bounded domain) using the parallelogram rule. Consider a parallelogram with vertices $A$, $B$, $C$, $D$ whose sides are all segments of characteristics for the equation, where $A$ and $C$ are opposite vertices, as are $B$ and $D$. In this situation, solutions to the wave equation satisfy

$$u(A) + u(C) = u(B) + u(D). \qquad (7.4)$$

Now consider applying this technique to a bounded domain, $[0, L]$. By drawing the domain and the characteristics starting from the corners of the domain and reflecting them off of the boundaries, we can partition the domain into regions $R = R_1 \cup R_2 \cup R_3$. Using d'Alembert's formula, we can construct a solution in $R_1$. Now, using the parallelogram rule, we can construct a solution in $R_2$ and $R_3$ by choosing a parallelogram with exactly one vertex in the region where the solution is still unknown; the only unknown quantity is then $u(A) = u(B) + u(D) - u(C)$. Continuing in this manner, we can construct a weak solution in all of $R$.
7.1.3 The Nonhomogeneous Equation

Consider the nonhomogeneous problem with homogeneous initial conditions,

$$\begin{cases} u_{tt} - c^2 u_{xx} = f(x, t) \\ u(x, 0) = u_t(x, 0) = 0. \end{cases} \qquad (7.5)$$

We can solve this equation using Duhamel's principle, in which we look for a solution to the following problem

$$\begin{cases} U_{tt} - c^2 U_{xx} = 0 \\ U(x, 0, s) = 0 \\ U_t(x, 0, s) = f(x, s) \end{cases} \qquad (7.6)$$

which we need to solve for each $s > 0$. Duhamel's principle then states that if $U(x, t, s)$ is $C^2$ in $x$, $C^1$ in $t$, and $C^0$ in $s$ and solves the above PDE, then

$$u(x, t) = \int_0^t U(x, t - s, s)\, ds \qquad (7.7)$$

is a solution of the original nonhomogeneous problem. We can use d'Alembert's formula to solve for $U(x, t, s)$,

$$U(x, t, s) = \frac{1}{2c} \int_{x - ct}^{x + ct} f(\xi, s)\, d\xi \qquad (7.8)$$

giving us that

$$u(x, t) = \int_0^t \frac{1}{2c} \int_{x - c(t-s)}^{x + c(t-s)} f(\xi, s)\, d\xi\, ds. \qquad (7.9)$$

If the nonhomogeneous problem also has nonhomogeneous initial data, we then construct a solution of the form

$$u(x, t) = u_h(x, t) + u_p(x, t) \qquad (7.10)$$

where $u_h(x, t)$ is a solution of the homogeneous problem (from d'Alembert's formula) and $u_p(x, t)$ is obtained exactly as done above using Duhamel's principle.
7.1.4 Two Dimensions and Method of Descent

Here we consider the problem

$$\begin{cases} u_{tt} = c^2 \Delta u \\ u(x, 0) = g(x) \\ u_t(x, 0) = h(x) \end{cases} \qquad (7.11)$$

for $x \in \mathbb{R}^2$ and $t > 0$. Solutions to this problem are obtained by the method of descent and are given by

$$u(x_1, x_2, t) = \frac{1}{4\pi} \frac{\partial}{\partial t} \left( 2t \int_{\xi_1^2 + \xi_2^2 < 1} \frac{g(x_1 + ct\xi_1,\, x_2 + ct\xi_2)}{\sqrt{1 - \xi_1^2 - \xi_2^2}}\, d\xi_1\, d\xi_2 \right) + \frac{t}{4\pi} \left( 2 \int_{\xi_1^2 + \xi_2^2 < 1} \frac{h(x_1 + ct\xi_1,\, x_2 + ct\xi_2)}{\sqrt{1 - \xi_1^2 - \xi_2^2}}\, d\xi_1\, d\xi_2 \right). \qquad (7.12)$$

The domain of dependence for a point $(x, t)$ in two dimensions is the interior of the circle $\{x + ct\xi : |\xi| = 1\}$ in $\mathbb{R}^2$. The range of influence of a point $x_0 \in \mathbb{R}^2$ is the interior of the cone $\{(x, t) : |x - x_0| \le ct\}$.

7.1.5 Three Dimensions and Kirchhoff's Formula

We now look at solving (7.11) in $\mathbb{R}^3$ for $t > 0$. In this case, the solution is given by Kirchhoff's formula,

$$u(x, t) = \frac{1}{4\pi} \frac{\partial}{\partial t} \left( t \int_{|\xi|=1} g(x + ct\xi)\, dS_\xi \right) + \frac{t}{4\pi} \int_{|\xi|=1} h(x + ct\xi)\, dS_\xi. \qquad (7.13)$$

In this case the domain of dependence for a point $(x, t)$ is the surface of the sphere $\{x + ct\xi : |\xi| = 1\}$. The range of influence of a point $x_0 \in \mathbb{R}^3$ is the forward light cone, $\{(x, t) : |x - x_0| = ct\}$. Note that this means that in three dimensions, signals can be sharp.
7.2 Eigenfunction Expansions

In this section, we consider solving the wave equation using the eigenfunctions of the Laplacian. We will step through a specific example. Consider the equation

$$\begin{cases} u_{tt} = \Delta u & \text{for } x \in \Omega \text{ and } t > 0 \\ u(x, 0) = g(x), \quad u_t(x, 0) = h(x) & \text{for } x \in \Omega \\ u(x, t) = 0 & \text{for } x \in \partial\Omega \text{ and } t > 0. \end{cases} \qquad (7.14)$$

Now, if our set of eigenfunctions is $\{\phi_n(x)\}$, let us assume that $u$ has an expansion in eigenfunctions. Since the eigenfunctions only depend on space (and our solution must depend on both space and time), we let the coefficients of the expansion depend on time, $u(x, t) = \sum_{n=1}^{\infty} a_n(t)\, \phi_n(x)$. Plugging this into the PDE (and recalling that the eigenvalue problem is defined as $\Delta \phi_n = -\lambda_n \phi_n$), we find that the coefficients are given by

$$a_n''(t) + \lambda_n a_n(t) = 0 \qquad (7.15)$$

which has solution given by $a_n(t) = A_n \cos\!\left(\sqrt{\lambda_n}\, t\right) + B_n \sin\!\left(\sqrt{\lambda_n}\, t\right)$. Note that at $t = 0$ we can solve for $a_n(0)$ and $a_n'(0)$. Further, we also have eigenfunction expansions for the initial conditions $u(x, 0)$ and $u_t(x, 0)$. Skipping the details (as they are very straightforward to work through), combining these facts will lead us to the fact that

$$A_n = \int_\Omega g(x)\, \phi_n(x)\, dx \quad \text{and} \quad B_n = \frac{1}{\sqrt{\lambda_n}} \int_\Omega h(x)\, \phi_n(x)\, dx, \qquad (7.16)$$

which in turn gives us our final solution to the original PDE. We summarize eigenfunction expansions for the wave equation in the following way:

1. Find the eigenfunctions of the Laplacian on the domain, $\Omega$, satisfying the boundary conditions.
2. Expand the solution and initial conditions in terms of the eigenfunctions.
3. Plug the solution into the PDE and find an ODE for the coefficients.
4. Use the initial conditions as well as facts about the resulting ODE at time 0 to find equations for the coefficients in the eigenfunction expansion.
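For example, on $\Omega = (0, \pi)$ the Dirichlet eigenfunctions are $\sin(nx)$ with $\lambda_n = n^2$, so the solution is $u(x, t) = \sum_{n=1}^{\infty} \left( A_n \cos(nt) + B_n \sin(nt) \right) \sin(nx)$ with $A_n = \frac{2}{\pi} \int_0^\pi g(x) \sin(nx)\, dx$ and $B_n = \frac{2}{n\pi} \int_0^\pi h(x) \sin(nx)\, dx$ (the factors of $2/\pi$ appear because these eigenfunctions are not normalized).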
7.3 Energy Methods

The energy for equation (7.1) at time $t$ is given by

$$E(t) = \frac{1}{2} \int_{\mathbb{R}^n} \left( u_t^2 + c^2 |\nabla u|^2 \right) dx. \qquad (7.17)$$

Note that the constants in the energy may change with the equation. For example, if the original wave equation has a term $a u_{tt}$, then the energy will have the term $a u_t^2$, and the same goes for the constant weighting the Laplacian. Generally we are interested in how the energy changes in time (is it conserved, or non-increasing/decreasing?), so we generally consider

$$\frac{dE}{dt} = \int_{\mathbb{R}^n} \left( u_t u_{tt} + c^2 \sum_{i=1}^{n} u_{x_i} u_{x_i t} \right) dx. \qquad (7.18)$$
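For example, if $u$ solves (7.1) and decays suitably at infinity, then substituting $u_{tt} = c^2 \Delta u$ and integrating by parts in the second term of (7.18) gives

$$\frac{dE}{dt} = \int_{\mathbb{R}^n} u_t \left( u_{tt} - c^2 \Delta u \right) dx = 0,$$

so the energy is conserved: $E(t) = E(0)$.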
7.4 Hyperbolic Systems

Here we discuss systems of linear first-order partial differential equations,

$$u_t + \sum_{j=1}^{n} B_j\, u_{x_j} = f \quad \text{in } \mathbb{R}^n \times (0, \infty) \qquad (7.19)$$

with initial condition $u = g$ on $\mathbb{R}^n \times \{t = 0\}$. Here $u = (u_1, \dots, u_m)$ and $B_j : \mathbb{R}^n \times [0, \infty) \to M^{m \times m}$. Problems most likely to occur on the qualifying exam will have the form

$$u_t + A u_x = 0 \qquad (7.20)$$

where $A \in M^{m \times m}$ has constant entries.

Definition 7.1. (Hyperbolic System) The system of equations (7.20) is called hyperbolic if the matrix $A$ is diagonalizable.

Definition 7.2. (Strictly Hyperbolic System) The system of equations (7.20) is called strictly hyperbolic if all the eigenvalues of $A$ are distinct.

Constructing Solutions

Since the system is diagonalizable, we can write

$$u_t + P \Lambda P^{-1} u_x = f \qquad (7.21)$$

$$P^{-1} u_t + \Lambda P^{-1} u_x = P^{-1} f \qquad (7.22)$$

where $\Lambda$ is a diagonal matrix containing the eigenvalues of $A$. Now, letting $v = P^{-1} u$, we have a decoupled system of equations,

$$v_t + \Lambda v_x = g \qquad (7.23)$$

where $g = P^{-1} f$. We can now solve each of these equations separately via the method of characteristics. Our final solution is then given by $u = P v$. See the solution set (Spring 2001, #2) for an example of carrying out this calculation.

Use of the Fourier Transform

When the system has constant coefficients (which is true in our case), we can also construct solutions using the Fourier transform. Consider taking the Fourier transform of (7.20). Then we find

$$\hat{u}_t + 2\pi i \xi A\, \hat{u} = 0 \qquad (7.24)$$

which, for $u = g$ at $t = 0$, has solution

$$\hat{u}(\xi, t) = e^{-2\pi i \xi A t}\, \hat{g}(\xi). \qquad (7.25)$$

Transforming back to space, we find

$$u(x, t) = \int_{\mathbb{R}^n} e^{2\pi i x \cdot \xi}\, e^{-2\pi i \xi A t}\, \hat{g}(\xi)\, d\xi. \qquad (7.26)$$
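For example, consider $u_t + A u_x = 0$ with $A = \begin{pmatrix} 0 & c \\ c & 0 \end{pmatrix}$, $c > 0$. The eigenvalues are $\pm c$ with eigenvectors $(1, 1)^t$ and $(1, -1)^t$, so the system is strictly hyperbolic. In the variables $v = P^{-1} u$ the equations decouple into $\partial_t v_1 + c\, \partial_x v_1 = 0$ and $\partial_t v_2 - c\, \partial_x v_2 = 0$, whose solutions are the traveling waves $v_1(x, t) = v_1(x - ct, 0)$ and $v_2(x, t) = v_2(x + ct, 0)$; the solution of the original system is then $u = P v$.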
7.5 Qual Problems
Energy Methods
Fall 2004, #3
Winter 2003, #8(a,b)
Hyperbolic Systems
Fall 2002, #3
Spring 2001, #2
Eigenfunctions
Winter 2003, #8(c)
8 Calculus of Variations

By calculus of variations, we really simply mean computing the Euler-Lagrange equations for a given functional on a Banach space. So, we wish to consider the following problem: let $F : X \to \mathbb{R}$ be a continuous functional, with $X$ a Banach space. Now we wish to minimize this functional. More specifically, the minimum of this functional will satisfy some differential equation, which we wish to find. Since we are looking for a critical point of the functional, we start by computing its derivative at $u$ in the direction $v$, given by

$$F'(u)v = \lim_{\epsilon \to 0} \frac{F(u + \epsilon v) - F(u)}{\epsilon}. \qquad (8.1)$$

Further, $u_0$ is a critical point of $F$ if $F'(u_0) = 0$, or, equivalently, if $F'(u_0)v = 0$ for all $v \in X$. This is the Euler-Lagrange equation for $F$. For these problems we follow these simple steps:

1. Compute the derivative of the functional from the definition, (8.1).
2. Possibly integrate by parts to remove derivatives from $v$.
3. Factor out the function $v$.
4. Use the remaining conditions to enforce $F'(u)v = 0$, obtaining a differential equation (and possibly boundary conditions).

See the solutions set for details on the computation of Euler-Lagrange equations.
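For example, for the Dirichlet energy $F(u) = \int_\Omega \left( \tfrac{1}{2} |\nabla u|^2 - f u \right) dx$ over functions with fixed boundary values, the derivative is $F'(u)v = \int_\Omega \left( \nabla u \cdot \nabla v - f v \right) dx$. Integrating by parts (with $v = 0$ on $\partial\Omega$) gives $F'(u)v = -\int_\Omega (\Delta u + f)\, v\, dx$, so the Euler-Lagrange equation is $-\Delta u = f$.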
8.1 Optimization with Constraints

If we are given constraints on the optimization problem, things become a little more complicated. The constraint is generally given as a $C^1$ function, $G : X \to \mathbb{R}$, such that $G(x) = 0$. As with the functional that we are trying to minimize, this will most likely be given as an integral equation. Now the minimum must either satisfy $F'(x_0) = 0$ (that is, the constraint set contains the minimum of $F$) or

$$F'(x_0) = \lambda G'(x_0) \qquad (8.2)$$

for $\lambda \in \mathbb{R}$, where $\lambda$ is the Lagrange multiplier. The following theorem is very useful in solving constrained minimization problems.

Theorem 8.1. (Lagrange) Suppose $F$ and $G$ are $C^1$ functionals on a Banach space $X$, $G(x_0) = 0$, and $x_0$ is a local extremum of $F$ when constrained to the set $C = \{x \in X : G(x) = 0\}$. Then either

(i) $G'(x_0)v = 0$ for all $v \in X$, or
(ii) there exists $\lambda \in \mathbb{R}$ such that $F'(x_0)v = \lambda G'(x_0)v$ for all $v \in X$.
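For example, minimizing $F(u) = \int_\Omega |\nabla u|^2$ subject to the constraint $G(u) = \int_\Omega u^2 - 1 = 0$ gives $F'(u)v = 2\int_\Omega \nabla u \cdot \nabla v$ and $G'(u)v = 2\int_\Omega u v$, so condition (ii) reads $\int_\Omega \nabla u \cdot \nabla v = \lambda \int_\Omega u v$ for all $v$, i.e. $u$ is a (weak) eigenfunction of the Laplacian with eigenvalue $\lambda$. This is exactly the variational characterization (5.30) of the principal eigenvalue.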
8.2 Qual Problems
Winter 2004, #2
Fall 2000, #4
Constrained Minimization
Spring 2006, #7
9 References

The material for these notes was taken primarily from the sources listed below.

Ordinary Differential Equations
F. Brauer and J. A. Nohel, The Qualitative Theory of Ordinary Differential Equations: An Introduction, Dover Publications, 1969.

Boundary Value Problems
E. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, 1955.

Partial Differential Equations
R. C. McOwen, Partial Differential Equations: Methods and Applications, Prentice Hall, second ed., 2003.
L. C. Evans, Partial Differential Equations, American Mathematical Society, 1998.