
DUBLIN INSTITUTE OF TECHNOLOGY, KEVIN STREET

SCHOOL OF MATHEMATICAL SCIENCES


A COURSE IN
Numerical Analysis
John S Butler butler@maths.tcd.ie
www.maths.tcd.ie/butler
Contents

1 Numerical Solutions to Partial Differential Equations
1.1 Introduction
1.2 Classification
1.3 Difference Operators
1.4 Parabolic equations
1.4.1 An explicit method for the heat eqn
1.5 Crank-Nicolson Implicit method
1.6 The Theta Method
1.7 Derivative Boundary Conditions
1.8 Local Truncation Error and Consistency
1.8.1 Local Truncation
1.8.2 Consistency and Compatibility
1.8.3 Convergence and Stability
1.8.4 Analytical treatment of convergence
1.8.5 Stability by the Fourier Series method (von Neumann's method)
1.9 Elliptic PDEs
1.9.1 The five point scheme
1.10 Consistency and Convergence
1.11 Hyperbolic equations
1.11.1 The Wave Equation
1.11.2 Finite Difference Method for Hyperbolic equations
1.12 Analysis of the Finite Difference Methods
1.12.1 Consistency
1.12.2 Stability
1.12.3 CFL Condition
1.13 Variational Methods
1.14 Ritz-Galerkin Method
1.15 Finite Element
1.15.1 Error
Chapter 1
Numerical Solutions to Partial Differential Equations
1.1 Introduction
Partial Differential Equations (PDEs) occur frequently in mathematics, the natural sciences and engineering.
PDEs are problems involving rates of change of functions of several variables.
The following involve two independent variables:

∇²u = ∂²u/∂x² + ∂²u/∂y² = f(x, y)   (Poisson Eqn)

∂u/∂t + v ∂u/∂x = 0   (Advection Eqn)

∂u/∂t − D ∂²u/∂x² = 0   (Heat Eqn)

∂²u/∂t² − c² ∂²u/∂x² = 0   (Wave Equation)

Here v, D, c are real positive constants. In these cases x, y are the space coordinates, and t, x are often viewed as time and space coordinates, respectively.
These are only examples and do not cover all cases. In real occurrences PDEs usually have 3 or 4 variables.
1.2 Classification
PDEs in two independent variables x and y have the form

Φ(x, y, u, ∂u/∂x, ∂u/∂y, ∂²u/∂x², ...) = 0,

where the symbol Φ stands for some functional relationship.
As we saw with BVPs this is too general a case, so we must define new classes of the general PDE.
Definition The order of a PDE is the order of the highest derivative that appears, i.e. Poisson is 2nd order and the Advection eqn is 1st order.
Most of the mathematical theory of PDEs concerns linear equations of first or second order.
After order and linearity (linear or non-linear), the most important classification scheme for PDEs involves geometry.
Introducing the ideas with an example:
Example

α(t, x) ∂u/∂t + β(t, x) ∂u/∂x = γ(t, x)   (1.1)

A solution u(t, x) to this PDE defines a surface {t, x, u(t, x)} lying over some region of the (t, x)-plane.
Consider any smooth path in the (t, x)-plane lying below the solution surface {t, x, u(t, x)}. Such a path has a parameterization (t(s), x(s)), where the parameter s measures progress along the path.
What is the rate of change du/ds of the solution as we travel along the path (t(s), x(s))? The chain rule provides the answer:

(dt/ds) ∂u/∂t + (dx/ds) ∂u/∂x = du/ds   (1.2)

Equation (1.2) holds for an arbitrary smooth path in the (t, x)-plane. Restricting attention to a specific family of paths leads to a useful observation: when

dt/ds = α(t, x)  and  dx/ds = β(t, x),   (1.3)

the simultaneous validity of (1.1) and (1.2) requires that

du/ds = γ(t, x).   (1.4)

Equations (1.3) define a family of curves (t(s), x(s)), called characteristic curves, in the (t, x)-plane.
Equation (1.4) is an ODE, called the characteristic equation, that the solution must satisfy along the characteristic curves.
Thus the original PDE collapses to an ODE along the characteristic curves. Characteristic curves are paths along which information about the solution to the PDE propagates from points where the initial values or boundary values are known.
Consider a second order PDE having the form

a(x, y) ∂²u/∂x² + b(x, y) ∂²u/∂x∂y + c(x, y) ∂²u/∂y² = d(x, y, u, ∂u/∂x, ∂u/∂y).   (1.5)

Along an arbitrary smooth curve (x(s), y(s)) in the (x, y)-plane, the gradient (∂u/∂x, ∂u/∂y) of the solution varies according to the chain rule:

(dx/ds) ∂²u/∂x² + (dy/ds) ∂²u/∂x∂y = d/ds (∂u/∂x)

(dx/ds) ∂²u/∂x∂y + (dy/ds) ∂²u/∂y² = d/ds (∂u/∂y)

If the solution u(x, y) is continuously differentiable then these relationships, together with the original PDE, yield the following system:
[ a       b       c     ] [ ∂²u/∂x²   ]   [ d            ]
[ dx/ds   dy/ds   0     ] [ ∂²u/∂x∂y  ] = [ d/ds(∂u/∂x)  ]   (1.6)
[ 0       dx/ds   dy/ds ] [ ∂²u/∂y²   ]   [ d/ds(∂u/∂y)  ]
By analogy with the first order case, we determine the characteristic curves by asking where the PDE is redundant with the chain rule. This occurs when the determinant of the matrix in (1.6) vanishes, that is when

a (dy/ds)² − b (dy/ds)(dx/ds) + c (dx/ds)² = 0.

Eliminating the parameter s reduces this equation to the equivalent condition

a (dy/dx)² − b (dy/dx) + c = 0.

Formally solving this quadratic for dy/dx, we find

dy/dx = ( b ± √(b² − 4ac) ) / (2a).

This pair of ODEs determines the characteristic curves. From this equation we divide PDEs into 3 classes, each defined with respect to b² − 4ac.
1. HYPERBOLIC: b² − 4ac > 0. This gives two families of real characteristic curves.
2. PARABOLIC: b² − 4ac = 0. This gives exactly one family of real characteristic curves.
3. ELLIPTIC: b² − 4ac < 0. This gives no real characteristic curves.
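As a quick illustration (my own sketch, not part of the original notes), the classification can be coded as a direct test on the discriminant b² − 4ac of equation (1.5); the coefficient values in the calls below correspond to the worked examples that follow.

```python
# Illustrative sketch: classify a*u_xx + b*u_xy + c*u_yy = d by the sign of b^2 - 4ac.
def classify(a, b, c):
    disc = b**2 - 4*a*c
    if disc > 0:
        return "hyperbolic"   # two real families of characteristics
    elif disc == 0:
        return "parabolic"    # one real family of characteristics
    else:
        return "elliptic"     # no real characteristics

print(classify(1.0, 0.0, -4.0))   # wave equation with c^2 = 4 -> hyperbolic
print(classify(1.0, 0.0, 1.0))    # Laplace equation -> elliptic
print(classify(1.0, 0.0, 0.0))    # heat equation -> parabolic
```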
Example The wave equation

c² ∂²u/∂x² − ∂²u/∂t² = 0.

Writing this as ∂²u/∂t² − c² ∂²u/∂x² = 0 (so that a = 1, b = 0, c-coefficient = −c², with t playing the role of the first variable) and equating with our formula for the characteristics, we have

dx/dt = ( 0 ± √(0 + 4c²) ) / 2 = ±c.

This implies that the characteristics are x + ct = const and x − ct = const, which means that effects travel along the characteristics.
Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0.

From this we have b² − 4ac = −4(1)(1) < 0, which implies that it is elliptic.
This means that information at one point affects all other points.
Heat equation

∂²u/∂x² − ∂u/∂t = 0.

From this we have b² − 4ac = 0, which implies that the equation is parabolic; thus we have

dt/dx = 0,

so the single family of characteristics is t = const.
We can also note that hyperbolic and parabolic equations are initial value (initial-boundary value) problems, while elliptic problems are boundary value problems.
1.3 Difference Operators
Throughout this chapter we will use U to denote the exact solution and w to denote the numerical (approximate) solution.
1-D difference operators:

D⁺U_i = (U_{i+1} − U_i)/h_{i+1}   (Forward)
D⁻U_i = (U_i − U_{i−1})/h_i   (Backward)
D⁰U_i = (U_{i+1} − U_{i−1})/(x_{i+1} − x_{i−1})   (Centred)

For 2-D difference schemes the situation is similar: when dealing with the x-direction we hold the y-direction constant, and when dealing with the y-direction we hold the x-direction constant.

D⁺_x U_{i,j} = (U_{i+1,j} − U_{i,j})/(x_{i+1} − x_i)   (Forward in the x-direction)
D⁺_y U_{i,j} = (U_{i,j+1} − U_{i,j})/(y_{j+1} − y_j)   (Forward in the y-direction)
D⁻_x U_{i,j} = (U_{i,j} − U_{i−1,j})/(x_i − x_{i−1})   (Backward in the x-direction)
D⁻_y U_{i,j} = (U_{i,j} − U_{i,j−1})/(y_j − y_{j−1})   (Backward in the y-direction)
D⁰_x U_{i,j} = (U_{i+1,j} − U_{i−1,j})/(x_{i+1} − x_{i−1})   (Centred in the x-direction)
D⁰_y U_{i,j} = (U_{i,j+1} − U_{i,j−1})/(y_{j+1} − y_{j−1})   (Centred in the y-direction)

Second derivatives:

δ²_x U_{i,j} = 2/(x_{i+1} − x_{i−1}) [ (U_{i+1,j} − U_{i,j})/(x_{i+1} − x_i) − (U_{i,j} − U_{i−1,j})/(x_i − x_{i−1}) ]   (Centred in the x-direction)

δ²_y U_{i,j} = 2/(y_{j+1} − y_{j−1}) [ (U_{i,j+1} − U_{i,j})/(y_{j+1} − y_j) − (U_{i,j} − U_{i,j−1})/(y_j − y_{j−1}) ]   (Centred in the y-direction)
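As a small illustration (my own sketch, not from the notes), the 1-D operators on a uniform grid with spacing h can be written as array operations; the grid and test function below are assumptions chosen only for the demonstration.

```python
import numpy as np

def forward_diff(U, h):    # D+ U_i = (U_{i+1} - U_i)/h
    return (U[1:] - U[:-1]) / h

def backward_diff(U, h):   # D- U_i = (U_i - U_{i-1})/h (same values, indexed at the right node)
    return (U[1:] - U[:-1]) / h

def centred_diff(U, h):    # D0 U_i = (U_{i+1} - U_{i-1})/(2h)
    return (U[2:] - U[:-2]) / (2.0 * h)

def second_diff(U, h):     # delta^2 U_i = (U_{i+1} - 2U_i + U_{i-1})/h^2
    return (U[2:] - 2.0 * U[1:-1] + U[:-2]) / h**2

x = np.linspace(0.0, 1.0, 11)
U = np.sin(np.pi * x)
print(second_diff(U, x[1] - x[0])[:3])   # approximates -pi^2 sin(pi x) at interior nodes
```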
1.4 Parabolic equations
We will look at the heat equation as our sample parabolic equation:

∂U/∂T = K ∂²U/∂X²   on Ω,

with

U = g(x, y)   on the boundary.

This can be transformed, without loss of generality, by a non-dimensional transformation to

∂U/∂t = ∂²U/∂x².   (1.7)
1.4.1 An explicit method for the heat eqn
The difference equation of the differential equation (1.7) is

(w_{i,j+1} − w_{i,j})/(t_{j+1} − t_j) = (w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h².   (1.8)

When approaching this we have divided up the area into two uniform meshes, one in the x-direction and the other in the t-direction. We define t_j = jk, where k is the step size in the time direction.
We define x_i = ih, where h is the step size in the space direction.
w_{i,j} denotes the numerical approximation of U at (x_i, t_j).
Rearranging the equation we get

w_{i,j+1} = r w_{i−1,j} + (1 − 2r) w_{i,j} + r w_{i+1,j},   (1.9)

where r = k/h².
This gives a formula for the unknown term w_{i,j+1} at the (i, j+1) mesh point in terms of known values along the jth time row.
Hence we can calculate the unknown pivotal values of w along the first row t = k, or j = 1, in terms of the known boundary and initial conditions.
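A minimal sketch of this explicit marching scheme (my own, not from the notes), assuming the grid sizes of Case 1 of the example below (h = 1/10, k = 1/1000, so r = 0.1) and the triangular initial condition with both ends held at zero:

```python
import numpy as np

h, k = 0.1, 0.001                  # assumed grid sizes; r = k/h^2 = 0.1, within the stable range
r = k / h**2
x = np.arange(0.0, 1.0 + h/2, h)
w = np.where(x <= 0.5, 2*x, 2*(1 - x))     # initial condition of the example

for j in range(1, 11):             # march 10 time levels of scheme (1.9)
    w_new = w.copy()
    w_new[1:-1] = r*w[:-2] + (1 - 2*r)*w[1:-1] + r*w[2:]
    w_new[0] = w_new[-1] = 0.0     # ends kept in ice
    w = w_new
    if j == 1:
        print("w(0.5, k) =", round(w[5], 4))   # 0.96, matching the first time row of the table
```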
Example In this case we look at a rod of unit length with each end in ice.
The rod is heat insulated along its length, so that temperature changes occur through heat conduction along its length and heat transfer at its ends; w denotes temperature.
Simple case
Given that the ends of the rod are kept in contact with ice and the initial temperature distribution in non-dimensional form is

1. U = 2x for 0 ≤ x ≤ 1/2,
2. U = 2(1 − x) for 1/2 ≤ x ≤ 1,

in other words we are seeking a numerical solution of

∂U/∂t = ∂²U/∂x²

which satisfies

1. U = 0 at x = 0 for all t > 0 (the boundary condition);
2. U = 2x for 0 ≤ x ≤ 1/2 at t = 0, and U = 2(1 − x) for 1/2 ≤ x ≤ 1 at t = 0 (the initial condition).

The problem is symmetric with respect to x = 0.5, so we need the solution only for 0 ≤ x ≤ 1/2.
Case 1 Let h = 1/10 and k = 1/1000, so that r = k/h² = 1/10. Difference equation (1.9) becomes

w_{i,j+1} = (1/10)(w_{i−1,j} + 8 w_{i,j} + w_{i+1,j}).

To solve for w_{5,1} we have

w_{5,1} = (1/10)(w_{4,0} + 8 w_{5,0} + w_{6,0}) = (1/10)(0.8 + 8 × 1 + 0.8) = 0.96.

j\x   0     0.1   0.2   0.3   0.4   0.5    0.6
0     0     0.2   0.4   0.6   0.8   1.0    0.8
1     0     0.2   0.4   0.6   0.8   0.96   0.8

The analytical solution of the PDE satisfying these conditions is

U = (8/π²) Σ_{n=1}^∞ (1/n²) sin(nπ/2) sin(nπx) e^{−n²π²t}.

Comparing this solution with the difference solution, the latter is reasonably accurate.
Case 2 Let h = 1/10 and k = 1/200, so that r = k/h² = 1/2. Difference equation (1.9) becomes

w_{i,j+1} = (1/2)(w_{i−1,j} + w_{i+1,j}).

This method also gives an acceptable approximation to the solution of the PDE.
Case 3 Let h = 1/10 and k = 1/100, so that r = k/h² = 1. Difference equation (1.9) becomes

w_{i,j+1} = w_{i−1,j} − w_{i,j} + w_{i+1,j}.

Considered as a solution to the PDE this is meaningless, although it is the correct solution of the difference equation with respect to the initial conditions and the boundary conditions.
This will be discussed later.
1.5 Crank-Nicolson Implicit method
Since the explicit method requires that k ≤ (1/2)h², a new method was needed which would work for all finite values of r.
Crank and Nicolson considered the partial differential equation as being satisfied at the midpoint {ih, (j + 1/2)k} and replaced ∂²U/∂x² by the mean of its finite difference approximations at the jth and (j+1)th time levels. In other words they approximated the equation

(∂U/∂t)_{i,j+1/2} = (∂²U/∂x²)_{i,j+1/2}

by

(w_{i,j+1} − w_{i,j})/k = (1/2)[ (w_{i+1,j+1} − 2w_{i,j+1} + w_{i−1,j+1})/h² + (w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h² ],

giving

−r w_{i−1,j+1} + (2 + 2r) w_{i,j+1} − r w_{i+1,j+1} = r w_{i−1,j} + (2 − 2r) w_{i,j} + r w_{i+1,j}   (1.10)

with r = k/h².
In general the LHS contains 3 unknowns and the RHS 3 known pivotal values. Figure: see notes.
If there are N internal mesh points along each row, then for j = 0 and i = 1, ..., N this gives N simultaneous equations for the N unknown pivotal values along the first row.
Example

∂U/∂t = ∂²U/∂x²,  0 < x < 1,  t > 0,

where

U = 0 at x = 0 and x = 1 for t ≥ 0;
U = 2x for 0 ≤ x ≤ 1/2 at t = 0;
U = 2(1 − x) for 1/2 ≤ x ≤ 1 at t = 0.

Choose h = 1/10 and k = 1/100, so r = 1; while all finite values of r are valid, a large value of h will lead to an inaccurate solution.
(1.10) becomes

−w_{i−1,j+1} + 4w_{i,j+1} − w_{i+1,j+1} = w_{i−1,j} + w_{i+1,j}.

Due to symmetry U_{4,j} = U_{6,j}, so we only need consider w_{0,j}, ..., w_{5,j}; for notational simplicity we drop the j when dealing with a specific case.
Considering j = 0:

0 + 4w_1 − w_2 = 0 + 0.4
−w_1 + 4w_2 − w_3 = 0.2 + 0.6
−w_2 + 4w_3 − w_4 = 0.4 + 0.8
−w_3 + 4w_4 − w_5 = 0.6 + 1.0
−2w_4 + 4w_5 = 0.8 + 0.8

Using the Thomas algorithm we get w_1 = 0.1989, w_2 = 0.3956, w_3 = 0.5834, w_4 = 0.7381 and w_5 = 0.7691, and so forth.
This yields a good numerical solution.
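A sketch of this step (my own, not from the notes): the tridiagonal system for j = 0 above can be solved with a simple Thomas-algorithm routine, reproducing the quoted values.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(d)
    b_, c_, d_ = b.astype(float).copy(), c.astype(float).copy(), d.astype(float).copy()
    for i in range(1, n):
        m = a[i] / b_[i-1]
        b_[i] -= m * c_[i-1]
        d_[i] -= m * d_[i-1]
    x = np.zeros(n)
    x[-1] = d_[-1] / b_[-1]
    for i in range(n-2, -1, -1):
        x[i] = (d_[i] - c_[i]*x[i+1]) / b_[i]
    return x

# System for j = 0 from the example (unknowns w_1, ..., w_5):
sub  = np.array([ 0.0, -1.0, -1.0, -1.0, -2.0])
main = np.array([ 4.0,  4.0,  4.0,  4.0,  4.0])
sup  = np.array([-1.0, -1.0, -1.0, -1.0,  0.0])
rhs  = np.array([ 0.4,  0.8,  1.2,  1.6,  1.6])
print(thomas(sub, main, sup, rhs).round(4))   # approx [0.1989 0.3956 0.5834 0.7381 0.7691]
```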
1.6 The Theta Method
The Theta Method is a generalization of the Crank-Nicolson method and expresses our partial differential equation as

(w_{i,j+1} − w_{i,j})/k = θ (w_{i+1,j+1} − 2w_{i,j+1} + w_{i−1,j+1})/h² + (1 − θ)(w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h².   (1.11)

When θ = 0 we get the explicit scheme, when θ = 1/2 we get the Crank-Nicolson scheme, and when θ = 1 we get the fully implicit backward finite difference method.
The equations are unconditionally valid for 1/2 ≤ θ ≤ 1. For 0 ≤ θ < 1/2 we must have

r ≤ 1 / (2(1 − 2θ)).
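One step of (1.11) can be written as a single routine covering all three special cases; the following is my own sketch (not from the notes), assuming zero boundary values and a uniform grid, and using a dense solve purely for brevity.

```python
import numpy as np

def theta_step(w, r, theta):
    """One theta-method step (1.11); theta = 0, 0.5, 1 give explicit, Crank-Nicolson, implicit."""
    n = len(w) - 2                               # interior unknowns
    A = np.zeros((n, n)); B = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2*r*theta
        B[i, i] = 1 - 2*r*(1 - theta)
        if i > 0:
            A[i, i-1] = -r*theta;     B[i, i-1] = r*(1 - theta)
        if i < n - 1:
            A[i, i+1] = -r*theta;     B[i, i+1] = r*(1 - theta)
    w_new = w.copy()
    w_new[1:-1] = np.linalg.solve(A, B @ w[1:-1])
    return w_new

x = np.linspace(0, 1, 11)
w = np.where(x <= 0.5, 2*x, 2*(1 - x))
print(theta_step(w, r=1.0, theta=0.5)[1:6].round(4))   # matches w_1..w_5 of the Crank-Nicolson example
```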
1.7 Derivative Boundary Conditions
Boundary conditions expressed in terms of derivatives occur frequently. Example:

∂U/∂x = H(U − v_0)   at x = 0,

where H is a positive constant and v_0 is the surrounding temperature.
How do we deal with this type of boundary condition?
1. By using a forward difference for ∂U/∂x, we have

(w_{1,j} − w_{0,j})/h_x = H(w_{0,j} − v_0),

where h_x = x_1 − x_0. This gives us one extra equation for the temperature w_{i,j}.
2. If we wish to represent ∂U/∂x more accurately at x = 0, we use a central difference formula. It is necessary to introduce a fictitious temperature w_{−1,j} at the external mesh points (−h_x, jk). The temperature w_{−1,j} is unknown and needs another equation. This is obtained by assuming that the heat conduction equation is also satisfied at the end points. The unknown w_{−1,j} can then be eliminated between these equations.
Example
Solve the equation

∂U/∂t = ∂²U/∂x²

satisfying the initial condition

U = 1 for 0 ≤ x ≤ 1 when t = 0

and the boundary conditions

∂U/∂x = U at x = 0 for all t,
∂U/∂x = −U at x = 1 for all t.

Case 1 Using the forward difference approximation for the derivative boundary condition and the explicit method to approximate the PDE.
Our difference equation is

(w_{i,j+1} − w_{i,j})/k = (w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h_x²,

i.e.

w_{i,j+1} = w_{i,j} + r(w_{i−1,j} − 2w_{i,j} + w_{i+1,j}),   (1.12)

where r = k/h_x².
At i = 1, (1.12) is

w_{1,j+1} = w_{1,j} + r(w_{0,j} − 2w_{1,j} + w_{2,j}).   (1.13)

The boundary condition at x = 0 is ∂U/∂x = U; in terms of a forward difference this is

(w_{1,j} − w_{0,j})/h_x = w_{0,j}.

Rearranging,

w_{0,j} = w_{1,j}/(1 + h_x).   (1.14)

Using (1.14) and (1.13) to eliminate w_{0,j} we get

w_{1,j+1} = [ 1 − 2r + r/(1 + h_x) ] w_{1,j} + r w_{2,j}.

It will be proven later that the scheme is valid for 0 ≤ r ≤ 1/2. Choose h_x = 1/10 and k = 1/400, such that r = 1/4.
The equations become

w_{1,j+1} = (8/11) w_{1,j} + (1/4) w_{2,j}
w_{0,j+1} = (10/11) w_{1,j+1}
w_{i,j+1} = (1/4)(w_{i−1,j} + 2w_{i,j} + w_{i+1,j}),  i = 2, 3, 4,

and

w_{5,j+1} = (1/4)(2w_{4,j} + 2w_{5,j})  by symmetry.

With the initial condition U = 1, at j = 0 we have

w_{1,1} = (8/11) × 1 + (1/4) × 1 = 0.9773
w_{0,1} = (10/11) × 0.9773 = 0.8884
w_{i,1} = (1/4)(1 + 2 + 1) = 1,  i = 2, 3, 4,

and

w_{5,1} = (1/4)(2 + 2) = 1  by symmetry.
Case 2 Using the central difference approximation for the derivative boundary condition and the explicit method to approximate the PDE.
Our difference equation is as in (1.12). At i = 0 we have

w_{0,j+1} = w_{0,j} + r(w_{1,j} − 2w_{0,j} + w_{−1,j}).   (1.15)

The boundary condition at x = 0, in terms of central differences, can be written as

(w_{1,j} − w_{−1,j})/(2h_x) = w_{0,j}.   (1.16)

Using (1.16) and (1.15) to eliminate the fictitious term w_{−1,j} we get

w_{0,j+1} = w_{0,j} + 2r( w_{1,j} − (1 + h_x) w_{0,j} ).

As before let h_x = 0.1. At x = 1 the difference equation becomes

w_{10,j+1} = w_{10,j} + r(w_{9,j} − 2w_{10,j} + w_{11,j})

and the boundary condition is

(w_{11,j} − w_{9,j})/(2h_x) = −w_{10,j}.

Eliminating the fictitious term w_{11,j} we have

w_{10,j+1} = w_{10,j} + 2r( w_{9,j} − (1 + h_x) w_{10,j} ).

Choosing r = 1/4 we have

w_{0,j+1} = (1/2)(0.9 w_{0,j} + w_{1,j})
w_{i,j+1} = (1/4)(w_{i−1,j} + 2w_{i,j} + w_{i+1,j}),  i = 1, 2, 3, 4,

and

w_{5,j+1} = (1/4)(2w_{4,j} + 2w_{5,j})  by symmetry.

With the initial condition U = 1 and k = 0.0025, at j = 0 we have

w_{0,1} = (1/2)(0.9 + 1) = 0.95
w_{i,1} = (1/4)(1 + 2 + 1) = 1,  i = 1, 2, 3, 4,

and

w_{5,1} = (1/4)(2 + 2) = 1  by symmetry.

While this method yields more accurate results, both are acceptable.
Case 3 Using the central difference approximation for the derivative boundary condition and the Crank-Nicolson method to approximate the PDE.
The difference equation is

(w_{i,j+1} − w_{i,j})/k = (1/2)[ (w_{i+1,j+1} − 2w_{i,j+1} + w_{i−1,j+1})/h² + (w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h² ],

giving

−r w_{i−1,j+1} + (2 + 2r) w_{i,j+1} − r w_{i+1,j+1} = r w_{i−1,j} + (2 − 2r) w_{i,j} + r w_{i+1,j}   (1.17)

with r = k/h².
The boundary condition at x = 0, in terms of central differences, can be written as

(w_{1,j} − w_{−1,j})/(2h_x) = w_{0,j}.

Rearranging we have

w_{−1,j} = w_{1,j} − 2h_x w_{0,j}   (1.18)

and

w_{−1,j+1} = w_{1,j+1} − 2h_x w_{0,j+1}.   (1.19)

Let j = 0 and i = 0; the difference equation becomes

−r w_{−1,1} + (2 + 2r) w_{0,1} − r w_{1,1} = r w_{−1,0} + (2 − 2r) w_{0,0} + r w_{1,0}.   (1.20)

Using (1.18), (1.19) and (1.20) we can eliminate the fictitious terms w_{−1,j} and w_{−1,j+1}. Also, due to symmetry around x = 1/2, we have w_{4,j} = w_{6,j}. Choosing h_x = 0.1, k = 0.01 and r = 1, our equations become

2.1 w_{0,j+1} − w_{1,j+1} = −0.1 w_{0,j} + w_{1,j}
−w_{i−1,j+1} + 4 w_{i,j+1} − w_{i+1,j+1} = w_{i−1,j} + w_{i+1,j},  i = 1, 2, 3, 4
−w_{4,j+1} + 2 w_{5,j+1} = w_{4,j}

For the time step j = 0 we have

2.1 w_0 − w_1 = 0.9
−w_{i−1} + 4 w_i − w_{i+1} = 2,  i = 1, 2, 3, 4
−w_4 + 2 w_5 = 1

This method yields very good results.
1.8 Local Truncation Error and Consistency
1.8.1 Local Truncation
Let F_{i,j}(w) = 0 represent the difference equation approximating the PDE at the (i, j)th point, with exact solution w.
If w is replaced by U at the mesh points of the difference equation, where U is the exact solution of the PDE, the value of F_{i,j}(U) is the local truncation error T_{i,j} at the (i, j) mesh point.
Using Taylor expansions it is easy to express T_{i,j} in terms of h_x and k and partial derivatives of U at (ih_x, jk).
Although U and its derivatives are generally unknown, this is worthwhile because it provides a method for comparing the local accuracies of different difference schemes approximating the PDE.
Example The local truncation error of the classical explicit difference approach to

∂U/∂t − ∂²U/∂x² = 0,

with

F_{i,j}(w) = (w_{i,j+1} − w_{i,j})/k − (w_{i+1,j} − 2w_{i,j} + w_{i−1,j})/h_x² = 0,

is

T_{i,j} = F_{i,j}(U) = (U_{i,j+1} − U_{i,j})/k − (U_{i+1,j} − 2U_{i,j} + U_{i−1,j})/h_x².

By Taylor's expansions we have

U_{i+1,j} = U((i+1)h_x, jk) = U(x_i + h_x, t_j)
  = U_{i,j} + h_x (∂U/∂x)_{i,j} + (h_x²/2)(∂²U/∂x²)_{i,j} + (h_x³/6)(∂³U/∂x³)_{i,j} + ...

U_{i−1,j} = U((i−1)h_x, jk) = U(x_i − h_x, t_j)
  = U_{i,j} − h_x (∂U/∂x)_{i,j} + (h_x²/2)(∂²U/∂x²)_{i,j} − (h_x³/6)(∂³U/∂x³)_{i,j} + ...

U_{i,j+1} = U(ih_x, (j+1)k) = U(x_i, t_j + k)
  = U_{i,j} + k (∂U/∂t)_{i,j} + (k²/2)(∂²U/∂t²)_{i,j} + (k³/6)(∂³U/∂t³)_{i,j} + ...

Substitution into the expression for T_{i,j} then gives

T_{i,j} = (∂U/∂t − ∂²U/∂x²)_{i,j} + (k/2)(∂²U/∂t²)_{i,j} − (h_x²/12)(∂⁴U/∂x⁴)_{i,j} + (k²/6)(∂³U/∂t³)_{i,j} − (h_x⁴/360)(∂⁶U/∂x⁶)_{i,j} + ...

But U is the solution to the differential equation, so

(∂U/∂t − ∂²U/∂x²)_{i,j} = 0,

and the principal part of the local truncation error is

(k/2)(∂²U/∂t²)_{i,j} − (h_x²/12)(∂⁴U/∂x⁴)_{i,j}.

Hence

T_{i,j} = O(k) + O(h_x²).
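A quick numerical check of this order estimate (my own sketch, not from the notes): apply the explicit difference operator F_{i,j} to a known exact solution of the heat equation and watch the residual shrink like O(k) + O(h_x²); the sample point and exact solution are assumptions made only for the test.

```python
import numpy as np

U = lambda x, t: np.exp(-np.pi**2 * t) * np.sin(np.pi * x)   # exact solution of u_t = u_xx

def F(U, x, t, h, k):
    """Explicit difference operator applied to a smooth function at (x, t)."""
    return ((U(x, t+k) - U(x, t)) / k
            - (U(x+h, t) - 2*U(x, t) + U(x-h, t)) / h**2)

x0, t0 = 0.3, 0.1
for h in (0.1, 0.05, 0.025):
    k = h**2                          # refine k with h^2 so both error terms shrink together
    print(h, abs(F(U, x0, t0, h, k)))   # truncation error roughly quarters as h halves
```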

1.8.2 Consistency and Compatibility
It is sometimes possible to approximate a parabolic or hyperbolic equation with a finite difference scheme that is stable but which does not converge to the solution of the differential equation as the mesh lengths tend to zero. Such a scheme is called inconsistent or incompatible.
This is useful when considering the theorem which states that if a linear finite difference equation is consistent with a properly posed linear IVP, then stability guarantees convergence of w to U as the mesh lengths tend to zero.
Definition Let L(U) = 0 represent the PDE in the independent variables x and t, with exact solution U.
Let F(w) = 0 represent the approximating finite difference equation, with exact solution w.
Let v be a continuous function of x and t with sufficiently many derivatives to enable L(v) to be evaluated at the point (ih_x, jk). Then the truncation error T_{i,j}(v) at (ih_x, jk) is defined by

T_{i,j}(v) = F_{i,j}(v) − L(v_{i,j}).

If T_{i,j}(v) → 0 as h → 0, k → 0, the difference equation is said to be consistent or compatible with the PDE.
Looking back at the previous example, it follows that the classical explicit approximation to ∂U/∂t = ∂²U/∂x² is consistent with the differential equation.
Example The equation

∂U/∂t − ∂²U/∂x² = 0

is approximated by the difference equation

(w_{i,j+1} − w_{i,j−1})/(2k) − [ w_{i+1,j} − 2( θ w_{i,j+1} + (1 − θ) w_{i,j−1} ) + w_{i−1,j} ]/h_x² = 0,

which has a truncation error of

T_{i,j} = (∂U/∂t − ∂²U/∂x²)_{i,j} + [ (k²/6) ∂³U/∂t³ − (h_x²/12) ∂⁴U/∂x⁴ + (2θ − 1)(2k/h_x²) ∂U/∂t + (k²/h_x²) ∂²U/∂t² ]_{i,j} + O(k³/h_x², h_x⁴, k⁴).

1. Case (i): k = rh_x. As h_x → 0,

T_{i,j} = F_{i,j}(U) → [ (k²/6) ∂³U/∂t³ − (h_x²/12) ∂⁴U/∂x⁴ + (2θ − 1)(2k/h_x²) ∂U/∂t + (k²/h_x²) ∂²U/∂t² ]_{i,j}.

When θ ≠ 1/2 the third term tends to infinity.
When θ = 1/2 the limiting value of T_{i,j} is r² (∂²U/∂t²)_{i,j}, so in the limit the scheme approximates

∂U/∂t − ∂²U/∂x² + r² ∂²U/∂t² = 0.

Hence the difference equation is always inconsistent with ∂U/∂t − ∂²U/∂x² = 0 when k = rh_x.
2. Case (ii)
1.8.3 Convergence and Stability
Definition By convergence we mean that the results of the method approach the analytical solution as k and h_x tend to zero.
Definition By stability we mean that errors at one stage of the calculations do not cause increasingly large errors as the computations are continued.
1.8.4 Analytical treatment of convergence
Assuming no round-off error, the only difference between the exact and the numerical results is the discretisation error. Consider the equation

∂U/∂t = ∂²U/∂x²,  0 < x < 1,  t > 0.

Let U_{i,j} represent the exact solution and w_{i,j} the numerical solution of the difference equation. Assuming no round-off error, the only difference between the two will be the error e_{i,j}.
Example The explicit finite difference approximation is

(w_{i,j+1} − w_{i,j})/k = (w_{i−1,j} − 2w_{i,j} + w_{i+1,j})/h_x².

At the mesh points

w_{i,j} = U_{i,j} − e_{i,j},  w_{i,j+1} = U_{i,j+1} − e_{i,j+1}.

Substituting into the difference equation,

e_{i,j+1} = r e_{i−1,j} + (1 − 2r) e_{i,j} + r e_{i+1,j} + U_{i,j+1} − U_{i,j} + r( 2U_{i,j} − U_{i−1,j} − U_{i+1,j} ).
By Taylor's theorem we have

U_{i+1,j} = U(x_i + h_x, t_j) = U_{i,j} + h_x (∂U/∂x)_{i,j} + (h_x²/2) ∂²U/∂x²(x_i + θ_1 h_x, t_j)

U_{i−1,j} = U(x_i − h_x, t_j) = U_{i,j} − h_x (∂U/∂x)_{i,j} + (h_x²/2) ∂²U/∂x²(x_i − θ_2 h_x, t_j)

U_{i,j+1} = U(x_i, t_j + k) = U_{i,j} + k ∂U/∂t(x_i, t_j + θ_3 k)

where 0 < θ_1, θ_2, θ_3 < 1. Substituting into the equation for the error gives

e_{i,j+1} = r e_{i−1,j} + (1 − 2r) e_{i,j} + r e_{i+1,j} + k { ∂U/∂t(x_i, t_j + θ_3 k) − ∂²U/∂x²(x_i + θ_4 h_x, t_j) }

where −1 < θ_4 < 1 and 0 < θ_3 < 1.
This is a difference equation for e_{i,j} which we need not solve. Let E_j denote the maximum value of |e_{i,j}| along the jth time row and M the maximum modulus of the expression in { }.
When r ≤ 1/2 all coefficients of e in the equation are positive or zero, so

|e_{i,j+1}| ≤ r|e_{i−1,j}| + (1 − 2r)|e_{i,j}| + r|e_{i+1,j}| + kM
           ≤ rE_j + (1 − 2r)E_j + rE_j + kM
           = E_j + kM.

Also

E_{j+1} ≤ E_j + kM ≤ (E_{j−1} + kM) + kM ≤ ... ≤ E_0 + (j + 1)kM = t_{j+1} M.

Since we are dealing with non-derivative boundary conditions, E_0 = 0. When h → 0, k = rh² also tends to 0 and M tends to the maximum modulus of

(∂U/∂t − ∂²U/∂x²)_{i,j}.

Since the numerical method is consistent, the value of M, and therefore of E_{j+1}, tends to 0.
As |U_{i,j} − w_{i,j}| ≤ E_j we can say that w → U as h tends to 0 when r ≤ 1/2.
When r > 1/2 it can be shown that the complementary function tends to ∞ as h tends to 0.

This argument can also be applied to other methods. Another approach is to look at the matrix form.
A more analytical argument
Let the solution domain of the PDE be the finite rectangle 0 ≤ x ≤ 1, 0 ≤ t ≤ T, and subdivide it into a uniform rectangular mesh by the lines x_i = ih for i = 0 to N and t_j = jk for j = 0 to J. It will be assumed that h is related to k by some relationship such as k = rh or k = rh², with r > 0 and finite, so that k → 0 as h → 0.
Assume that the finite difference equation relating the mesh point values along the (j + 1)th and jth rows is

b_{i−1} w_{i−1,j+1} + b_i w_{i,j+1} + b_{i+1} w_{i+1,j+1} = c_{i−1} w_{i−1,j} + c_i w_{i,j} + c_{i+1} w_{i+1,j},

where the coefficients are constant. If the boundary values at i = 0 and N for j > 0 are known, these (N − 1) equations for i = 1 to N − 1 can be written in matrix form.
With the tridiagonal matrices

B = [ b_1  b_2                        ]      C = [ c_1  c_2                        ]
    [ b_1  b_2  b_3                   ]          [ c_1  c_2  c_3                   ]
    [      b_2  b_3  b_4              ]          [      c_2  c_3  c_4              ]
    [              ...                ]          [              ...                ]
    [           b_{N−3} b_{N−2} b_{N−1} ]        [           c_{N−3} c_{N−2} c_{N−1} ]
    [                   b_{N−2} b_{N−1} ]        [                   c_{N−2} c_{N−1} ]

the system reads

B (w_{1,j+1}, ..., w_{N−1,j+1})ᵀ = C (w_{1,j}, ..., w_{N−1,j})ᵀ + d_j,

where

d_j = ( c_0 w_{0,j} − b_0 w_{0,j+1}, 0, ..., 0, c_N w_{N,j} − b_N w_{N,j+1} )ᵀ.
This can be written as

B w_{j+1} = C w_j + d_j,

where B and C are square matrices of order (N − 1), w_j denotes a column vector of mesh values and d_j denotes a column vector of boundary values.
Hence

w_{j+1} = B⁻¹C w_j + B⁻¹ d_j,

or, expressed in a more conventional manner,

w_{j+1} = A w_j + f_j,

where A = B⁻¹C and f_j = B⁻¹ d_j. Applied recursively this leads to

w_{j+1} = A w_j + f_j
        = A(A w_{j−1} + f_{j−1}) + f_j
        = A² w_{j−1} + A f_{j−1} + f_j
        ...
        = A^{j+1} w_0 + A^j f_0 + A^{j−1} f_1 + ... + f_j,

where w_0 is the vector of initial values and f_0, f_1, ..., f_j are vectors of known boundary values. When we are concerned with a numerical solution, the constant vectors can be eliminated by investigating the error. Perturb the initial vector w_0 to w̃_0. The solution at the jth time row will then be given by

w̃_j = A^j w̃_0 + A^{j−1} f_0 + A^{j−2} f_1 + ... + f_{j−1}.

If the error e is defined by

e = w̃ − w,

it follows that

e_j = w̃_j − w_j = A^j (w̃_0 − w_0) = A^j e_0.

In other words, a perturbation e_0 of the initial values will propagate according to the equation

e_j = A e_{j−1} = A² e_{j−2} = ... = A^j e_0.

Hence, for compatible matrix and vector norms,

||e_j|| ≤ ||A^j|| ||e_0||.

Lax and Richtmyer define the difference scheme to be stable when there exists a positive number M, independent of j, h and k, such that

||A^j|| ≤ M.

This clearly limits the amplification of any initial perturbation and of any arbitrary initial rounding errors, since

||e_j|| ≤ M ||e_0||.

Since

||A^j|| = ||A A^{j−1}|| ≤ ||A|| ||A^{j−1}|| ≤ ... ≤ ||A||^j,

it follows that the Lax-Richtmyer definition of stability is satisfied by

||A|| ≤ 1.

This is the necessary and sufficient condition for the difference equations to be stable when the solution of the PDE does not increase as t → T.
When the condition is satisfied it follows automatically that the spectral radius ρ(A) ≤ 1, since ρ(A) ≤ ||A||. If, however, ρ(A) ≤ 1, it does not imply that ||A|| ≤ 1.
Example Consider the classical explicit equation

w_{i,j+1} = r w_{i−1,j} + (1 − 2r) w_{i,j} + r w_{i+1,j},  i = 1, ..., N − 1,

for which A is

[ 1−2r    r                      ]
[  r     1−2r    r               ]
[         r     1−2r    r        ]
[                 ...            ]
[                 r    1−2r   r  ]
[                       r   1−2r ]

where r = k/h_x² > 0 and it is assumed that the boundary values w_{0,j} and w_{N,j} are known for j = 1, 2, .... When 1 − 2r ≥ 0, i.e. 0 < r ≤ 1/2,

||A||_∞ = r + 1 − 2r + r = 1.

When 1 − 2r < 0, i.e. r > 1/2, then |1 − 2r| = 2r − 1 and

||A||_∞ = r + 2r − 1 + r = 4r − 1 > 1;

therefore the scheme is unstable for r > 1/2 and stable for 0 < r ≤ 1/2.
Alternatively, since A is real and symmetric,

||A||_2 = ρ(A) = max_j |λ_j|,

where λ_j is the jth eigenvalue of A. Now A can be written as

[ 1              ]       [ −2   1             ]
[    1           ]       [  1  −2   1         ]
[      ...       ]  + r  [        ...         ]  =  I_{N−1} + r T_{N−1},
[           1    ]       [         1  −2   1  ]
[              1 ]       [             1  −2  ]

where I_{N−1} is the unit matrix of order N − 1 and T_{N−1} is an (N − 1) × (N − 1) matrix whose eigenvalues λ_j are given by

λ_j = −4 sin²( jπ/2N ).

Hence the eigenvalues of A are

λ_j = 1 − 4r sin²( jπ/2N ),

and therefore the equation will be stable when

||A||_2 = max_s | 1 − 4r sin²( sπ/2N ) | ≤ 1,

i.e.

−1 ≤ 1 − 4r sin²( sπ/2N ) ≤ 1,  s = 1, ..., N − 1.

The left hand side of the inequality gives

r ≤ 1 / ( 2 sin²( (N − 1)π/2N ) ).

As h → 0, N → ∞ and sin²( (N − 1)π/2N ) → 1. Hence r ≤ 1/2.
Therefore the scheme is only stable when 0 < r ≤ 1/2.
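The matrix bound can be checked numerically; the following is my own sketch (not from the notes), with N = 20 and the two values of r chosen only to sit on either side of the stability limit.

```python
import numpy as np

def explicit_matrix(N, r):
    """A = I + r*T for the classical explicit scheme on N-1 interior points."""
    A = np.zeros((N-1, N-1))
    for i in range(N-1):
        A[i, i] = 1 - 2*r
        if i > 0:
            A[i, i-1] = r
        if i < N - 2:
            A[i, i+1] = r
    return A

for r in (0.4, 0.6):
    A = explicit_matrix(20, r)
    norm_inf = np.abs(A).sum(axis=1).max()
    rho = np.abs(np.linalg.eigvals(A)).max()
    print(f"r = {r}: ||A||_inf = {norm_inf:.3f}, spectral radius = {rho:.4f}")
# r = 0.4 gives ||A||_inf = 1 (stable); r = 0.6 gives ||A||_inf = 1.4 and rho > 1 (unstable).
```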
Example The Crank-Nicolson equations

−r w_{i−1,j+1} + (2 + 2r) w_{i,j+1} − r w_{i+1,j+1} = r w_{i−1,j} + (2 − 2r) w_{i,j} + r w_{i+1,j},  i = 1, ..., N − 1,

in matrix form are

[ 2+2r  −r                    ] [ w_{1,j+1}   ]   [ 2−2r   r                    ] [ w_{1,j}   ]
[ −r   2+2r  −r               ] [ w_{2,j+1}   ]   [  r    2−2r   r              ] [ w_{2,j}   ]
[            ...              ] [    ...      ] = [            ...              ] [    ...    ]  + b_j,
[           −r   2+2r  −r     ] [ w_{N−2,j+1} ]   [             r   2−2r   r    ] [ w_{N−2,j} ]
[                 −r   2+2r   ] [ w_{N−1,j+1} ]   [                  r   2−2r   ] [ w_{N−1,j} ]

where b_j is a vector of known boundary values. This can be written as

(2I_{N−1} − rT_{N−1}) w_{j+1} = (2I_{N−1} + rT_{N−1}) w_j + b_j,

from which it follows that A is of the form

A = (2I_{N−1} − rT_{N−1})⁻¹ (2I_{N−1} + rT_{N−1}).

T_{N−1} has the eigenvalues λ_s = −4 sin²( sπ/2N ) for s = 1, ..., N − 1. It follows that the eigenvalues of A are

μ_s = ( 2 − 4r sin²( sπ/2N ) ) / ( 2 + 4r sin²( sπ/2N ) ).

Thus

||A||_2 = ρ(A) = max_s | ( 2 − 4r sin²( sπ/2N ) ) / ( 2 + 4r sin²( sπ/2N ) ) | < 1;

therefore Crank-Nicolson is unconditionally stable.
1.8.5 Stability by the Fourier Series method (von Neumann's method)
This method uses a Fourier series to express w_{p,q} = w(ph_x, qk), namely

w_{p,q} = e^{iβx} ξ^q,

where ξ = e^{αk}; here i denotes the complex number i = √(−1), and the values of β are those needed to satisfy the initial conditions. ξ is known as the amplification factor.
The finite difference equation will be stable if |w_{p,q}| remains bounded for all q as h → 0, k → 0, and for all β.
If the exact solution does not increase exponentially with time then a necessary and sufficient condition is that

|ξ| ≤ 1.

Example Investigate the stability of the fully implicit difference equation

(1/k)(w_{p,q+1} − w_{p,q}) = (1/h_x²)(w_{p−1,q+1} − 2w_{p,q+1} + w_{p+1,q+1})

approximating ∂U/∂t = ∂²U/∂x² at (ph_x, qk). Substituting w_{p,q} = e^{iβx} ξ^q into the difference equation gives

e^{iβph} ξ^{q+1} − e^{iβph} ξ^q = r { e^{iβ(p−1)h} ξ^{q+1} − 2 e^{iβph} ξ^{q+1} + e^{iβ(p+1)h} ξ^{q+1} },

where r = k/h_x². Dividing across by e^{iβph} ξ^q leads to

ξ − 1 = rξ ( e^{−iβh} − 2 + e^{iβh} )
      = rξ ( 2cos(βh) − 2 )
      = −4rξ sin²(βh/2).

Hence

ξ = 1 / ( 1 + 4r sin²(βh/2) ).

Since 0 < ξ ≤ 1 for all r > 0 and all β, the equation is unconditionally stable.
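A numerical sweep over the Fourier modes makes the contrast with the explicit scheme visible; this is my own sketch (not from the notes), and the two values of r are chosen only for illustration.

```python
import numpy as np

def xi_explicit(r, beta_h):
    return 1 - 4*r*np.sin(beta_h/2)**2          # amplification factor of the explicit scheme

def xi_implicit(r, beta_h):
    return 1 / (1 + 4*r*np.sin(beta_h/2)**2)    # derived in the example above

beta_h = np.linspace(0, np.pi, 200)
for r in (0.25, 1.0):
    print(f"r = {r}:",
          "explicit max|xi| =", round(np.abs(xi_explicit(r, beta_h)).max(), 3),
          "implicit max|xi| =", round(np.abs(xi_implicit(r, beta_h)).max(), 3))
# |xi| <= 1 for the implicit scheme at every r; the explicit scheme exceeds 1 once r > 1/2.
```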
1.9 Elliptic PDEs
The Poisson equation is

−∇²U(x, y) = f(x, y),  (x, y) ∈ Ω = (0, 1) × (0, 1),

with boundary conditions

U(x, y) = g(x, y),  (x, y) ∈ ∂Ω (the boundary).

1.9.1 The five point scheme
First we put a grid on our domain, the unit square Ω̄ = [0, 1] × [0, 1] = Ω ∪ ∂Ω. We will only consider a uniform grid

Ω̄_h = { (x_i, y_j) ∈ [0, 1] × [0, 1] : x_i = ih, y_j = jh }.

The interior nodes are

Ω_h = { (x_i, y_j) : 1 ≤ i, j ≤ N − 1 }.

The boundary nodes are

∂Ω_h = { (x_0, y_j), (x_N, y_j), (x_i, y_0), (x_i, y_N) : 1 ≤ i, j ≤ N − 1 }.

Our equation is discretised using central differencing:

−( δ²_x w_{i,j} + δ²_y w_{i,j} ) = f_{i,j},  (x_i, y_j) ∈ Ω_h,
w_{i,j} = g_{i,j},  (x_i, y_j) ∈ ∂Ω_h.

Using the central differences we have

δ²_x w_{i,j} = (1/h²)( w_{i+1,j} − 2w_{i,j} + w_{i−1,j} )

and

δ²_y w_{i,j} = (1/h²)( w_{i,j+1} − 2w_{i,j} + w_{i,j−1} ).

Therefore the difference equation looks like

−( w_{i−1,j} + w_{i,j−1} − 4w_{i,j} + w_{i,j+1} + w_{i+1,j} ) = h² f_{i,j}.

The stencil looks like a plus sign.
Example Look at the Laplace equation

∂²U/∂x² + ∂²U/∂y² = 0

with the boundary conditions

U(x, 0) = 0, U(x, 0.5) = 0, U(0, y) = 0, U(1, y) = 100.

Letting h = 0.25 we have the equations

(1/0.25²)( 0 + 0 + 0 + w_{2,1} − 4w_{1,1} ) = 0
(1/0.25²)( 0 + 0 + w_{1,1} + w_{3,1} − 4w_{2,1} ) = 0
(1/0.25²)( 0 + 0 + w_{2,1} + 100 − 4w_{3,1} ) = 0

This gives us w_{1,1} = 1.786, w_{2,1} = 7.143, w_{3,1} = 26.786.
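The three equations above form a small linear system that can be solved directly; the following is my own sketch (not from the notes), with the boundary value 100 moved to the right-hand side.

```python
import numpy as np

# Five-point equations of the Laplace example; unknowns w_{1,1}, w_{2,1}, w_{3,1} on the h = 0.25 grid.
A = np.array([[-4.0,  1.0,  0.0],
              [ 1.0, -4.0,  1.0],
              [ 0.0,  1.0, -4.0]])
b = np.array([0.0, 0.0, -100.0])        # U(1, y) = 100 contributes to the last equation
print(np.linalg.solve(A, b).round(3))   # [ 1.786  7.143 26.786 ]
```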
Unlike the parabolic case, we cannot solve an elliptic equation by holding one variable constant and then stepping forward; we must solve for all points at the same time.
We need to look at the matrix of the system of equations: for the parabolic case we dealt with (N − 1) equations at a time, whereas for the elliptic case we must deal with (N − 1) × (N − 1) equations.
The matrix has the following block tridiagonal structure

[  T  −I                 ] [ w_1     ]   [ r_1     ]
[ −I   T  −I             ] [ w_2     ]   [ r_2     ]
[          ...           ] [  ...    ] = [  ...    ]
[         −I   T  −I     ] [ w_{N−2} ]   [ r_{N−2} ]
[              −I   T    ] [ w_{N−1} ]   [ r_{N−1} ]

where I denotes the (N − 1) × (N − 1) identity matrix and T is

T = [  4  −1                 ]
    [ −1   4  −1             ]
    [          ...           ]
    [         −1   4  −1     ]
    [              −1   4    ]

w_j represents the vector of approximations along the jth grid row,

w_j = ( w_{j,1}, w_{j,2}, ..., w_{j,N−2}, w_{j,N−1} )ᵀ,

and r_j = h² f_j + b_j, where b_j is the vector of boundary contributions,

b_j = ( g_{j,0}, 0, ..., 0, g_{j,N} )ᵀ
for j = 2, ..., N − 2, while for j = 1 and j = N − 1 we have

b_1 = ( g_{0,1} + g_{1,0}, g_{0,2}, ..., g_{0,N−2}, g_{0,N−1} + g_{1,N} )ᵀ

b_{N−1} = ( g_{N,1} + g_{N−1,0}, g_{N,2}, ..., g_{N,N−2}, g_{N,N−1} + g_{N−1,N} )ᵀ

and

f_j = ( f_{j,1}, f_{j,2}, ..., f_{j,N−2}, f_{j,N−1} )ᵀ.

The system has a unique solution. For sparse matrices of this form an iterative method is used, as it would be too computationally expensive to compute the inverse.
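As an illustration of the assembly (my own sketch, not from the notes), the block-tridiagonal system for −∇²U = f on the unit square can be built point by point; a dense solver is used here only because N is tiny, whereas in practice an iterative method (Jacobi, Gauss-Seidel, SOR, conjugate gradients) would be applied to the sparse matrix.

```python
import numpy as np

def solve_poisson(N, f, g):
    """Assemble and solve the five-point system with U = g on the boundary of the unit square."""
    h = 1.0 / N
    m = N - 1                                  # interior nodes per direction
    A = np.zeros((m*m, m*m)); rhs = np.zeros(m*m)
    idx = lambda i, j: (j - 1)*m + (i - 1)     # flatten interior index (i, j)
    for j in range(1, N):
        for i in range(1, N):
            k = idx(i, j)
            A[k, k] = 4.0
            rhs[k] = h*h*f(i*h, j*h)
            for (p, q) in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if 1 <= p <= N-1 and 1 <= q <= N-1:
                    A[k, idx(p, q)] = -1.0
                else:
                    rhs[k] += g(p*h, q*h)      # known boundary value moves to the RHS
    return np.linalg.solve(A, rhs).reshape(m, m)

# Example use: Laplace problem on the unit square with U = 100 on the side x = 1.
w = solve_poisson(4, f=lambda x, y: 0.0, g=lambda x, y: 100.0 if x == 1.0 else 0.0)
print(w.round(3))
```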
1.10 Consistency and Convergence
We now ask how well the grid function determined by the five point scheme approximates the exact solution of the Poisson problem.
Definition Let L_h denote the finite difference approximation, associated with the grid Ω̄_h of mesh size h, to a partial differential operator L defined on a simply connected, open set D ⊂ R². For a given function φ ∈ C^∞(D), the truncation error of L_h is

τ_h(x) = (L − L_h)φ(x).

The approximation L_h is consistent with L if

lim_{h→0} τ_h(x) = 0

for all x ∈ D and all φ ∈ C^∞(D). The approximation is consistent to order p if τ_h(x) = O(h^p).
While we have seen this definition a few times, and the terms are denoted and expressed differently in each setting, the ideas are always the same.
Proposition 1.10.1 The five-point difference analogue ∇²_h is consistent to order 2 with ∇².
Proof Pick φ ∈ C^∞(D), and let (x, y) ∈ Ω_h be a point such that (x ± h, y), (x, y ± h) ∈ Ω̄_h. By Taylor's theorem

φ(x ± h, y) = φ(x, y) ± h ∂φ/∂x(x, y) + (h²/2!) ∂²φ/∂x²(x, y) ± (h³/3!) ∂³φ/∂x³(x, y) + (h⁴/4!) ∂⁴φ/∂x⁴(ζ^±, y),

where ζ^± ∈ (x − h, x + h). Adding this pair of equations together and rearranging, we get

(1/h²)[ φ(x + h, y) − 2φ(x, y) + φ(x − h, y) ] − ∂²φ/∂x²(x, y) = (h²/4!)[ ∂⁴φ/∂x⁴(ζ^+, y) + ∂⁴φ/∂x⁴(ζ^−, y) ].

By the intermediate value theorem

∂⁴φ/∂x⁴(ζ^+, y) + ∂⁴φ/∂x⁴(ζ^−, y) = 2 ∂⁴φ/∂x⁴(ζ, y)

for some ζ ∈ (x − h, x + h). Therefore

δ²_x φ(x, y) = ∂²φ/∂x²(x, y) + (h²/12) ∂⁴φ/∂x⁴(ζ, y).

Similar reasoning shows that

δ²_y φ(x, y) = ∂²φ/∂y²(x, y) + (h²/12) ∂⁴φ/∂y⁴(x, η)

for some η ∈ (y − h, y + h). We conclude that τ_h(x, y) = (∇² − ∇²_h)φ(x, y) = O(h²).

Consistency does not guarantee that the solution to the difference equations approximates the exact solution to the PDE.
Definition Let L_h w(x_j) = f(x_j) be a finite difference approximation, defined on a grid of mesh size h, to a PDE LU(x) = f(x) on a simply connected set D ⊂ Rⁿ. Assume that w(x, y) = U(x, y) at all points (x, y) on the boundary. The finite difference scheme converges (or is convergent) if

max_j |U(x_j) − w(x_j)| → 0 as h → 0.
For the five point scheme there is a direct connection between consistency and convergence. Underlying this connection is an argument based on the following principle.
Theorem 1.10.2 (DISCRETE MAXIMUM PRINCIPLE). If ∇²_h V_{i,j} ≥ 0 for all points (x_i, y_j) ∈ Ω_h, then

max_{(x_i,y_j) ∈ Ω_h} V_{i,j} ≤ max_{(x_i,y_j) ∈ ∂Ω_h} V_{i,j}.

If ∇²_h V_{i,j} ≤ 0 for all points (x_i, y_j) ∈ Ω_h, then

min_{(x_i,y_j) ∈ Ω_h} V_{i,j} ≥ min_{(x_i,y_j) ∈ ∂Ω_h} V_{i,j}.

In other words, a grid function V for which ∇²_h V is nonnegative on Ω_h attains its maximum on the boundary ∂Ω_h of the grid. Similarly, if ∇²_h V is nonpositive on Ω_h, then V attains its minimum on the boundary ∂Ω_h.
Proof The proof is by contradiction. We argue for the case ∇²_h V_{i,j} ≥ 0, the reasoning for the case ∇²_h V_{i,j} ≤ 0 being similar.
Assume that V attains its maximum value M at an interior grid point (x_I, y_J), and that max_{(x_i,y_j) ∈ ∂Ω_h} V_{i,j} < M. The hypothesis ∇²_h V_{I,J} ≥ 0 implies that

V_{I,J} ≤ (1/4)( V_{I+1,J} + V_{I−1,J} + V_{I,J+1} + V_{I,J−1} ).

This cannot hold unless

V_{I+1,J} = V_{I−1,J} = V_{I,J+1} = V_{I,J−1} = M.

If any of the corresponding grid points (x_{I+1}, y_J), (x_{I−1}, y_J), (x_I, y_{J+1}), (x_I, y_{J−1}) lies in ∂Ω_h, then we have reached the desired contradiction.
Otherwise, we continue arguing in this way until we conclude that V_{I+i,J+j} = M for some point (x_{I+i}, y_{J+j}) ∈ ∂Ω_h, which again gives a contradiction.
This leads to interesting results.
Proposition 1.10.3
1. The zero grid function (for which U_{i,j} = 0 for all (x_i, y_j) ∈ Ω̄_h) is the only solution to the finite difference problem

∇²_h U_{i,j} = 0 for (x_i, y_j) ∈ Ω_h,
U_{i,j} = 0 for (x_i, y_j) ∈ ∂Ω_h.

2. For prescribed grid functions f_{i,j} and g_{i,j}, there exists a unique solution to the problem

∇²_h U_{i,j} = f_{i,j} for (x_i, y_j) ∈ Ω_h,
U_{i,j} = g_{i,j} for (x_i, y_j) ∈ ∂Ω_h.
Definition For any grid function V : Ω̄_h → R,

||V||_Ω = max_{(x_i,y_j) ∈ Ω_h} |V_{i,j}|,
||V||_∂Ω = max_{(x_i,y_j) ∈ ∂Ω_h} |V_{i,j}|.

Lemma 1.10.4 If the grid function V : Ω̄_h → R satisfies the boundary condition V_{i,j} = 0 for (x_i, y_j) ∈ ∂Ω_h, then

||V||_Ω ≤ (1/8) ||∇²_h V||_Ω.

Proof Let ν = ||∇²_h V||_Ω. Clearly, for all points (x_i, y_j) ∈ Ω_h,

−ν ≤ ∇²_h V_{i,j} ≤ ν.   (1.21)

Now we define W : Ω̄_h → R by setting W_{i,j} = (1/4)[ (x_i − 1/2)² + (y_j − 1/2)² ], which is nonnegative. Also ∇²_h W_{i,j} = 1 and ||W||_∂Ω = 1/8. The inequality (1.21) implies that, for all points (x_i, y_j) ∈ Ω_h,

∇²_h ( V_{i,j} + ν W_{i,j} ) ≥ 0,
∇²_h ( V_{i,j} − ν W_{i,j} ) ≤ 0.

By the discrete maximum and minimum principles and the fact that V vanishes on ∂Ω_h,

V_{i,j} ≤ V_{i,j} + ν W_{i,j} ≤ ν ||W||_∂Ω,
V_{i,j} ≥ V_{i,j} − ν W_{i,j} ≥ −ν ||W||_∂Ω.

Since ||W||_∂Ω = 1/8,

||V||_Ω ≤ (1/8) ν = (1/8) ||∇²_h V||_Ω.
Finally we prove that the five point scheme for the Poisson equation is convergent.
Theorem 1.10.5 Let U be a solution to the Poisson equation and let w be the grid function that satisfies the discrete analogue

−∇²_h w_{i,j} = f_{i,j} for (x_i, y_j) ∈ Ω_h,
w_{i,j} = g_{i,j} for (x_i, y_j) ∈ ∂Ω_h.

Then there exists a positive constant K such that

||U − w||_Ω ≤ K M h²,

where

M = max{ ||∂⁴U/∂x⁴||_∞, ||∂⁴U/∂x³∂y||_∞, ..., ||∂⁴U/∂y⁴||_∞ }.

The statement of the theorem assumes that U ∈ C⁴(Ω̄). This assumption holds if f and g are smooth enough.
Proof Following from the proof of the Proposition we have

(∇²_h − ∇²) U_{i,j} = (h²/12) [ ∂⁴U/∂x⁴(ζ_i, y_j) + ∂⁴U/∂y⁴(x_i, η_j) ]

for some ζ_i ∈ (x_{i−1}, x_{i+1}) and η_j ∈ (y_{j−1}, y_{j+1}). Therefore

−∇²_h U_{i,j} = f_{i,j} − (h²/12) [ ∂⁴U/∂x⁴(ζ_i, y_j) + ∂⁴U/∂y⁴(x_i, η_j) ].

If we subtract from this the identity equation −∇²_h w_{i,j} = f_{i,j} and note that U − w vanishes on ∂Ω_h, we find that

−∇²_h ( U_{i,j} − w_{i,j} ) = −(h²/12) [ ∂⁴U/∂x⁴(ζ_i, y_j) + ∂⁴U/∂y⁴(x_i, η_j) ].

It follows that

||U − w||_Ω ≤ (1/8) ||∇²_h (U − w)||_Ω ≤ K M h².
1.11 Hyperbolic equations
First-order scalar equation:

∂U/∂t + a ∂U/∂x = 0,  x ∈ R, t > 0,
U(x, 0) = U_0(x),  x ∈ R,   (1.22)

where a is a positive real number. Its solution is given by

U(x, t) = U_0(x − at),  t ≥ 0,

and represents a travelling wave with velocity a. The curves (x(t), t) in the (x, t)-plane are the characteristic curves; they are the straight lines x(t) = x_0 + at, t > 0. The solution of (1.22) remains constant along them.
For the more general problem

∂U/∂t + a ∂U/∂x + a_0 U = f,  x ∈ R, t > 0,
U(x, 0) = U_0(x),  x ∈ R,   (1.23)

where a, a_0 and f are given functions of the variables (x, t), the characteristic curves are still defined as before. In this case the solutions of (1.23) satisfy along the characteristics the following differential equation:

du/dt = f − a_0 u  on (x(t), t).

Example Burgers' equation

∂u/∂t + u ∂u/∂x = 0.

This is a non-trivial, non-linear hyperbolic equation. Taking the initial condition

u(x, 0) = u_0(x) = { 1,      x ≤ 0
                   { 1 − x,  0 ≤ x ≤ 1
                   { 0,      x ≥ 1

the characteristic line issuing from the point (x_0, 0) is given by

x(t) = x_0 + t u_0(x_0) = { x_0 + t,            x_0 ≤ 0
                          { x_0 + t(1 − x_0),   0 ≤ x_0 ≤ 1
                          { x_0,                x_0 ≥ 1
1
1.11.1 The Wave Equation
Consider the second-order hyperbolic equation

∂²U/∂t² − γ ∂²U/∂x² = f,  x ∈ (a, b), t > 0,   (1.24)

with initial data

U(x, 0) = u_0(x) and ∂U/∂t(x, 0) = v_0(x),  x ∈ (a, b),

and boundary data

U(a, t) = 0 and U(b, t) = 0,  t > 0.

In this case, U may represent the transverse displacement of an elastic vibrating string of length b − a, fixed at the endpoints, and γ is a coefficient depending on the specific mass of the string and on its tension. The string is subject to a vertical force of density f.
The functions u_0(x) and v_0(x) denote respectively the initial displacement and the initial velocity of the string.
The change of variables

ω_1 = ∂U/∂x,  ω_2 = ∂U/∂t

transforms (1.24) into the first-order system

∂ω/∂t + A ∂ω/∂x = 0,

where

ω = ( ω_1, ω_2 )ᵀ,

and the initial conditions are ω_1(x, 0) = u_0′(x) and ω_2(x, 0) = v_0(x).
Aside
Notice that, replacing ∂²u/∂t² by t², ∂²u/∂x² by x² and f by 1, the wave equation becomes

t² − γ x² = 1,

which represents a hyperbola in the (x, t) plane. Proceeding analogously in the case of the heat equation we end up with

t − x² = 1,

which represents a parabola in the (x, t) plane. Finally, for the Poisson equation we get

x² + y² = 1,

which represents an ellipse in the (x, y) plane.
Due to the geometric interpretation above, the corresponding differential operators are classified as hyperbolic, parabolic and elliptic.
1.11.2 Finite Difference Method for Hyperbolic equations
As always we discretise the domain by a space-time finite difference grid. To this aim, the half-plane {(x, t) : −∞ < x < ∞, t > 0} is discretised by choosing a spatial grid size Δx, a temporal step Δt and the grid points (x_j, t_n) as follows:

x_j = jΔx, j ∈ Z,  t_n = nΔt, n ∈ N,

and let

λ = Δt/Δx.

Discretisation of the scalar equation
Here are some explicit methods.
Forward Euler/centred:

u_j^{n+1} = u_j^n − (λ/2) a ( u_{j+1}^n − u_{j−1}^n )

Lax-Friedrichs:

u_j^{n+1} = ( u_{j+1}^n + u_{j−1}^n )/2 − (λ/2) a ( u_{j+1}^n − u_{j−1}^n )

Lax-Wendroff:

u_j^{n+1} = u_j^n − (λ/2) a ( u_{j+1}^n − u_{j−1}^n ) + (λ²/2) a² ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )

Upwind:

u_j^{n+1} = u_j^n − (λ/2) a ( u_{j+1}^n − u_{j−1}^n ) + (λ/2) |a| ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )

The last three methods can be obtained from the forward Euler/centred method by adding a term proportional to a numerical approximation of a second derivative, so that they can be written in the equivalent form

u_j^{n+1} = u_j^n − (λ/2) a ( u_{j+1}^n − u_{j−1}^n ) + (1/2) k ( u_{j+1}^n − 2u_j^n + u_{j−1}^n )/(Δx)²,

where k is an artificial viscosity term.
An example of an implicit method is the backward Euler/centred scheme

u_j^{n+1} + (λ/2) a ( u_{j+1}^{n+1} − u_{j−1}^{n+1} ) = u_j^n.
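Two of these updates in code, as a rough sketch of my own (not from the notes); the periodic grid, the Gaussian initial profile and the CFL number are assumptions chosen only to demonstrate the schemes for u_t + a u_x = 0.

```python
import numpy as np

def lax_friedrichs(u, a, lam):
    up, um = np.roll(u, -1), np.roll(u, 1)        # u_{j+1}, u_{j-1} on a periodic grid
    return 0.5*(up + um) - 0.5*lam*a*(up - um)

def lax_wendroff(u, a, lam):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5*lam*a*(up - um) + 0.5*(lam*a)**2*(up - 2*u + um)

x = np.linspace(0, 1, 101)[:-1]
u = np.exp(-100*(x - 0.3)**2)                     # smooth initial profile
a, lam = 1.0, 0.8                                 # CFL number |a|*lam = 0.8 <= 1
for n in range(50):
    u = lax_wendroff(u, a, lam)
print(round(x[np.argmax(u)], 2))                  # peak has moved by roughly a*n*dt = 0.4
```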
1.12 Analysis of the Finite Difference Methods
1.12.1 Consistency
Looking at the forward Euler
1.12.2 Stability
1.12.3 CFL Condition
1.13 Variational Methods
Variational methods are based on the fact that the solutions of boundary value problems of the form

−( p(x) u′(x) )′ + q(x) u(x) = g(x, u(x)),
u(a) = α,  u(b) = β,   (1.25)

can be characterised variationally, under the assumptions

p ∈ C¹[a, b],  p(x) ≥ p_0 > 0,
q ∈ C¹[a, b],  q(x) ≥ 0,
g ∈ C¹([a, b] × R),  ∂g/∂u(x, u) ≥ 0.   (1.26)

If u(x) is the solution of (1.25), then y(x) = u(x) − l(x), with

l(x) = α (b − x)/(b − a) + β (x − a)/(b − a),  l(a) = α, l(b) = β,

is the solution of a boundary value problem

−( p(x) y′(x) )′ + q(x) y(x) = f(x),
y(a) = 0,  y(b) = 0,   (1.27)

with vanishing boundary values. Without loss of generality we can therefore just consider problems of the form (1.27).
(D) Classical Problem

−( p(x) u′(x) )′ + q(x) u(x) = f(x),
u(a) = 0,  u(b) = 0.

Now we relax the assumptions on the problem. We let f ∈ L²([a, b]), and look for a solution

u(x) ∈ D_L = { u ∈ C²[a, b] | u(a) = 0, u(b) = 0 }.

We form

∫_a^b [ −( p(x) u′(x) )′ + q(x) u(x) ] v(x) dx = ∫_a^b f(x) v(x) dx,
where v ∈ D_L. We integrate by parts to get

∫_a^b [ p(x) u′(x) v′(x) + q(x) u(x) v(x) ] dx = ∫_a^b f(x) v(x) dx.

We make the definition
Definition (Bilinear Form)

a(u, v) = ∫_a^b [ p(x) u′(x) v′(x) + q(x) u(x) v(x) ] dx.

The weak form of the ODE problem (D) is then given by
(W) Weak Form: Let f ∈ L²([a, b]). Find u ∈ D_L such that

a(u, v) = (f, v) for all v ∈ D_L,

where

(f, v) = ∫_a^b f(x) v(x) dx.

Equivalently, the variational or minimisation form of the problem is given by
(M) Variational/Minimisation Form: Let f ∈ L²([a, b]) and let F(v) = (1/2) a(v, v) − (f, v). Find u ∈ D_L such that

F(u) ≤ F(v) for all v ∈ D_L,

i.e. find the function u that minimises F over D_L.
Theorem 1.13.1 We have the following relationships between the solutions to the three problems (D), (W) and (M).
1. If the function u solves (D), then u solves (W).
2. The function u solves (W) if and only if u solves (M).
3. If f ∈ C([a, b]) and u ∈ C²([a, b]) solves (W), then u solves (D).
Proof
1. Let u be the solution to (D); that u then solves (W) is obvious, since (W) derives directly from (D).
2. (a) Show (W) ⇒ (M).
Let u solve (W), and define v(x) = u(x) + z(x), with u, z ∈ D_L. By linearity

F(v) = (1/2) a(u + z, u + z) − (f, u + z)
     = F(u) + (1/2) a(z, z) + a(u, z) − (f, z)
     = F(u) + (1/2) a(z, z),

which implies that F(v) ≥ F(u), and therefore u solves (M).
(b) Show (M) ⇒ (W).
Let u solve (M) and choose ε ∈ R, v ∈ D_L. Then F(u) ≤ F(u + εv), since u + εv ∈ D_L. Now F(u + εv) is a quadratic form in ε and its minimum occurs at ε = 0, i.e.

0 = dF(u + εv)/dε |_{ε=0} = a(u, v) − (f, v).

It follows that u solves (W).
3. Is immediate.
1.14 Ritz-Galerkin Method
This is a classical approach which we exploit to find discrete approximations to the problem (W)/(M). We look for a solution u_S in a finite dimensional subspace S of D_L, such that u_S is an approximation to the solution of the continuous problem:

u_S = u_1 φ_1 + u_2 φ_2 + ... + u_n φ_n.

(W_S) Discrete Weak Form: Find u_S ∈ S = span{φ_1, φ_2, ..., φ_n}, n < ∞, such that

a(u_S, v) = (f, v) for all v ∈ S.

(M_S) Discrete Variational/Minimisation Form: Find u_S ∈ S = span{φ_1, φ_2, ..., φ_n}, n < ∞, such that

F(u_S) ≤ F(v) for all v ∈ S,

where F(v) = (1/2) a(v, v) − (f, v). In both cases u ≈ u_S = u_1 φ_1 + u_2 φ_2 + ... + u_n φ_n.
Theorem 1.14.1 Given f ∈ L²([a, b]), then (W_S) has a unique solution.
Proof We write u_S = Σ_{j=1}^n u_j φ_j(x) and look for constants u_j, j = 1, ..., n, to solve the discrete problem. We define

A = {A_{ij}} = { a(φ_i, φ_j) } = { ∫_a^b [ p(x) φ_i′ φ_j′ + q(x) φ_i φ_j ] dx }

and

F⃗ = {F_j} = { (f, φ_j) } = { ∫_a^b f φ_j dx }.

Then we require

a(u_S, v) = a( Σ_{j=1}^n u_j φ_j(x), v ) = (f, v) for all v ∈ S.

Hence, for each basis function φ_i ∈ S we must have

a(u_S, φ_i) = a( Σ_{j=1}^n u_j φ_j(x), φ_i ) = (f, φ_i),  i = 1, ..., n,

i.e.

[ a(φ_1, φ_1)  ...  a(φ_n, φ_1) ] [ u_1 ]   [ (f, φ_1) ]
[      .                 .      ] [  .  ] = [     .    ]
[      .                 .      ] [  .  ]   [     .    ]
[ a(φ_1, φ_n)  ...  a(φ_n, φ_n) ] [ u_n ]   [ (f, φ_n) ]

i.e.

A u⃗ = F⃗.

Hence u_S is found from the solution of a matrix equation. We now show existence and uniqueness of the solution to the algebraic problem. We show by contradiction that A is full rank, i.e. that the only solution to A u⃗ = 0 is u⃗ = 0.
We suppose that there exists a vector v⃗ = {v_j} ≠ 0 such that A v⃗ = 0, and construct v(x) = Σ_{j=1}^n v_j φ_j ∈ S. Then

A v⃗ = 0 ⇒ Σ_j a(φ_j, φ_k) v_j = a(v, φ_k) = 0 for all k
       ⇒ Σ_k a(v, φ_k) v_k = a(v, Σ_k v_k φ_k) = a(v, v) = 0
       ⇒ v ≡ 0,

which is a contradiction.

Classically, in the Ritz-Galerkin method, the basis functions are chosen to be continuous functions over the entire interval [a, b]; for example, {sin mx, cos mx} give us trigonometric polynomial approximations to the solutions of the ODEs.
1.15 Finite Element
We choose the basis functions {φ_i}_{i=1}^{n−1} to be piecewise polynomials with compact support. In the simplest case φ_i is piecewise linear. We divide the region into n intervals or elements,

a = x_0 < x_1 < ... < x_n = b,

and let E_i denote the element [x_{i−1}, x_i], with h_i = x_i − x_{i−1}.
Definition Let S_h ⊂ D_L be the space of functions such that v(x) ∈ C[a, b], v(x) is linear on each E_i and v(a) = v(b) = 0, i.e.

S_h = { v(x) : v piecewise linear on [a, b], v(a) = v(b) = 0 }.

The basis functions φ_i(x) for S_h are defined such that φ_i(x) is linear on E_i and E_{i+1}, vanishes outside E_i ∪ E_{i+1}, and φ_i(x_j) = δ_{ij}. (Figure: the "hat" function — see notes.)
We now show that the hat functions φ_i form a basis for the space S_h.
Lemma 1.15.1 The set of functions {φ_i}_{i=1}^{n−1} is a basis for the space S_h.
Proof We show first that the set {φ_i}_{i=1}^{n−1} is linearly independent. If Σ_{i=1}^{n−1} c_i φ_i(x) = 0 for all x ∈ [a, b], then taking x = x_j implies c_j = 0 for each value of j, and hence the functions are independent.
To show S_h = span{φ_i}, we only need to show that

v(x) = v_I = Σ_j v_j φ_j, for all v(x) ∈ S_h.

This is proved by construction. Since (v − v_I) is linear on [x_{i−1}, x_i] and v − v_I = 0 at all points x_j, it follows that v = v_I on each E_i.
We now consider the matrix equation A u⃗ = F⃗ in the case where the basis functions are chosen to be the hat functions. In this case the elements of A can be found explicitly. We have

φ_i = 0, φ_i′ = 0, for x ∉ [x_{i−1}, x_{i+1}) = E_i ∪ E_{i+1},

φ_i = (x − x_{i−1})/(x_i − x_{i−1}) = (1/h_i)(x − x_{i−1}),  φ_i′ = 1/h_i,  on E_i,

and

φ_i = (x_{i+1} − x)/(x_{i+1} − x_i) = (1/h_{i+1})(x_{i+1} − x),  φ_i′ = −1/h_{i+1},  on E_{i+1}.

Therefore

A_{i,i} = ∫_{x_{i−1}}^{x_i} (1/h_i²) p(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²) p(x) dx
        + ∫_{x_{i−1}}^{x_i} (1/h_i²)(x − x_{i−1})² q(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²)(x_{i+1} − x)² q(x) dx

A_{i,i+1} = −∫_{x_i}^{x_{i+1}} (1/h_{i+1}²) p(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1}²)(x_{i+1} − x)(x − x_i) q(x) dx

A_{i,i−1} = −∫_{x_{i−1}}^{x_i} (1/h_i²) p(x) dx + ∫_{x_{i−1}}^{x_i} (1/h_i²)(x_i − x)(x − x_{i−1}) q(x) dx

and

F_i = ∫_{x_{i−1}}^{x_i} (1/h_i)(x − x_{i−1}) f(x) dx + ∫_{x_i}^{x_{i+1}} (1/h_{i+1})(x_{i+1} − x) f(x) dx.
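An assembly of this tridiagonal system in code, as a sketch of my own (not from the notes); it assumes a uniform mesh, zero boundary values and midpoint-rule quadrature for the element integrals (exact here since the test problem has constant p and q), and the test problem −u″ = π² sin(πx) is an assumption chosen only because its solution is known.

```python
import numpy as np

def assemble(p, q, f, a, b, n):
    """Assemble the Ritz-Galerkin system A u = F for -(p u')' + q u = f with hat functions."""
    x = np.linspace(a, b, n + 1)
    h = np.diff(x)
    A = np.zeros((n - 1, n - 1)); F = np.zeros(n - 1)
    for e in range(n):                           # loop over elements [x_e, x_{e+1}]
        xm, he = 0.5*(x[e] + x[e+1]), h[e]
        kloc = p(xm)/he * np.array([[1, -1], [-1, 1]]) \
             + q(xm)*he/6 * np.array([[2, 1], [1, 2]])     # local stiffness + mass
        floc = f(xm)*he/2 * np.array([1.0, 1.0])           # local load (midpoint rule)
        for r, gr in enumerate((e - 1, e)):      # unknown indices of the two hat functions
            if not 0 <= gr <= n - 2:
                continue                         # skip the boundary nodes
            F[gr] += floc[r]
            for s, gs in enumerate((e - 1, e)):
                if 0 <= gs <= n - 2:
                    A[gr, gs] += kloc[r, s]
    return A, F

A, F = assemble(p=lambda x: 1.0, q=lambda x: 0.0, f=lambda x: np.pi**2*np.sin(np.pi*x),
                a=0.0, b=1.0, n=10)
u = np.linalg.solve(A, F)
print(round(u[4], 4))                            # close to sin(pi*0.5) = 1 at x = 0.5
```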
1.15.1 Error
Lemma 1.15.2 Assume u_S solves (W_S). Then

a(u − u_S, w) = 0, for all w ∈ S.

Proof We have

a(u_S, w) = (f, w) and a(u, w) = (f, w)

for all w ∈ S. Since a is bilinear, taking the difference gives

a(u − u_S, w) = 0.

The error bounds we are interested in will be in terms of the energy norm,

||v||_E = [ a(v, v) ]^{1/2} for all v ∈ D_L.

This function satisfies the norm properties

||αv||_E = |α| ||v||_E,  ||v + z||_E ≤ ||v||_E + ||z||_E.

Theorem 1.15.3 u_S is the best fit to u in the energy norm, that is

||u − u_S||_E = min_{v∈S} ||u − v||_E.

Proof By the Cauchy-Schwarz inequality we have |a(u, v)| ≤ ||u||_E ||v||_E. Let w = u_S − v ∈ S. Then, using the previous lemma, we obtain

||u − u_S||²_E = a(u − u_S, u − u_S)
             = a(u − u_S, u − u_S) + a(u − u_S, w)
             = a(u − u_S, u − u_S + w) = a(u − u_S, u − v)
             ≤ ||u − u_S||_E ||u − v||_E.

If ||u − u_S||_E = 0, then the theorem holds. Otherwise, dividing by ||u − u_S||_E gives ||u − u_S||_E ≤ ||u − v||_E for every v ∈ S, and since u_S ∈ S,

min_{v∈S} ||u − v||_E ≤ ||u − u_S||_E ≤ min_{v∈S} ||u − v||_E.

QED
Theorem 1.15.4 (Error bound)

||u − u_S||_E ≤ C h ||u″||_∞,

where C is a constant and h = max{h_i}.
Proof We first note from the previous theorem that

||u − u_S||_E = min_{v∈S} ||u − v||_E ≤ ||u − u_I||_E.

We look for a bound on ||u − u_I||_E, where u_I is the interpolant

u_I(x) = Σ_j u_j φ_j,  u_j = u(x_j),

while

u_S(x) = Σ_j û_j φ_j,

where û = {û_j} solves A û = F⃗. We define e = u − u_I. Since u_I ∈ S, u_I is piecewise linear, so u_I″ = 0 on each element and therefore e″ = u″. Looking at the subinterval [x_i, x_{i+1}], where e(x_i) = 0, the Schwarz inequality yields the estimate

( e(x) )² = ( ∫_{x_i}^x e′(ξ) dξ )² ≤ ∫_{x_i}^x 1² dξ ∫_{x_i}^x ( e′(ξ) )² dξ
          ≤ (x − x_i) ∫_{x_i}^x ( e′(ξ) )² dξ ≤ h_i ∫_{x_i}^{x_{i+1}} ( e′(ξ) )² dξ,

and thus

||e||²_∞ ≤ h_i ∫_{x_i}^{x_{i+1}} ( e′(ξ) )² dξ ≤ h_i² ||e′||²_∞.

Similarly, since e′ vanishes at some point of the subinterval (e vanishes at both endpoints),

( e′(x) )² ≤ h_i ∫_{x_i}^{x_{i+1}} ( e″(ξ) )² dξ,

and thus

||e′||²_∞ ≤ h_i ∫_{x_i}^{x_{i+1}} ( e″(ξ) )² dξ ≤ h_i² ||e″||²_∞.

Finally, on each subinterval we also have

a(e, e)|_{E_{i+1}} = ∫_{x_i}^{x_{i+1}} ( p(x)[e′]² + q(x)[e(x)]² ) dx
                  ≤ ||p||_∞ ∫_{x_i}^{x_{i+1}} [e′]² dx + ||q||_∞ ∫_{x_i}^{x_{i+1}} [e(x)]² dx
                  ≤ ||p||_∞ h_i ||e′||²_∞ + ||q||_∞ h_i ||e||²_∞
                  ≤ ||p||_∞ h_i³ ||e″||²_∞ + ||q||_∞ h_i⁵ ||e″||²_∞
                  ≤ C h_i³ ||u″||²_∞.

Summing over the elements (Σ_i h_i³ ≤ h² Σ_i h_i = h²(b − a)) gives a(e, e) ≤ C h² ||u″||²_∞, and therefore

||u − u_S||_E = min_{v∈S} ||u − v||_E ≤ ||u − u_I||_E ≤ C h ||u″||_∞,

where h = max{h_i}. QED
