
Numerical Solution of Linear Second Order Partial Differential Equations


2.0 Numerical Methods
Numerical methods are approximate techniques for solving mathematical problems, taking into account
the extent of possible errors.
The overall goal of the field of numerical analysis or numerical methods is the design and analysis of
techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested
by the following:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of
ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of
car crashes. Such simulations essentially consist of solving partial differential equations
numerically.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew
assignments and fuel needs. This field is also called Operations Research.
Insurance companies use numerical programs for actuarial analysis.
2.1 Classification of Numerical Methods: Numerical methods are broadly classified as direct or iterative
methods:
Direct methods compute the solution to a problem in a finite number of steps. These methods would give
the precise answer if they were performed in infinite precision arithmetic. In practice, finite precision is
used and the result is an approximation of the true solution (assuming stability).
Examples include:
a) Gaussian Elimination,
b) QR factorization method for solving systems of linear equations
c) The Simplex method of linear programming.
Iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess,
iterative methods form successive approximations that converge to the exact solution only in the limit. A
convergence criterion is specified in order to decide when a sufficiently accurate solution has been found.
Even using infinite precision arithmetic these methods would not reach the solution within a finite
number of steps.
Examples include:
a) Newton's method,
b) The bisection method,
c) Jacobi iteration.
In computational matrix algebra, iterative methods are generally needed for large problems. Iterative
methods are more common than direct methods in numerical analysis. Some methods are direct in
principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method.
For these methods the number of steps needed to obtain the exact solution is so large that an
approximation is accepted in the same manner as for an iterative method.
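
As a brief illustration of the two classes, the sketch below solves a small linear system first with a direct solver and then with Jacobi iteration. The matrix and right hand side are arbitrary illustrative values, not taken from any example in this paper.

```python
# A minimal sketch comparing a direct solve with Jacobi iteration on a small,
# diagonally dominant system Ax = b (illustrative values only).
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

# Direct method: Gaussian elimination (via LAPACK) reaches the answer in a
# fixed, finite number of operations.
x_direct = np.linalg.solve(A, b)

# Iterative method: Jacobi iteration refines an initial guess until a
# convergence criterion is met.
x = np.zeros_like(b)
D = np.diag(A)              # diagonal entries of A
R = A - np.diagflat(D)      # off-diagonal part of A
for _ in range(100):
    x_new = (b - R @ x) / D
    if np.max(np.abs(x_new - x)) < 1e-10:
        break
    x = x_new

print(x_direct, x_new)      # both approximate the same solution
```

For a strictly diagonally dominant matrix such as this one, the Jacobi iterates converge to the same solution that the direct solver produces in a single pass.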

2.2 Discretization: In applications, continuous problems must sometimes be replaced by a discrete problem
whose solution is known to approximate that of the continuous problem; this process is called
discretization. For example, the solution of a differential equation is a function. This function must be
represented by a finite amount of data, for instance by its values at a finite number of points of its domain,
even though the domain is a continuum.
2.3 The generation and propagation of errors: The study of errors forms an important part of
numerical analysis. There are several ways in which error can be introduced in the solution of the
problem.
a) Round-off: Round-off errors arise because it is impossible to represent all real numbers
exactly on finite-state machines, i.e. digital computers.
b) Truncation and discretization error: Truncation errors are introduced when an iterative method
is terminated or a mathematical procedure is approximated, and the approximate solution differs
from the exact solution. Similarly, discretization induces a discretization error because the
solution of the discrete problem does not coincide with the solution of the continuous problem.

2.4 Application of Numerical Methods:

a) Evaluation of a function: One of the simplest problems is the evaluation of a function at a given
point. The most straightforward approach, just plugging the number into the formula, is
sometimes not very efficient. Generally, it is important to estimate and control round-off errors
arising from the use of floating point arithmetic.

b) Interpolation, extrapolation, and regression: Interpolation solves the following problem: given
the value of some unknown function at a number of points, find the value that the function has
at some other point between the given points. A very simple method is to use linear interpolation,
which assumes that the unknown function is linear between every pair of successive points. This
can be generalized to polynomial interpolation, which is sometimes more accurate. Other
interpolation methods use localized functions like splines or wavelets.
Extrapolation is very similar to interpolation, except that now we want to find the value of the
unknown function at a point which is outside the given points.
Regression is also similar, but it takes into account that the data are imprecise. Given some points,
and measurements of the value of some function at these points (with an error), we want to
determine the unknown function. The least-squares method is one popular way to achieve this, as in the sketch below.
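
As a small illustration of regression, the sketch below fits a straight line to made-up noisy measurements by least squares; the sample values are purely illustrative.

```python
# A small illustration of least-squares regression: fitting a straight line
# to made-up noisy measurements.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # roughly y = 2x + 1 with noise

slope, intercept = np.polyfit(x, y, 1)    # least-squares fit of a degree-1 polynomial
print(slope, intercept)                   # approximately 2.0 and 1.0
```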
c) Solving equations and systems of equations: Another fundamental problem is computing the
solution of some given equation. Two cases are commonly distinguished, depending on whether
the equation is linear or not.
Much effort has been put into the development of methods for solving systems of linear equations.
Standard direct methods, i.e. methods that use some matrix decomposition, are Gaussian
elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian)
positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such
as the Jacobi method, the Gauss-Seidel method, successive over-relaxation and the conjugate gradient
method are usually preferred for large systems.
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a
function is an argument for which the function yields zero). If the function is differentiable and
the derivative is known, then Newton's method is a popular choice. Linearization is another
technique for solving nonlinear equations.
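
A minimal sketch of Newton's method, just mentioned, is given below for a single nonlinear equation; the test function and starting guess are arbitrary illustrative choices.

```python
# A minimal sketch of Newton's method for a single equation f(x) = 0;
# the test function and starting point are arbitrary examples.
def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)   # Newton correction f(x)/f'(x)
        x -= step
        if abs(step) < tol:     # stop when the correction is negligible
            break
    return x

# Example: x**2 - 2 = 0, i.e. computing sqrt(2) starting from x0 = 1.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))
```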
d) Optimization: Optimization problems ask for the point at which a given function is maximized
(or minimized). Often, the point also has to satisfy some constraints. The field of optimization is
further split into several subfields, depending on the form of the objective function and the
constraints. For instance, linear programming deals with the case in which both the objective function
and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints
to unconstrained optimization problems.

e) Evaluating integrals: Numerical integration, in some instances also known as numerical
quadrature, asks for the value of a definite integral. Popular methods use one of the Newton-Cotes
formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods
rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken
down into integrals on smaller sets. In higher dimensions, where these methods become
prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-
Monte Carlo methods.
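
As an illustration, the following sketch implements the composite Simpson's rule (one of the Newton-Cotes formulas) and applies it to a simple test integral; the integrand is an arbitrary example.

```python
# A sketch of the composite Simpson's rule; n (the number of subintervals)
# must be even, and the integrand here is a toy example.
import math

def simpson(f, a, b, n=100):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)   # Simpson weights 4, 2, 4, 2, ...
    return s * h / 3.0

# Example: the integral of sin(x) over [0, pi] is exactly 2.
print(simpson(math.sin, 0.0, math.pi))
```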

f) Differential equations: Numerical analysis is also concerned with computing (in an
approximate way) the solution of differential equations, both ordinary differential equations and
partial differential equations. Partial differential equations are solved by first discretizing the
equation, bringing it into a finite-dimensional subspace. This can be done by a finite element
method, a finite difference method, or a finite volume method.

3.0 Partial Differential Equations
A linear second order partial differential equation is an equation of the form

A u_xx + 2B u_xy + C u_yy + D u_x + E u_y + F u + G = 0

where A, B, C, D, E, F and G are functions of x and y or are real constants.
The partial differential equation is called an

Elliptic equation if B^2 - AC < 0
Parabolic equation if B^2 - AC = 0
Hyperbolic equation if B^2 - AC > 0

The simplest examples of the above equations are the following (Iyenger & Jain, 2009):


Parabolic equation:   u_t = c^2 u_xx     (one dimensional heat equation)
Hyperbolic equation:  u_tt = c^2 u_xx    (one dimensional wave equation)
Elliptic equation:    u_xx + u_yy = 0    (two dimensional Laplace equation)



Classification of these equations is important since the classification governs the number and type of
conditions that should be prescribed in order that the problem has a unique solution (Iyenger & Jain,
2009). For example:

a) For the solution of the one dimensional heat equation, an initial condition u(x, 0) = f(x) must be
prescribed, together with conditions along the boundary lines x = 0 and x = l (boundary conditions),
where l is the length of the rod.

b) The one dimensional wave equation represents the vibrations of an elastic string of length l. Here,
u(x, t) represents the displacement of the string in the vertical plane. For the solution of this
equation, we require two initial conditions, the initial displacement u(x, 0) = f(x) and the initial
velocity u_t(x, 0) = g(x), together with the conditions along the boundary lines x = 0 and x = l (boundary
conditions).

c) For the solution of Laplace's equation we require the boundary conditions to be prescribed on
the bounding curve. An elliptic equation together with its boundary conditions is called an elliptic
boundary value problem. The boundary value problem holds in a closed domain or in an open
domain which can be conformally mapped onto a closed domain. For example, Laplace's
equation may be solved inside, say, a rectangle, a square or a circle. Both the hyperbolic and
parabolic equations together with their initial and boundary conditions are called initial value
problems. Sometimes, they are also called initial-boundary value problems. The initial value
problem holds in either an open or a semi-open domain. For example, in the case of the one
dimensional heat equation x varies from 0 to l and t > 0. In the case of the one dimensional wave
equation, x varies from 0 to l and t > 0.

Consider the partial differential equation 3u_xx = 2u_x + u_y. In order to classify it, the equation is written in
the form 3u_xx - 2u_x - u_y = 0. For this equation, A = 3, B = 0, C = 0 and therefore B^2 - AC = 0; the
equation is a parabolic equation.

In this paper, the solution of the following boundary value and initial-boundary value problems, governed
by partial differential equations, is discussed:

(a) Laplace's equation: ∇^2 u = u_xx + u_yy = 0, with u(x, y) prescribed on the boundary, that is, u(x,
y) = f(x, y) on the boundary. In this problem, the boundary conditions are called Dirichlet
boundary conditions and the boundary value problem is called a Dirichlet boundary value
problem.

(b) Heat conduction equation: u_t = c^2 u_xx, with u(x, t) given as an initial value and on the boundary.





4.0 Finite Difference Method
In the finite difference method, the derivatives ∂u/∂t, ∂u/∂x and ∂^2u/∂x^2, which appear in partial
differential equations and the boundary conditions, are replaced by difference quotients. This
discretisation transforms the differential equation into a finite difference equation whose solution
approximates the solution of the differential equation at discrete points which form a grid in space and
time. A reduction in the mesh size increases the number of grid points and therefore the accuracy of the
approximation, although this does of course increase the computation demands. Applying a finite
difference method one has therefore to make a compromise between accuracy and computation time
(Baehr & Stephen, 2006).

A two dimensional domain (x, y) ∈ R is superimposed with a rectangular network or mesh of lines with
step lengths h and k respectively, parallel to the x- and y-axes. The points of intersection of the mesh lines
are called nodes or grid points or mesh points (Figure 1). The grid points are given by (x_i, y_j) and the mesh
lines are defined by

x_i = ih, i = 0, 1, 2, ...;    y_j = jk, j = 0, 1, 2, ...

If h = k, then we have a uniform mesh.


Figure 1: Nodes in Finite Difference Mesh


At the nodes, the partial derivatives in the differential equation are replaced by suitable difference
approximations. That is, the partial differential equation is approximated by a difference equation at each
nodal point. This procedure is called discretisation of the partial differential equation. The following central
difference approximations may be used (Iyenger & Jain, 2009, p 83):

(u_x)_{i,j} = (1/(2h)) (u_{i+1,j} - u_{i-1,j})

(u_y)_{i,j} = (1/(2k)) (u_{i,j+1} - u_{i,j-1})

(u_xx)_{i,j} = (1/h^2) (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})

(u_yy)_{i,j} = (1/k^2) (u_{i,j+1} - 2u_{i,j} + u_{i,j-1})
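
A quick numerical check of the third of these formulas is sketched below; the test function sin(x) and the step length are arbitrary illustrative choices.

```python
# A quick numerical check that the central difference formula for u_xx
# approximates the exact second derivative of a smooth test function.
import math

h = 0.01
x = 0.3
u = math.sin                                       # test function
approx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
exact = -math.sin(x)                               # exact second derivative of sin
print(approx, exact)                               # agree to about O(h**2)
```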



5.0 Solution of Laplace's Equation
Laplace's equation (∇^2 u = u_xx + u_yy = 0) is solved numerically by inserting the central difference
approximations, which gives the following form of the equation:

u_xx + u_yy = (1/h^2) (u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) + (1/k^2) (u_{i,j+1} - 2u_{i,j} + u_{i,j-1}) = 0

(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) + p^2 (u_{i,j+1} - 2u_{i,j} + u_{i,j-1}) = 0,   where p = h/k

If h = k, that is, p = 1 (called uniform mesh spacing), we obtain the difference approximation as

(u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) + (u_{i,j+1} - 2u_{i,j} + u_{i,j-1}) = 0

u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j} = 0

u_{i,j} = (1/4) (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1})
This approximation is called the standard five point formula: u_{i,j} is obtained as the mean of the values at
the four neighbouring points in the x and y directions. The nodes in the mesh are numbered in an orderly
way (Figure 3).



Figure 2: Standard Five Point Formula
Figure 3: Numbering of nodes


The difference approximation to the Laplace equation, u_xx + u_yy = ∇^2 u = 0, that is,

u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j} = 0,

is applied at all the nodes, and the boundary conditions are used to simplify the equations. The resulting
system is a linear system of algebraic equations Au = d. The system of equations that arises when we use
the nine nodes shown in Figure 3 is as follows:

At node 1:  u_2 + u_4 - 4u_1 = b_1,                 or  -4u_1 + u_2 + u_4 = b_1
At node 2:  u_3 + u_1 + u_5 - 4u_2 = b_2,           or  u_1 - 4u_2 + u_3 + u_5 = b_2
At node 3:  u_2 + u_6 - 4u_3 = b_3,                 or  u_2 - 4u_3 + u_6 = b_3
At node 4:  u_5 + u_7 + u_1 - 4u_4 = b_4,           or  u_1 - 4u_4 + u_5 + u_7 = b_4
At node 5:  u_6 + u_4 + u_8 + u_2 - 4u_5 = b_5,     or  u_2 + u_4 - 4u_5 + u_6 + u_8 = b_5
At node 6:  u_5 + u_9 + u_3 - 4u_6 = b_6,           or  u_3 + u_5 - 4u_6 + u_9 = b_6
At node 7:  u_8 + u_4 - 4u_7 = b_7,                 or  u_4 - 4u_7 + u_8 = b_7
At node 8:  u_9 + u_7 + u_5 - 4u_8 = b_8,           or  u_5 + u_7 - 4u_8 + u_9 = b_8
At node 9:  u_8 + u_6 - 4u_9 = b_9,                 or  u_6 + u_8 - 4u_9 = b_9



The following linear algebraic system of equations is obtained from the above:

| -4   1   0   1   0   0   0   0   0 |  | u_1 |     | b_1 |
|  1  -4   1   0   1   0   0   0   0 |  | u_2 |     | b_2 |
|  0   1  -4   0   0   1   0   0   0 |  | u_3 |     | b_3 |
|  1   0   0  -4   1   0   1   0   0 |  | u_4 |     | b_4 |
|  0   1   0   1  -4   1   0   1   0 |  | u_5 |  =  | b_5 |
|  0   0   1   0   1  -4   0   0   1 |  | u_6 |     | b_6 |
|  0   0   0   1   0   0  -4   1   0 |  | u_7 |     | b_7 |
|  0   0   0   0   1   0   1  -4   1 |  | u_8 |     | b_8 |
|  0   0   0   0   0   1   0   1  -4 |  | u_9 |     | b_9 |



The resultant matrix system is a band matrix system. The half band width is the number of nodal points
on each mesh line, that is, 3. Therefore, the total band width of the matrix is 3 + 3 + 1 = 7, that is, all the
non-zero elements are located in this band. In the general case, for a large n × n mesh (n unknowns on
each mesh line), the half band width is n and the total band width is n + n + 1 = 2n + 1. All the elements
on the leading diagonal are non-zero and are equal to -4.

Except in the case of the equations corresponding to the nodal points near the boundaries, all the elements
on the first super-diagonal and the first sub-diagonal are non-zero and equal to 1. The remaining two
non-zero elements (which equal 1) corresponding to each equation are located in the band. For equations
corresponding to the nodal points near the boundary, the number of non-zero elements is less than 5. At
the corner points (u_1, u_3, u_7, u_9 in the above example), the number of non-zero elements is 3, and at the
other points near the boundaries (u_2, u_4, u_6, u_8 in the above example), the number of non-zero elements
is 4. The remaining elements in the matrix are all zero. This property holds in all Dirichlet boundary value
problems for Laplace's equation. A short sketch that assembles this matrix for a general mesh is given below.
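
The band structure described above can be seen directly by assembling the coefficient matrix for an n × n mesh of interior nodes. The sketch below uses a Kronecker-product construction; it is an illustrative construction only, and the case n = 3 reproduces the 9 × 9 matrix shown above.

```python
# An illustrative assembly of the five-point Laplacian matrix for an n x n
# mesh of interior nodes; n = 3 reproduces the 9 x 9 matrix shown above.
import numpy as np

n = 3
I = np.eye(n)
# T is the n x n tri-diagonal block with -4 on the diagonal and 1 beside it.
T = -4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
# Block tri-diagonal assembly: T blocks on the diagonal, identity blocks beside them.
A = (np.kron(I, T)
     + np.kron(np.diag(np.ones(n - 1), 1), I)
     + np.kron(np.diag(np.ones(n - 1), -1), I))

print(A.astype(int))

# The half band width equals n: no non-zero entry lies more than n columns
# away from the main diagonal.
rows, cols = np.nonzero(A)
print(np.max(np.abs(rows - cols)))   # prints 3 when n = 3
```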

The solution of the system of equations can be obtained by direct or iterative methods. We consider
the solution of the system of equations Au = d by Gauss elimination (a direct method) and by an iterative
method called Liebmann iteration, which is the application of the Gauss-Seidel iterative method. When the
order of the system of equations is not large, say about 50 equations, we use direct methods.

Direct methods require loading all the elements of the coefficient matrix and the right hand side
vector into the memory of the computer, which may not be possible if the system is large. When the order
of the system is large, which is the case in most practical problems, we use iterative methods. In fact,
in many problems, we encounter thousands of equations. Iterative methods do not require loading all the
elements of the coefficient matrix and the right hand side vector at once; only the information for a few
equations at a time needs to be held in memory (Iyenger & Jain, 2009). A minimal sketch of Liebmann
iteration follows.
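
The sketch below applies Liebmann (Gauss-Seidel) iteration to the Dirichlet problem on a uniform square mesh. The mesh size, boundary values and convergence tolerance are illustrative assumptions, not data from the paper.

```python
# A minimal sketch of Liebmann (Gauss-Seidel) iteration for Laplace's equation
# with Dirichlet conditions on a uniform square mesh (illustrative data).
import numpy as np

n = 5                                   # interior nodes per direction
u = np.zeros((n + 2, n + 2))            # array includes the boundary nodes
u[0, :] = 100.0                         # example boundary value on one edge

for sweep in range(1000):
    max_change = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            new = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
            max_change = max(max_change, abs(new - u[i, j]))
            u[i, j] = new               # updated values are reused at once (Gauss-Seidel)
    if max_change < 1e-6:               # convergence criterion
        break

print(sweep, u[1:-1, 1:-1].round(3))    # number of sweeps and the interior solution
```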

6.0 Solution of Heat Conduction Equation
Consider a thin homogeneous, insulated bar or a wire of length l. Let the bar be located on the x-axis on
the interval [0, l], and let it have a source of heat; for example, the rod may be heated at one end or at
the middle point. Let u(x, t) denote the temperature in the rod at any instant of time t. The problem is to
study the flow of heat in the rod. The partial differential equation governing the flow of heat in the rod is
given by the parabolic equation:

u_t = c^2 u_xx,   0 < x < l,  t > 0

Figure 4: Heat conduction in one dimension


where c^2 is a constant that depends on the material properties of the rod. In order that the solution of the
problem exists and is unique, we need to prescribe the following conditions:

(i) Initial condition: at time t = 0, the temperature is prescribed, u(x, 0) = f(x), 0 ≤ x ≤ l.
(ii) Boundary conditions: since the bar is of length l, boundary conditions at x = 0 and at x = l are
required. These conditions are of the following types:

(a) The temperatures at the ends of the bar are given: u(0, t) = g(t) and u(l, t) = h(t), t > 0.
(b) One end of the bar, say x = 0, is insulated. This implies the condition u_x = 0 at x = 0 for all times.

At the other end, the temperature may be given as, u(l, t) = h(t), t > 0. Since both initial and boundary
conditions are prescribed, the problem is also called an initial boundary value problem.

A rectangular network of mesh lines is superimposed on the problem domain 0 ≤ x ≤ l, t > 0. Let the
interval [0, l] be divided into M equal parts. Then, the mesh length along the x-axis is h = l/M. The points
along the x-axis are x_i = ih, i = 0, 1, 2, ..., M.


Let the mesh length along the t-axis be k, and define t_j = jk. The mesh points are (x_i, t_j), and t_j is called
the jth time level. At any point (x_i, t_j), the numerical solution is denoted by u_{i,j} and the exact solution
by u(x_i, t_j).


Figure 5: Mesh in heat conduction problem


Finite difference methods for heat transfer problems are classified into two categories: explicit methods
and implicit methods (Baehr & Stephen, 2006).

6.1 Explicit methods: In explicit methods, the solution at each nodal point on the current time level is
obtained by simple computations (additions, subtractions, multiplications and divisions) using the
solutions at the previous one or more levels. In implicit methods, we solve a linear system of algebraic
equations for all the unknowns on any mesh line t = t_{j+1}. When a method uses the nodal values on two
time levels t_j and t_{j+1}, it is called a two level formula. When a method uses the nodal values on three
time levels t_{j-1}, t_j and t_{j+1}, it is called a three level formula.

Using forward differences (Iyenger & Jain, 2009, p 80), u_t is approximated as:

(u_t)_{i,j} = (1/k) (u_{i,j+1} - u_{i,j})


Using central differences (Iyenger & Jain, 2009, p 83), the values of u_x and u_xx at the node (i, j) are approximated as:

(u_x)_{i,j} = (1/(2h)) (u_{i+1,j} - u_{i-1,j})

(u_xx)_{i,j} = (1/h^2) (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})

The approximation to the heat diffusion equation can then be written as:

(1/k) (u_{i,j+1} - u_{i,j}) = c^2 [ (1/h^2) (u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) ]

Re-arranging for u_{i,j+1}:

u_{i,j+1} = u_{i,j} + λ (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})

u_{i,j+1} = λ u_{i-1,j} + (1 - 2λ) u_{i,j} + λ u_{i+1,j}

where λ = c^2 k / h^2 (λ is called the mesh ratio parameter).

The value u_{i,j+1} at the node (x_i, t_{j+1}) is obtained explicitly using the values on the previous time
level t_j. This method is called the Schmidt method. It is a two level method.

Figure 6: Schmidt method

The initial condition u(x, 0) = f(x) gives the solution at all the nodal points on the initial line (level 0). The
boundary conditions u(0, t) = g(t), u(l, t) = h(t), t > 0 give the solutions at all the nodal points on the
boundary lines x = 0 and x = l, for all time levels.

For computation by the Schmidt method, values for λ and h are chosen. This gives the value of the time step
length k. Alternately, we may choose the values for h and k. The solutions at all nodal points (called
interior points) on level 1 are obtained using the explicit method. The computations are repeated for the
required number of steps. If we perform m steps of computation, then we have computed the solutions up
to time t_m = mk. A minimal sketch of this procedure is given below.
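
The sketch below marches the explicit formula derived above forward in time. The rod length, initial profile, boundary values and mesh ratio are illustrative assumptions.

```python
# A minimal sketch of the Schmidt explicit method; rod length, initial profile,
# boundary values and mesh ratio are illustrative assumptions.
import numpy as np

l, c2 = 1.0, 1.0                 # rod length and c**2 (assumed)
M = 10                           # number of subintervals in x
h = l / M
lam = 0.4                        # mesh ratio, kept at or below 0.5 for stability
k = lam * h**2 / c2              # resulting time step length

u = np.zeros(M + 1)              # initial condition u(x, 0) = 0 (example)
u[-1] = 1.0                      # boundary condition u(l, t) = 1 (example)

for step in range(50):           # march forward in time
    u_new = u.copy()             # boundary values carried over unchanged
    u_new[1:-1] = lam * u[:-2] + (1.0 - 2.0 * lam) * u[1:-1] + lam * u[2:]
    u = u_new

print(u.round(4))                # temperature profile after 50 time steps
```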

6.2 Implicit methods: Explicit methods have the disadvantage that they impose a stability condition on
the mesh ratio parameter λ. As shown in the stability analysis of Section 7.0, the Schmidt method is stable
only for λ ≤ 0.5. This condition severely restricts the values that can be used for the step lengths h and k.
In most practical problems, where the computation is to be done up to a large value of t, these methods are
not useful because the time taken is too high. In such cases, implicit methods are used.

The most popular and useful implicit method is the Crank-Nicolson method. There are a number of ways of
deriving this method; the following is one of the simpler derivations.

Denoting by ∇_t the backward difference operator in the time direction (Iyenger & Jain, 2009, p 81), we obtain the relation:

k (∂u/∂t) = -log(1 - ∇_t) u = [ ∇_t + (1/2) ∇_t^2 + (1/3) ∇_t^3 + ... ] u

Ignoring ∇_t^3 and higher order terms and summing the resulting series:

k (∂u/∂t) = [ ∇_t + (1/2) ∇_t^2 ] u = [ ∇_t / (1 - (1/2) ∇_t) ] u = ∇_t (1 - (1/2) ∇_t)^{-1} u = ∇_t [ 1 + (1/2) ∇_t + (1/4) ∇_t^2 + ... ] u


Applying this approximation to the heat diffusion equation at the nodal point (i, j+1):

k (∂u/∂t)_{i,j+1} = c^2 k (∂^2 u/∂x^2)_{i,j+1}

[ ∇_t / (1 - (1/2) ∇_t) ] u_{i,j+1} = c^2 k [ (1/h^2) δ_x^2 u_{i,j+1} ] = λ δ_x^2 u_{i,j+1}

∇_t u_{i,j+1} = λ (1 - (1/2) ∇_t) δ_x^2 u_{i,j+1}

∇_t u_{i,j+1} = λ ( δ_x^2 u_{i,j+1} - (1/2) ∇_t δ_x^2 u_{i,j+1} )

u_{i,j+1} - u_{i,j} = λ [ δ_x^2 u_{i,j+1} - (1/2) δ_x^2 (u_{i,j+1} - u_{i,j}) ]

u_{i,j+1} - u_{i,j} = λ [ δ_x^2 u_{i,j+1} - (1/2) δ_x^2 u_{i,j+1} + (1/2) δ_x^2 u_{i,j} ]

u_{i,j+1} - u_{i,j} = λ [ (1/2) δ_x^2 u_{i,j+1} + (1/2) δ_x^2 u_{i,j} ]

u_{i,j+1} - (λ/2) δ_x^2 u_{i,j+1} = (λ/2) δ_x^2 u_{i,j} + u_{i,j}

u_{i,j+1} - (λ/2) (u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1}) = (λ/2) (u_{i+1,j} - 2u_{i,j} + u_{i-1,j}) + u_{i,j}

u_{i,j+1} - (λ/2) u_{i+1,j+1} + λ u_{i,j+1} - (λ/2) u_{i-1,j+1} = (λ/2) u_{i+1,j} - λ u_{i,j} + (λ/2) u_{i-1,j} + u_{i,j}

-(λ/2) u_{i+1,j+1} + (1 + λ) u_{i,j+1} - (λ/2) u_{i-1,j+1} = (λ/2) u_{i+1,j} + (1 - λ) u_{i,j} + (λ/2) u_{i-1,j}

where λ = c^2 k / h^2 and δ_x^2 u_{i,j} = u_{i+1,j} - 2u_{i,j} + u_{i-1,j}.



Figure 7: Crank-Nicolson Method


Numerical Solution of Linear Second Order Partial Differential Equations
15

From the right hand side of the Crank-Nicolson equation, it is evident that it is the mean of the central
difference approximations, δ_x^2 u, to the right hand side of the differential equation on the levels j and j +
1. This concept of taking the mean of the central difference approximations to the right hand side of a
given differential equation is often generalized to more complicated differential equations (Iyenger &
Jain, 2009).

The initial condition u(x, 0) = f(x) gives the solution at all the nodal points on the initial line (level 0). The
boundary conditions u(0, t) = g(t), u(l, t) = h(t), t > 0 give the solutions at all the nodal points on the lines
x = 0 and x = l for all time levels.

For computation by the Crank-Nicolson method, values for λ and h are chosen. This gives the value of the
time step length k. Alternately, we may choose the values for h and k. The difference equations at all
nodal points on the first time level are written. This system of equations is solved to obtain the values at
all the nodal points on this time level. The computations are repeated for the required number of steps. If
we perform m steps of computation, then we have computed the solutions up to time t_m = mk.

The system of equations that is obtained when we apply the Crank-Nicolson method is a tri-diagonal system
of equations: each equation couples only the three consecutive unknowns u_{i-1,j+1}, u_{i,j+1} and u_{i+1,j+1}
on the current time level. This is the advantage of the method, since tri-diagonal systems can be solved very
efficiently; a sketch of such a solver is given below.
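
A hedged sketch of the Thomas algorithm, the standard elimination procedure for tri-diagonal systems, is given below; the function name and argument layout are illustrative choices. The example call solves the 3 × 3 system that arises in the worked example of Section 6.3 below.

```python
# A sketch of the Thomas algorithm for tri-diagonal systems; a, b and c are
# the sub-, main and super-diagonals, d is the right hand side
# (a[0] and c[-1] are unused).
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the system 4u1 - u2 = 0, -u1 + 4u2 - u3 = 0, -u2 + 4u3 = 0.0625
# that arises in the worked example of Section 6.3 below.
print(thomas([0.0, -1.0, -1.0], [4.0, 4.0, 4.0], [-1.0, -1.0, 0.0], [0.0, 0.0, 0.0625]))
```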


6.3 Example Problem: Solution of the heat equation by the Crank-Nicolson method: Consider the heat
diffusion equation u_xx = u_t subject to the initial and boundary conditions u(x, 0) = 0, u(0, t) = 0
and u(1, t) = t. The equation is solved by the Crank-Nicolson method for one time step.

Assuming a step length h = 0.25 and λ = 1, the time step is k = λh^2 = 0.0625.

Figure 8: Crank-Nicolson mesh for example problem

The Crank-Nicolson method gives (for λ = 1):

-(1/2) u_{i+1,j+1} + 2 u_{i,j+1} - (1/2) u_{i-1,j+1} = (1/2) u_{i+1,j} + (1/2) u_{i-1,j}

-u_{i+1,j+1} + 4 u_{i,j+1} - u_{i-1,j+1} = u_{i+1,j} + u_{i-1,j}

The initial condition u(x, 0) = 0 gives u_{i,0} = 0 for all i.
The boundary conditions u(0, t) = 0 and u(1, t) = t give u_{0,j} = 0 for all j and u_{4,j} = t_j = jk = 0.0625j.

For j = 0, i = 1:   -u_{2,1} + 4u_{1,1} - u_{0,1} = u_{2,0} + u_{0,0} = 0,   i.e.   4u_{1,1} - u_{2,1} = 0
For j = 0, i = 2:   -u_{3,1} + 4u_{2,1} - u_{1,1} = u_{3,0} + u_{1,0} = 0,   i.e.   -u_{1,1} + 4u_{2,1} - u_{3,1} = 0
For j = 0, i = 3:   -u_{4,1} + 4u_{3,1} - u_{2,1} = u_{4,0} + u_{2,0} = 0,   i.e.   -u_{2,1} + 4u_{3,1} = u_{4,1} = 0.0625

The system of equations becomes:

[  4  -1   0 ] [ u_{1,1} ]   [ 0      ]
[ -1   4  -1 ] [ u_{2,1} ] = [ 0      ]
[  0  -1   4 ] [ u_{3,1} ]   [ 0.0625 ]

By Gauss elimination (forward elimination followed by back substitution):

Performing R_1/4:

[ 1  -1/4     0 ] [ u_{1,1} ]   [ 0      ]
[ -1    4    -1 ] [ u_{2,1} ] = [ 0      ]
[  0   -1     4 ] [ u_{3,1} ]   [ 0.0625 ]

Performing R_2 + R_1:

[ 1  -1/4     0 ] [ u_{1,1} ]   [ 0      ]
[ 0  15/4    -1 ] [ u_{2,1} ] = [ 0      ]
[ 0    -1     4 ] [ u_{3,1} ]   [ 0.0625 ]

Performing R_2/(15/4):

[ 1  -1/4     0 ] [ u_{1,1} ]   [ 0      ]
[ 0     1 -4/15 ] [ u_{2,1} ] = [ 0      ]
[ 0    -1     4 ] [ u_{3,1} ]   [ 0.0625 ]

Performing R_3 + R_2:

[ 1  -1/4     0 ] [ u_{1,1} ]   [ 0      ]
[ 0     1 -4/15 ] [ u_{2,1} ] = [ 0      ]
[ 0     0 56/15 ] [ u_{3,1} ]   [ 0.0625 ]

Performing R_3/(56/15):

[ 1  -1/4     0 ] [ u_{1,1} ]   [ 0      ]
[ 0     1 -4/15 ] [ u_{2,1} ] = [ 0      ]
[ 0     0     1 ] [ u_{3,1} ]   [ 0.0167 ]

This gives, by back substitution:

u_{3,1} = 0.0167

u_{2,1} - (4/15) u_{3,1} = 0   ⟹   u_{2,1} = 0.00447

u_{1,1} - (1/4) u_{2,1} = 0   ⟹   u_{1,1} = 0.00112
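
The hand computation above can be cross-checked with a dense linear solver; a minimal verification sketch follows.

```python
# A quick cross-check of the hand computation above with a dense solver.
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
d = np.array([0.0, 0.0, 0.0625])
print(np.linalg.solve(A, d))   # approximately [0.0011, 0.0045, 0.0167]
```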


7.0 Further Analysis: Convergence and Stability
Many finite difference formulae have the undesirable property that small initial and rounding errors
become larger as the calculation proceeds, which in the end produces a false result. This phenomenon is
called (numerical) instability. In contrast a difference formula is stable when the errors become smaller
during the calculation run and therefore their effect on the result declines. Most difference equations are
only conditionally stable, that is they are stable for certain step or mesh sizes.

The explicit difference method has the disadvantage that the time step Δt = k is limited by the stability
condition. Therefore obtaining a temperature profile at a given time usually requires a large number of time steps.

This step size restriction can be avoided by choosing an implicit difference method. It requires a system
of linear equations to be solved for each time step.

Iyenger & Jain (2009) analyzed the Schmidt method and concluded that the method is stable if:

λ = c^2 k / h^2 ≤ 1/2

The Schmidt method is therefore only conditionally stable. The system of equations produced by an implicit
method, by contrast, has a very simple form: it is a tri-diagonal system whose coefficient matrix is occupied
only along the main diagonal and its two neighbouring diagonals.
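
The conditional stability of the Schmidt method can be demonstrated numerically. In the sketch below, an initial spike (an arbitrary test profile containing high-frequency components) decays when λ = 0.4 but grows rapidly when λ = 0.6; the mesh size and step counts are illustrative.

```python
# An illustrative demonstration of the stability condition: the Schmidt update
# is applied to a spiky test profile for two values of the mesh ratio.
import numpy as np

def schmidt_max(lam, steps, M=10):
    u = np.zeros(M + 1)
    u[M // 2] = 1.0                       # spike: contains high-frequency modes
    for _ in range(steps):
        u[1:-1] = lam * u[:-2] + (1.0 - 2.0 * lam) * u[1:-1] + lam * u[2:]
    return np.max(np.abs(u))

print(schmidt_max(0.4, 50))   # lam <= 0.5: the profile decays
print(schmidt_max(0.6, 50))   # lam > 0.5: the profile grows without bound
```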

Implicit methods often have very strong stability properties. Stability analysis of the Crank-Nicolson
method shows that the method is stable for all values of the mesh ratio parameter λ (Iyenger & Jain,
2009). This implies that there is no restriction on the values of the mesh lengths h and k. Depending on
the particular problem being solved, we may use sufficiently large values of the step lengths. Such
methods are called unconditionally stable methods. This is advantageous because an implicit difference
method allows larger time steps to be used; larger time steps, in turn, demand a more accurate
approximation of the derivative with respect to time, which the Crank-Nicolson method provides.

8.0 Conclusion
Most physical problems encountered in science and engineering are governed by complex partial
differential equations. These problems often cannot be solved analytically and can only be solved by
numerical methods. Numerical methods are approximate techniques for solving mathematical problems,
taking into account the extent of possible errors. By their very nature, numerical methods are well suited
to digital computers. With their large data handling capabilities and enormous processing speed, modern
computers can solve a complex system of partial differential equations numerically, and in turn model a
complex engineering phenomenon, in very little time.
The most common numerical methods employed for the solution of partial differential equations are the
finite difference method, the finite element method and the finite volume method. The finite difference
method is the most commonly used for linear second order equations such as Laplace's equation, Poisson's
equation and the heat conduction equation. In the finite difference method, the discretization transforms
the differential equation into a finite difference equation whose solution approximates the solution of the
differential equation at discrete points which form a grid in space and time. A reduction in the mesh size
increases the number of grid points and therefore the accuracy of the approximation, although this of
course increases the computational demands. In applying a finite difference method one therefore has to
make a compromise between accuracy and computation time.
Heat conduction problems commonly encountered in oil refineries, petrochemical plants and similar
facilities are solved by the explicit (Schmidt) method or by the implicit (Crank-Nicolson) method. A
common problem with some finite difference methods is that small initial and rounding errors become
larger as the calculation proceeds, eventually producing a false result; that is, the methods are not stable.
In the explicit difference method, the time step is limited by the stability condition. Therefore obtaining a
temperature profile at a given time usually requires a large number of time steps. This step size restriction
can be avoided by choosing an implicit difference method such as the Crank-Nicolson method.

Stability analysis of the Crank-Nicolson method shows that the method is unconditionally stable and
allows larger time steps to be used; these larger time steps are made possible by its more accurate
approximation of the derivative with respect to time.

Bibliography:
[1] Iyenger, S. R. K., & Jain, R. K. (2009). Numerical Methods. New Delhi: New Age International
(P) Limited, Publishers.

[2] Baehr, Hans Dieter, & Stephen, Karl. (2006). Heat and Mass Transfer (2nd Edition). Berlin:
Springer-Verlag.

[3] Hammerlin, Gunther, & Hoffmann, Karl-Heinz. (1991). Numerical Mathematics. Berlin:
Springer-Verlag.

[4] Harder, Douglas Wilhelm. Numerical Analysis for Engineers. Retrieved from
http://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/

[5] Numerical analysis. (2010, March 23). In Wikipedia, The Free Encyclopedia. Retrieved 16:53,
March 24, 2010, from
http://en.wikipedia.org/w/index.php?title=Numerical_analysis&oldid=351585388
