
A B Tumwesigye CSC2103 2008/2009 1

NUMERICAL METHODS I
A.B. TUMWESIGYE
Contents
1 Introduction 2
1.1 What is Numerical Analysis? . . . . . 4
1.2 Two issues of Numerical Analysis . . . . . 4
1.3 Advantages of Numerical Analysis . . . . . 4
1.4 Important Notes . . . . . 5
1.5 Numerical Errors . . . . . 5
1.5.1 Sources of Errors . . . . . 5
1.5.2 Types of Errors . . . . . 5
2 Numerical Integration 7
2.1 Manual Method . . . . . 8
2.2 Trapezoidal/Trapezium rule . . . . . 8
2.2.1 Composite Trapezoidal Rule . . . . . 8
2.2.2 Text Questions . . . . . 12
2.3 Simpson's rule . . . . . 12
2.3.1 Composite Simpson's rule . . . . . 13
2.3.2 Text Questions . . . . . 16
2.3.3 Program-FORTRAN (an alternative to MAPLE) . . . . . 17
3 Solution to Non-Linear Equations 22
3.1 Successive Substitution . . . . . 22
3.1.1 Background knowledge . . . . . 22
3.1.2 Successive Substitutions . . . . . 22
3.1.3 Convergence criterion . . . . . 26
3.2 Secant Method . . . . . 27
3.2.1 Derivation of the secant method . . . . . 27
3.2.2 Advantages and Disadvantages of the secant method . . . . . 29
3.2.3 Text Questions . . . . . 30
3.3 The Regula Falsi method . . . . . 31
3.3.1 Geometric representation and derivation of the regula falsi algorithm . . . . . 31
3.3.2 Order of convergence of the regula falsi algorithm . . . . . 33
3.3.3 Advantages and disadvantages of the regula falsi algorithm . . . . . 33
3.3.4 Text Questions . . . . . 34
3.4 Bisection Method . . . . . 35
3.4.1 Background . . . . . 35
3.4.2 Implementation . . . . . 35
3.4.3 Limitations . . . . . 35
3.4.4 Explanation of the bisection method . . . . . 35
3.4.5 Advantages of the bisection method . . . . . 36
3.4.6 Disadvantages of the bisection method . . . . . 36
3.4.7 Text Questions . . . . . 37
3.5 Newton-Raphson's method . . . . . 38
3.5.1 Derivation of Newton's method . . . . . 38
3.5.2 Text Questions . . . . . 42
3.5.3 General Newton's algorithm for extracting roots of positive numbers . . . . . 42
3.5.4 Using Newton's general formula for roots in finding reciprocals of numbers . . . . . 44
3.5.5 Some limitations of the Newton-Raphson method . . . . . 45
3.5.6 Text Questions . . . . . 45
4 Interpolation 48
4.1 Review: Linear interpolation . . . . . 48
4.2 Application . . . . . 51
4.3 History . . . . . 52
4.4 Extensions . . . . . 52
4.5 Lagrange interpolation . . . . . 52
4.5.1 Alternatively . . . . . 53
4.5.2 Examples of interpolating polynomials . . . . . 54
4.5.3 Text Questions . . . . . 58
4.5.4 Error analysis in Lagrange's interpolation . . . . . 59
4.5.5 Rounding errors in Lagrange polynomials . . . . . 62
4.5.6 Text Questions . . . . . 65
5 Numerical Differentiation 71
5.1 Why numerical techniques for finding derivatives . . . . . 71
5.2 Analytic definition of a derivative as compared to a numerical definition . . . . . 71
5.3 Forward difference approximation . . . . . 72
5.3.1 Analytical derivation of the forward difference approximation . . . . . 74
5.4 Backward difference approximation . . . . . 74
5.4.1 Analytical derivation of the backward difference approximation . . . . . 76
5.5 The Central difference approximation . . . . . 77
5.5.1 Analytical derivation of the central difference approximation . . . . . 78
5.6 Text Questions . . . . . 79
5.7 Comparison . . . . . 80
5.8 The second derivative approximation . . . . . 80
5.8.1 Error analysis in numerical differentiation . . . . . 82
5.8.2 Text Questions . . . . . 84
6 Ordinary Differential Equations 86
6.1 Different forms of ordinary differential equations . . . . . 87
6.1.1 Initial-value problems . . . . . 87
6.1.2 Single step methods . . . . . 87
6.2 Taylor series method . . . . . 87
6.3 Euler's Method . . . . . 90
6.3.1 Text Questions . . . . . 92
6.4 Runge-Kutta Methods: The Improved Euler method . . . . . 95
6.4.1 Runge-Kutta two-stage method of order two . . . . . 96
6.4.2 Runge-Kutta classical four-stage method of order four . . . . . 100
6.4.3 Text Questions . . . . . 101
7 Sample Questions 107
8 Further Reading 116
Chapter 1
Introduction
Most real mathematical problems do not have analytical solutions. However, they do have real solutions. In order to obtain these solutions we must use other methods, such as graphical representations or numerical analysis. Numerical analysis is the mathematical method that uses numerical approximations to obtain numerical answers to a problem. Numerical analysis also considers the accuracy of an approximation, and when the approximation is good enough. Numerical answers are useful because we build our world with numbers, not with exact analytical solutions such as $e^{\sqrt{27}}$.

The ever-increasing advances in computer technology have enabled many in science and engineering to apply numerical methods to simulate physical phenomena. Numerical methods range from elementary ones, such as finding the root of an equation, integrating a function or solving a linear system of equations, to intensive ones like the finite element method. Intensive methods are often needed for the solution of practical problems, and they often require the systematic application of a range of elementary methods, often thousands or millions of times over. In the development of numerical methods, simplifications need to be made to progress towards a solution: for example, general functions may need to be approximated by polynomials, and computers cannot generally represent numbers exactly anyway. As a result, numerical methods do not usually give the exact answer to a given problem; they can only tend towards a solution, getting closer and closer with each iteration. Numerical methods are generally only useful when they are implemented on a computer using a programming language.

The study of the behaviour of numerical methods is called numerical analysis. This is a mathematical subject that considers the modelling of the error in the processing of numerical methods and the subsequent re-design of methods.

Numerical analysis involves the study of methods of computing numerical data. In many problems this implies producing a sequence of approximations; thus the questions involve the rate of convergence, the accuracy (or even validity) of the answer, and the completeness of the response. (With many problems it is difficult to decide from a program's termination whether other solutions exist.) Since many problems across mathematics can be reduced to linear algebra, this too is studied numerically; here there are significant problems with the amount of time necessary to process the initial data. Numerical solutions to differential equations require the determination not of a few numbers but of an entire function; in particular, convergence must be judged by some global criterion. Other topics include numerical simulation, optimization, graphical analysis, and the development of robust working code.

Numerical linear algebra topics: solutions of linear systems AX = B, eigenvalues and eigenvectors, matrix factorizations. Calculus topics: numerical differentiation and integration, interpolation, solutions of nonlinear equations f(x) = 0. Statistical topics: polynomial approximation, curve fitting.

Further information on the elementary methods can be found in books on numerical methods or books on numerical analysis. Dedicated text books can be found on each of the intensive methods. Details of available books can be accessed through www.science-books.net.
Need help understanding numerical methods?
(1) What is the use of numerical methods in real-life applications?
(2) Need a brief explanation of numerical methods.
(3) Fixed Point Iteration, Linear Interpolation and the Newton-Raphson Method: what are the differences in their uses?
Best Answer
1. Um, everywhere? From a cash machine, to calculating how much of each chemical to put in to produce laundry detergent, to the construction of buildings and bridges.
2. See the explanation given in the opening paragraphs of this chapter: numerical methods give approximate answers, improved iteration by iteration, when exact analytical solutions are impossible or impractical.
3. Visit these sites: http://math.fullerton.edu/mathews/n2003/FixedPointMod.html
http://en.wikipedia.org/wiki/Linear-interpolation
http://mathworld.wolfram.com/NewtonsMethod.html
Other answers
2. Numerical Methods refers to procedures to find approximate solutions when exact solutions cannot be found in a straightforward manner.
3. Linear interpolation assumes that if two points on a graph are given, any point in between them can be found by connecting the original two points by a straight line. Newton-Raphson is a method to find approximate solutions to an equation through an iterative process where each calculated value is used as the starting point for the next calculated value. NRM requires that you can evaluate the first derivative of that equation.
1.1 What is Numerical Analysis?
- It is a way to do highly complicated mathematical problems on a computer.
- It is also a technique widely used by scientists and engineers to solve their problems.
1.2 Two issues of Numerical Analysis:
- How to compute? This corresponds to the algorithmic aspects;
- How accurate is it? This corresponds to the error-analysis aspects.
1.3 Advantages of Numerical Analysis:
- It can obtain numerical answers to problems that have no analytic solution.
- It does NOT need special substitutions and integrations by parts. It needs only the basic mathematical operations (addition, subtraction, multiplication and division), plus making some comparisons.
1.4 Important Notes:
- A numerical analysis solution is always numerical.
- Results from numerical analysis are an approximation.
1.5 Numerical Errors
When we move from an ideal world into the real world, and from the finite to the infinite, errors arise.
1.5.1 Sources of Errors:
- Mathematical problems involve quantities of infinite precision.
- Computers can only handle quantities of finite precision.
- Numerical methods bridge the precision gap by putting the errors under firm control.
1.5.2 Types of Errors:
- Truncation error (finite speed and time)
Truncation error is a consequence of doing only a finite number of steps in a calculation that would require an infinite number of steps to do exactly. A simple example of a calculation that will be affected by truncation error is the evaluation of an infinite sum using the NSum function. The computer certainly isn't going to compute values for all of the terms in an infinite sum. The terms that are left out lead to truncation error.
Truncation Error: The essence of any numerical method is that it is approximate; this usually occurs because of truncation, e.g., $\cos x \approx 1 - \frac{x^2}{2}$, or terminating an infinite sequence of operations after a finite number have been performed.
It is not possible by numerical techniques alone to get an accurate estimate of the size of the truncation error in a result. It is possible for any purely numerical algorithm, including the algorithms used by numerical functions in Mathematica, to produce incorrect results, and to do so without warning. The only way to be certain that results from functions like NIntegrate and NDSolve are correct is to do some independent analysis, possibly including detailed investigation of the algorithm that was used to do the calculation. Such investigations are an important part of the field of numerical analysis.
For example, consider the Taylor expansions
$$e^{x} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
$$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \cdots$$
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots$$
You can see that there are many terms that have been truncated off in each expansion; that is why a truncation error arises.
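The effect of cutting off a series can be seen directly on a computer. The short Python sketch below (Python is used here purely for illustration; the function name exp_taylor is mine) sums the first few terms of the series for $e^x$ and compares each partial sum with the true value:

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of e^x = sum_{k>=0} x^k / k!, truncated after n_terms terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# The discarded tail of the series is exactly the truncation error.
for n in (2, 4, 8, 16):
    approx = exp_taylor(1.0, n)
    print(n, approx, abs(math.e - approx))
```

Doubling the number of terms shrinks the truncation error rapidly, but some error always remains for any finite number of terms.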
- Round-off errors
Numbers can be stored only up to a fixed finite number of digits: additional digits may be rounded or chopped. Rounding error is sometimes characterized by $\epsilon_{machine}$, the largest (positive) number for which the machine (computer) cannot distinguish between $1$ and $1 + \epsilon_{machine}$.
Round-off error, or representation error, is the error associated with the fact that the computer keeps only a finite number of digits in calculations with inexact numbers. Since it is not possible (except in special cases) to represent all of the digits in numbers like $1/3$, $\pi$ or $\sqrt{2}$, computers store only the first few digits in numerical approximations of these numbers. In typical situations, the computer will store only the first 16 decimal digits or the first 53 binary digits. The remaining digits are discarded. The discarded digits lead to errors in the result. One of the more conspicuous symptoms of round-off error is the appearance of tiny non-zero numbers in results that would otherwise be zero.
Although the ability to reduce the effects of round-off error by raising the precision of a calculation is certainly very useful, it is far from a universal solution to all problems with numerical error.
Rounding 1.8625 to three decimal places gives 1.863;
rounding 1.8625 to two decimal places gives 1.86;
rounding 1.8625 to one decimal place gives 1.9.
- Human errors, such as mistakes with computing tools/machines, in the mathematical equation/model, and propagated error.
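Both the machine epsilon and representation error are easy to observe in any double-precision environment. A small Python sketch (illustrative only; the variable name eps is mine):

```python
# Locate the machine epsilon by repeated halving: we stop at the last
# value for which 1 + eps/2 is still distinguishable from 1.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)  # about 2.2e-16 for IEEE double precision

# Representation error: 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly the stored value of 0.3.
print(0.1 + 0.2 == 0.3)
print(0.1 + 0.2 - 0.3)  # a tiny non-zero residue instead of zero
```

The tiny non-zero residue in the last line is exactly the "conspicuous symptom" of round-off error described above.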
Chapter 2
Numerical Integration
There are two main reasons for you to need to do numerical integration: analytical integration may be impossible or infeasible, or you may wish to integrate tabulated data rather than known functions. In this section we outline the main approaches to numerical integration. Which is preferable depends in part on the results required, and in part on the function or data to be integrated. Numerical integration is useful when we cannot find an elementary antiderivative for f(x), or when the function is defined using data obtained from some experiment.

Numerical integration is the numerical approximation of the integral of a function. For a function of one variable, it amounts to finding the area under the graph of the function, that is, finding I where
$$I = \int_a^b f(x)\,dx.$$
Methods generally replace the integral by a weighted sum of n weights and n function evaluations, so that
$$I \approx \sum_{i=1}^{n} W_i\,f(x_i).$$
For a function of two variables it is equivalent to finding an approximation to the volume under the surface. Numerical integration is often also referred to as quadrature, or sometimes cubature for functions of two or more variables. Returning to the one-variable case, numerical integration involves finding the approximation to an integral of a function f(x) through its evaluation at a set of discrete points. There are two distinct approaches to this. Firstly, methods like the trapezium rule or Simpson's rule determine the integral through evaluating f(x) at regularly spaced points. These are generally referred to as Newton-Cotes formulae.
Alternative methods, termed Gaussian quadrature methods, select irregularly placed evaluation points, chosen to determine the integral as accurately as possible with a given number of points.
Gaussian quadrature methods are important as they often lead to very efficient methods. In numerical integration the efficiency of the method relates to the accuracy obtained with respect to the number of evaluations of the function f(x). In intensive methods, such as the boundary element method, integrations may need to be performed millions of times, so the efficiency of the methods sometimes needs to be considered.
In general, care must be taken to match the numerical integration method to the expected nature of the function f(x). For example, it may be known that f(x) is regular. On the other hand, f(x) may be singular or oscillatory and will then need special treatment. Often a special method called a product integration method can be developed for the integration of functions of the form f(x) = w(x)g(x), where w(x) is a pre-set function and g(x) is known to be a relatively nice function.
There are books devoted to numerical integration. Numerical integration is a basic numerical method and the subject is generally covered in books on numerical methods or numerical analysis (see below). Numerical methods for carrying out numerical integration can often be easily programmed and can also be found in general numerical libraries.
2.1 Manual Method
If you were to perform the integration by hand, one approach is to superimpose a grid on a graph of the function to be integrated, and simply count the squares, counting only those covered by 50% or more of the function. Provided the grid is sufficiently fine, a reasonably accurate estimate may be obtained.
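The square-counting idea is easily mechanized. The sketch below (an illustration of the manual method, not part of the original notes; all names are mine) lays an n-by-n grid over the region and counts the cells whose centres lie under the curve, as a stand-in for the 50%-coverage test:

```python
def grid_count_integral(f, a, b, y_max, n_cells=400):
    """Estimate the area under y = f(x) >= 0 on [a, b] by counting grid
    cells whose centre lies below the curve (approximating the manual
    'covered by 50% or more' rule)."""
    dx = (b - a) / n_cells
    dy = y_max / n_cells
    covered = 0
    for i in range(n_cells):
        fx = f(a + (i + 0.5) * dx)      # curve height at this column of cells
        for j in range(n_cells):
            if (j + 0.5) * dy <= fx:
                covered += 1
    return covered * dx * dy

print(grid_count_integral(lambda x: x**2, 0.0, 1.0, 1.0))  # close to 1/3
```

As with the hand method, the finer the grid (larger n_cells), the better the estimate.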
2.2 Trapezoidal/Trapezium rule
2.2.1 Composite Trapezoidal Rule
To derive the rule, we use the following figure.
Figure 2.1: Illustration of Composite Trapezoidal rule.
We divide [a, b] into n equal intervals of width $h = \frac{b-a}{n}$. Let
$$I = \int_a^b f(x)\,dx = \int_a^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \ldots + \int_{x_i}^{x_{i+1}} f(x)\,dx + \ldots + \int_{x_{n-1}}^{x_n} f(x)\,dx.$$
Applying the trapezium equation to each subinterval, we get
$$I = \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + \ldots + 2f_{n-1} + f_n\right] + E_{trunc},$$
where
$$E_{trunc} = -\frac{h^3}{12}\left[f''(c_1) + f''(c_2) + \ldots + f''(c_n)\right],$$
$$|E_{trunc}| \le \frac{M n h^3}{12} = \frac{M(b-a)^3}{12 n^2},$$
provided the second derivative f'' is continuous on [a, b] and |f''(x)| < M for all x in [a, b].
Example 2.2.1 Approximate the integral
$$I = \int_0^1 x^2\,dx$$
using the composite trapezoidal rule with step length h = 0.2.
Solution
Since
$$I = \int_0^1 f(x)\,dx \approx \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + 2f_3 + 2f_4 + f_5\right].$$
Partitioning the interval [0, 1], we get
$$f_0 = f(x_0) = f(0) = 0^2 = 0$$
$$f_1 = f(x_1) = f(0.2) = (0.2)^2 = 0.04$$
$$f_2 = f(x_2) = f(0.4) = (0.4)^2 = 0.16$$
$$f_3 = f(x_3) = f(0.6) = (0.6)^2 = 0.36$$
$$f_4 = f(x_4) = f(0.8) = (0.8)^2 = 0.64$$
$$f_5 = f(x_5) = f(1) = 1^2 = 1$$
Therefore
$$I \approx \frac{0.2}{2}\bigl[0 + 2(0.04 + 0.16 + 0.36 + 0.64) + 1\bigr] = 0.1\bigl[2(1.2) + 1\bigr] = 0.340.$$
The absolute error committed is
$$|0.340 - 0.333\ldots| \approx 0.00667.$$
We have noted that the error obtained is much smaller than that obtained with the pure trapezoidal rule in the previous example. In fact, the smaller the step length h, the smaller the error, i.e. the better the approximation.
The truncation error bound is
$$|E_{trunc}| \le \frac{M n h^3}{12} = \frac{2 \times 5 \times (0.2)^3}{12} \approx 0.00667,$$
since f''(x) = 2, so |f''(x)| \le 2 on [0, 1].
Example 2.2.2
Approximate the integral
$$I = \int_{-2}^{2} e^{-x^2/2}\,dx$$
by the composite trapezoidal rule with h = 1.0. The exact value of the integral I to 4 decimal places is 2.3925.
Solution
Since h = 1.0, then
$$I \approx \frac{1.0}{2}\left(e^{-(-2)^2/2} + 2e^{-(-1)^2/2} + 2e^{-(0)^2/2} + 2e^{-(1)^2/2} + e^{-(2)^2/2}\right)$$
$$\approx 0.5\,(0.13534 + 2 \times 0.60653 + 2 \times 1.00000 + 2 \times 0.60653 + 0.13534) = 2.3484,$$
with an error of 0.0441.
Example 2.2.3
Estimate $\int_1^2 \frac{1}{x+1}\,dx$ using
(i) the Trapezium rule:
$$\frac{h}{2}\left[f_0 + f_1\right] = \frac{1}{2}\left[\frac{1}{2} + \frac{1}{3}\right] \approx 0.4167.$$
(ii) the composite trapezium rule with h = 0.2:
$$\frac{h}{2}\bigl[f(1.0) + 2f(1.2) + 2f(1.4) + 2f(1.6) + 2f(1.8) + f(2.0)\bigr]$$
$$\approx \frac{0.2}{2}\bigl[0.5 + 2(0.4545) + 2(0.4167) + 2(0.3846) + 2(0.3571) + 0.3333\bigr] \approx 0.4059.$$
Example 2.2.4 Evaluate $\int_0^3 (2x + 3)\,dx$ by the Trapezium rule with four intervals (5 ordinates).
$$\frac{h}{2}\bigl[f(0) + 2f(0.75) + 2f(1.5) + 2f(2.25) + f(3.0)\bigr] = \frac{0.75}{2}\bigl[3 + 2(4.5) + 2(6) + 2(7.5) + 9\bigr] = 18.$$
Since the integrand is linear, the trapezium rule gives the exact value here.
Example 2.2.5 Evaluating $\int_1^3 \sin x\,dx$ by the Trapezium rule with 100 intervals gives the answer as 1.5302437923.
But can you think of a way to program this easily?
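One way, sketched in Python purely for illustration (the course tools are MAPLE and FORTRAN; the name trapezium is mine), is a short loop that applies the composite rule directly:

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n equal subintervals of width h."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # end ordinates carry weight 1/2
    for i in range(1, n):
        total += f(a + i * h)     # interior ordinates carry weight 1
    return h * total

print(trapezium(math.sin, 1.0, 3.0, 100))   # about 1.53024
print(math.cos(1.0) - math.cos(3.0))        # exact value, about 1.53029
```

The same three-line loop handles any integrand you can write as a Python function.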
Example 2.2.6 Evaluating $\int_0^3 \sin(x^2)\,dx$ by the Trapezium rule:
n Sum of areas of trapezoids
4 0.43358
8 0.70404
16 0.75723
32 0.76954
64 0.77256
128 0.77331
256 0.77350
512 0.77355
1024 0.77356
2048 0.77356
0.77356 appears to be a reasonable estimate of our integral.
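A table like this is produced by doubling n in a loop until the printed value stops changing. A minimal Python sketch (for illustration only; the names are mine):

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n equal subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Double n until the estimate settles, as in the table for sin(x^2) on [0, 3].
n = 4
while n <= 2048:
    print(n, round(trapezium(lambda x: math.sin(x * x), 0.0, 3.0, n), 5))
    n *= 2
```

Once two successive doublings print the same value (here 0.77356), further refinement gains nothing at this precision.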
2.2.2 Text Questions
1. Compute the approximate value of $\int_0^1 (x^2+1)^{-1}\,dx$ by using the trapezoidal rule with ten subintervals. Then compare with the actual value of the integral. Determine the truncation error bound and compare with the actual error.
2. If the trapezoidal rule is used to compute $\int_2^5 \sin x\,dx$ with h = 0.01, obtain an upper bound on the error.
3. How large must n be if the trapezoidal rule is to estimate $\int_0^2 e^{x^2}\,dx$ with an error not exceeding $10^{-6}$?
4. Consider the integral $\int_0^1 \sin\left(\frac{x^2}{2}\right)dx$. Suppose that we wish to integrate numerically with error $< 10^{-5}$. What interval width h is needed if the trapezoidal rule is to be used?
5. Approximate $\int_1^3 \frac{1}{x}\,dx$ by the trapezoidal rule with an error of at most 0.1.
2.3 Simpson's rule
We can obtain Simpson's rule in various ways. One of the most popular is from Lagrange's quadratic interpolation polynomial. Simpson's rule approximates the area under the curve y = f(x) from $x_0$ to $x_2$ by the area under a parabolic curve. The figure below illustrates fitting a parabola onto the curve y = f(x) from $x_0$ to $x_2$.
Interpolating f(x) by a Lagrange polynomial of degree 2, i.e. $P_2(x)$, then
$$f(x) = P_2(x) + E_{trunc}(x).$$
So
$$\int_{x_0}^{x_2} f(x)\,dx = \int_{x_0}^{x_2} P_2(x)\,dx + \int_{x_0}^{x_2} E_{trunc}(x)\,dx. \quad (2.1)$$
Results
Summing up all the three cases, the first integral on the right of equation (2.1) becomes
$$\int_{x_0}^{x_2} P_2(x)\,dx = \frac{h}{3}\left[f_0 + 4f_1 + f_2\right]. \quad (2.2)$$
Relation (2.2) is Simpson's rule for approximating the integral. The integral of the error in equation (2.1) becomes
$$\int_{x_0}^{x_2} E_{trunc}(x)\,dx = \int_{x_0}^{x_2} \frac{(x-x_0)(x-x_1)(x-x_2)}{3!}\,f'''(c(x))\,dx.$$
This can be shown (with difficulty!) to satisfy
$$\left|\int_{x_0}^{x_2} E_{trunc}(x)\,dx\right| \le \frac{M h^5}{90}, \quad (2.3)$$
provided the fourth derivative $f^{(4)}$ is continuous on [a, b] and $|f^{(4)}(x)| < M$ for all x in [a, b].
Example 2.3.1
Use Simpson's rule to approximate the integral $I = \int_0^1 x^2\,dx$.
Solution
Since
$$I = \int_0^1 f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + f(x_2)\right].$$
But $x_0 = 0$, $x_1 = \frac{1}{2}$, $x_2 = 1$, and with n = 1,
$$h = \frac{b-a}{2n} = \frac{1-0}{(2)(1)} = \frac{1}{2}.$$
Therefore
$$I \approx \frac{1}{6}\left[f(0) + 4f\!\left(\tfrac{1}{2}\right) + f(1)\right] = \frac{1}{6}\left[0^2 + 4\left(\tfrac{1}{2}\right)^2 + 1^2\right] = \frac{1}{3} \approx 0.333.$$
But the exact value of the integral is $\frac{1}{3} \approx 0.333$. It should not surprise you that Simpson's rule has generated the exact value of the integral. In fact, the general result is that for f(x) a polynomial of degree three or less, Simpson's rule will always generate the exact value of the integral. This will later be stated as a theorem.
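That exactness is easy to verify numerically. The sketch below (Python, illustrative only; simpson is my name for the basic one-panel rule) applies the rule to x^2, x^3 and x^4 on [0, 1]:

```python
def simpson(f, a, b):
    """Basic Simpson rule on [a, b]: one parabolic panel, h = (b - a)/2."""
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

print(simpson(lambda x: x**2, 0.0, 1.0))  # 1/3: exact
print(simpson(lambda x: x**3, 0.0, 1.0))  # 1/4: still exact (degree three)
print(simpson(lambda x: x**4, 0.0, 1.0))  # 5/24, not the true value 1/5
```

The rule is exact up to cubics because the cubic term's error cancels by symmetry about the midpoint; it first fails at degree four.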
2.3.1 Composite Simpson's rule
Let's consider Figure (2.2).
Figure 2.2: Illustration of the composite Simpson's rule.
We divide the interval [a, b] into 2n equal intervals of width $h = \frac{b-a}{2n}$. Thus the integral
$$I = \int_a^b f(x)\,dx$$
becomes
$$I = \int_a^b f(x)\,dx = \int_a^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \ldots + \int_{x_{2n-2}}^{x_{2n}} f(x)\,dx$$
$$= \frac{h}{3}\Bigl\{\left[f_0 + 4f_1 + f_2\right] + \left[f_2 + 4f_3 + f_4\right] + \left[f_4 + 4f_5 + f_6\right] + \ldots + \left[f_{2n-2} + 4f_{2n-1} + f_{2n}\right]\Bigr\}$$
$$= \frac{h}{3}\left[f_0 + 4(f_1 + f_3 + \ldots + f_{2n-1}) + 2(f_2 + f_4 + \ldots + f_{2n-2}) + f_{2n}\right] + E_{trunc},$$
where the truncation error is
$$E_{trunc} = -\frac{h^5}{90}\left[f^{(4)}(c_1) + f^{(4)}(c_2) + \ldots + f^{(4)}(c_n)\right]$$
$$= -\frac{h^5}{90}\,f^{(4)}(c)\,n = -\frac{(b-a)h^4}{180}\,f^{(4)}(c_f),$$
where $a \le c_f \le b$.
Example 2.3.2
Use Simpson's rule to compute the integral
$$I = \int_{-2}^{2} e^{-x^2/2}\,dx$$
using step size h = 1.0. Recall the exact value of I to 4 decimal places is 2.3925.
Solution
Using Simpson's rule, we have
$$I \approx \frac{1.0}{3}\left(e^{-(-2)^2/2} + 4e^{-(-1)^2/2} + 2e^{-(0)^2/2} + 4e^{-(1)^2/2} + e^{-(2)^2/2}\right) = 2.3743.$$
The error committed is 0.0182. We note that this error is much smaller than that obtained when using the Trapezoidal rule in the previous lecture, even though the same step size is used.
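The calculation above can be checked with a few lines of Python (given as an illustrative sketch; composite_simpson is my name for the routine):

```python
import math

def composite_simpson(f, a, b, m):
    """Composite Simpson rule over m equal subintervals (m must be even)."""
    if m % 2:
        raise ValueError("m must be even")
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        # Odd-indexed interior points get weight 4, even-indexed get weight 2.
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

approx = composite_simpson(lambda x: math.exp(-x * x / 2), -2.0, 2.0, 4)
print(approx)  # about 2.3743, versus the exact 2.3925
```

With h = 1.0 this is exactly the five-ordinate sum written out above.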
Example 2.3.3
It is required to obtain $\int_0^2 e^{x^2}\,dx$ correct to 4 decimal places. What should h be for Simpson's rule?
Solution
The error term is
$$-\frac{(b-a)}{180}\,h^4\,f^{(4)}(c_f).$$
But $f(x) = e^{x^2}$, therefore
$$f'(x) = 2x e^{x^2},$$
$$f''(x) = 2\left(e^{x^2} + 2x^2 e^{x^2}\right) = 2e^{x^2}(1 + 2x^2) = e^{x^2}(2 + 4x^2),$$
$$f'''(x) = 2x e^{x^2}(2 + 4x^2) + 8x e^{x^2} = e^{x^2}(4x + 8x^3 + 8x) = e^{x^2}(12x + 8x^3),$$
and
$$f^{(4)}(x) = 2x e^{x^2}(12x + 8x^3) + e^{x^2}(12 + 24x^2) = e^{x^2}(16x^4 + 48x^2 + 12) = 4e^{x^2}(4x^4 + 12x^2 + 3) \le 460 e^4 \text{ on } [0, 2].$$
So h must satisfy
$$\frac{2h^4}{180} \times 460 e^4 < 0.5 \times 10^{-4},$$
which gives h < 0.021. Say, choose h = 0.02.
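Whatever step is chosen from the bound, it can be checked numerically. In the sketch below (Python, illustrative only; the names are mine), composite Simpson with h = 0.02, i.e. 100 subintervals, is compared against a much finer run used as a reference; the difference comes out well below the 0.5 x 10^-4 target:

```python
import math

def composite_simpson(f, a, b, m):
    """Composite Simpson rule over m equal subintervals (m even)."""
    h = (b - a) / m
    total = f(a) + f(b)
    for i in range(1, m):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

f = lambda x: math.exp(x * x)
coarse = composite_simpson(f, 0.0, 2.0, 100)   # h = 0.02
fine = composite_simpson(f, 0.0, 2.0, 4000)    # reference value
print(coarse, abs(coarse - fine))
```

Note that the theoretical bound uses the maximum of f^(4) on [0, 2], which is attained only near x = 2, so the actual error is usually much smaller than the bound predicts.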
2.3.2 Text Questions
1. Compute an approximate value of $\int_0^1 (x^2 + 1)^{-1}\,dx$ using the composite Simpson's rule with
(i) h = 0.1,
(ii) h = 0.5.
Compare with the actual value of the integral in each case. Next, determine the truncation error bound and compare with the actual error.
2. If Simpson's rule is used to compute $\int_2^5 \sin x\,dx$ with h = 0.75, obtain an upper bound on the error.
3. Establish the composite Simpson's rule over (n - 1) even subintervals,
$$\int_a^b f(x)\,dx = \frac{h}{3}\Bigl[(f(a) + f(b)) + 4\sum_{i=1}^{(n-1)/2} f(a + (2i-1)h) + 2\sum_{i=1}^{(n-3)/2} f(a + 2ih)\Bigr] + E_{trunc},$$
where $h = \frac{b-a}{n-1}$ and
$$E_{trunc} = -\frac{(b-a)}{180}\,h^4\,f^{(4)}(c)$$
for some $c \in [a, b]$.
4. Consider the integral $\int_0^1 \sin\left(\frac{x^2}{2}\right)dx$. Suppose that we wish to integrate numerically with error $< 10^{-5}$. What interval width h is needed if Simpson's rule is to be used?
5. Compute $\int_0^2 (x^3 + 1)\,dx$ by using $h = \frac{1}{4}$ and compare with the exact value of the integral.
Example 2.3.4 Using Maple, use Simpson's rule with n = 100 to approximate the integral
$$\int_0^1 \frac{1}{1 + x^2}\,dx$$
> with(student);
> simpson(1/(1+x^2), x = 0..1, 100);
$$\frac{1}{200} + \frac{1}{75}\sum_{i=1}^{50} \frac{1}{1 + \left(\frac{1}{50}i - \frac{1}{100}\right)^2} + \frac{1}{150}\sum_{i=1}^{49} \frac{1}{1 + \frac{1}{2500}i^2}$$
> evalf(%);
.7853981634
> evalf(Pi/4);
.7853981635
When trying the trapezium rule instead of Simpson's, you replace simpson with trapezoid in step II.
2.3.3 Program-FORTRAN (an alternative to MAPLE)
Note that this program is written for clarity rather than speed. The number of function evaluations actually computed may be approximately halved for the Trapezium rule, and reduced by one third for Simpson's rule, if the compound formulations are used. Note also that this example is included for illustrative purposes only. No knowledge of Fortran or any other programming language is required in this course.
REAL*8 FUNCTION TrapeziumRule(x0,x1,nx)
C=====parameters
INTEGER*4 nx
REAL*8 x0,x1
C=====functions
REAL*8 f
C=====local variables
INTEGER*4 i
REAL*8 dx,xa,xb,fa,fb,Sum
18 MAK-ICT
dx = (x1 - x0)/DFLOAT(nx)
Sum = 0.0
DO i=0,nx-1
xa = x0 + DFLOAT(i)*dx
xb = x0 + DFLOAT(i+1)*dx
fa = f(xa)
fb = f(xb)
Sum = Sum + fa + fb
ENDDO
Sum = Sum * dx / 2.0
TrapeziumRule = Sum
RETURN
END
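For readers more comfortable with a scripting language, the same routine can be transcribed into Python (an illustrative sketch, not part of the original course material; both function names are mine). The second version is the "compound" formulation mentioned above, which evaluates each interior point only once:

```python
def trapezium_rule(f, x0, x1, nx):
    """Direct transcription of the Fortran TrapeziumRule: evaluates f at
    both ends of every subinterval, i.e. 2*nx function evaluations."""
    dx = (x1 - x0) / nx
    total = 0.0
    for i in range(nx):
        total += f(x0 + i * dx) + f(x0 + (i + 1) * dx)
    return total * dx / 2.0

def trapezium_compound(f, x0, x1, nx):
    """Compound formulation: roughly half the function evaluations."""
    dx = (x1 - x0) / nx
    total = 0.5 * (f(x0) + f(x1)) + sum(f(x0 + i * dx) for i in range(1, nx))
    return total * dx

print(trapezium_rule(lambda x: x * x, 0.0, 1.0, 50))
print(trapezium_compound(lambda x: x * x, 0.0, 1.0, 50))
```

Both versions produce the same number; the compound form simply avoids recomputing f at shared interval endpoints.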
Assignment to Hand in 2.1 Evaluate $\int_1^2 x^2 \cos x\,dx$, where $f(x) = x^2 \cos x$, with $h = \frac{2-1}{6} = \frac{1}{6}$ and $[a, b] = [1, 2]$, using (i) the Trapezium rule, (ii) Simpson's rule.
Example 2.3.5
(a) (i) State one reason to justify numerical techniques of integration. 2 Marks
The analytic integral can be very complicated, and sometimes impossible; for example $\int_0^3 e^{x^2}\,dx$. Also, the function f(x) might not be known explicitly, but may be defined only on some discrete points.
(ii) One of the commonly used numerical methods of integration is Simpson's rule; state the rule. 1 Mark
$$I_S = \frac{h}{3}\left[f_0 + 4(f_1 + f_3 + \ldots + f_{2n-1}) + 2(f_2 + f_4 + \ldots + f_{2n-2}) + f_{2n}\right] + E_{trunc}$$
(b) Derive the Trapezium rule for the integration of a function f(x) between a and b,
$$I_T = \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + \ldots + 2f_{n-1} + f_n\right] + E_{trunc},$$
and state its truncation error. 3 Marks
Figure 2.3: Illustration of Trapezoidal rule.
We divide [a, b] into n equal intervals of width $h = \frac{b-a}{n}$. Let
$$I = \int_a^b f(x)\,dx = \int_a^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \ldots + \int_{x_i}^{x_{i+1}} f(x)\,dx + \ldots + \int_{x_{n-1}}^{x_n} f(x)\,dx.$$
Applying the trapezium equation to each subinterval, we get
$$I = \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + \ldots + 2f_{n-1} + f_n\right] + E_{trunc},$$
where
$$E_{trunc} = -\frac{h^3}{12}\left[f''(c_1) + f''(c_2) + \ldots + f''(c_n)\right],$$
$$|E_{trunc}| \le \frac{M n h^3}{12} = \frac{M(b-a)^3}{12 n^2},$$
provided the second derivative f'' is continuous on [a, b] and |f''(x)| < M for all x in [a, b].
(c) Numerically approximate
$$\int_0^2 \left(2 + \cos(2\sqrt{x})\right)dx$$
by using the Trapezium rule with
(i) $n = 4 \Rightarrow h = \frac{1}{2}$. 4 Marks
$$I \approx \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + \ldots + 2f_{n-1} + f_n\right]$$
$$\approx \frac{1}{4}\Bigl[\bigl(2 + \cos(2\sqrt{0})\bigr) + \bigl(2 + \cos(2\sqrt{2})\bigr) + 2\Bigl[\bigl(2 + \cos(2\sqrt{0.5})\bigr) + \bigl(2 + \cos(2\sqrt{1})\bigr) + \bigl(2 + \cos(2\sqrt{1.5})\bigr)\Bigr]\Bigr]$$
$$= \frac{1}{4}\Bigl[3 + 2 + \cos(2\sqrt{2}) + 2\bigl[(2 + \cos\sqrt{2}) + (2 + \cos 2) + (2 + \cos\sqrt{6})\bigr]\Bigr]$$
$$= \frac{1}{4}\,[13.98841913] \approx 3.4971$$
(ii) $n = 8 \Rightarrow h = \frac{1}{4}$. 4 Marks
$$I \approx \frac{h}{2}\left[f_0 + 2f_1 + 2f_2 + \ldots + 2f_{n-1} + f_n\right]$$
$$= \frac{1}{8}\Bigl[\bigl(2 + \cos(2\sqrt{0})\bigr) + \bigl(2 + \cos(2\sqrt{2})\bigr) + 2\bigl(2 + \cos(2\sqrt{0.25})\bigr) + 2\bigl(2 + \cos(2\sqrt{0.5})\bigr) + 2\bigl(2 + \cos(2\sqrt{0.75})\bigr) + 2\bigl(2 + \cos(2\sqrt{1})\bigr) + 2\bigl(2 + \cos(2\sqrt{1.25})\bigr) + 2\bigl(2 + \cos(2\sqrt{1.5})\bigr) + 2\bigl(2 + \cos(2\sqrt{1.75})\bigr)\Bigr]$$
$$\approx 3.46928$$
(d) Considering your results in part (c) above, state two ways of reducing the error in numerical integration. 2 Marks
(e) (i) Solve the integral in part (c) above using Simpson's rule with n = 4. 4 Marks
$$I = \frac{h}{3}\left[f_0 + 4(f_1 + f_3 + \ldots + f_{2n-1}) + 2(f_2 + f_4 + \ldots + f_{2n-2}) + f_{2n}\right] + E_{trunc}$$
$$\approx \frac{1}{6}\Bigl[\bigl(2 + \cos(2\sqrt{0})\bigr) + \bigl(2 + \cos(2\sqrt{2})\bigr) + 4\Bigl[\bigl(2 + \cos(2\sqrt{0.5})\bigr) + \bigl(2 + \cos(2\sqrt{1.5})\bigr)\Bigr] + 2\bigl(2 + \cos(2\sqrt{1})\bigr)\Bigr]$$
$$= \frac{1}{6}\Bigl[3 + 2 + \cos(2\sqrt{2}) + 4\bigl[(2 + \cos\sqrt{2}) + (2 + \cos\sqrt{6})\bigr] + 2(2 + \cos 2)\Bigr]$$
$$\approx 3.46008250981$$
(ii) Show that, using Simpson's rule with n = 8, the integral in part (c) above is I ≈ 3.460002979.
    I = (h/3)[f_0 + 4(f_1 + f_3 + ... + f_{n-1}) + 2(f_2 + f_4 + ... + f_{n-2}) + f_n] + E_trunc
      ≈ (1/12)[(2 + cos(2√0)) + (2 + cos(2√2)) + 2((2 + cos(2√0.5)) + (2 + cos(2√1)) + (2 + cos(2√1.5))) + 4((2 + cos(2√0.25)) + (2 + cos(2√0.75)) + (2 + cos(2√1.25)) + (2 + cos(2√1.75)))]
      ≈ 3.460002979

[4 Marks]
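The Simpson results in (e) can be verified the same way. A minimal Python sketch of the composite Simpson rule (the function name is our own; n must be even):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule (n even): (h/3)[f0 + 4*(odd nodes) + 2*(even interior nodes) + fn]."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

f = lambda x: 2 + math.cos(2 * math.sqrt(x))
print(round(simpson(f, 0, 2, 4), 5))  # 3.46008
print(round(simpson(f, 0, 2, 8), 6))  # 3.460003
```

Note how doubling n changes Simpson's answer only in the fifth decimal place, against the third for the trapezium rule, in line with part (f) below.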
(f) Which of the two techniques of integration is superior? [1 Mark]
It is Simpson's rule that is superior.
Chapter 3
Solution to Non-Linear Equations
3.1 Successive Substitution
3.1.1 Background knowledge
Successive substitution is one of the iterative techniques for solving nonlinear equations. Iterative techniques start with an initial value/guess x_0 for the root and then, using a suitable recurrence relation, generate a sequence of approximations {x_k}, k = 0, 1, 2, .... If the sequence x_0, x_1, ... converges, then it converges to the required root. Iterative techniques are written in the form

    x_{r+1} = g(x_r),    r = 0, 1, 2, ...,

if the next iterate x_{r+1} depends on the previous one x_r, or

    x_{r+1} = g_r(x_r, x_{r-1}),    r = 1, 2, ...,

if the next iterate depends on the previous two, i.e. x_r and x_{r-1}.
3.1.2 Successive Substitutions
Masenge (1989) called this method successive substitutions. It is sometimes called General Iteration or the fixed-point method. In the method, we seek the roots of

    f(x) = 0.    (3.1)

We try to split f(x) in the form

    f(x) = x - g(x).    (3.2)

This splitting may not be unique, and not all of the different splittings are useful to us. We can determine the type of splitting which is useful to a numerical analyst. Now, instead of solving equation (3.1), we solve x = g(x). The scheme for solving this problem is given by the algorithm

    x_{n+1} = g(x_n),    n = 0, 1, 2, ...
Thus we start with a suitable value x_0 and generate the sequence of approximations

    x_1 = g(x_0)
    x_2 = g(x_1)
    x_3 = g(x_2)
    x_4 = g(x_3)
    ...
    x_{n+1} = g(x_n)
    ...

That is, the sequence is x_1, x_2, ..., x_n, ...
Example 3.1.1
Find the real root of the equation

    x^2 - 2x - 3 = 0

in the interval [2, 4].

Solution

Splitting

    f(x) = x^2 - 2x - 3 = 0

in the form f(x) = x - g(x) = 0, we get the scheme

    x = g(x) = 3/(x - 2),

giving the iterative scheme

    x_{r+1} = 3/(x_r - 2).
Taking the initial approximation x_0 = 4, we get the iterates

    x_1 = 3/(x_0 - 2) = 1.5
    x_2 = 3/(x_1 - 2) = -6
    x_3 = 3/(x_2 - 2) = -0.375
    x_4 = 3/(x_3 - 2) = -1.263158
    x_5 = 3/(x_4 - 2) = -0.919355
    x_6 = 3/(x_5 - 2) = -1.02762
    x_7 = 3/(x_6 - 2) = -0.990876
    ...

According to the behavior of the iterates, there is no hope of convergence in the interval [2, 4]. Hence such a rearrangement is no good. Splitting f(x) = 0 in the form
    x = g_2(x) = (x^2 - 3)/2

gives the iterative scheme

    x_{r+1} = (x_r^2 - 3)/2;

therefore with x_0 = 4 we get

    x_1 = (x_0^2 - 3)/2 = 6.5
    x_2 = (x_1^2 - 3)/2 = 19.625
    x_3 = (x_2^2 - 3)/2 = 191.070
    ...

This shows that the iterates are obviously diverging. Hence such a rearrangement is no good. Splitting f(x) = 0 in the form

    x = g_3(x) = √(2x + 3)

gives the iteration formula

    x_{r+1} = √(2x_r + 3).

Thus with x_0 = 4 we get

    x_1 = √(2x_0 + 3) = 3.31662
    x_2 = √(2x_1 + 3) = 3.10375
    x_3 = √(2x_2 + 3) = 3.03439
    x_4 = √(2x_3 + 3) = 3.01144
    ...

In fact this is an arrangement which gives a sequence of iterates converging to the root. The sequence is converging to the root x = 3.
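The behavior of the three splittings above can be compared with a few lines of code. A minimal Python sketch (the function and variable names are our own choices):

```python
def iterate(g, x0, n=8):
    """Successive substitution: generate n iterates x_{r+1} = g(x_r)."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

g1 = lambda x: 3 / (x - 2)           # wanders away from [2, 4] (toward the other root, -1)
g2 = lambda x: (x * x - 3) / 2       # diverges
g3 = lambda x: (2 * x + 3) ** 0.5    # converges to the root x = 3

print(iterate(g3, 4.0, 4))
```

Running the sketch with g3 shows the iterates 4.0, 3.3166..., 3.1037..., 3.0343..., 3.0114... marching toward 3, while g2 blows up within a few steps.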
Note 3.1.1
One would actually wonder whether you have to keep trying splittings until you get one which converges to the root. In fact, we can test and identify the splitting which gives a convergent sequence of approximations before starting to compute the iterates. This criterion is called the convergence criterion for the iterative scheme of the form

    x_{r+1} = g(x_r).

However, before stating the criterion, let us first formally state what is meant by an iterative scheme x_{r+1} = g(x_r) being convergent.
Definition 3.1.1

An iterative scheme/process x_{r+1} = g(x_r) is convergent if

    lim_{r→∞} |x_{r+1} - x_r| = 0;

otherwise we say that the scheme is divergent.
3.1.3 Convergence criterion
Let the function g(x) be continuous in a small interval [a, b] containing a simple (single) root of the function f(x). Let g(x) also be differentiable in the open interval (a, b). If there exists a real number L such that 0 ≤ |g'(x)| ≤ L < 1 and a ≤ g(x) ≤ b for all x in (a, b), then for an arbitrary starting value x_0 taken from (a, b) the iteration formula x_{r+1} = g(x_r) will converge. The rate of convergence of the iteration will depend on the smallness of the constant L relative to unity.
Example 3.1.2 Consider x^3 - sin x = 0, whose root is α ≈ 0.929.

(i) x = (sin x)^{1/3} = g(x), so g'(x) = (1/3)(sin x)^{-2/3} cos x.
    g'(α) ≈ 0.23, so |g'(α)| < 1 and the iteration is convergent.

(ii) x = (sin x)/x^2 = g(x), so g'(x) = (x^2 cos x - 2x sin x)/x^4.
    g'(α) ≈ -1.3, so |g'(α)| > 1 and the iteration is divergent.

+ Is there a way to choose g(x) so that the iteration will always work?
+ Is there a way to improve the rate of convergence?
Note 3.1.2
Sometimes this theorem is known as the fixed-point theorem: a fixed point is a number x* such that x* = g(x*). Thus a root of f(x) = 0 is a fixed point of the scheme x = g(x).
Theorem 3.1.1 (Fixed point theorem)

If g(x) and g'(x) are continuous in I = (x* - δ, x* + δ), where x* is a fixed point, and if |g'(x)| ≤ k < 1 for all x in I, then x_{r+1} = g(x_r) converges to x* (attractive fixed point). If |g'(x)| > 1 for all x in I, the process does not converge to x* (repulsive fixed point).
Proof

Subtracting x_{r+1} = g(x_r) from x* = g(x*) gives

    x* - x_{n+1} = g(x*) - g(x_n) = g'(c_n)(x* - x_n),    c_n between x* and x_n

(mean value theorem). Writing x* - x_{n+1} = e_{n+1}, etc.,

    |e_{n+1}| = |g'(c_n)| |e_n| ≤ k |e_n|.

If x_0 ∈ I,

    |e_1| = |g'(c_0)| |e_0| ≤ k |e_0|
    |e_2| = |g'(c_1)| |e_1| ≤ k^2 |e_0|
    ⟹ |e_{n+1}| ≤ k^{n+1} |e_0|.

So as n → ∞, k^{n+1} → 0 and |e_{n+1}| → 0, i.e. x_{n+1} → x*.
Example 3.1.3
1. Verify for each of the following that x = 3 is a solution of x = g(x):

   (a) g(x) = 18x/(x^2 + 9)
   (b) g(x) = x^3 - 24
   (c) g(x) = 2x^3/(3x^2 - 9)
   (d) g(x) = 81/(x^2 + 18)

   Starting with x_0 = 3.1, calculate the first few iterations and justify theoretically the apparent behavior.

2. Consider the fixed-point iteration x_k = g(x_{k-1}) for g(x) = 2(x - 1)^{1/2}, for x ≥ 1. Show that only one fixed point exists (at x = 2) and that g'(2) = 1. Compute iterations starting from

   (a) x_0 = 1.5 and
   (b) x_0 = 2.5,

   and show them on a plot of x and g(x).

3. By splitting f(x) = x^3 - x - 1 = 0 in the form f(x) = x - g(x) = 0 for finding the root in [1, 2],

   (i) get three different splittings with their corresponding iterative formulae.
   (ii) Test, using the convergence criterion, which of the splittings lead to a convergent sequence.
   (iii) For the scheme leading to a convergent sequence, start with a suitable initial approximation and find the root correct to 3 decimal places.
3.2 Secant Method
The secant method needs two points near the root before the algorithm can be applied. Thus it is of the form

    x_{r+1} = g(x_r, x_{r-1}).
3.2.1 Derivation of the secant method
We linearly approximate the graph of y = f(x) by a chord passing through the points A and B. The equation of this chord is

    y - f(x_1) = [(f(x_0) - f(x_1))/(x_0 - x_1)] (x - x_1).
The chord cuts the x-axis at x_2; thus, setting y = 0,

    -f(x_1) = [(f(x_0) - f(x_1))/(x_0 - x_1)] (x_2 - x_1),

giving

    x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)),

or in general, x_{n+1} = g(x_n, x_{n-1}):

    x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1})).    (3.3)
Note 3.2.1 Realise that we generated Equation (3.3) by taking the chord through the points at x_0 and x_1. But if we instead take the chord through the points at x_0 and x_2, we generate

    x_{n+1} = x_{n-1} - f(x_{n-1})(x_{n-1} - x_n)/(f(x_{n-1}) - f(x_n)).    (3.4)

The error in the (n+1)th iterate is related to the error in the nth iterate e_n by the relation e_{n+1} ≈ A e_n^k, where k ≈ 1.618... and A is a constant. This relation suggests that the method has order of convergence 1.618.
Example 3.2.1

Use the secant method to find the root near 2 of the equation x^3 - 2x - 5 = 0. Start the iteration with x_0 = 1.9 and x_1 = 2.0.

Solution Recall

    x_{n+1} = x_{n-1} - f(x_{n-1})(x_{n-1} - x_n)/(f(x_{n-1}) - f(x_n)).

Since f(x) = x^3 - 2x - 5, we find

    x_0 = 1.9,  f_0 = -1.941
    x_1 = 2.0,  f_1 = -1.000

    x_2 = x_0 - f(x_0)(x_0 - x_1)/(f(x_0) - f(x_1)) = 1.9 + (0.1)(1.941)/0.941 = 2.1063,  f_2 = 0.1320
    x_3 = 2.0 + (0.1063)(1.000)/1.1320 = 2.0939,  f_3 = -0.0073
    x_4 = 2.1063 - (0.0124)(0.1320)/0.1393 = 2.09455,  f_4 = 0.0002

Finally we have

    x_5 = 2.0939 + (0.00065)(0.0073)/0.00728 = 2.09455.

Thus, since x_4 and x_5 are identical to 5 decimal places, x_5 = 2.09455 is the value of the root correct to five decimal places.
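The secant scheme (3.3) can be sketched in Python as follows (the tolerance, iteration cap and function name are our own illustrative choices):

```python
def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """Secant iteration (3.3): x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:               # flat chord: cannot take a secant step
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x ** 3 - 2 * x - 5, 1.9, 2.0)
print(round(root, 5))  # 2.09455
```

The sketch stops when two successive iterates agree to within `tol`, the same stopping criterion used informally in the worked example above.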
3.2.2 Advantages and Disadvantages of the secant method
The method:

(i) can work for double roots.
(ii) has order of convergence 1.618.
(iii) is not always convergent.

The above are advantages or disadvantages depending on the comparison in question.
Example 3.2.2
Find the real root of f(x) = x^3 + x^2 - 3x - 3 = 0 using the secant method.

Solution

First we must find an interval where the real root x* lies. Now since f(1) = 1 + 1 - 3 - 3 = -4 < 0 and f(2) = 8 + 4 - 6 - 3 = 3 > 0, and f(x) is continuous for all real x, there exists x* in (1, 2) such that f(x*) = 0 (the intermediate value theorem). Further,

    f'(x) = 3x^2 + 2x - 3 = 0

when x = 0.7208 or x = -1.3874. Since these values of x do not belong to (1, 2), we conclude that x* is the only real root in (1, 2) (an application of Rolle's theorem). Thus, since |f(1)| > |f(2)|, we let x_0 = 1 and x_1 = 2, to get

    x_2 = 1 - (-4)(1 - 2)/(-4 - 3) = 1 + 4/7 = 1.571429,  f(1.571429) = -1.364432.

So |f(1.571429)| < |f(2)|; hence let x_0 = 2, x_1 = 1.571429, to get

    x_2 = 2 - (3)(2 - 1.571429)/(3 + 1.364432) = 1.705411,  f(1.705411) = -0.247745.

So |f(1.705411)| < |f(1.571429)|, so we let x_0 = 1.571429, x_1 = 1.705411, to get

    x_2 = 1.571429 - (-1.364432)(1.571429 - 1.705411)/(-1.364432 + 0.247745) = 1.735136,

and the next iterate is 1.732051. Continuing this process gives the fourth and fifth approximations to x* equal to 1.731996 and 1.732051 respectively. Therefore 1.732051 is the root correct to 3 decimal places.
3.2.3 Text Questions
1. Show that there is a root of the equation

       f(x) = 3x + sin x - e^x = 0

   in the interval (0, 1). Estimate this root to 2 decimal places using the secant method.
2. Find the roots of the following equations, using the secant method:

   (i) e^x = cos x
   (ii) x^3 - 2x + 1 = 0
   (iii) sin 2x - e^x - 1 = 0
   (iv) ln(x - 1) = x^2
3. Use the secant method to find the real root of the equation

       x^3 + 2x^2 - x + 5 = 0.

4. Consider obtaining the root of

       f(x) = (e^x + 1 - sin x)/(x - 2).

   Show that f(1.9) < 0 and f(2.1) > 0, and use the secant method to obtain the root.
3.3 The Regula Falsi Method

The regula falsi (false position) algorithm uses a geometric approach similar to the secant method (it is a close associate of the secant method), with the exception that f(x_r)f(x_{r-1}) < 0 at each stage of the algorithm. Thus, to start the method, you need two points x_0 and x_1 near the root such that f(x_0)f(x_1) < 0.
3.3.1 Geometric representation and derivation of the regula falsi algorithm

Figure 3.1: Geometrical representation of the regula falsi method.

From the figure above, we note that the product f(x_0)f(x_1) < 0, which is in conformity with regula falsi. This was not necessarily the case for the secant method. The equation of the chord CD is

    y - f(x_1) = [(f(x_0) - f(x_1))/(x_0 - x_1)] (x - x_1).

This chord cuts the x-axis at x_2, i.e.

    -f(x_1) = [(f(x_0) - f(x_1))/(x_0 - x_1)] (x_2 - x_1),
giving

    x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)),

or in general we have x_{n+1} = g(x_n, x_{n-1}); that is,

    x_{n+1} = x_n - f(x_n)(x_n - x_{n-1})/(f(x_n) - f(x_{n-1})).    (3.5)

Equation (3.5) is the popular regula falsi (false position) method, with the condition that at each stage of the algorithm f(x_r)f(x_{r-1}) < 0.
Example 3.3.1

Find the real root of f(x) = x^3 - 2x - 2 = 0 using the regula falsi method.

Solution

First we must find an interval where the real root x* lies. Now since f(1) = -3 < 0 and f(2) = 2 > 0, and f(x) is continuous for all real x, there exists x* in (1, 2) such that f(x*) = 0 (the intermediate value theorem). We let x_0 = 1 and x_1 = 2, to get

    x_2 = 2.333333,

since f(x_2) = f(2.333333) = 6.0370 > 0. Thus

    x_3 = x_2 - f(x_2)(x_2 - x_0)/(f(x_2) - f(x_0))
        = 2.3333 - 6.0370(2.3333 - 1)/9.0370 = 1.4426,

since f(x_3) = f(1.4426) = -1.88296 < 0. Thus

    x_4 = x_3 - f(x_3)(x_3 - x_2)/(f(x_3) - f(x_2))
        = 1.4426 + 1.8829(0.8904)/7.91996 = 1.6542914,  f(x_4) = -0.781316 < 0

    x_5 = x_4 - f(x_4)(x_4 - x_2)/(f(x_4) - f(x_2)) = 1.73206,  f(x_5) = -0.26785 < 0

    x_6 = x_5 - f(x_5)(x_5 - x_2)/(f(x_5) - f(x_2)) = 1.757605,  f(x_6) = -0.085654 < 0

    x_7 = x_6 - f(x_6)(x_6 - x_2)/(f(x_6) - f(x_2)) = 1.76565,  f(x_7) = -0.02682 < 0

    x_8 = x_7 - f(x_7)(x_7 - x_2)/(f(x_7) - f(x_2)) = 1.76816,  f(x_8) = -0.008356 < 0

    x_9 = x_8 - f(x_8)(x_8 - x_2)/(f(x_8) - f(x_2)) = 1.76894,  f(x_9) = -0.002595 < 0

    x_10 = x_9 - f(x_9)(x_9 - x_2)/(f(x_9) - f(x_2)) = 1.76918.

We could continue our iterations, though regula falsi takes long to converge. But clearly the exact value is approximately 1.7693.
Note 3.3.1 In every subsequent iteration, the retained endpoint of the bracket (here x_2, where f > 0) must be part of the formula, so that the sign condition f(x_r)f(x_{r-1}) < 0 is preserved.
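The bracket-keeping step can be made explicit in code. A minimal Python sketch of regula falsi (the function name and iteration count are our own illustrative choices):

```python
def regula_falsi(f, a, b, n_iter=20):
    """False position: a secant step, but always keeping a bracket with f(a)f(b) < 0."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "endpoints must bracket a root"
    c = a
    for _ in range(n_iter):
        c = b - fb * (b - a) / (fb - fa)   # x-intercept of the chord, as in equation (3.5)
        fc = f(c)
        if fc == 0:
            break
        if fa * fc < 0:    # root lies in [a, c]: replace b
            b, fb = c, fc
        else:              # root lies in [c, b]: replace a
            a, fa = c, fc
    return c

print(round(regula_falsi(lambda x: x ** 3 - 2 * x - 2, 1.0, 2.0), 4))  # 1.7693
```

Note how one endpoint of the bracket tends to stay fixed while the other creeps toward the root, which is exactly the slow linear convergence discussed in the next subsection.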
3.3.2 Order of convergence of the regula falsi algorithm

The error in the (n+1)th iterate (denoted e_{n+1}) is related to the error in the nth iterate e_n by the equation

    e_{n+1} ≈ A e_n^k,

where k = 1. This suggests that although regula falsi uses the same formula as the secant method, its order of convergence is one, compared to 1.618 for the secant method. Thus the method is slower at converging to the root than the secant method. However, the condition f(x_r)f(x_{r-1}) < 0 at each stage ensures that regula falsi is always convergent, which is not the case with the secant method.
3.3.3 Advantages and disadvantages of the regula falsi algorithm

(i) The regula falsi algorithm is always convergent.
(ii) The order of convergence of the method is one.

These are the two basic points on the advantages and disadvantages; whether each is an advantage or a disadvantage depends on the comparison in question. For instance, in comparison with the secant method, it is a disadvantage that regula falsi has order of convergence one, while it is an advantage that it is always convergent.
3.3.4 Text Questions
1. Use the regula falsi algorithm to approximate the root in the interval (1, 2) of the equation x^3 - 2x - 2 = 0. Start with x_0 = 1 and x_1 = 2.

2. Use the regula falsi algorithm to find the root of

       f(x) = x^2 - 4x + 2 = 0

   that lies in the interval (0, 1), and state your answer correct to three decimal places.

3. Verify that x = 3 is a solution of x = g(x), where

       g(x) = 18x/(x^2 + 9).

   Use regula falsi to approximate this root.

4. Consider the equation

       f(x) = (e^x + 1 + sin x)/(x - 2) = 0,

   whose root you would want to find. Show that f(1.9) < 0 and f(2.1) > 0, and use the regula falsi algorithm to compute this root.

5. Approximate to three decimal places the roots of the following equations using the regula falsi algorithm:

   (i) x^3 = 2
   (ii) x^2 = 3
   (iii) x^4 = 2
   (iv) x^5 = 3

6. (a) Derive the regula falsi algorithm, clearly giving its geometrical illustration.
   (b) What advantages and disadvantages does the secant method enjoy over the other methods so far considered for solving nonlinear equations?
3.4 Bisection Method
3.4.1 Background
The bisection method is one of the bracketing methods for finding roots of equations.
3.4.2 Implementation
Given a function f(x) and an interval which might contain a root, perform a predetermined
number of iterations using the bisection method.
3.4.3 Limitations
Investigate the result of applying the bisection method over an interval where there is a
discontinuity. Apply the bisection method for a function using an interval where there
are distinct roots. Apply the bisection method over a large interval.
3.4.4 Explanation on the bisection method
The bisection method takes a geometrical approach similar to the regula falsi algorithm. You need two initial guesses x_0 and x_1 for the root x* of the nonlinear equation f(x) = 0, such that f(x_0)f(x_1) < 0. The next approximation is obtained as the arithmetic mean of the previous two. However, the pair x_r, x_{r+1} used to get x_{r+2} must satisfy the condition f(x_r)f(x_{r+1}) < 0. Masenge (1989) called this method a trivial simplification of the regula falsi.

In mathematics, the bisection method is a root-finding algorithm which works by repeatedly dividing an interval in half and then selecting the subinterval in which the root exists.
Example 3.4.1 Use the bisection method to find the root of x^2 = 3.

x^2 - 3 = 0 ⟹ f(x) = x^2 - 3. Since f(1) < 0 and f(2) > 0, the root lies in (1, 2), i.e. x_0 = 1, x_1 = 2.

    x_2 = (x_1 + x_0)/2 = (1 + 2)/2 = 1.5 ⟹ f(x_2) = -0.75 < 0
    x_3 = (x_2 + x_1)/2 = 1.75 ⟹ f(x_3) > 0
    x_4 = (x_3 + x_2)/2 = 1.625 ⟹ f(x_4) < 0
    x_5 = (x_4 + x_3)/2 = 1.6875 ⟹ f(x_5) < 0
    x_6 = (x_5 + x_3)/2 = 1.71875

etc. The process is slow but convergent.
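The interval-halving loop above can be sketched directly (a Python illustration; the function name and tolerance are our own choices):

```python
def bisect(f, a, b, tol=1e-8):
    """Interval halving: keep the half-interval whose endpoints still change sign."""
    fa = f(a)
    assert fa * f(b) < 0, "endpoints must bracket a root"
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:   # root in [a, m]
            b = m
        else:             # root in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(round(bisect(lambda x: x * x - 3, 1, 2), 4))  # 1.7321
```

Each pass halves the interval, so the error bound shrinks by a factor of 2 per iteration, which is why the hand iteration above makes such slow progress.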
Example 3.4.2
Use the bisection algorithm to approximate the root in the interval (1, 2) of the equation

    x^3 - 2x - 2 = 0.

Solution

Let x_0 = 1 and x_1 = 2. Therefore

    f(x_0) = 1 - 2 - 2 = -3 < 0
    f(x_1) = 8 - 4 - 2 = 2 > 0,

so f(x_0)f(x_1) = (-3)(2) < 0. Hence a root exists in (1, 2). Therefore

    c = (a + b)/2 = (1 + 2)/2 = 1.5

and f(x_2) = f(1.5) = 1.5^3 - 2(1.5) - 2 = -1.625, and the process goes on until convergence to the solution. Complete this as an exercise.
3.4.5 Advantages of the bisection method

(i) The method is simple.
(ii) The method is always convergent.

3.4.6 Disadvantages of the bisection method

(i) It requires the values of a and b (a bracketing interval).
(ii) The convergence of interval halving is very slow (slow at converging to the root x*).
(iii) The method fails in the case of approximating a double root, or any root of even multiplicity.
3.4.7 Text Questions
1. Approximate to 2 decimal places the roots of the following equations using the bisection method:

   (i) x^2 = 3
   (ii) x^3 = 2
   (iii) x^4 = 2

2. The function h(x) = x sin x occurs in the study of damped forced oscillation. Find the value of x that lies in the interval [0, 2] where the function takes on the value h(x) = 1. Use interval bisection.

3. If a = 0.1 and b = 1.0, how many steps of the bisection method are needed to determine a root in this interval with an error of at most (1/2) × 10^{-8}?

4. Consider obtaining the root of

       f(x) = (e^x + 1 + sin x)/(x - 2).

   Show that f(1.9) < 0 and f(2.1) > 0, and use the bisection method to obtain the root.

5. Find the real root of the equation

       x^3 - x^2 - x + 1 = 0

   using the bisection algorithm.

6. The bisection method generates intervals [a_0, b_0], [a_1, b_1], and so on. Which of these inequalities are true for the root r that is being calculated?

   (a) |r - a_n| ≤ 2|r - b_n|
   (b) |r - a_n| ≤ 2^{-n-1}(b_0 - a_0)
   (c) |r - b_n| ≤ 2^{-n-1}(b_0 - a_0)
   (d) 0 ≤ r - a_n ≤ 2^{-n}(b_0 - a_0)
   (e) |r - (1/2)(a_n + b_n)| ≤ 2^{-n-2}(b_0 - a_0)
Example 3.4.3 Find all the real solutions to the cubic equation x^3 + 4x^2 - 10 = 0 in the interval [1, 2].

Example 3.4.4 Use Newton's method to find the roots of the cubic polynomial x^3 - 3x + 2 = 0 in the interval

(i) [0, 2]
(ii) [-3, -1]
3.5 Newton-Raphson's Method

3.5.1 Derivation of Newton's method

If we are given a nonlinear equation f(x) = 0 and we are to apply the Newton-Raphson method, we linearly approximate the graph of y = f(x) by a straight line passing through the point (x_0, f_0) and tangential to the graph of y = f(x). Take the slope of this line to be p. Geometrically this is shown in the figure below.

Figure 3.2: Geometrical representation of a tangent to a curve at a point.

The equation of the line with slope p passing through the point (x_0, f_0) is

    (y - f_0)/(x - x_0) = p.    (3.6)
However, we know that p is the slope of the tangent to y = f(x) at (x_0, f_0). This is given by

    p = f'(x_0) = f'_0.    (3.7)

Substituting equation (3.7) in equation (3.6), we get

    (y - f_0)/(x - x_0) = f'_0
    y - f_0 = (x - x_0) f'_0.    (3.8)

The line of equation (3.8) cuts the x-axis at the point (x_1, 0), i.e. when x = x_1 and y = 0. Substituting in equation (3.8), we get

    0 - f_0 = (x_1 - x_0) f'_0.

Making x_1 the subject, we get

    x_1 = x_0 - f_0/f'_0,

or

    x_1 = x_0 - f(x_0)/f'(x_0).    (3.9)
Equation (3.9) is actually Newton's method for obtaining the next iterate x_1 from the previous iterate x_0. Equation (3.9) is generalized and written

    x_{n+1} = x_n - f(x_n)/f'(x_n),    (3.10)

since the linear approximation of the curve is done at each of the iterates x_n, x_{n+1}, x_{n+2}, ..., as reflected in the figure above.
Example 3.5.1

Use the Newton-Raphson method to find the root of

    x^2 - 3 = 0 on [1, 2].

Here f(x_n) = x_n^2 - 3, therefore f'(x_n) = 2x_n. The Newton-Raphson formula is

    x_{n+1} = x_n - f(x_n)/f'(x_n).

Substituting in the formula, we get

    x_{n+1} = x_n - (x_n^2 - 3)/(2x_n) = (2x_n^2 - x_n^2 + 3)/(2x_n) = (x_n^2 + 3)/(2x_n).

Taking the initial guess/approximation as x_0 = 2 (you could also consider x_0 = 1 and come up with the same answer):

    x_1 = (x_0^2 + 3)/(2x_0) = (2^2 + 3)/(2(2)) = 1.75
    x_2 = (x_1^2 + 3)/(2x_1) = ((1.75)^2 + 3)/(2(1.75)) = 1.7321
    x_3 = (x_2^2 + 3)/(2x_2) = ((1.7321)^2 + 3)/(2(1.7321)) = 1.7320508
    x_4 = (x_3^2 + 3)/(2x_3) = ((1.7320508)^2 + 3)/(2(1.7320508)) = 1.7320508

Thus the root is 1.7320508.
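The general scheme (3.10) is only a few lines of code. A minimal Python sketch (the function name, tolerance and iteration cap are our own choices), checked against Example 3.5.1:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration (3.10): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # successive iterates agree: stop
            break
    return x

root = newton(lambda x: x * x - 3, lambda x: 2 * x, 2.0)
print(round(root, 7))  # 1.7320508
```

Notice that only four or five iterations are needed, reflecting the quadratic convergence of Newton's method near a simple root.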
Example 3.5.2

Use the Newton-Raphson method to find the root of

    f(x) = x^3 + 3x - 1

on [0, 1].

Solution

Clearly f(0) = 0^3 + 3(0) - 1 = -1 and f(1) = 1^3 + 3(1) - 1 = 3, therefore f(0)f(1) < 0, implying a real root exists in the interval [0, 1]. But

    f(x_n) = x_n^3 + 3x_n - 1, therefore f'(x_n) = 3x_n^2 + 3 = 3(x_n^2 + 1).

The Newton-Raphson formula is x_{n+1} = x_n - f(x_n)/f'(x_n). Substituting, we get

    x_{n+1} = x_n - (x_n^3 + 3x_n - 1)/(3x_n^2 + 3) = (3x_n^3 + 3x_n - x_n^3 - 3x_n + 1)/(3x_n^2 + 3) = (1/3) (2x_n^3 + 1)/(x_n^2 + 1).

Taking the initial guess/approximation as x_0 = 0:

    x_1 = (1/3) (2x_0^3 + 1)/(x_0^2 + 1) = (1/3)(1/1) = 0.3333
    x_2 = (1/3) (2(1/3)^3 + 1)/((1/3)^2 + 1) = (1/3)(29/27)/(10/9) = 0.3222
    x_3 = (1/3) (2(0.3222)^3 + 1)/((0.3222)^2 + 1) = 0.3222

Thus the root is approximately 0.322.
Example 3.5.3

Find the root of

    f(x) = 3x + sin x - e^x = 0

in the interval (0, 1), using the Newton-Raphson method.

Solution

Since f(0) = -1 < 0 and f(1) = 3 + sin(1) - e > 0, there is a real root in (0, 1). Using

    x_{n+1} = x_n - f(x_n)/f'(x_n)
            = x_n - (3x_n + sin x_n - e^{x_n})/(3 + cos x_n - e^{x_n})
            = [3x_n + x_n cos x_n - x_n e^{x_n} - 3x_n - sin x_n + e^{x_n}]/(3 + cos x_n - e^{x_n})
            = [x_n(cos x_n - e^{x_n}) - sin x_n + e^{x_n}]/(3 + cos x_n - e^{x_n}).

With x_0 = 0 (initial guess), therefore

    x_1 = [0(cos 0 - e^0) - sin 0 + e^0]/(3 + cos 0 - e^0) = 1/3 = 0.3333
    x_2 = ...
    x_3 = ...
    x_4 = ...

Compute x_2, x_3 and x_4 as an exercise.
3.5.2 Text Questions
1. Verify that when Newton's method is used to compute √R (by solving the equation x^2 = R), the sequence of iterates is defined by

       x_{n+1} = (x_n + R/x_n)/2.

   Hence find √3 correct to six decimal places.

2. Find the root of the equation

       x^2 + x - 1 = 0

   in the interval [0, 1], giving your answer correct to 4 decimal places.

3. Show that the cubic equation

       2x^3 + 3x^2 - 3x - 5 = 0

   has a real root in the interval [1, 2]. Approximate this root correct to five decimal places using the Newton-Raphson method.

4. Use Newton's method to approximate the root of the equation

       g(x) = x^3 - 2 sin x

   on [0.5, 2].

5. The convergence rate m of an iterative process is given by

       e_{n+1} = A e_n^m.

   Obtain experimentally the convergence rate for Newton's method applied to finding a root of

   (i) tanh x - x^2 = 0, starting at x = 1;
   (ii) x^3 - x^2 - x + 1 = 0, starting at x = 1.5 (use a computer program).
3.5.3 General Newton's algorithm for extracting roots of positive numbers

Suppose that our interest is to find the r-th root of a real positive number A. If x is the value of this root, then x is related to A by the equation x^r = A, or x^r - A = 0. Let f(x) = x^r - A; then x is the root of the nonlinear equation

    f(x) = x^r - A = 0.

Applying the Newton-Raphson method, we have

    x_{n+1} = x_n - f(x_n)/f'(x_n).

But f(x_n) = x_n^r - A and f'(x_n) = r x_n^{r-1}, therefore

    x_{n+1} = x_n - (x_n^r - A)/(r x_n^{r-1})
            = (r x_n^r - x_n^r + A)/(r x_n^{r-1})
            = ((r - 1) x_n^r + A)/(r x_n^{r-1})
            = (1/r) [(r - 1) x_n + A/x_n^{r-1}].    (3.11)

Equation (3.11) was also given by Masenge (1987) in his book on fundamentals of numerical methods. Equation (3.11) is a general formula from which we can obtain quadratically convergent iterative processes for finding approximations to arbitrary roots of numbers.
Note 3.5.1 The root (the answer we are interested in) is x, and f(x) is the function whose equation f(x) = 0 we are solving.
Example 3.5.4

When r = 2 we have x^2 = A, i.e. x = √A. Thus for r = 2 in the general formula, we get

    x_{n+1} = (1/2)(x_n + A/x_n),

which is Newton's square root algorithm for extracting square roots of positive numbers.
Example 3.5.5

Use Newton's square root algorithm to find the square root of 5 correct to six decimal places.

Solution

Substituting r = 2 and A = 5 in the general formula, we get

    x_{n+1} = (1/2)(x_n + 5/x_n).

Starting with x_0 = 2 we get

    x_1 = (1/2)(x_0 + 5/x_0) = 2.25
    x_2 = (1/2)(x_1 + 5/x_1) = 2.236111111
    x_3 = (1/2)(x_2 + 5/x_2) = 2.236067978
    x_4 = (1/2)(x_3 + 5/x_3) = 2.236067978
    ...

Since x_0 and x_1 agree only roughly, x_1 = 2.25 is the value of the root correct to only about one decimal place. Likewise x_2 = 2.236111111 is also correct only to one decimal place, since it agrees with the previous iterate x_1 = 2.25 only in one decimal place. However, x_3 = 2.236067978 is correct to three decimal places, since it agrees with the previous iterate x_2 in exactly three places of decimal. But x_4 = 2.236067978 is exactly the same as x_3; in fact they agree in all nine decimal places. This means that x_4 = 2.236067978 is the value of the root correct to nine decimal places, and hence certainly correct to six decimal places. Thus the value of the root correct to six (indeed nine) decimal places is x_4 = 2.236067978. Compare with the value obtained from a calculator.
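Equation (3.11) translates directly into code. A Python sketch (the function name is our own; the iteration count is a generous illustrative choice, since convergence is quadratic):

```python
def nth_root(A, r, x0=1.0, n_iter=40):
    """Equation (3.11): x_{n+1} = ((r - 1) x_n + A / x_n**(r - 1)) / r, for the r-th root of A > 0."""
    x = x0
    for _ in range(n_iter):
        x = ((r - 1) * x + A / x ** (r - 1)) / r
    return x

print(round(nth_root(5, 2, x0=2.0), 6))  # 2.236068  (square root of 5)
print(round(nth_root(7, 3), 4))          # 1.9129    (cube root of 7)
```

The same routine handles square roots, cube roots and higher roots just by changing r, which is exactly the point of the general formula.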
3.5.4 Using Newton's general formula for roots to find reciprocals of numbers

If r = -1, then x^{-1} = A (a positive number); this means x = 1/A (the reciprocal of A). With r = -1 in the general formula in equation (3.11), we get

    x_{n+1} = x_n(2 - A x_n).

This formula is quadratically convergent and can suitably be applied to calculate the reciprocals of numbers.
Example 3.5.6

Use Newton's reciprocal algorithm to find the reciprocal of 3.

Solution

Using the algorithm

    x_{n+1} = x_n(2 - A x_n)

with A = 3, let x_0 = 0.5. Therefore

    x_1 = x_0[2 - 3x_0] = 0.5[2 - 3(0.5)] = 0.25
    x_2 = x_1[2 - 3x_1] = 0.3125
    x_3 = x_2[2 - 3x_2] = 0.33203125
    x_4 = x_3[2 - 3x_3] = 0.333328247
    x_5 = x_4[2 - 3x_4] = 0.333333333
    x_6 = x_5[2 - 3x_5] = 0.333333333
    ...

Thus x_6 = 0.333333333 is the value of the reciprocal of 3, i.e. 1/3, correct to nine decimal places. Indeed we know that 1/3 = 0.333...
3.5.5 Some limitations of the Newton-Raphson method

Good though it is, the method has some limitations.

(a) If, in the immediate neighborhood of a root of f(x), f'(x) vanishes or is very small, the Newton-Raphson method will not converge. Masenge (1987) explained the reason for this failure: since f'(x) is very small, the quantity

        f(x_n)/f'(x_n)

    becomes very large. The consequence is that we are thrown far away from the root we are approximating.

(b) The Newton-Raphson method may also fail if f(x) has a point of inflection in the neighborhood of the root.
3.5.6 Text Questions

1. Use Newton's square root algorithm to find the square root of 2 correct to 6 decimal places.

2. Use Newton's cube root algorithm to find the cube root of 7 correct to four decimal places.

3. Use Newton's reciprocal algorithm to find

   (i) the reciprocal of the square root of 2;
   (ii) the reciprocal of the cube root of 4.

4. State the advantages and disadvantages of Newton's method for nonlinear equations as compared to the other methods considered in the previous lectures.

Assignment to Hand in 3.1 Compute the roots (zeros) of f(x) = e^x - 4 - 2x using the successive substitution method with (i) x_0 = 2, (ii) x_0 = 0.
Example 3.5.7

(a) (i) State the convergence criterion of the successive substitution technique for solving a nonlinear equation f(x) = 0. [2 Marks]

    Let the function g(x) be continuous in a small interval [a, b] containing a simple (single) root of the function f(x). Let g(x) also be differentiable in the open interval (a, b). If there exists a real number L such that 0 ≤ |g'(x)| ≤ L < 1 and a ≤ g(x) ≤ b for all x in (a, b), then for an arbitrary starting value x_0 taken from (a, b) the iteration formula x_{r+1} = g(x_r) will converge. The rate of convergence of the iteration will depend on the smallness of the constant L relative to unity.

    (ii) Given the function f(x) = x^3 - sin x = 0 on [0, 1], using the successive substitution technique we can generate

        x_{n+1} = (sin x_n)^{1/3}  and  x_{n+1} = (sin x_n)/x_n^2.

    Which of the two methods converges? Why? [2 Marks]

    Since for g(x) = (sin x)^{1/3} we have g'(x) = (1/3)(sin x)^{-2/3} cos x, and 0 < g'(x) < 1 on the interval, this is the converging formula.

    (iii) Use the converging formula in (ii) above to approximate the root of f(x) = x^3 - sin x = 0 in [0, 1] to 3 decimal places. [Hint: Let x_0 = 1 and use radians.] [5 Marks]

        n    x_n = (sin x_{n-1})^{1/3}
        1    1.0
        2    0.944
        3    0.932
        4    0.929
        5    0.929
(b) (i) Define the Newton-Raphson method formula for finding the root of a nonlinear equation f(x) = 0. [2 Marks]

        x_{n+1} = x_n - f(x_n)/f'(x_n).

    (ii) The convergence of the Newton-Raphson technique depends highly on the initial guess. Discuss. [2 Marks]

    Yes; when the initial guess is close to the root within the interval given, the iterations converge faster than otherwise.

    (iii) Use the Newton-Raphson method to estimate one of the solutions of x^2 - 4 = 0, using x_0 = 6, to 2 decimal places. [5 Marks]

        x_n            x_{n+1}
        x_0 = 6        x_1 = 3.33
        x_1 = 3.33     x_2 = 2.27
        x_2 = 2.27     x_3 = 2.01
        x_3 = 2.01     x_4 = 2.00
        x_4 = 2.00     x_5 = 2.00

    (iv) Newton-Raphson's method is one of the popular schemes for solving a nonlinear equation f(x) = 0. Prove that the Newton-Raphson method for finding the square root of a positive number A is given by

        x_{n+1} = (1/2)(x_n + A/x_n).

    Use the scheme above to approximate the square root of 5 (√5) to three decimal places, with x_0 = 2. [5 Marks]

        x_n            x_{n+1}
        x_0 = 2        x_1 = 2.25
        x_1 = 2.25     x_2 = 2.236
        x_2 = 2.236    x_3 = 2.236
        x_3 = 2.236    x_4 = 2.236

(d) With any simple example, write short notes on the bisection method for solving nonlinear equations. [2 Marks]
Chapter 4
Interpolation
Linear interpolation is often used to approximate a value of some function f using two
known values of that function at other points.
4.1 Review- Linear interpolation
Using similar triangles,

    (f(b) - f(a))/(b - a) = (f(c) - f(a))/(c - a)

    [(f(b) - f(a))/(b - a)](c - a) = f(c) - f(a);

thus

    f(c) = f(a) + [(f(b) - f(a))/(b - a)](c - a)    (4.1)

and

    c = a + [(f(c) - f(a))/(f(b) - f(a))](b - a).    (4.2)
Example 4.1.1 Given the data below:

    Time      0   1   2   3   4
    Distance  0   6   39  67  100

(i) Find the distance traveled when t = 2.3 hrs.

    Here a = 2, b = 3, c = 2.3, f(a) = 39, f(b) = 67, and f(c) is required:

    f(c) = f(a) + [(f(b) - f(a))/(b - a)](c - a) = 39 + [(67 - 39)/(3 - 2)](2.3 - 2) = 47.4.

(ii) Find the time taken when the distance traveled is 80 miles.

    Here a = 3, b = 4, f(a) = 67, f(b) = 100, f(c) = 80, and c is required:

    c = a + [(f(c) - f(a))/(f(b) - f(a))](b - a) = 3 + [(80 - 67)/(100 - 67)](4 - 3) = 3.39394.
Example 4.1.2 The bus stages along the Kampala-Jinja road are 10 km apart. An express bus traveling between the two towns stops only at these stages, except in an emergency, when it is permitted to stop at a point between two stages.
The fares between the first, second, third and fourth stages from Jinja are Sh 110, Sh 150, Sh 185 and Sh 200 respectively. On a certain day, a passenger paid to travel from Jinja up to the fourth stage, but he fell sick and had to be left at a health centre 33 km from Jinja.
Given that he was refunded money for the distance he had not traveled, find the approximate amount of money he received.

Distance (km)   10   20   30   40
Amount (Sh)     110  150  185  200

Here a = 30, b = 40 and c = 33, so f(a) = 185, f(b) = 200 and f(c) is required.
f(c) = f(a) + [(f(b) − f(a)) / (b − a)] (c − a) = 185 + [(200 − 185) / (40 − 30)] (33 − 30) = 189.5
The journey he had covered cost him Sh 189.5, so he was refunded 200 − 189.5 = Sh 10.5.
Another person who had only Sh 165 was allowed to board the bus on condition that he would be left at a point worth his money. How far from Jinja would he be left?
Here a = 20, b = 30 and c is required, with f(a) = 150, f(b) = 185 and f(c) = 165.
c = a + [(f(c) − f(a)) / (f(b) − f(a))] (b − a) = 20 + [(165 − 150) / (185 − 150)] (30 − 20) = 24.286 km
Example 4.1.3 The table below shows values of cos(80° x′).

x′            0       10      20      30      40      50
cos(80° x′)   0.1736  0.1708  0.1679  0.1650  0.1622  0.1593

(i) Find the value of cos 80°35′.
Here a = 30′, b = 40′ and c = 35′, so f(a) = 0.1650, f(b) = 0.1622 and f(c) is required.
f(c) = f(a) + [(f(b) − f(a)) / (b − a)] (c − a) = 0.1650 + [(0.1622 − 0.1650) / 10] (5) = 0.1636

(ii) Find cos⁻¹ 0.1655.
Here a = 20′, b = 30′ and c is required, with f(a) = 0.1679, f(b) = 0.1650 and f(c) = 0.1655.
c = a + [(f(c) − f(a)) / (f(b) − f(a))] (b − a) = 20′ + [(0.1655 − 0.1679) / (0.1650 − 0.1679)] (30′ − 20′) = 20′ + 8.276′ = 28.276′
Hence cos⁻¹ 0.1655 ≈ 80°28.3′.
Example 4.1.4 Use linear interpolation to find the root of the equation x³ − x − 1 = 0 which lies between (1, 2).

x      1    2
f(x)   −1   5

Recall that a value x is a root (a solution) of an equation f(x) = 0 if x satisfies it, i.e. if f(x) = 0. The task is to find the value of x at which f(x) = 0.
Here a = 1, b = 2 and c is required, with f(a) = −1, f(b) = 5 and f(c) = 0.
c = a + [(f(c) − f(a)) / (f(b) − f(a))] (b − a) = 1 + [(0 − (−1)) / (5 − (−1))] (2 − 1) = 1.16667
Example 4.1.5 Use linear interpolation to estimate the root of the equation x³ − 2x − 5 = 0 which lies in the interval (2, 3). 2.0588

Example 4.1.6 Use linear interpolation to estimate the root of the equation x² − 2 = 0 which lies in the interval (−2, −1). −1.3333

Example 4.1.7 The following data gives the distance covered by a particle over a certain period of time.

Time (s)      0  1  2
Distance (m)  0  5  7

Estimate the time taken by the particle to cover a distance of 6 m. 1.0833 s
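Root estimation by linear interpolation is the single step c = a + (0 − f(a))(b − a)/(f(b) − f(a)); a quick check in code (a sketch of ours, not from the notes; the bracketing intervals are labeled in the comments):

```python
def root_linear(f, a, b):
    """One linear-interpolation (false-position) step toward a root of f in [a, b]."""
    fa, fb = f(a), f(b)
    return a + (0 - fa) / (fb - fa) * (b - a)

print(round(root_linear(lambda x: x**3 - x - 1, 1, 2), 5))    # 1.16667
print(round(root_linear(lambda x: x**3 - 2*x - 5, 2, 3), 4))  # 2.0588
# negative root of x^2 - 2 = 0, bracketed by [-2, -1]:
print(round(root_linear(lambda x: x**2 - 2, -2, -1), 4))      # -1.3333
```

Repeating the step with the refined interval turns this into the method of false position, which converges to the true root.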
4.2 Application
Linear interpolation is often used to fill the gaps in a table. Suppose you have a table listing the population of some country in 1970, 1980, 1990 and 2000, and that you want to estimate the population in 1994. Linear interpolation gives you an easy way to do this.
The basic operation of linear interpolation between two values is so commonly used in computer graphics that it is sometimes called a lerp in the jargon of computer graphics. The term can be used as a verb or a noun for the operation, e.g. "Bresenham's algorithm lerps incrementally between the two endpoints of the line."
Lerp operations are built into the hardware of all modern computer graphics processors. They are often used as building blocks for more complex operations: for example, a bilinear interpolation can be accomplished in three lerps. Because this operation is cheap, it is also a good way to implement accurate lookup tables, with quick lookup for smooth functions, without having too many table entries.
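The lerp operation described above is a one-liner; a common parametric form (our sketch, with the blend written so that t = 0 gives the first value and t = 1 the second):

```python
def lerp(v0, v1, t):
    """Linear interpolation between v0 (at t = 0) and v1 (at t = 1)."""
    return (1 - t) * v0 + t * v1

# Estimating a 1994 value from 1990 and 2000 figures, as in the
# table-filling example above (the population numbers are hypothetical):
print(lerp(50.0, 60.0, 0.4))  # 54.0
```

Writing the blend as (1 − t)·v0 + t·v1 rather than v0 + t·(v1 − v0) guarantees the endpoints are reproduced exactly in floating point.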
4.3 History
Linear interpolation has been used since antiquity for filling the gaps in tables, often with astronomical data. It is believed that it was used in the Seleucid Empire (last three centuries BC) and by the Greek astronomer and mathematician Hipparchus (second century BC). A description of linear interpolation can be found in the Almagest (second century AD) of Ptolemy.
4.4 Extensions
In demanding situations, linear interpolation is often not accurate enough (since not all points can be approximated as lying on a straight line). In that case, it can be replaced by polynomial interpolation or spline interpolation.
Linear interpolation can also be extended to bilinear interpolation for interpolating functions of two variables. Bilinear interpolation is often used as a crude anti-aliasing filter. Similarly, trilinear interpolation is used to interpolate functions of three variables. Other extensions of linear interpolation can be applied to other kinds of mesh, such as triangular and tetrahedral meshes.
4.5 Lagrange interpolation
We may know the value of a function f at a set of points x_1, x_2, ..., x_N. How do we estimate the value of the function at an arbitrary point x, i.e. how do we compute f(x)? In general this is done by passing a smooth curve through the points (x_1, f(x_1)), (x_2, f(x_2)), ..., (x_N, f(x_N)).
Lagrange interpolation is a way to pass a polynomial of degree N − 1 through N points.
Lagrange polynomials are the interpolating polynomials that equal zero at all of the given points save one. Given points x_1, x_2, ..., x_N, Lagrange polynomial number k is the product
P_k(x) = [(x − x_1)/(x_k − x_1)] · [(x − x_2)/(x_k − x_2)] ⋯ [(x − x_{k−1})/(x_k − x_{k−1})] · [(x − x_{k+1})/(x_k − x_{k+1})] ⋯ [(x − x_N)/(x_k − x_N)]
such that P_k(x_k) = 1 and P_k(x_j) = 0 for j different from k. In terms of Lagrange polynomials, the polynomial interpolation through the points (x_1, y_1), (x_2, y_2), ..., (x_N, y_N) can be defined simply as
P(x) = y_1 P_1(x) + y_2 P_2(x) + ... + y_N P_N(x).
Definition 4.5.1 Polynomial interpolation is a method of constructing a function and estimating values at non-tabular points between x_0 and x_n.
4.5.1 Alternatively
Let f(x) be a continuous function on [a, b], such that
f(x_k) = f_k,  k = 0, 1, ..., n,
with x_k ∈ [a, b]. We call the points x_k tabular points (or interpolating points), while the values f_k are called the tabular values of f(x). We seek a polynomial of degree n such that
P_n(x_k) = f_k, for k = 0, 1, 2, ..., n.    (4.3)
Such a polynomial is called Lagrange's interpolation polynomial. The process of calculating f(x) for x ∈ [a, b], x ≠ x_k, k from 0 to n, is called interpolation.
We now proceed to derive a formula for P_n(x). First, however, we need some key definitions.
Definition 4.5.2
Let {D_k : k = 0, 1, ..., n} be any set of numbers. We define the products
∏_{k=0}^{n} D_k = D_0 · D_1 ⋯ D_n
and
∏_{k=0, k≠j}^{n} D_k = D_0 · D_1 ⋯ D_{j−1} · D_{j+1} ⋯ D_n.
This definition is used in the following defining equation for Lagrange's interpolating polynomial.
Theorem 4.5.1
Lagrange's interpolation polynomial P_n(x) is given by
P_n(x) = Σ_{k=0}^{n} L_k(x) f(x_k)    (4.4)
where L_k(x) = ∏_{j=0, j≠k}^{n} (x − x_j)/(x_k − x_j) and f_k ≡ f(x_k).
Proof
Each L_k(x) is of degree n and must vanish at the n points x_j, j ≠ k, so we may write
L_k(x) = c_k ∏_{j=0, j≠k}^{n} (x − x_j),  c_k a constant.
For P_n(x) to satisfy equation (4.3) we must have
L_k(x_j) = δ_{kj}  (the Kronecker delta)
with
δ_{kj} = 1 if j = k, and 0 if j ≠ k.    (4.5)
With the condition (4.5) we have
c_k = 1 / ∏_{j=0, j≠k}^{n} (x_k − x_j).
This completes the proof.
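Theorem 4.5.1 can be implemented directly as a double loop over the tabular points; a minimal sketch (names ours), evaluating P_n(x) without expanding any polynomial:

```python
def lagrange_eval(xs, fs, x):
    """Evaluate P_n(x) = sum_k L_k(x) f(x_k), where
    L_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j)  (Theorem 4.5.1)."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        Lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                Lk *= (x - xj) / (xk - xj)
        total += Lk * fk
    return total

# Illustrative (hypothetical) data: the points (0, 1), (1, 3), (2, 7)
# lie on x^2 + x + 1, so the quadratic interpolant reproduces it:
print(lagrange_eval([0, 1, 2], [1, 3, 7], 1.5))  # 4.75
```

At each tabular point the sum collapses to a single term, which is exactly the condition L_k(x_j) = δ_{kj} proved above.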
4.5.2 Examples of interpolating polynomials
(i) Linear interpolating polynomials
When n = 1 in equation (4.4) we have the polynomial
P_1(x) = Σ_{k=0}^{1} L_k(x) f(x_k) = L_0(x) f(x_0) + L_1(x) f(x_1).
But L_k(x) = ∏_{j=0, j≠k}^{n} (x − x_j)/(x_k − x_j), therefore
L_0(x) = (x − x_1)/(x_0 − x_1)  and  L_1(x) = (x − x_0)/(x_1 − x_0).
Thus
P_1(x) = [(x − x_1)/(x_0 − x_1)] f(x_0) + [(x − x_0)/(x_1 − x_0)] f(x_1)    (4.6)
which is known as Lagrange's interpolating polynomial of degree one, popularly known as the linear interpolating polynomial.
Example 4.5.1
Construct a linear interpolation polynomial for the data

x      0       1
f(x)   1.0000  2.7183

Hence interpolate f(0.5).

Solution
Let x_0 = 0 and x_1 = 1, so f(0) = 1.0000 and f(1) = 2.7183.
Substituting in equation (4.6) for linear interpolation we have
P_1(x) = (1 − x)(1.0000) + (x)(2.7183) = 1.0000 + 1.7183x
and hence P_1(0.5) = 1.0000 + (1.7183)(0.5) = 1.0000 + 0.8592 = 1.8592.
In fact the data in this particular example describes the graph of f(x) = eˣ on [0, 1]. The geometrical interpretation of the linear interpolation on [0, 1] is the chord joining the endpoints of this graph.
(ii) Quadratic interpolating polynomials
When n = 2 in equation (4.4) we get
P_2(x) = Σ_{k=0}^{2} L_k(x) f_k = L_0(x) f(x_0) + L_1(x) f(x_1) + L_2(x) f(x_2)
with L_k(x) = ∏_{j=0, j≠k}^{n} (x − x_j)/(x_k − x_j), so that
L_0(x) = [(x − x_1)(x − x_2)] / [(x_0 − x_1)(x_0 − x_2)]
L_1(x) = [(x − x_0)(x − x_2)] / [(x_1 − x_0)(x_1 − x_2)]
L_2(x) = [(x − x_0)(x − x_1)] / [(x_2 − x_0)(x_2 − x_1)]
Thus we have
P_2(x) = [(x − x_1)(x − x_2)] / [(x_0 − x_1)(x_0 − x_2)] f(x_0) + [(x − x_0)(x − x_2)] / [(x_1 − x_0)(x_1 − x_2)] f(x_1) + [(x − x_0)(x − x_1)] / [(x_2 − x_0)(x_2 − x_1)] f(x_2),
which is called the quadratic interpolating polynomial. Generally it is a better interpolating polynomial than the linear one.
Example 4.5.2
You are given that f(0) = −2, f(2) = 4 and f(3) = 10. Find a Lagrange polynomial of degree 2 that fits the data.

Solution
Since x_0 = 0, x_1 = 2 and x_2 = 3, therefore f(x_0) = −2, f(x_1) = 4, f(x_2) = 10.
But P_2(x) = L_0 f(x_0) + L_1 f(x_1) + L_2 f(x_2), with
L_0 = [(x − x_1)(x − x_2)] / [(x_0 − x_1)(x_0 − x_2)] = [(x − 2)(x − 3)] / [(−2)(−3)] = (1/6)x² − (5/6)x + 1
L_1 = [(x − x_0)(x − x_2)] / [(x_1 − x_0)(x_1 − x_2)] = [(x − 0)(x − 3)] / [(2 − 0)(2 − 3)] = −(1/2)x² + (3/2)x
L_2 = [(x − x_0)(x − x_1)] / [(x_2 − x_0)(x_2 − x_1)] = [(x − 0)(x − 2)] / [(3 − 0)(3 − 2)] = (1/3)x² − (2/3)x
Hence
P_2(x) = (−2)[(1/6)x² − (5/6)x + 1] + (4)[−(1/2)x² + (3/2)x] + (10)[(1/3)x² − (2/3)x] = x² + x − 2.
Thus P_2(x) can be used to interpolate f(x) at any of the non-tabular points.
(iii) The cubic interpolating polynomial
When n = 3 in equation (4.4) we get
P_3(x) = Σ_{k=0}^{3} L_k(x) f_k = L_0(x) f(x_0) + L_1(x) f(x_1) + L_2(x) f(x_2) + L_3(x) f(x_3)
and, with L_k(x) = ∏_{j=0, j≠k}^{n} (x − x_j)/(x_k − x_j),
L_0(x) = [(x − x_1)(x − x_2)(x − x_3)] / [(x_0 − x_1)(x_0 − x_2)(x_0 − x_3)]
L_1(x) = [(x − x_0)(x − x_2)(x − x_3)] / [(x_1 − x_0)(x_1 − x_2)(x_1 − x_3)]
L_2(x) = [(x − x_0)(x − x_1)(x − x_3)] / [(x_2 − x_0)(x_2 − x_1)(x_2 − x_3)]
L_3(x) = [(x − x_0)(x − x_1)(x − x_2)] / [(x_3 − x_0)(x_3 − x_1)(x_3 − x_2)]
Thus
P_3(x) = [(x − x_1)(x − x_2)(x − x_3)] / [(x_0 − x_1)(x_0 − x_2)(x_0 − x_3)] f(x_0) + [(x − x_0)(x − x_2)(x − x_3)] / [(x_1 − x_0)(x_1 − x_2)(x_1 − x_3)] f(x_1) + [(x − x_0)(x − x_1)(x − x_3)] / [(x_2 − x_0)(x_2 − x_1)(x_2 − x_3)] f(x_2) + [(x − x_0)(x − x_1)(x − x_2)] / [(x_3 − x_0)(x_3 − x_1)(x_3 − x_2)] f(x_3)
Construction of cubic polynomials from available data will be tested in the text questions at the end of this lecture. Note that higher-degree Lagrange polynomials can be constructed with equal ease.
Theorem 4.5.2
Lagrange's interpolation polynomial P_n(x) is unique.
Note 4.5.1
The uniqueness means that no other polynomial of the same degree can interpolate the data.
Proof
The proof proceeds by contradiction. Let P_n(x) and Q_n(x) be two different polynomials which interpolate f(x) over the set of points {x_k : k = 0, 1, ..., n} in the interval [a, b]; then
P_n(x_k) = Q_n(x_k) = f_k, for k = 0, 1, 2, ..., n.
Let us define
r(x) = P_n(x) − Q_n(x),  x ∈ [a, b];
then r(x) has degree at most n. Since r(x_k) = 0 for k = 0, 1, 2, ..., n, it has n + 1 distinct zeros in [a, b]. This contradicts the fundamental theorem of algebra, by which a non-zero polynomial of degree n cannot have more than n zeros, and so P_n(x) and Q_n(x) are the same polynomial.
4.5.3 Text Questions
1. Construct a linear interpolating polynomial P_1(x) for the function f(x) = 1/x on the interval [1, 2]. Use your polynomial to interpolate f(x) at x = 1.2.
2. Given that x_0 = 0, x_1 = 1/2 and x_2 = 1 for f(x) = eˣ, construct a Lagrange polynomial that agrees with f(x) at the interpolating points.
3. Find a third-degree Lagrange polynomial that goes through the points (0, 0), (1, 1), (8, 2) and (27, 3). Use the polynomial to find q for (20, q). Also construct a linear interpolating polynomial using only (8, 2) and (27, 3), then use the linear polynomial to estimate q. Compare the estimated q's and comment on your results, given that the data is of the function y = x^{1/3}.
4. Find the interpolating polynomials going through
(i) (0, 1) and (2, 3)
(ii) (−1, a), (0, b) and (1, c)
(iii) (0, 1), (1, 0), (2, 1) and (3, 0)
5. Given the table below, use Lagrange interpolation polynomials of degree one, two and three to find f(2.5).

x      2.0     2.2     2.4     2.6
f(x)   0.5102  0.5208  0.5104  0.4813
Solution
(i) For n = 1, since we need to predict f(2.5), take x_0 = 2.4 and x_1 = 2.6:
P_1(x) = [(x − x_1)/(x_0 − x_1)] f(x_0) + [(x − x_0)/(x_1 − x_0)] f(x_1)
= [(x − 2.6)/(2.4 − 2.6)] (0.5104) + [(x − 2.4)/(2.6 − 2.4)] (0.4813),
so for x = 2.5, f(2.5) ≈ P_1(2.5) = 0.49585.
(ii) For n = 2, take x_0 = 2.2, x_1 = 2.4, x_2 = 2.6:
P_2(x) = [(x − x_1)(x − x_2)] / [(x_0 − x_1)(x_0 − x_2)] f(x_0) + [(x − x_0)(x − x_2)] / [(x_1 − x_0)(x_1 − x_2)] f(x_1) + [(x − x_0)(x − x_1)] / [(x_2 − x_0)(x_2 − x_1)] f(x_2),
so f(2.5) ≈ P_2(2.5) = (−0.125)(0.5208) + (0.75)(0.5104) + (0.375)(0.4813) = 0.49819.
(iii) For n = 3, using all four tabular points, f(2.5) ≈ P_3(2.5) = 0.49804.
6. Repeat the questions of the linear-interpolation review section using the Lagrange linear interpolating polynomial. Comment on the accuracy of the two techniques.
4.5.4 Error analysis in Lagrange interpolation
Truncation errors in Lagrange interpolation
We assume that f_k ≡ f(x_k), k = 0, 1, 2, ..., n, are exact, and we consider the truncation error e(x) = f(x) − P_n(x) for x ∈ [a, b], with P_n(x) the Lagrange polynomial of degree n defined above. Apart from the fact that e(x_k) = 0 for k = 0, 1, 2, ..., n with x_k ∈ [a, b], we can say nothing more about e(x) for x ≠ x_k. If, in addition, f(x) has at least n + 1 continuous derivatives on [a, b], then it is possible to express e(x) in terms of f^{(n+1)}(x). We now state without proof two necessary lemmas.
Lemma 4.5.1
Given
q_{n+1}(x) = ∏_{k=0}^{n} (x − x_k),  x ∈ [a, b],
of degree (n + 1), the derivative q′_{n+1}(x) is of degree n and satisfies
q′_{n+1}(x_j) = ∏_{k=0, k≠j}^{n} (x_j − x_k).
Lemma 4.5.2
Let a function g(x) be defined on [a, b]. Let {S_k : k = 0, 1, ..., n} be a set of distinct points in [a, b], with S_0 < S_1 < S_2 < ... < S_n. Suppose that:
(i) g^{(n)}(x) (n an integer) is continuous on [S_0, S_n];
(ii) g(S_k) = 0, k = 0, 1, ..., n.
Then there is at least one number ξ ∈ (S_0, S_n) such that g^{(n)}(ξ) = 0.
Lemma 4.5.3
The truncation error e(x) is given by
e(x) = ∏_{k=0}^{n} (x − x_k) · f^{(n+1)}(ξ) / (n + 1)!
with ξ = ξ(x) and min_k(x, x_k) < ξ < max_k(x, x_k).
Proof
For the points x = x_k, k = 0, 1, 2, ..., n, the result is trivially satisfied.
Suppose x ≠ x_k. We define
q(t) = ∏_{k=0}^{n} (t − x_k)
and
g(t) = f(t) − P_n(t) − [q(t)/q(x)] (f(x) − P_n(x))  for t, x ∈ [a, b].
Now g(x_k) = 0, k = 0, 1, 2, ..., n. Also g(x) = 0 (x ≠ x_k). So for each fixed x ≠ x_k, g(t) has n + 2 distinct zeros.
Since g(t) satisfies all the conditions of Lemma 4.5.2, we deduce that there is a number ξ such that g^{(n+1)}(ξ) = 0, where min_k(x, x_k) < ξ < max_k(x, x_k).
Now
g^{(n+1)}(t) = f^{(n+1)}(t) − P_n^{(n+1)}(t) − [(f(x) − P_n(x)) q^{(n+1)}(t)] / q(x) = f^{(n+1)}(t) − (n + 1)! [f(x) − P_n(x)] / q(x),
since P_n^{(n+1)}(t) = 0 and q^{(n+1)}(t) = (n + 1)!.
The result of the lemma follows when t = ξ.
Example 4.5.3
Using two-point linear interpolation of eˣ on [0, 1],
e(x) = [x(x − 1)/2] e^{ξ},  ξ ∈ (0, 1).
But x(x − 1) takes its extreme value at x = 1/2, with maximum absolute value 1/4. So e(x) has maximum absolute value at most
e/8 = 0.3398 (4 d.p.)
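The bound in Example 4.5.3 can be checked numerically; a sketch of ours comparing the actual interpolation error of eˣ on [0, 1] with the bound e/8:

```python
import math

def p1(x):
    # linear interpolant of e^x through (0, 1) and (1, e)
    return (1 - x) * 1.0 + x * math.e

# largest observed error on a fine grid vs. the theoretical bound e/8
worst = max(abs(math.exp(x) - p1(x)) for x in
            [i / 1000 for i in range(1001)])
bound = math.e / 8
print(worst < bound)  # True
```

The observed worst error (about 0.21, near x ≈ 0.54) sits comfortably below the bound, which is pessimistic because it takes both |x(x − 1)| and e^ξ at their separate maxima.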
4.5.5 Rounding errors in Lagrange polynomials
Rounding errors are usually introduced in the functional evaluation of f(x_k). Suppose rounding errors E_k occur in the data f_k = f(x_k), k = 0, 1, 2, ..., n, respectively.
Let P_n(x) and P*_n(x) denote, respectively, the interpolating polynomials using exact and inexact data. Thus
P_n(x) = Σ_{k=0}^{n} f_k L_k(x),  P*_n(x) = Σ_{k=0}^{n} (f_k + E_k) L_k(x),
so
|P_n(x) − P*_n(x)| ≤ Σ_{k=0}^{n} |E_k| |L_k(x)|,
which is the rounding-error bound. If the data were rounded to m decimal places, then the maximum absolute error is |E_k| ≤ (1/2) 10^{−m} for each k = 0, 1, ..., n.
Example 4.5.4
Find the rounding-error bound in linear interpolation of eˣ for x ∈ [0, 1], when the data were rounded to four digits.
Solution
|P_1(x) − P*_1(x)| ≤ (1/2) 10⁻⁴ (|L_0(x)| + |L_1(x)|).
Now L_0(x) = 1 − x and L_1(x) = x, and so on [0, 1], |L_0(x)| + |L_1(x)| = 1 − x + x = 1.
Thus |P_1(x) − P*_1(x)| ≤ (1/2) 10⁻⁴,
which says that the effect of rounding errors in the data on P_1(x) maintains the same maximum magnitude.
Example 4.5.5
The table below gives tabulated values of the probability integral
I = √(2/π) ∫₀ˣ e^{−t²/2} dt.
Use linear interpolation to find the value of I when x = 1.125. Estimate also the bound on the truncation error over [1, 1.25].

x   1.00   1.05   1.10   1.15   1.20   1.25
I   0.683  0.705  0.729  0.750  0.770  0.789
Solution
Note 4.5.2
At the tabular points x_0, x_1, ..., x_n the truncation error is zero, hence we can write
f(x) = P_n(x) + R ∏_{i=0}^{n} (x − x_i)  (R a constant depending on x).
Let
F(t) = f(t) − P_n(t) − (f(x) − P_n(x)) [∏_{i=0}^{n} (t − x_i)] / [∏_{i=0}^{n} (x − x_i)]    (*)
then
F(x_i) = f(x_i) − P_n(x_i) = 0,  i = 0, 1, 2, ..., n.
Further,
F(x) = f(x) − P_n(x) − (f(x) − P_n(x)) = 0, for x ∈ (a, b), x ≠ x_i, i = 0, 1, 2, ..., n.
Thus F(t) vanishes in (a, b) at n + 2 distinct points. On applying Rolle's theorem repeatedly we conclude that there exists c ∈ (a, b) such that F^{(n+1)}(c) = 0.
Differentiating equation (*) (n + 1) times and putting t = c gives
f(x) = P_n(x) + ∏_{i=0}^{n} (x − x_i) · f^{(n+1)}(c) / (n + 1)!.
For linear interpolation, n = 1, so
P_1(x) = [(x − x_1)/(x_0 − x_1)] f(x_0) + [(x − x_0)/(x_1 − x_0)] f(x_1)
with x_0 = 1.10, x_1 = 1.15, f(x_0) = 0.729, f(x_1) = 0.750. Thus
P_1(1.125) = [(1.125 − 1.15)(0.729)] / (−0.05) + [(1.125 − 1.10)(0.750)] / (0.05) = 0.5(0.729 + 0.750) = 0.7395.
The truncation error is ∏_{i=0}^{1} (x − x_i) · f^{(2)}(c)/2!. Now (x − x_0)(x − x_1) takes its extreme value at x = (x_0 + x_1)/2, where it equals (x_1 − x_0)²/4 in magnitude.
Here
f′(x) = √(2/π) e^{−x²/2},  f″(x) = −√(2/π) x e^{−x²/2},
f‴(x) = −√(2/π) e^{−x²/2} + √(2/π) x² e^{−x²/2},
which vanishes at x = ±1. Thus |f″(x)| ≤ √(2/π) e^{−1/2} = √(2/(πe)).
A bound on the truncation error is therefore given by
[(0.05)²/8] √(2/(πe)) ≈ 0.00015.
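The interpolation step of Example 4.5.5 is a two-line computation; a quick check (ours):

```python
# tabular data around x = 1.125, from the probability-integral table above
x0, x1 = 1.10, 1.15
f0, f1 = 0.729, 0.750
x = 1.125

# linear Lagrange interpolation, equation (4.6)
p1 = (x - x1) / (x0 - x1) * f0 + (x - x0) / (x1 - x0) * f1
print(round(p1, 4))  # 0.7395
```

Because x sits exactly midway between the two tabular points, the two Lagrange weights are both 0.5 and the result is just the average of the tabular values.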
4.5.6 Text Questions
1. Compute a bound on the truncation error for eˣ on [1, 1.4] when a fourth-degree polynomial is used to interpolate eˣ.
2. Obtain error bounds for both linear and quadratic interpolation of sinh x over the interval [1.90, 2.10].
3. A table of values of x⁴/12 is constructed for 0 ≤ x ≤ 1 in such a way that the error in linear interpolation does not exceed ε, rounding errors being neglected. Show that for uniform spacing h, h does not exceed the value 2√(2ε).
4. Find the rounding-error bound for quadratic interpolation of eˣ for x ∈ [0, 1], when the data were rounded to four digits.
Note 4.5.3 Use Lagrange interpolation to find an appropriate function passing through the given points. Sketch a graph of this function based only on the given points and what you think the curve must be. Compare your sketch with the graph created by graphing technology.
(a) A linear function passing through the points (−1, 3) and (2, 1).
(b) A quadratic function passing through the points (−1, 3), (0, 2), and (2, 1).
(c) A cubic function passing through the points (−1, 3), (0, 2), (1, 5), and (2, 1).
(d) A quartic (fourth-degree) polynomial function passing through (−2, 4), (−1, 3), (0, 2), (1, 5), and (2, 1).
Note 4.5.4 Finding a quadratic function that resembles other functions: by choosing three noncollinear points on any curve, we can use Lagrange interpolation to find a parabola that passes through those points. For each of the following functions, find a parabola that passes through the graph of the function at the points with the indicated first coordinates. Use graphing technology to draw a sketch of the function and the quadratic function you find. Discuss how you might use the quadratic you find to estimate values of the given function.
(a) f(x) = x⁵ − 4x³ + 2; x = 0, 1, 2.
(b) f(x) = √x; x = 0, 1, 4.
(c) f(x) = 2ˣ; x = −1, 0, 1.
(d) f(x) = 2ˣ; x = 0, 1, 2.
(e) f(x) = sin(πx/2); x = 0, 1, 2.
(f) f(x) = cos(πx/2); x = −1, 0, 1.
Describe a general procedure for finding a polynomial function of degree n that passes through n + 1 given points with distinct first coordinates.
Note 4.5.5 Lagrange's interpolation formula has the disadvantage that the degree of the approximating polynomial must be chosen at the outset; an alternative approach is discussed in the next step. Thus, Lagrange's formula is mainly of theoretical interest for us here; in passing, we mention that there are some important applications of this formula beyond the scope of this book, for example the construction of basis functions to solve differential equations using a spectral (discrete ordinate) method.
Note 4.5.6 Given the data below,

x      −3   1  4   5   7
f(x)   −28  4  28  36  52

use linear interpolation to approximate
(a) f(3);
(b) f(x) if x = 5;
(c) x if f(x) = 12;
(d) the root/solution of f(x) = 0.
(The data satisfy f(x) = 8x − 4.)
Note 4.5.7 Use a Lagrange polynomial to approximate f(2) for

       x_0   x_1
x      1     3
f(x)   5     21

L_0(x) = (x − x_1)/(x_0 − x_1) = (1/2)(3 − x)
L_1(x) = (x − x_0)/(x_1 − x_0) = (1/2)(x − 1)
P_1(x) = L_0(x) f(x_0) + L_1(x) f(x_1)
= (1/2)(3 − x)(5) + (1/2)(x − 1)(21)
= (1/2)[15 − 5x + 21x − 21]
= (1/2)[16x − 6]
= 8x − 3
f(2) ≈ 8(2) − 3 = 13
Note 4.5.8 The Lagrange basis polynomials (n = N − 1) on the data below are used to approximate f(3.5).

       x_0   x_1   x_2
x      1     2     4
f(x)   3     2     1

L_0(x) = [(x − x_1)(x − x_2)] / [(x_0 − x_1)(x_0 − x_2)] = (1/3)(x − 2)(x − 4)
L_1(x) = [(x − x_0)(x − x_2)] / [(x_1 − x_0)(x_1 − x_2)] = −(1/2)(x − 1)(x − 4)
L_2(x) = [(x − x_0)(x − x_1)] / [(x_2 − x_0)(x_2 − x_1)] = (1/6)(x − 1)(x − 2)
P_2(x) = L_0(x) f(x_0) + L_1(x) f(x_1) + L_2(x) f(x_2)
= (1/3)(x − 2)(x − 4)(3) − (1/2)(x − 1)(x − 4)(2) + (1/6)(x − 1)(x − 2)
= (x − 2)(x − 4) − (x − 1)(x − 4) + (1/6)(x − 1)(x − 2)
= (1/6)x² − (3/2)x + 4 + 1/3 ≈ 0.1667x² − 1.5x + 4.3333
f(3.5) ≈ 0.1667(3.5)² − 1.5(3.5) + 4.3333 ≈ 1.125
Note 4.5.9 The Lagrange error for n = 2 is
f(x) = P_2(x) + [f^{(3)}(ξ)/3!] (x − x_0)(x − x_1)(x − x_2),  ξ ∈ [a, b], where x_0, x_1, x_2 ∈ [a, b].
Note 4.5.10 The identity
Σ_{k=0}^{n} L_k(x) = 1
(established by setting f(x) = 1) may be used as a check. Note also that with n = 1 we recover the linear interpolation formula:
P_1(x) = L_0(x) f(x_0) + L_1(x) f(x_1)
= [(x − x_1)/(x_0 − x_1)] f(x_0) + [(x − x_0)/(x_1 − x_0)] f(x_1)
= f(x_0) + [(x − x_0)/(x_1 − x_0)] [f(x_1) − f(x_0)]
Note 4.5.11 Use Lagrange's interpolation formula to find the interpolating polynomial P_3(x) through the points (0, 3), (1, 2), (2, 7), and (4, 59), and then find the approximate value of P_3(3).

       x_0   x_1   x_2   x_3
x      0     1     2     4
f(x)   3     2     7     59

Here n = 3 = 4 − 1.
The Lagrange coefficients are:
L_0(x) = [(x − x_1)(x − x_2)(x − x_3)] / [(x_0 − x_1)(x_0 − x_2)(x_0 − x_3)] = [(x − 1)(x − 2)(x − 4)] / [(0 − 1)(0 − 2)(0 − 4)] = −(1/8)(x³ − 7x² + 14x − 8)
L_1(x) = [(x − x_0)(x − x_2)(x − x_3)] / [(x_1 − x_0)(x_1 − x_2)(x_1 − x_3)] = [(x − 0)(x − 2)(x − 4)] / [(1 − 0)(1 − 2)(1 − 4)] = (1/3)(x³ − 6x² + 8x)
L_2(x) = [(x − x_0)(x − x_1)(x − x_3)] / [(x_2 − x_0)(x_2 − x_1)(x_2 − x_3)] = [(x − 0)(x − 1)(x − 4)] / [(2 − 0)(2 − 1)(2 − 4)] = −(1/4)(x³ − 5x² + 4x)
L_3(x) = [(x − x_0)(x − x_1)(x − x_2)] / [(x_3 − x_0)(x_3 − x_1)(x_3 − x_2)] = [(x − 0)(x − 1)(x − 2)] / [(4 − 0)(4 − 1)(4 − 2)] = (1/24)(x³ − 3x² + 2x)
(The student should verify that L_0(x) + L_1(x) + L_2(x) + L_3(x) = 1.)
Hence, the required polynomial is
P_3(x) = −(3/8)(x³ − 7x² + 14x − 8) + (2/3)(x³ − 6x² + 8x) − (7/4)(x³ − 5x² + 4x) + (59/24)(x³ − 3x² + 2x)
= (1/24)[−9x³ + 63x² − 126x + 72 + 16x³ − 96x² + 128x − 42x³ + 210x² − 168x + 59x³ − 177x² + 118x]
= (1/24)[24x³ + 0x² − 48x + 72]
= x³ − 2x + 3
Consequently f(3) ≈ P_3(3) = 3³ − 2(3) + 3 = 24. However, note that if the explicit form of the interpolating polynomial were not required, one would proceed to evaluate P_3(x) for some value of x directly from the factored forms of L_k(x). Thus, in order to evaluate P_3(3), one has
L_0(3) = [(3 − 1)(3 − 2)(3 − 4)] / [(0 − 1)(0 − 2)(0 − 4)] = 1/4, etc.
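The closing remark, that P_3(3) can be evaluated straight from the factored forms of L_k without expanding any polynomial, looks like this in code (a sketch of ours, using the note's data):

```python
# tabular data of Note 4.5.11
xs = [0, 1, 2, 4]
fs = [3, 2, 7, 59]

def p3(x):
    """Evaluate sum_k f_k * prod_{j != k} (x - x_j)/(x_k - x_j) directly,
    with no polynomial expansion needed."""
    total = 0.0
    for k in range(len(xs)):
        term = fs[k]
        for j in range(len(xs)):
            if j != k:
                term *= (x - xs[j]) / (xs[k] - xs[j])
        total += term
    return total

print(round(p3(3), 6))  # 24.0
```

This factored evaluation costs O(n²) arithmetic per point, and avoids the algebra (and potential slips) of expanding each L_k.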
Exercise 4.1 Given that f(−2) = 46, f(−1) = 4, f(1) = 4, f(3) = 156, and f(4) = 484, use Lagrange's interpolation formula to estimate the value of f(0).
Example 4.5.6 Use the Lagrange interpolation polynomial for the data below

       x_0   x_1   x_2   x_3
x      1     2     4     5
f(x)   3     8     54    107

to show that P(x) = x³ − x² + x + 2.
The Lagrange coefficients are:
L_0(x) = [(x − x_1)(x − x_2)(x − x_3)] / [(x_0 − x_1)(x_0 − x_2)(x_0 − x_3)] = [(x − 2)(x − 4)(x − 5)] / [(1 − 2)(1 − 4)(1 − 5)] = −(1/12)(x³ − 11x² + 38x − 40)
L_1(x) = [(x − x_0)(x − x_2)(x − x_3)] / [(x_1 − x_0)(x_1 − x_2)(x_1 − x_3)] = [(x − 1)(x − 4)(x − 5)] / [(2 − 1)(2 − 4)(2 − 5)] = (1/6)(x³ − 10x² + 29x − 20)
L_2(x) = [(x − x_0)(x − x_1)(x − x_3)] / [(x_2 − x_0)(x_2 − x_1)(x_2 − x_3)] = [(x − 1)(x − 2)(x − 5)] / [(4 − 1)(4 − 2)(4 − 5)] = −(1/6)(x³ − 8x² + 17x − 10)
L_3(x) = [(x − x_0)(x − x_1)(x − x_2)] / [(x_3 − x_0)(x_3 − x_1)(x_3 − x_2)] = [(x − 1)(x − 2)(x − 4)] / [(5 − 1)(5 − 2)(5 − 4)] = (1/12)(x³ − 7x² + 14x − 8)
(The student should verify that L_0(x) + L_1(x) + L_2(x) + L_3(x) = 1.)
Hence, the required polynomial is
P_3(x) = −(1/12)(x³ − 11x² + 38x − 40)[3] + (1/6)(x³ − 10x² + 29x − 20)[8] − (1/6)(x³ − 8x² + 17x − 10)[54] + (1/12)(x³ − 7x² + 14x − 8)[107]
= (1/12)[−3x³ + 33x² − 114x + 120 + 16x³ − 160x² + 464x − 320 − 108x³ + 864x² − 1836x + 1080 + 107x³ − 749x² + 1498x − 856]
= (1/12)[12x³ − 12x² + 12x + 24]
= x³ − x² + x + 2
Consequently f(3.5) ≈ P(3.5) = (3.5)³ − (3.5)² + (3.5) + 2 = 36.125.
Chapter 5
Numerical Differentiation
5.1 Why numerical techniques for finding derivatives
A large number of physical problems involve functions which are known only through experimental measurements. In order to find derivatives of such functions one is forced to use methods based on the available discrete data, and the methods are purely numerical. Sometimes, even for functions with known analytical expressions, the derivatives may be so complex and computationally involved to evaluate that one must resort to numerical methods, which, though less accurate, have less involved expressions.
Throughout numerical weather prediction, for example, you often need to calculate the gradient of a function at a number of points. As we rarely know the equation which defines the function, we need to calculate a numerical estimate of the gradient. This requires the use of finite difference schemes. In this module we will be considering finite difference schemes to approximate the gradient of a function of one variable. The problem can be stated more specifically in these terms: how to estimate the gradient of a function f(x) at a point a.
Numerical differentiation is the process of finding the numerical value of a derivative of a given function at a given point. In general, numerical differentiation is more difficult than numerical integration. This is because while numerical integration requires only good continuity properties of the function being integrated, numerical differentiation requires more complicated properties such as Lipschitz classes.
5.2 Analytic definition of a derivative as compared to a numerical definition
Analytically, we define the derivative of f(x), denoted f′(x), as the limiting process
f′(x) = lim_{h→0} [f(x + h) − f(x)] / h.
However, a numerical approximation to f′(x) involves a difference quotient such as
φ(h) = [f(x + h) − f(x)] / h.
In geometrical terms, this can be represented as in Figure 5.1.
Figure 5.1: Geometrical interpretation of a derivative
Note 5.2.1 f′(x) is the slope of the tangent T and φ(h) is the slope of the line L. As h → 0, φ(h) → f′(x), suggesting that h should be small for a good approximation. However, in computing φ(h) the two terms f(x) and f(x + h) are close in value, so there is likely to be a loss of significant digits.
5.3 Forward difference approximation
The forward difference approximation for a derivative is given by
f′(x) ≈ [f(x + h) − f(x)] / h.
The formula can be represented geometrically as in Figure 5.2.
Figure 5.2: Geometrical representation of the forward difference approximation.
What the above expression actually calculates is the gradient of the line which intersects the points (x, f(x)) and (x + h, f(x + h)), as you can see from the figure. In the figure, the function is shown with a red line (a curve), the blue line (the lower straight line) represents the approximation to the gradient, and the black line (the upper line) is the actual gradient of the function at x = a.
Example 5.3.1
For the function f(x) = x², approximate f′(2) with step length
(i) h = 0.1
(ii) h = 0.01
(iii) h = 0.001.
Compare your results with the analytic/exact value of f′(2).

Solution
(i) Since
f′(x) ≈ [f(x + h) − f(x)] / h
with f(x) = x², x = 2 and h = 0.1, therefore f(x + h) = (2.1)² = 4.41 and f(x) = 2² = 4, so
f′(2) ≈ (4.41 − 4) / 0.1 = 0.41 / 0.1 = 4.1.
However, the exact value is f′(2) = 2(2) = 4, since f′(x) = 2x. This yields an error of 0.1.
(ii) Since h = 0.01, therefore f(x + h) = (2.01)² = 4.0401 and f(x) = 4, so
f′(2) ≈ (4.0401 − 4) / 0.01 = 4.01.
The error committed is 0.01, i.e. smaller than in part (i).
(iii) For h = 0.001, f(x + h) = (2.001)² = 4.004001 and f(x) = 4, so
f′(2) ≈ (4.004001 − 4) / 0.001 = 4.001.
The error committed in this case is 0.001.
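The computations of Example 5.3.1 can be scripted; a sketch of ours printing the approximation and its error for each step length:

```python
def forward_diff(f, x, h):
    """Forward difference approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

f = lambda x: x**2
for h in (0.1, 0.01, 0.001):
    approx = forward_diff(f, 2, h)
    print(h, round(approx, 6), round(abs(approx - 4), 6))
```

The printed errors track h exactly (0.1, 0.01, 0.001), which is the O(h) behaviour derived in the next subsection; for f(x) = x² the error is exactly h, since f″ = 2 and the remainder term is h f″/2 = h.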
5.3.1 Analytical derivation of the forward difference approximation
We derive the forward difference approximation analytically from the Taylor series expansion of f(x + h). By Taylor series we have
f(x + h) = f(x) + h f′(x) + (h²/2) f″(x) + (h³/6) f^{(3)}(x) + ...    (1a)
Truncating after the second-order term,
f(x + h) = f(x) + h f′(x) + (h²/2) f″(c_1),  x ≤ c_1 ≤ x + h.    (1b)
If we rearrange equation (1b), we get
f′(x) = [f(x + h) − f(x)] / h − (h/2) f″(c_1).    (5.1)
Equation (5.1) is the forward difference form, which we can write in better notation as
f′(x) = [f(x + h) − f(x)] / h + Truncation Error.
Note 5.3.1
(i) Error O(h) means that the error goes to zero near x = a at the rate Ah, where A is a constant (i.e. halving h halves the error).
(ii) The magnitude of the truncation error is
E_trunc = −(h/2) f″(c_1).
Indeed, |(h/2) f″(c_1)| is the bound on the truncation error.
5.4 Backward difference approximation
The backward difference approximation for the derivative is given by
f′(x) = [f(x) − f(x − h)] / h + (h/2) f″(c_2).
Figure 5.3: Geometrical representation of the backward difference approximation.
The formula can be represented geometrically as in Figure 5.3.
Example 5.4.1
For the function f(x) = x², approximate f′(2) using the backward difference approximation with step length
(i) h = 0.1, (ii) h = 0.01 and (iii) h = 0.001.

Solution
(i) Since the backward difference approximation is
f′(x) ≈ [f(x) − f(x − h)] / h
with f(x) = x², x = 2 and h = 0.1, therefore
f(x) = f(2) = 2² = 4,  f(x − h) = f(2 − 0.1) = f(1.9) = (1.9)² = 3.61,
so
f′(2) ≈ (4 − 3.6100) / 0.1 = 3.900.
Since the exact value is f′(2) = 2(2) = 4, the absolute error committed by using the numerical formula is 0.1. This is not very bad.
(ii) Since h = 0.01, f(2) = 4 and f(x − h) = f(1.99) = (1.99)² = 3.9601, therefore
f′(2) ≈ (4 − 3.9601) / 0.01 = 3.99.
The error committed is 0.01. This is better than the previous case.
(iii) Since h = 0.001, f(x) = 4 and f(x − h) = f(1.999) = (1.999)² = 3.996001, therefore
f′(2) ≈ (4 − 3.996001) / 0.001 = 3.999.
The error committed is 0.001, smaller than in either of the previous two cases.
Note 5.4.1 We note that the smaller h is, the better the numerical approximation to the derivative.
5.4.1 Analytical derivation of the backward difference approximation
Using the Taylor series expansion,
f(x − h) = f(x) − h f′(x) + (h²/2) f″(x) − (h³/6) f^{(3)}(x) + ...    (*)
Truncating after the second-order term,
f(x − h) = f(x) − h f′(x) + (h²/2) f″(c_2),  x − h ≤ c_2 ≤ x.    (**)
Rearranging equation (**), i.e.
f(x − h) = f(x) − h f′(x) + (h²/2) f″(c_2),
we get
f′(x) = [f(x) − f(x − h)] / h + (h/2) f″(c_2),  with x − h ≤ c_2 ≤ x,
which is the backward difference approximation. The approximation could also be written as
f′(x) = [f(x) − f(x − h)] / h + E_trunc
or
f′(x) = [f(x) − f(x − h)] / h + O(h).
5.5 The central difference approximation
We state the central difference approximation to the first derivative as
f′(x) = [f(x + h) − f(x − h)] / (2h) − (h²/3!) f^{(3)}(c).
The formula is represented geometrically in Figure 5.4.
Figure 5.4: Geometrical interpretation of the central difference approximation.
The slope of the line AB is the central difference approximation
f′(x) ≈ [f(x + h) − f(x − h)] / (2h).
From Figure 5.4, it is clear that the gradient of AB is closer to the gradient of the tangent at x = a; that is, the two lines are almost parallel. This is expected to be a better approximation than the forward and backward ones. Indeed it is, as we shall see in the following example.
Example 5.5.1
Approximate f'(2) for f(x) = x^2 using the central difference approximation with step size
(i) h = 0.1
(ii) h = 0.01
(iii) h = 0.001

Solution
(i) With x = 2 and h = 0.1,

    f'(2) ≈ (f(x + h) − f(x − h))/(2h) = (f(2.1) − f(1.9))/(2(0.1)) = ((2.1)^2 − (1.9)^2)/0.2 = 4.000000000.

Since the exact value is f'(2) = 2(2) = 4, the error is zero to 9 decimal places. This could not be achieved with a forward or backward formula using the same step size, because the central formula is of higher order, i.e. O(h^2).

(ii) With h = 0.01,

    f'(2) ≈ (f(2.01) − f(1.99))/(2(0.01)) = ((2.01)^2 − (1.99)^2)/0.02 = 4.000000000.

This also generates a zero error.

(iii) With h = 0.001,

    f'(2) ≈ (f(2.001) − f(1.999))/(2(0.001)) = ((2.001)^2 − (1.999)^2)/0.002 = 4.000000000,

which also generates a zero error.
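The same experiment can be sketched in Python (an illustrative sketch; `central_diff` is our own name):

```python
def central_diff(f, x, h):
    """Central difference estimate f'(x) ~ (f(x + h) - f(x - h)) / (2h), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2
for h in (0.1, 0.01, 0.001):
    # for a quadratic the O(h^2) error term involves f''' = 0, so every h gives 4 exactly
    print(h, central_diff(f, 2.0, h))
```

This makes the point of the example concrete: the leading error term contains f^(3), which vanishes for a quadratic, so the central difference is exact here for every step size.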
5.5.1 Analytical derivation of the central difference approximation

Using the Taylor series expansions of f(x + h) and f(x − h) about x, we have

    f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f^(3)(c_1)        (5.2)

and

    f(x − h) = f(x) − h f'(x) + (h^2/2) f''(x) − (h^3/6) f^(3)(c_2).       (5.3)

Subtracting equation (5.3) from equation (5.2), we get

    f(x + h) − f(x − h) = 2 h f'(x) + (h^3/6) (f^(3)(c_1) + f^(3)(c_2)).   (5.4)

Now,

    (f^(3)(c_1) + f^(3)(c_2))/2 = f^(3)(c_3),

with c_3 between c_1 and c_2. This is derived from the intermediate value theorem. So

    f'(x) = (f(x + h) − f(x − h))/(2h) − (h^2/3!) f^(3)(c_3),

i.e. the error is of order O(h^2).
5.6 Text Questions

1. For f(x) = e^x, approximate f'(1) using the forward, backward and central difference formulae with
   (i) h = 0.1
   (ii) h = 0.01
   (iii) h = 0.001.
   Compare the results with the analytic/exact value of f'(1). Comment on your results.

2. Let f(x) be given by the table below. The inherent round-off error has the bound |e_k| <= 5 x 10^-5; use the rounded values in your calculations.

       x     1.100   1.190   1.199   1.200   1.201   1.210
       f(x)  0.4536  0.3717  0.3633  0.3624  0.3615  0.3630

   (a) Find approximations for f'(1.2) using the central difference formula with h = 0.01 and h = 0.001.
   (b) Compare with f'(1.2) = −sin(1.2) = −0.932.
   (c) Find the total error bound for the cases in part (a).

3. For f(x) = cos x, use the forward, backward and central difference formulae to approximate f'(0.8) using h = 0.001. Compare your results with the analytic values. [Hint: work either all in radians or all in degrees. Ans = -0.717]

4. Repeat question three using the formula

       f'(x_0) ≈ (f_1 − f_(-1))/(2h).        (*)

   Compare your results with the analytic results. What can you say about the order of the truncation error in (*) in comparison to the forward, backward and central difference formulae?
5.7 Comparison

It is clear that the central difference gives a much more accurate approximation of the derivative than the forward and backward differences. Central differences are useful in solving partial differential equations. If data values are available both in the past and in the future, the numerical derivative should be approximated by the central difference.

5.8 The second derivative approximation

The most commonly used approximation is of central difference form, which we obtain from the Taylor series expansions considered in the previous lecture.
Consider again the expansions

    f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f^(3)(x) + (h^4/24) f^(4)(c_1)     (5.5)

and

    f(x − h) = f(x) − h f'(x) + (h^2/2) f''(x) − (h^3/6) f^(3)(x) + (h^4/24) f^(4)(c_2).    (5.6)

Addition of equations (5.5) and (5.6) gives

    f(x + h) + f(x − h) = 2 f(x) + h^2 f''(x) + (h^4/24) (f^(4)(c_1) + f^(4)(c_2))          (5.7)

and, using

    (f^(4)(c_1) + f^(4)(c_2))/2 = f^(4)(c_3)

with c_3 between c_1 and c_2 (this is derived from the intermediate value theorem) and making f''(x) the subject in equation (5.7), we get

    f''(x) = (f(x − h) − 2 f(x) + f(x + h))/h^2 − (h^2/12) f^(4)(c_3)                       (5.8)

or

    f''(x) = (f(x − h) − 2 f(x) + f(x + h))/h^2 + O(h^2).                                   (5.9)

Equation (5.9) is a second-order approximation to the second derivative. The formula is handy for approximating second-order derivatives.
Example 5.8.1
Approximate f''(1) for the function f(x) = x^3 with
(i) h = 0.100
(ii) h = 0.010
(iii) h = 0.001.

Solution
(i) Since

    f''(x) ≈ (f(x − h) − 2 f(x) + f(x + h))/h^2,

with x = 1 and h = 0.1,

    f''(1) ≈ (f(0.9) − 2 f(1) + f(1.1))/(0.1)^2 = ((0.9)^3 − 2(1)^3 + (1.1)^3)/0.01 = 6.00000000000005.

However, the exact value is f''(x) = 6x, so f''(1) = 6. Hence the error committed is just 5.0 x 10^-14. This is really small.

(ii) With h = 0.01,

    f''(1) ≈ (f(0.99) − 2 f(1) + f(1.01))/(0.01)^2 = ((0.99)^3 − 2 + (1.01)^3)/0.0001 = 6.00000000000000.

Since the exact value is 6, the error committed is zero to 14 decimal places.

(iii) This also generates a zero error, since

    f''(1) ≈ (f(0.999) − 2 f(1) + f(1.001))/(0.001)^2 = ((0.999)^3 − 2 + (1.001)^3)/0.000001 = 6.00000000000000.
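A minimal Python sketch of the same computation (the function name `second_diff` is our own):

```python
def second_diff(f, x, h):
    """Central second-difference f''(x) ~ (f(x - h) - 2 f(x) + f(x + h)) / h^2, error O(h^2)."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

f = lambda x: x**3  # f''(x) = 6x, so f''(1) = 6
for h in (0.1, 0.01, 0.001):
    # only floating-point round-off remains; the tiny residual errors match the example
    print(h, second_diff(f, 1.0, h))
```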
5.8.1 Error analysis in numerical differentiation

Addition of a rounding error term

When we use a formula such as

    f'(x) = (f(x + h) − f(x))/h + E_trunc,                      (5.10)

in general the difference (f(x + h) − f(x))/h will be evaluated with a rounding error. For example, if f is evaluated to four decimal places with h = 0.1, there is a possible rounding error of

    2(0.00005)/0.1 = 0.001

in the difference (f(x + h) − f(x))/h. This is because when a number is evaluated to n decimal places, the maximum absolute error committed is (1/2) x 10^-n. Thus the maximum error in f(x + h) is (1/2) x 10^-4 = 0.00005, and the maximum absolute error in f(x) is also (1/2) x 10^-4 = 0.00005. But the error bound for the difference is the sum of the absolute errors in the individual numbers, so the maximum error in f(x + h) − f(x) is 2((1/2) x 10^-4), or 2(0.00005). Thus the error in

    (f(x + h) − f(x))/h

is at most 2(0.00005)/0.1, assuming h = 0.1 is exact. Equation (5.10) then becomes

    f'(x) = (y_1 − y_0)/h + E_trunc + E_round,

where y_1, y_0 are the rounded values of f(x + h) and f(x) respectively.
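The rounding effect is easy to observe directly. The sketch below (illustrative; names are our own, and f(x) = e^x is an arbitrary choice) compares the forward difference computed from full-precision values against one computed from values rounded to four decimal places:

```python
import math

def forward_diff(f, x, h):
    """Forward difference estimate of f'(x)."""
    return (f(x + h) - f(x)) / h

f_exact = math.exp                             # full-precision values of f(x) = e^x
f_rounded = lambda x: round(math.exp(x), 4)    # values rounded to 4 decimal places

h = 0.1
full = forward_diff(f_exact, 1.0, h)
rounded = forward_diff(f_rounded, 1.0, h)
# the two estimates differ by at most 2(0.00005)/0.1 = 0.001, as the bound above predicts
print(full, rounded, abs(full - rounded))
```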
Optimum step size

We now have

    Total error = E_trunc + E_round.

Thus there is no point in choosing h so that |E_trunc| is small if |E_round| is then large, since the benefit is swamped.

Example 5.8.2
Suppose

    f(x + h) = y_1 + e_1   and   f(x) = y_0 + e_0,

with |e_1|, |e_0| < e (a small number). Find the optimum choice of h when using the approximation

    f'(x) ≈ (f(x + h) − f(x))/h.

Solution
For this approximation, the maximum absolute error in f(x + h) is e, and that in f(x) is also e. So

    |E_round| < (e + e)/h = 2e/h.

Now

    |E_trunc| = (h/2) |f''(c_1)| <= (h/2) M_2,

say, where M_2 = max |f''(x)|, a <= x <= a + h. Thus

    |Total error| <= 2e/h + (h/2) M_2.

The optimum choice of h makes

    d/dh |Total error| = 0,

that is,

    −2e/h^2 + M_2/2 = 0,

giving h = 2 sqrt(e/M_2).
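The bound and its minimiser can be checked numerically. A small sketch (illustrative; the values of e and M_2 below are example assumptions, not from the text):

```python
import math

def total_error_bound(h, e, M2):
    """Bound |E_round| + |E_trunc| <= 2e/h + (h/2) M2 for the forward difference."""
    return 2 * e / h + 0.5 * h * M2

e, M2 = 0.5e-4, 2.0            # e.g. 4-decimal rounding, and |f''| bounded by 2
h_opt = 2 * math.sqrt(e / M2)  # the minimiser h = 2 sqrt(e/M2) derived above
print(h_opt, total_error_bound(h_opt, e, M2))
```

Evaluating the bound at h_opt, 2 h_opt and h_opt/2 confirms that h_opt is where the bound is smallest: making h larger inflates the truncation term, making it smaller inflates the rounding term.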
5.8.2 Text Questions

1. For f(x) = x^2 + x, approximate f''(1) using a step length
   (i) h = 0.4
   (ii) h = 0.04
   (iii) h = 0.004.

2. Given the function f(x) = e^x, approximate f''(x) at x = 2 using a step length h of magnitude
   (i) 0.1
   (ii) 0.01
   (iii) 0.001.

3. Let f(x) = cos x. Use the formula considered in this lecture for approximating f''(x), with h = 0.01, to calculate approximations for f''(0.8). Compare with the true value.

4. Given that

       f''(x_0) ≈ (−f_2 + 16 f_1 − 30 f_0 + 16 f_(-1) − f_(-2))/(12 h^2),

   use the formula to approximate f''(1) for f(x) = e^x using h = 0.5. Compare your answer with the analytic answer and with the answer obtained when using equation (5.9). What do you notice and suggest?

5. Show that for the central difference formula

       f'(x) = (f(x + h) − f(x − h))/(2h) − (h^2/3!) f^(3)(c):

   (i) |Total error| <= e/h + M h^2/6;
   (ii) the optimum h = (3e/M)^(1/3);
   (iii) for f(x) = cos x, x = 0.8 and h = 0.0001, show that |E_trunc| <= 0.1 x 10^-8; and if e = 0.5 x 10^-9, show that |E_round| < 0.5 x 10^-5 and that the optimum h = 0.0011.
Chapter 6
Ordinary Differential Equations

Numerical ordinary differential equations is the part of numerical analysis which studies the numerical solution of ordinary differential equations (ODEs). This field is also known under the name numerical integration, but some people reserve this term for the computation of integrals.

Many differential equations cannot be solved analytically, in which case we have to satisfy ourselves with an approximation to the solution. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.

Ordinary differential equations occur in many scientific disciplines, for instance in mechanics, chemistry, ecology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.

The key idea behind the numerical solution of ODEs is the combination of function values at different points or times to approximate the derivatives in the required equation. The manner in which the function values are combined is determined by the Taylor series expansion for the point at which the derivative is required. This gives us a finite difference approximation to the derivative.

Common numerical methods for solving initial value problems of ordinary differential equations are summarized:
(a) Taylor series (no h in the expansion, only x_0 and x).
(b) Euler's method (no x in the expansion, only x_0 and h).
(c) Runge-Kutta methods.
6.1 Different forms of ordinary differential equations

We are faced with three types of problems in the solution of ordinary differential equations, namely initial, boundary and mixed-value problems. For initial and boundary value problems, we take the differential equation, as an example,

    y'' = f(x, y, y').

If in addition we are given y(α) = a and y(β) = b (α ≠ β and α < β), we have what is called a two-point boundary-value problem. If on the other hand we are given y(α) = a and y'(α) = b, then we have an initial-value problem. For mixed problems we consider the example of

    y^(4) = f(x, y, y'').

If y(α) = a, y'(α) = a', y(β) = b and y'(β) = b', then we have a mixed-value problem. We shall mainly deal with initial value and two-point boundary-value problems in this unit.
6.1.1 Initial-value problems

Given a first order ordinary differential equation y' = f(x, y) with initial condition y(a) = y_0, suppose we wish to find a numerical solution y on the interval [a, b]. We first divide [a, b] into n equal sub-intervals of step length h = (b − a)/n. We wish to tabulate the solution y at the points x_i = a + ih, including the point x_n = b. We have two kinds of method to be able to do this: what are called single-step methods, and what we call multi-step methods.

6.1.2 Single step methods

These methods advance the solution one step at a time, using information at one previous point, from the point x = a to the point x = b. They are of two kinds, namely those which involve a non-fixed truncation error and those involving a fixed truncation error.
6.2 Taylor series method

This method is based on the Taylor series expansion. It is one of the oldest methods for solving differential equations and was used by Newton. Consider the Taylor series

    y(x) = y(x_0) + ((x − x_0)/1!) y'(x_0) + ((x − x_0)^2/2!) y''(x_0) + ... + ((x − x_0)^n/n!) y^(n)(x_0) + ...   (6.1)

where the series is convergent and all the derivatives y', y'', ..., y^(n), ... exist on the interval of solution. This extends the solution from a to a + h, from a + h to a + 2h, and so on until the point x = a + nh = b is reached.
Example 6.2.1 Solve the differential equation

    y' = 1 − 2xy,   y(1) = 0.54,   x_0 = 1.

Hence find y(1.1).
(Note: this equation has no simple analytic solution.)

    y' = 1 − 2xy          =>  y'(1) = 1 − 2(1)(0.54) = −0.08
    y'' = −2xy' − 2y      =>  y''(1) = −2(1)(−0.08) − 2(0.54) = −0.92
    y''' = −2xy'' − 4y'   =>  y'''(1) = −2(1)(−0.92) − 4(−0.08) = 2.16

Then

    y(x) = y(x_0) + (x − x_0) y'(x_0)/1! + (x − x_0)^2 y''(x_0)/2! + (x − x_0)^3 y'''(x_0)/3! + ...
         = 0.54 + (x − 1)(−0.08) + (x − 1)^2 (−0.92)/2! + (x − 1)^3 (2.16)/3! + ...

so

    y(1.1) = 0.54 + (0.1)(−0.08) + (0.1)^2 (−0.92)/2! + (0.1)^3 (2.16)/3! + ...
           = 0.54 − 0.008 − 0.0046 + 0.00036 ≈ 0.5278.
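The cubic Taylor polynomial built in this example can be sketched as follows (illustrative Python; `taylor_y` is our own name, and the derivative values are those computed above):

```python
def taylor_y(x, x0=1.0, y0=0.54):
    """Cubic Taylor polynomial about x0 = 1 for y' = 1 - 2xy, y(1) = 0.54."""
    d1, d2, d3 = -0.08, -0.92, 2.16   # y'(1), y''(1), y'''(1) from the worked example
    t = x - x0
    return y0 + d1 * t + d2 * t**2 / 2 + d3 * t**3 / 6

print(round(taylor_y(1.1), 4))  # reproduces y(1.1) ~ 0.5278
```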
Example 6.2.2 Solve the differential equation

    y' = x^2 + y^2,   y(0) = 1,   for x = 0.25 and x = 0.5 (x_0 = 0).

    y' = x^2 + y^2                              =>  y'(0) = 0^2 + 1^2 = 1
    y''(x) = 2x + 2y(x)y'(x)                    =>  y''(0) = 2(0) + 2(1)(1) = 2
    y'''(x) = 2 + 2[y'(x)y'(x) + y''(x)y(x)]    =>  y'''(0) = 2 + 2[(1)(1) + (2)(1)] = 8

Then

    y(x) = y(x_0) + (x − x_0) y'(x_0)/1! + (x − x_0)^2 y''(x_0)/2! + (x − x_0)^3 y'''(x_0)/3! + ...
         = 1 + x(1) + x^2 (2)/2! + x^3 (8)/3! + ...

so

    y(0.25) = 1 + (0.25)(1) + (0.25)^2 (2)/2! + (0.25)^3 (8)/3! + ... ≈ 1.33333
    y(0.5)  = 1 + (0.5)(1) + (0.5)^2 (2)/2! + (0.5)^3 (8)/3! + ... ≈ 1.91667
Example 6.2.3 Given y' = y, y(0) = 1, find y(0.04).

    y(x) = y(x_0) + (x − x_0) y'(x_0)/1! + (x − x_0)^2 y''(x_0)/2! + (x − x_0)^3 y'''(x_0)/3! + ...
         = 1 + x + x^2/2! + x^3/3! + ...

    y(0.04) = 1 + 0.04 + (0.04)^2/2! + (0.04)^3/3! + ... ≈ 1.04081067
Example 6.2.4 Using the Taylor series method, find an expression for the solution y(x) given that

    y' = x^3 − y

and the initial condition y = 1 when x = 0. Use this expression to find the value of y for x = x_0 + h with h = 0.1.

For x_0 = 0, y_0 = y(x_0) = 1:

    y'(x) = x^3 − y       =>  y'(x_0) = x_0^3 − y_0 = −1
    y''(x) = 3x^2 − y'    =>  y''(x_0) = 3x_0^2 − y'_0 = 1
    y'''(x) = 6x − y''    =>  y'''(x_0) = 6x_0 − y''_0 = −1
    y^(4)(x) = 6 − y'''   =>  y^(4)(x_0) = 6 − y'''_0 = 7

Thus

    y(x) = y(x_0) + ((x − x_0)/1!) y'(x_0) + ((x − x_0)^2/2!) y''(x_0) + ... + ((x − x_0)^n/n!) y^(n)(x_0)
         = 1 − x(1) + (x^2/2)(1) − (x^3/6)(1) + (x^4/24)(7) + ...

    y(0.1) = 1 − (0.1)(1) + ((0.1)^2/2)(1) − ((0.1)^3/6)(1) + ((0.1)^4/24)(7) + ... ≈ 0.905
The Taylor series method has certain obvious advantages. First, if we are prepared to compute enough derivatives, we can make the truncation error as small as we please; its size may be estimated from the first neglected term of the series.

Second, we are not restricted to equal intervals and we can change the interval easily and at will. A large interval can often be used.

Third, it has an easy checking technique:

    y(x − h) = y(x) − h y'(x) + (h^2/2!) y''(x) − ...

and this value may be compared with a previous value. It can be applied, theoretically, to non-linear equations and requires no special starting technique. There are disadvantages as well. For example, the computation of higher derivatives is not particularly easy in general. Sometimes we can find a recurrence relation, as indicated in the previous example. For automatic computation it certainly implies extra programming. The method has been successfully applied, on a wide variety of automatic computers, to the equation

    p(x) y'' + q(x) y' + r(x) y = 0,

where p, q and r are quadratic functions of x. In this case there is a five-term recurrence relation between the derivatives.
6.3 Euler's Method

Euler's method uses the first two terms of the Taylor series expansion. The technique is not so popular for solving ODEs because of the high truncation error inherent in the scheme.

By Taylor's formula we have

    y(x_0 + h) = y(x_0) + h y'(x_0) + (h^2/2!) y''(x_0) + ... + (h^p/p!) y^(p)(x_0) + (h^(p+1)/(p+1)!) y^(p+1)(c),

where c lies in (x_0, x_0 + h). If we can evaluate higher derivatives of y, we obtain a Taylor algorithm of order p by neglecting the remainder term in the formula above. If we retain only the first two terms (taking h^2, h^3, h^4, ... ≈ 0 for h very small), we get what we call Euler's method, given by

    y_(n+1) = y_n + h y'_n.                                 (6.2)

Note 6.3.1 When using Euler's method, one first has to specify the value of h to use.

Note 6.3.2 The scheme is easy:

    y_(n+1) = y_n + h y'_n,

i.e. the next step = previous step + h x (derivative at the previous step).

Note 6.3.3 It has local truncation error given by

    E = (h^2/2) y''(c),   c in (x_n, x_(n+1)).
Example 6.3.1 Use Euler's method to solve the differential equation y' = y, y(0) = 1 on the interval [0, 1].

We know the classical solution of the equation is y = e^x. Using Euler's method and four-decimal arithmetic with h = 0.005, we have

    y_(n+1) = y_n + h y'_n = (1 + h) y_n,   since y' = y.

    y(0.005) = y_1 = (1.005) y_0 = (1.005)(1)      = 1.0050
    y(0.010) = y_2 = (1.005) y_1 = (1.005)(1.0050) = 1.0100
    y(0.015) = y_3 = (1.005) y_2 = (1.005)(1.0100) = 1.0151
    y(0.020) = y_4 = (1.005) y_3 = (1.005)(1.0151) = 1.0202
    y(0.025) = y_5 = (1.005) y_4 = (1.005)(1.0202) = 1.0253
    y(0.030) = y_6 = (1.005) y_5 = (1.005)(1.0253) = 1.0304
    y(0.035) = y_7 = (1.005) y_6 = (1.005)(1.0304) = 1.0356
    y(0.040) = y_8 = (1.005) y_7 = (1.005)(1.0356) = 1.0408

The above results, compared with the exact solution y = e^x, are correct to about four decimal places.
Example 6.3.2 Solve the ODE y' = −y with y(0) = 1 for x = 0.04 with step length h = 0.01. (The analytic solution is y = e^(−x).)

    y(0.01) = y(0) + (0.01) y'(0) = y(0) + (0.01)[−y(0)] = 1 + (0.01)(−1) = 0.99 = y_1
    y(0.02) = y(0.01) + (0.01) y'(0.01) = 0.99 + (0.01)(−0.99) = 0.9801 = y_2

We have so far reached x = 0.02; we continue in the same way up to x = 0.04, which is required.

    x      Analytic    Numerical   Absolute error
    0.01   0.990050    0.990000    0.000050
    0.02   0.980199    0.980100    0.000099
    0.03   0.970446    0.970299    0.000147
    0.04   0.960789    0.960596    0.000193

For the analogous equation y' = y (Example 6.3.1) with a general h,

    y_n = (1 + h) y_(n−1) = (1 + h)^2 y_(n−2) = ... = (1 + h)^n,

and

    (1 + h)^n = 1 + nh + n(n − 1)h^2/2 + ...

is the approximate solution, while

    e^(nh) = 1 + nh + n^2 h^2/2! + ...

is the exact solution. The two solutions agree for the first two terms but differ in the rest of the series. As h → 0, we find that there is more and more agreement between the two solutions.
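The Euler iteration of Example 6.3.2 can be sketched as a short loop (illustrative Python; `euler` is our own name):

```python
def euler(f, x0, y0, h, n):
    """n Euler steps y_{k+1} = y_k + h f(x_k, y_k); returns the list [y_0, ..., y_n]."""
    x, ys = x0, [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(x, ys[-1]))
        x += h
    return ys

ys = euler(lambda x, y: -y, 0.0, 1.0, 0.01, 4)  # y' = -y, y(0) = 1, h = 0.01
print(ys)  # 1, 0.99, 0.9801, 0.970299, 0.96059601 -- the table above
```

With f(x, y) = −y each step is a multiplication by (1 − h), which is why the numerical values are powers of 0.99.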
6.3.1 Text Questions

1. (a) Show that for the differential equation

           y' = 1 + xy + x^2 y^2,

       with the substitution z = y^2,

           y^(n+1) = x y^(n) + n y^(n−1) + x^2 z^(n) + 2nx z^(n−1) + n(n − 1) z^(n−2),

           z^(n) = y y^(n) + (n choose 1) y' y^(n−1) + (n choose 2) y'' y^(n−2) + ...

   (b) Hence solve the equation by the Taylor series method, given that y(0) = 0, and find the numerical solution for x = 0.1, 0.2 and 0.3.

2. Derive the Taylor series approximation up to terms of order h^6 for the initial value problem

       y' = 1.3x + y + x^2 + xy,   y(0) = 0.

   Find the solution for x = 0.2 correct to two decimal places.

3. (a) Use the Euler method to find the solution of y' = −t y^2, y(2) = 1, from t_0 = 2 to t_N = 3. Use the fixed step length h = 0.1 and then h = 0.05. [y(t) = 2/(t^2 − 2) for t > sqrt(2)]

   (b) Use the modified Euler method

           y_(j+1) = y_j + h f(t_j + (1/2)h, y_j + (1/2)h f_j)

       and Heun's method to solve the problem in (a) at t = 2.1; use h = 0.1 and h = 0.05. Do the results indicate that both methods are second-order? [Heun's method is also called the improved Euler method or the predictor-corrector Euler method.]
Example 6.3.3

(a) Some of the numerical techniques for solving an ordinary differential equation are the Taylor and Euler methods.

   (i) Derive the Taylor and Euler methods of solving a differential equation. [4 Marks]

       Consider the Taylor series

           y(x) = y(x_0) + ((x − x_0)/1!) y'(x_0) + ((x − x_0)^2/2!) y''(x_0) + ... + ((x − x_0)^n/n!) y^(n)(x_0) + ...   (6.3)

       where the series is convergent and all the derivatives y', y'', ..., y^(n), ... exist on the interval of solution. This extends the solution from a to a + h, from a + h to a + 2h, and so on until the point x = a + nh = b is reached.

       Euler's method uses the first two terms of the Taylor series expansion. The technique is not so popular for solving ODEs because of the high truncation error inherent in the scheme. By Taylor's formula we have

           y(x_0 + h) = y(x_0) + h y'(x_0) + (h^2/2!) y''(x_0) + ... + (h^p/p!) y^(p)(x_0) + (h^(p+1)/(p+1)!) y^(p+1)(c),

       where c lies in (x_0, x_0 + h). If we can evaluate higher derivatives of y, we obtain a Taylor algorithm of order p by neglecting the remainder term. If we retain only the first two terms (h^2, h^3, h^4, ... ≈ 0 for h very small), we get Euler's method, given by

           y_(n+1) = y_n + h y'_n.                                   (6.4)

   (ii) Considering your derivations, which of the two methods is superior? Defend your answer. [2 Marks]

       The Taylor method is better, since Euler's method uses only the first two terms of the Taylor expansion and therefore carries a larger truncation error.

(b) Given a first order ordinary differential equation

        y' = x^2 + y^2,   y(0) = 1,

    with h = 0.02, find y(0.02) and y(0.04) using:

   (i) the Taylor expansion up to the fourth derivative. [5 Marks]

           y'(x) = x^2 + y^2               =>  y'(0) = 1
           y''(x) = 2x + 2yy'              =>  y''(0) = 2
           y'''(x) = 2 + 2yy'' + 2(y')^2   =>  y'''(0) = 8
           y^(4)(x) = 2yy''' + 6y'y''      =>  y^(4)(0) = 28

       Thus

           y(x) = 1 + x(1) + (x^2/2!)(2) + (x^3/3!)(8) + (x^4/4!)(28) + ...
                = 1 + x + x^2 + (4/3)x^3 + (7/6)x^4 + ...

           y(0.02) = 1 + 0.02 + (0.02)^2 + (4/3)(0.02)^3 + (7/6)(0.02)^4 + ... ≈ 1.0204109
           y(0.04) = 1 + 0.04 + (0.04)^2 + (4/3)(0.04)^3 + (7/6)(0.04)^4 + ... ≈ 1.0416883

   (ii) Euler's formula. [5 Marks]

           y_(n+1) = y_n + h y'_n

           y(0.02) = y(0) + (0.02) y'(0, 1) = 1 + (0.02)[0^2 + 1^2] = 1.02
           y(0.04) = y(0.02) + (0.02) y'(0.02, 1.02) = 1.02 + (0.02)[0.02^2 + 1.02^2] ≈ 1.040816

(c) (i) Repeat the problem above using Euler's technique with h = 0.01, that is, find y(0.01), y(0.02), y(0.03) and y(0.04). [8 Marks]

           y_(n+1) = y_n + h y'_n

           y(0.01) = y(0) + (0.01) y'(0, 1) = 1 + (0.01)[0^2 + 1^2] = 1.01
           y(0.02) = y(0.01) + (0.01) y'(0.01, 1.01) = 1.01 + (0.01)[0.01^2 + 1.01^2] ≈ 1.020202
           y(0.03) = y(0.02) + (0.01) y'(0.02, 1.020202) = 1.020202 + (0.01)[0.02^2 + 1.020202^2] ≈ 1.030614
           y(0.04) = y(0.03) + (0.01) y'(0.03, 1.030614) = 1.030614 + (0.01)[0.03^2 + 1.030614^2] ≈ 1.041245

   (ii) Comparing your answers in c(i) and part (b), fully explain the effect of the size of h on your results. [2 Marks]

       The smaller the h, the better the approximations.
6.4 Runge-Kutta Methods - The Improved Euler Method

Runge-Kutta methods are the most popular of all numerical techniques for solving ordinary differential equations. In this lecture you learn how to use Runge-Kutta methods on ordinary differential equations; the techniques covered are those of second and fourth order.

Note that in the examples of the previous lecture we selected a very small h in order to achieve an accuracy of four decimal places. This is a general feature: Euler's method requires a small h to achieve the required accuracy. Higher order Taylor algorithms are not convenient, since higher order total derivatives of y have to be obtained. Runge-Kutta methods attempt to obtain greater accuracy while avoiding the need to calculate derivatives. Basically, the idea behind Runge-Kutta methods is to compute the value of f(x, y) at several strategically chosen points near the solution curve in the interval [x_n, x_(n+1)], and to combine these values in such a way as to match as many terms as possible in the Taylor series expansion.
6.4.1 Runge-Kutta two stage method of order two

We are looking for the solution of a first order ordinary differential equation

    y' = f(x, y),   y(x_n) = y_n.

We let

    k_1 = h f(x_n, y_n),
    k_2 = h f(x_n + αh, y_n + βk_1).

Then

    y_(n+1) = y_n + A k_1 + B k_2,

where α, β, A and B are scalars. We derive conditions for this method to be of second order by computing the Taylor series expansion of y_(n+1) as a function of h:

    y_(n+1) = y_n + (A + B) h f_n + h^2 B (α f_x + β f_y f)_n + (h^3 B/2)(α^2 f_xx + 2αβ f_xy f + β^2 f_yy f^2)_n + O(h^4).

We wish to make this series expansion agree as closely as possible with the Taylor series for the solution. The best we can do is to match terms up to h^2, which gives the order conditions

    A + B = 1,   α = β = 1/(2B).

If these are satisfied, the local error is O(h^3), and the method is thus a second-order Runge-Kutta method. If we take B = 1, A = 0, α = β = 1/2, we obtain the midpoint method

    y_(n+1) = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n)).

Taking B = 1/2 = A, and therefore α = β = 1, we find

    y_(n+1) = y_n + (1/2)(k_1 + k_2),                        (6.5)

which is Heun's method, with

    k_1 = h f(x_n, y_n),                                     (6.6)
    k_2 = h f(x_n + h, y_n + k_1).                           (6.7)

Like Euler, we apply the scheme iteratively, advancing from y_n to y_(n+1), where

    f(x, y) = y'.                                            (6.8)
Example 6.4.1 Solve y' = x − y, y(0) = 0, taking h = 0.1. (The classical solution is given by y = e^(−x) + x − 1.)

Using the second-order Runge-Kutta process:

    k_1 = 0.1(0 − 0) = 0,   k_2 = (0.1)(0.1 − 0) = 0.01
    y_1 = y(0.1) = 0 + (1/2)(0 + 0.01) = 0.005

    k_1 = 0.1(0.1 − 0.005) = 0.0095
    k_2 = 0.1(0.2 − 0.0145) = 0.01855
    y_2 = y(0.2) = 0.005 + (1/2)(0.0095 + 0.01855) = 0.019025

    k_1 = 0.1(0.2 − 0.019025) = 0.0181
    k_2 = 0.1(0.3 − 0.0371) = 0.0263
    y_3 = y(0.3) = 0.0190 + (1/2)(0.0181 + 0.0263) = 0.0412

    k_1 = 0.1(0.3 − 0.0412) = 0.0259
    k_2 = 0.1(0.4 − 0.0671) = 0.0333

Therefore

    y_4 = y(0.4) = 0.0412 + (1/2)(0.0259 + 0.0333) = 0.0708.

But where do we stop? We stop at the value of x we are interested in. For example, if the question asked for x = 0.6, the interest would be y_6 = y(0.6), because of the step width h. That is, for h = 0.1:

    x_0   x_1   x_2   x_3   x_4   x_5   x_6
    0.0   0.1   0.2   0.3   0.4   0.5   0.6
    y_0   y_1   y_2   y_3   y_4   y_5   y_6

For h = 0.25:

    x_0   x_1    x_2   x_3    x_4   x_5    x_6
    0.0   0.25   0.5   0.75   1.0   1.25   1.5
    y_0   y_1    y_2   y_3    y_4   y_5    y_6

If we need the value of y at x = 0.75, then we need y_3 = y(0.75).

We can make a table comparing the analytic solution with the numerical solution at various points of the previous example. Analytically, y = e^(−x) + x − 1.

    x     Analytic    Numerical   Absolute error
    0.1   0.004837    0.005       0.000163
    0.2   0.018731    0.019025    0.000294
    0.3   0.040818    0.0412      0.000382
    0.4   0.070320    0.0708      0.000480

A very powerful technique.
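Heun's second-order scheme from the example can be sketched as follows (illustrative Python; `rk2_step` is our own name):

```python
def rk2_step(f, x, y, h):
    """One Heun (second-order Runge-Kutta) step: y + (k1 + k2)/2."""
    k1 = h * f(x, y)
    k2 = h * f(x + h, y + k1)
    return y + 0.5 * (k1 + k2)

f = lambda x, y: x - y          # y' = x - y, y(0) = 0
x, y, h = 0.0, 0.0, 0.1
for _ in range(4):
    y = rk2_step(f, x, y, h)
    x += h
    print(round(x, 1), y)       # reproduces y(0.1) = 0.005, y(0.2) = 0.019025, ...
```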
General Runge-Kutta

A general explicit Runge-Kutta method of s stages has the form

    k_1 = h f(x_n, y_n)
    k_2 = h f(x_n + c_2 h, y_n + a_21 k_1)
    k_3 = h f(x_n + c_3 h, y_n + a_31 k_1 + a_32 k_2)
    ...
    k_s = h f(x_n + c_s h, y_n + a_s1 k_1 + ... + a_s,s-1 k_(s−1))

    y_(n+1) = y_n + b_1 k_1 + b_2 k_2 + ... + b_s k_s.

With s = 3, it is possible to construct Runge-Kutta methods of third-order accuracy. One of the simplest third-order Runge-Kutta methods, with easily remembered coefficients, is Heun's third-order method, given by

    k_1 = h f(x_n, y_n)
    k_2 = h f(x_n + h/3, y_n + k_1/3)
    k_3 = h f(x_n + 2h/3, y_n + 2k_2/3)

and

    y_(n+1) = y_n + (k_1 + 3 k_3)/4.
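Heun's third-order method can be sketched directly from these formulas (illustrative Python; `heun3_step` is our own name, and the test problem y' = y is our own choice):

```python
def heun3_step(f, x, y, h):
    """One step of Heun's third-order Runge-Kutta method."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 3, y + k1 / 3)
    k3 = h * f(x + 2 * h / 3, y + 2 * k2 / 3)
    return y + (k1 + 3 * k3) / 4

# y' = y, y(0) = 1: one step of h = 0.1 should be close to e^0.1 = 1.1051709...
y1 = heun3_step(lambda x, y: y, 0.0, 1.0, 0.1)
print(y1)
```

A single step gives 1.1051667, within about 4 x 10^-6 of e^0.1, consistent with an O(h^4) local error.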
6.4.2 Runge-Kutta classical four stage method of order four

It can be shown that for a fourth-order Runge-Kutta method four stages are needed. Historically, the classical Runge-Kutta four stage method of fourth order is given by

    k_1 = h f(x_n, y_n),
    k_2 = h f(x_n + h/2, y_n + k_1/2),
    k_3 = h f(x_n + h/2, y_n + k_2/2),
    k_4 = h f(x_n + h, y_n + k_3).

Then the solution at step n + 1 is given by

    y_(n+1) = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4),           (6.9)

with local truncation error of O(h^5).
Example 6.4.2
Solve y' = x − y, y(0) = 0. The classical solution is given by y = e^(−x) + x − 1. Take h = 0.1 and use the Runge-Kutta fourth-order method.

Solution
y' = x − y, y(0) = 0, h = 0.1.

    k_1 = 0.1(0 − 0) = 0
    k_2 = (0.1)(0.05 − 0) = 0.005
    k_3 = 0.1(0.05 − 0.0025) = 0.00475
    k_4 = 0.1(0.1 − 0.00475) = 0.009525

Therefore

    y(0.1) = 0 + (1/6)(0 + 0.01 + 0.0095 + 0.009525) = 0.0048.

    k_1 = 0.1(0.1 − 0.0048) = 0.0095
    k_2 = 0.1(0.15 − 0.0096) = 0.0140
    k_3 = 0.1(0.15 − 0.0118) = 0.0138
    k_4 = 0.1(0.2 − 0.0186) = 0.0181

    y(0.2) = 0.0048 + (1/6)(0.0095 + 0.0280 + 0.0276 + 0.0181) = 0.0187.

    k_1 = 0.1(0.2 − 0.0187) = 0.0181
    k_2 = 0.1(0.25 − 0.0278) = 0.0222
    k_3 = 0.1(0.25 − 0.0298) = 0.0220
    k_4 = 0.1(0.3 − 0.0407) = 0.0259

    y(0.3) = 0.0187 + (1/6)(0.0181 + 0.0444 + 0.0440 + 0.0259) = 0.0408.

    k_1 = 0.1(0.3 − 0.0408) = 0.0259
    k_2 = 0.1(0.35 − 0.0538) = 0.0296
    k_3 = 0.1(0.35 − 0.0556) = 0.0294
    k_4 = 0.1(0.4 − 0.0702) = 0.0330

    y(0.4) = 0.0408 + (1/6)(0.0259 + 0.0592 + 0.0588 + 0.0330) = 0.0703.

These agree with the exact values 0.004837, 0.018731, 0.040818 and 0.070320 to the four decimal places carried.
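The classical fourth-order scheme can be sketched as follows (illustrative Python; `rk4_step` is our own name):

```python
import math

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: x - y              # exact solution y = exp(-x) + x - 1
x, y, h = 0.0, 0.0, 0.1
for _ in range(4):
    y = rk4_step(f, x, y, h)
    x += h
    print(round(x, 1), y, math.exp(-x) + x - 1)  # numerical vs exact at each step
```

Four steps land within about 10^-6 of the exact y(0.4) = 0.070320, far closer than the second-order run of Example 6.4.1 with the same step size.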
6.4.3 Text Questions

1. Use the Runge-Kutta 4th and 2nd order processes to solve

       y' = x − y,   y(0) = 0.

   (The classical solution is given by y = e^(−x) + x − 1; take h = 0.1.)

2. Solve the following equations:
   (a) y'' = y + x, y(0) = 1, y'(0) = 1
   (b) t x''' + t x'' + x x' = 2t e^(3t), x(1) = 2, x'(1) = 1, x''(1) = 0
   (c) y^2 y'' = y', y(2) = 2, y'(2) = 1/2

3. Solve the ODE y' = −y, y(0) = 1, up to the fourth power of x using the Taylor series [Ans = 0.9999000499] and Runge-Kutta [Ans = 0.9999995].

There are other techniques for solving ordinary differential equations numerically. These include the Adams-Bashforth-Moulton methods, other multi-step methods and the predictor-corrector methods, together with a comparison of Runge-Kutta and multi-step methods.

Home Work 6.1 Repeat all exercises and worked examples in the Taylor and Euler methods using the Runge-Kutta second-order technique.
Exercise 6.1 Use Taylor's series to solve the differential equation

    dy/dx = 2x − y,   y(0) = 1.

    y' = 2x − y
    y'' = 2 − y'
    y''' = −y''
    y^(4) = −y'''

At x_0 = 0:

    y'(0) = 2(0) − y(0) = −1
    y''(0) = 2 − y'(0) = 2 + 1 = 3
    y'''(0) = −y''(0) = −3
    y^(4)(0) = −y'''(0) = 3

    y(x) = y(x_0) + y'(x_0)(x − x_0)/1! + y''(x_0)(x − x_0)^2/2! + y'''(x_0)(x − x_0)^3/3! + y^(4)(x_0)(x − x_0)^4/4! + ...

    y(x) = 1 − 1.0x + 1.5x^2 − 0.5x^3 + 0.125x^4 − ...

    y(0)   = 1
    y(0.1) = 0.9145
    y(0.2) = 0.8562
    y(0.3) = 0.8225
    y(0.4) = 0.8112

Hint: dy/dx = y'.
Exercise 6.2 Using the Euler method with h = 0.1, find y(0.4) for the following o.d.e.

    dy/dx = f(x, y) = 2x − y,   y(0) = 1.

    y(0) = 1,   x_0 = 0,  y_0 = 1

    y_(n+1) = y_n + h y'_n = y_n + h[2x_n − y_n]

    y(0.1) = 1 + (0.1)[2(0) − 1] = 0.9            x_1 = 0.1, y_1 = 0.9
    y(0.2) = 0.9 + (0.1)[2(0.1) − 0.9] = 0.83     x_2 = 0.2, y_2 = 0.83
    y(0.3) = 0.83 + (0.1)[2(0.2) − 0.83] = 0.787  x_3 = 0.3, y_3 = 0.787
    y(0.4) = 0.787 + (0.1)[2(0.3) − 0.787] = 0.7683   x_4 = 0.4, y_4 = 0.7683

With h = 0.001, y_1 = 0.999 and y_2 = 0.998003.

Quite accurate, right? What is the price we pay for accuracy? Consider y(10): for h = 0.1 we need to compute it in 100 steps; for h = 0.001 we would have to calculate it in 10000 steps. No free lunch, as usual.
Exercise 6.3 Using the Runge-Kutta second-order method with h = 0.1, find y(0.3) for the following o.d.e.

    dy/dx = f(x, y) = 2x − y,   y(0) = 1.

    k_1 = −0.1,      k_2 = −0.07,     y_1 = y(0.1) = 0.915
    k_1 = −0.0715,   k_2 = −0.04435,  y_2 = y(0.2) = 0.8571
    k_1 = −0.04571,  k_2 = −0.02114,  y_3 = y(0.3) = 0.8237
Exercise 6.4 Using the Runge-Kutta fourth-order method with h = 0.1, find y(0.2) for the following o.d.e.

    dy/dx = f(x, y) = 2x − y,   y(0) = 1.

    k_1 = −0.1,     k_2 = −0.085,   k_3 = −0.08575,  k_4 = −0.071425,  y_1 = y(0.1) = 0.9145125
    k_1 = −0.0715,  k_2 = −0.0579,  k_3 = −0.0586,   k_4 = −0.0456,    y_2 = y(0.2) = 0.8562
Exercise 6.5 Show that the analytical solution of

    dy/dx = f(x, y) = 2x − y,   y(0) = 1

is

    y(x) = 2(x − 1) + 3e^(−x).

    y(0.1) = 0.914512254
    y(0.2) = 0.856192259
    y(0.3) = 0.822454662
    y(0.4) = 0.810960
Exercise 6.6 Using Euler's method with h = 0.1, find y(0.2) for the following o.d.e.

    dy/dx = −y^2/(1 + x),   y(0) = 1.

    y_1 = y(0.1) = 0.9
    y_2 = y(0.2) = 0.82636

Its analytical solution is

    y = 1/[1 + ln(1 + x)],

so y(0.2) = 0.84579 and

    Error = 0.84579 − 0.82636 = 0.01943.
Exercise 6.7 Using Euler's method with h = 0.05, find y(0.2) for the following o.d.e.

    dy/dx = −y^2/(1 + x),   y(0) = 1.

    y_1 = y(0.05) = 0.95
    y_2 = y(0.1) = 0.90702
    y_3 = y(0.15) = 0.86963
    y_4 = y(0.2) = 0.83675

Its analytical solution is

    y = 1/[1 + ln(1 + x)],

so y(0.2) = 0.84579 and

    Error = 0.84579 − 0.83675 = 0.00904.
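Exercises 6.6 and 6.7 together show Euler's first-order behaviour: halving h roughly halves the error. A sketch that reproduces both error values (illustrative Python; `euler_final` is our own name):

```python
import math

def euler_final(f, x0, y0, h, x_end):
    """Euler integration from x0 to x_end with fixed step h; returns y(x_end)."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: -y**2 / (1 + x)
exact = 1 / (1 + math.log(1.2))       # analytic y(0.2) = 0.84579...
for h in (0.1, 0.05):
    err = abs(exact - euler_final(f, 0.0, 1.0, h, 0.2))
    print(h, err)                     # errors ~0.01943 and ~0.00904, in a ratio near 2
```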
Example 6.4.3
(a) Runge-Kutta methods are popular Numerical techniques for solving ordinary dier-
ential equations. Given the equation,
y

= f(x, y), y(x


n
) = y
n
.
(i) State the general Runge-Kutta second stage method of second order 3 Marks
y
n+1
= y
n
+
1
2
(k
1
+k
2
)
k
1
= hf(x
n
, y
n
)
k
2
= hf(x
n
+h, y
n
+k
1
)
(ii) Compare and contrast the Runge-kutta and Eulers methods of solving ordi-
nary dierential equations. 4
Marks
They both use Taylors rst two terms.
Runge-kutta is just an improved Euler technique.
(b) Given an ordinary dierential equation
dy
dx
= y, y(0) = 1
(i) Solve y(0.2) and y(0.4 using Runge-kutta second order method with h = 0.2.
[5 Marks]
y
n+1
= y
n
+
1
2
(k
1
+k
2
)
k
1
= hf(x
n
, y
n
)
k
2
= hf(x
n
+h, y
n
+k
1
)
106 MAK-ICT
x k
1
k
2
y
0 1
0.2 -0.2 -0.16 0.82
0.4 -0.164 -0.1312 0.6724
(ii) The problem above, dy/dx = -y, y(0) = 1, has an analytical/exact solution given by y(x) = e^(-x). Use the analytical method to compute y(0.2) and y(0.4). [4 Marks]

y(x) = e^(-x)
y(0.2) = e^(-0.2) = 0.8187
y(0.4) = e^(-0.4) = 0.6703

(iii) Reduce the step size h to 0.1 and compute y(0.1) and y(0.2). Comment on both your answers for y(0.2) from the Runge-Kutta method with h = 0.1 and h = 0.2 in relation to the exact answer. [5 Marks]

x      k_1       k_2        y
0.0                         1
0.1    -0.1      -0.09      0.905
0.2    -0.0905   -0.08145   0.819025

With h = 0.1 the estimate y(0.2) = 0.819025 is closer to the exact value 0.8187 than the h = 0.2 estimate of 0.82: halving the step size reduces the error.
(c) "In the Runge-Kutta methods, clearly there is a trade-off between accuracy and complexity of calculation, which depends heavily on the chosen value for h." Elaborate on what is meant by this assertion. [3 Marks]

In general, as h is decreased the calculation takes longer but is more accurate. However, if h is decreased too much, the slight rounding that occurs in the computer (because it cannot represent real numbers exactly) begins to accumulate enough to cause significant errors. For many higher order systems, it is very difficult to make the Euler approximation effective. For this reason more accurate, and more elaborate, techniques were developed. We will discuss those methods developed by two mathematicians, Runge and Kutta, near the beginning of the 20th century. The techniques, aptly, are called the Runge-Kutta methods.
(d) Which method, Euler's method or the Runge-Kutta method, is better to use, and why? [2 Marks]

The Runge-Kutta second order method is also called the improved Euler method; for the same step size it gives a better approximation.
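The second order Runge-Kutta calculation of Example 6.4.3(b) can be checked mechanically. A sketch using the same scheme stated in part (a) (names are illustrative):

```python
import math

def rk2_step(f, x, y, h):
    # Second order Runge-Kutta (improved Euler) step.
    k1 = h * f(x, y)
    k2 = h * f(x + h, y + k1)
    return y + (k1 + k2) / 2

f = lambda x, y: -y
y, x, h = 1.0, 0.0, 0.2
for _ in range(2):
    y = rk2_step(f, x, y, h)
    x += h
print(round(y, 4), round(math.exp(-0.4), 4))   # 0.6724 vs exact 0.6703
```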
Chapter 7
Sample Questions
Question 1
(a) What is polynomial interpolation?
(b) The Lagrange interpolating polynomial P(x) is given by,

P(x) = Σ_{i=0}^{n} L_i(x) f_i

where the coefficients,

L_i(x) = Π_{j=0, j≠i}^{n} (x - x_j)/(x_i - x_j),   i = 0, 1, 2, ..., n

(i) Prove that P(x) is unique.
(ii) Given the data

x     0   2   3
f(x)  -2  4   10

Find a Lagrange quadratic interpolating polynomial that fits the data and use it to interpolate f(1.5).
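For Question 1(b)(ii), the Lagrange form can be evaluated directly without expanding the polynomial. A sketch using the data of the question (function names are illustrative):

```python
def lagrange(xs, fs, x):
    # Evaluate the Lagrange interpolating polynomial at x.
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += fi * L
    return total

xs, fs = [0, 2, 3], [-2, 4, 10]
print(lagrange(xs, fs, 1.5))   # 1.75; the expanded polynomial is x^2 + x - 2
```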
Question 2
(a) State one advantage and one disadvantage of each of the following iterative methods used in the solution of a non-linear equation f(x) = 0:
(i) Bisection
(ii) Regula falsi
(iii) Newton-Raphson
(b) (i) Prove that the general Newton-Raphson iterative method for finding the p-th root of a number A > 0 is given by,

x_{n+1} = (1/p)[(p - 1)x_n + A/x_n^(p-1)]

(ii) From 2b(i) deduce the corresponding iteration formula when p = 2, and use it to find an approximate value of 1/√2 correct to two decimal places.
(iii) Use 2b(ii) to deduce the approximate value of √2. Give reasons why this approach to finding the approximate value of √2 is better than using the iterative formula

x_{n+1} = (1/2)(x_n + A/x_n)
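The iteration in Question 2(b) is easy to experiment with. Below is a sketch of the p-th root scheme, applied with p = 2 and A = 1/2 so that the limit is 1/√2 (this reading of the question is an assumption; names are illustrative):

```python
def newton_root(A, p, x0, tol=1e-10):
    # Newton-Raphson iteration for the p-th root of A > 0:
    # x_{n+1} = ((p - 1) * x_n + A / x_n**(p - 1)) / p
    x = x0
    while True:
        x_new = ((p - 1) * x + A / x ** (p - 1)) / p
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

r = newton_root(0.5, 2, 1.0)   # converges to 1/sqrt(2) = 0.7071...
print(round(r, 2))             # 0.71, correct to two decimal places
```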
Question 3
(a) The derivative of a function f(x) at a point x_0 is defined by the limiting process,

f'(x_0) = lim_{h→0} [f(x_0 + h) - f(x_0)]/h,

provided the limit exists. However, a numerical approximation to f'(x_0) involves differences such as the forward difference approximation

f'(x_0) ≈ [f(x_0 + h) - f(x_0)]/h        (3.1)

(i) Using (3.1), approximate f'(1) for f(x) = e^(-x^2) using a step size h = 0.1. Compare your answer with the exact value of the derivative. Account for the difference in the results. Suggest one way of improving on the result so obtained.
(ii) Geometrically represent the difference approximation (3.1).
(iii) Derive the difference approximation (3.1) together with its truncation error term. State the order of the approximation.
(b) For the approximation (3.1),
(i) find expressions for the total error and the optimal size of h.
(ii) State two outstanding problems associated with numerical differentiation of a function f(x).
Question 4
(a) Using the composite form of the Trapezoidal rule with h = 0.25, approximate the value of I where

I = ∫_1^3 e^x sin x dx.

Compare your result with the exact value of I.
(b) Determine the step-length h needed so that the approximate value of I obtained from the composite Trapezoidal rule in 4(a) has maximum truncation error 10^(-4).
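Question 4(a) can be checked numerically. The sketch below applies the composite Trapezoidal rule with h = 0.25 and compares against the standard antiderivative e^x (sin x - cos x)/2 (names are illustrative):

```python
import math

def trapezoid(f, a, b, n):
    # Composite Trapezoidal rule with n subintervals (h = (b - a) / n).
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * s

f = lambda x: math.exp(x) * math.sin(x)
F = lambda x: math.exp(x) * (math.sin(x) - math.cos(x)) / 2  # antiderivative of f
approx = trapezoid(f, 1, 3, 8)      # h = 0.25
exact = F(3) - F(1)
print(approx, exact, abs(approx - exact))
```

For part (b), the composite truncation bound (b - a) h^2 max|f''| / 12 with f''(x) = 2e^x cos x (max magnitude about 39.8 on [1, 3], at x = 3) suggests h no larger than roughly 0.004.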
Question 5
(a) (i) State the general four-stage Runge-Kutta method of fourth order, popular for solving initial value problems of the form y' = f(x, y), y(x_n) = y_n.
(ii) State the order of the local truncation error for the scheme in part (i).
(b) Given the initial value problem y' = x - y, y(0) = 0, with classical solution given by y = e^(-x) + x - 1. Using h = 0.1 with the Runge-Kutta fourth order method, find
(i) y(0.1)
(ii) y(0.2)
(iii) y(0.3)
(iv) y(0.4)
Make a table comparing the classical solutions and numerical solutions, with the absolute errors committed at each of the points 0.1, 0.2, 0.3 and 0.4. Account for the errors in the numerical solutions obtained.
Question 6
(a) One of the oldest numerical techniques for solving a differential equation

dy/dx = f(x, y)        (7.1)

with the initial condition y_0 = y(x_0), where x_0 and y_0 are given constants and x_0 is not a singular point of the function, is by Taylor series. Develop the Taylor series method for equation (7.1).
(b) Using the Taylor series method, find an expression for the solution y(x) given that,

dy/dx = x^3 - y

and the initial condition y = 1 when x = 0. Use this expression to find values of y for
(i) x = x_0 + h
(ii) x = x_0 + 2h, with h = 0.1
Question 7
(a) Runge-Kutta methods are popular numerical techniques for solving ordinary differential equations. Given the equation,

dy/dx = f(x, y) with y(a) = c,

(i) State the general Runge-Kutta method of order four for the equation.
(ii) Hence, using the Runge-Kutta fourth order process, find y(1) for the equation y' = x + y, y(0) = 1, with h = 1.
(iii) Compare the solution y(1) with the exact/analytic solution at that point.
(iv) Suggest any way of increasing the accuracy in the numerical solution y(1).
(v) Explain the origin of the error in the numerical solution y(1).
(b) The oldest technique for obtaining a numerical solution of an ordinary differential equation is Euler's method. Derive Euler's method together with its truncation error for the equation,

dy/dx = f(x, y) with y(a) = c
Question 8
(a) Given

y' = f(x, y) with y(x_0) = y_0,        (7.2)

derive Euler's method for solving equation (7.2). [4 marks]
(b)(i) For the equation

y' = y with y(0) = 1

with step length h = 0.01, find y(0.04) using Euler's method. Compare your results with the analytic values at the discrete points. Account for the difference in your results. [6 marks]
(ii) State one advantage and one disadvantage of Euler's method as compared to Runge-Kutta. [2 marks]
(c) Given a two point Gauss-Quadrature rule,

∫_{-1}^{1} f(x) dx ≈ f(-α) + f(α)

(i) determine the value of α. [3 marks]
(ii) Hence or otherwise compute

∫_0^4 (sin t)/t dt.

[5 marks]
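For Question 8(c), the two-point Gauss-Legendre rule has nodes ±1/√3 (a standard result), and a linear change of variable maps a general interval [a, b] onto [-1, 1]. A sketch (names are illustrative; the integrand sin t / t is taken as 1 at t = 0):

```python
import math

def gauss2(f, a, b):
    # Two-point Gauss-Legendre rule on [a, b] via t = ((b - a)x + (a + b)) / 2.
    alpha = 1 / math.sqrt(3)
    mid, half = (a + b) / 2, (b - a) / 2
    return half * (f(mid - half * alpha) + f(mid + half * alpha))

sinc = lambda t: math.sin(t) / t if t != 0 else 1.0
approx = gauss2(sinc, 0, 4)
print(approx)   # compare with Si(4) = 1.7582...
```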
Question 9
(a) State the defining equations for the following finite difference operators on a function f(x).
(i) Δ   (ii) ∇   (iii) E^k   (iv) δ
[4 Marks]
(b) Prove that E = e^(hD), where D is a differential operator and E is a shift operator acting on a function y(x). [4 Marks]
(c) Let (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) be a given set of (n+1) points. Define the divided differences
(i) [x_0, x_1]
(ii) [x_0, x_1, x_2]
(iii) [x_0, x_1, x_2, x_3]
(iv) [x_0, x_1, x_2, ..., x_n]
[5 Marks]
(d) Using Newton's divided difference formula, fit a cubic polynomial y(x) to the data:

x     0  1  2  3
y(x)  1  0  1  10

Hence approximate y(0.5). [7 Marks]
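Question 9(d) can be verified with a short divided-difference routine: the coefficients for the table work out to 1, -1, 1, 1, giving the Newton form P(x) = 1 - x + x(x - 1) + x(x - 1)(x - 2) and P(0.5) = 0.625. A sketch (names are illustrative):

```python
def newton_divided(xs, ys):
    # Divided-difference coefficients [y0, [x0,x1], [x0,x1,x2], ...], in place.
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    # Horner-style evaluation of the Newton form.
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [0, 1, 2, 3], [1, 0, 1, 10]
coef = newton_divided(xs, ys)
print(coef)                        # coefficients 1, -1, 1, 1
print(newton_eval(xs, coef, 0.5))  # 0.625
```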
Question 10
(a) (i) Define the finite difference operator Δ on a function y(x).
(ii) Newton's Forward Difference Interpolation Formula (NFDIF) is popular for interpolation of equispaced data near the beginning of tabular values. Given the set of (n+1) values (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) of x and y, show that

y_n(x) = y_0 + p Δy_0 + [p(p - 1)/2!] Δ²y_0 + ... + [p(p - 1)(p - 2)...(p - n + 1)/n!] Δⁿy_0;   x = x_0 + ph

[10 Marks]
(b) Using NFDIF, show that

(i) (dy/dx)_{x=x_0} = (1/h)[Δy_0 - Δ²y_0/2 + Δ³y_0/3 - Δ⁴y_0/4 + ...]. [3 Marks]

(ii) (d²y/dx²)_{x=x_0} = (1/h²)[Δ²y_0 - Δ³y_0 + (11/12)Δ⁴y_0 - ...]. [3 Marks]

(c) Given the data

x  0  2   4   6    8
y  7  13  43  145  367

compute (d²y/dx²)_{x=0}. [4 Marks]
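For Question 10(c), the forward differences of the tabulated y are Δy: 6, 30, 102, 222; Δ²y: 24, 72, 120; Δ³y: 48, 48; Δ⁴y: 0, and with h = 2 the series in (b)(ii) gives (24 - 48 + 0)/4 = -6. A sketch that builds the difference table (names are illustrative):

```python
def forward_differences(ys):
    # Successive forward-difference rows of a table of y values.
    rows = [list(ys)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

ys = [7, 13, 43, 145, 367]
h = 2
rows = forward_differences(ys)
d2, d3, d4 = rows[2][0], rows[3][0], rows[4][0]
# Second derivative at x = 0, from the NFDIF series of Question 10(b)(ii):
y2 = (d2 - d3 + 11 * d4 / 12) / h**2
print(rows[1:], y2)   # -6.0
```

The result is consistent with the data coming from the cubic x³ - 3x² + 5x + 7, whose second derivative at x = 0 is -6.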
Question 11
(a) (i) State the Trapezoidal rule for approximating the integral ∫_{x_0}^{x_n} y(x) dx on [x_0, x_n]. Write down the expression for the truncation error term.
(ii) Derive the rule in part (a)(i).
[10 Marks]
(b) Approximate I = ∫_0^1 dx/(1 + x²) using the Trapezoidal rule with stepsize h = 0.2. Hence approximate the value of π. Compare the numerical value of I with its analytic value. Explain how the accuracy of I is affected by the value of h.
[10 Marks]
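The integral in Question 11(b) equals arctan 1 = π/4, so 4I approximates π. A sketch with h = 0.2 (names are illustrative):

```python
import math

def trapezoid(f, a, b, n):
    # Composite Trapezoidal rule, h = (b - a) / n.
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

I = trapezoid(lambda x: 1 / (1 + x * x), 0, 1, 5)   # h = 0.2
print(I, 4 * I, math.pi)   # 4I = 3.1349... vs pi = 3.14159...
```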
Question 12
(a) (i) With a relevant sketch, describe the Bisection method for approximating the root of a non-linear equation f(x) = 0 on [x_0, x_n]. [4 Marks]
(ii) Use the bisection method to find an approximation to the root of x³ - x - 1 = 0 on the interval [1, 2] correct to one decimal place. [7 Marks]
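Question 12(a)(ii) can be checked with a direct implementation. Since f(1) = -1 and f(2) = 5, a root lies in [1, 2]; the sketch below bisects until the bracket is shorter than 0.05, enough to fix one decimal place (names are illustrative):

```python
def bisect(f, lo, hi, width=0.05):
    # Repeatedly halve [lo, hi], keeping the sign change, until narrow enough.
    assert f(lo) * f(hi) < 0, "need a sign change on [lo, hi]"
    while hi - lo > width:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3 - x - 1
root = bisect(f, 1, 2)
print(round(root, 1))   # 1.3 (the root is 1.3247...)
```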
(b) The Newton-Raphson method is one of the popular schemes for solving a non-linear equation f(x) = 0. Prove that the general Newton-Raphson iterative method for finding the p-th root of a positive number B is given by

x_{n+1} = (1/p)[(p - 1)x_n + B/x_n^(p-1)].

Using this scheme, approximate 1/√2 correct to 4 decimal places. Take the initial guess as 0.6. [9 Marks]
Q13.(a) Common iterative schemes for solving non-linear equations f(x) = 0 are of the form,

x_{r+1} = g(x_r) for r = 0, 1, . . .

(i) State what is meant by the iterative scheme being convergent. (2 marks)
(ii) State one advantage and one disadvantage of Newton's method. (2 marks)
(b)(i) Prove that the general Newton-Raphson method for finding the r-th root of a number N > 0 is given by

x_{n+1} = (1/r)[(r - 1)x_n + N/x_n^(r-1)]

(7 marks)
(ii) From b(i), deduce the corresponding iterative formula when r = 2, and use it to find an approximate value of 1/√2 correct to 3 decimal places. Use initial approximation x_0 = 0.7. (7 marks)
(iii) State two limitations of Newton's method. (2 marks)
Q14(a)(i) Explain what is meant by polynomial interpolation. (2 marks)
(ii) A function f(x) passes through the points (0, 1.0000) and (1, 2.7183). Construct the Lagrange interpolating polynomial satisfying the above data. Use the polynomial to compute the approximate value of f(0.5). (4 marks)
(b)(i) Prove that the bound on the rounding error when using the Lagrange polynomial P_n(x) does not grow beyond Σ_{k=0}^{n} |ε_k| |L_k(x)|, where ε_k is the rounding error in f_k and L_k(x) are the Lagrange coefficients. (4 marks)
(ii) Given that f(x) = e^x for part a(ii) and that the data were rounded to four digits, show that the effect of the rounding errors on P_1(x) maintains the same maximum magnitude of (1/2) × 10^(-4). (4 marks)
(c) Give the defining equations for the finite difference operators δ, μ and E. Hence prove the finite difference identity μ + (1/2)δ = E^(1/2). (6 marks)
Q15(a) ∫_{x_0}^{x_2} f(x) dx ≈ (h/3)[f_0 + 4f_1 + f_2] is the popular Simpson's rule for approximating an integral.
(i) State two major sources of error when Simpson's rule is used to approximate a definite integral. (2 marks)
(ii) By interpolating f(x) by a Lagrange polynomial P_2(x) of degree two, i.e. f(x) = P_2(x) + E(x), derive Simpson's rule. (9 marks)
(iii) Using Simpson's rule, approximate ∫_0^1 x² dx and comment on your result. (5 marks)
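Simpson's rule integrates any polynomial up to degree three exactly, so for ∫_0^1 x² dx in Q15(a)(iii) a single application already returns the exact value 1/3. A sketch (names are illustrative):

```python
def simpson(f, a, b):
    # Single application of Simpson's rule: (h/3)(f0 + 4 f1 + f2), h = (b - a)/2.
    h = (b - a) / 2
    return (h / 3) * (f(a) + 4 * f(a + h) + f(b))

val = simpson(lambda x: x * x, 0, 1)
print(val)   # 0.333..., equal to 1/3 up to rounding
```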
(b) It is required to obtain ∫_0^2 e^(-x^2) dx exact to 4 decimal places. What should h be for Simpson's rule? (4 marks)
Q16(a) The second order central difference approximation for f'(x) is given as

f'(x) ≈ [f(x + h) - f(x - h)]/(2h).

(i) Give the geometrical interpretation of the central difference approximation. (3 marks)
(ii) With the help of appropriate Taylor series, derive the central difference approximation with its truncation error. (4 marks)
(iii) Using the central difference approximation, approximate f'(2) with h = 1.0 and h = 0.1 for f(x) = x². What do you notice from your results? (6 marks)
(b) Given that |e_0| and |e_1| are the maximum absolute errors committed when evaluating f(x + h) and f(x - h) respectively, and |e_0|, |e_1| < e, show that for the central difference approximation,

(i) |Total error| ≤ e/h + Mh²/6 (4 marks)

(ii) Optimal h = (3e/M)^(1/3), where M = max|f'''(c)| for a - h ≤ c ≤ a + h. (3 marks)
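The observation sought in Q16(a)(iii): for f(x) = x², the central difference [(x + h)² - (x - h)²]/(2h) = 2x for every h, because the truncation error involves f''', which vanishes for a quadratic. A sketch (names are illustrative):

```python
def central(f, x, h):
    # Second order central difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x * x
print(central(f, 2, 1.0), central(f, 2, 0.1))   # both equal 4.0 up to rounding
```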
Chapter 8
Further Reading
1. Gerald, C. F. and Wheatley, P. O. (1994). Applied Numerical Analysis. Addison-Wesley Publishing Company.
2. Maron, M. J. (1982). Numerical Analysis. Macmillan Publishing Company.
3. Maron, M. J. and Lopez, R. J. (1991). Numerical Analysis. Wadsworth Publishing Company.
4. Conte, S. D. (1980). Elementary Numerical Analysis. McGraw-Hill Kogakusha, Ltd.
5. Masenge, R. W. P. (1987). Fundamentals of Numerical Methods. University of Dar-es-Salaam, Tanzania.