
CURVE FITTING TECHNIQUES

Introduction
Engineering problems often involve the collection of experimental
data and its use in designing an engineering system. Assume that
the data set consists of n measurements,
D = {(xk, yk), k = 1, 2, ..., n}, where x1 < x2 < ... < xi < ... < xn.
Fitting an appropriate function or curve y = f(x) to the
data set is often very helpful. Once this is done, various
types of manipulations may be made. Some of the
important applications are:

- Prediction of the value of y for x other than at the sample points; if x1 < x < xn, the process is known as interpolation, and if x is beyond the range of the data set, the process is termed extrapolation.
- Predicting the maximum and minimum values of the function in the range x1 < x < xn.
- Finding the derivative, integral, etc. of the function.
- Replacement of a complicated mathematical function by a simpler equivalent. In this application, the data points are generated directly from the complex function by calculation.
Sometimes it becomes necessary to fit a certain function
to some experimental data, which may contain random
noise generated by the sensors. The effect of the noise
might be effectively minimised by performing more
measurements than the bare minimum and finding a
reliable model of the system as discussed below:
Least Square Fit:
[Figure: block diagram of a physical system with input x, output y and additive sensor noise.]
A data set derived from experimental measurements may contain a random component due to noise generated in the sensor.
To obtain a good estimate of the parameters of the system,
a relatively large number of measurements should be made
in comparison with the degrees of freedom of the system.
In such cases it is more appropriate to find a curve that
describes the underlying trend of the data without having
to pass through each data point. The type of curve, usually
a polynomial, used for this purpose depends upon the nature
of the underlying physical system that generates the data.
The function f(x) that relates x to y should be such that the
sum of the squared errors is minimised. Mathematically this
may be stated as:
Find f(x) so that the weighted sum of the squared errors
E(a) = Σ(k=1..n) wk·{f(xk) − yk}²
is minimised, where the data set is D = {(xk, yk), k = 1, ..., n}
and wk > 0 denotes a weighting factor indicating the
confidence level of the kth data point in the set D. The
polynomial f(x) which minimises E(a) is referred to as the
weighted least-square fit. Often the weighting factors are
all taken to be 1. Although alternative error criteria could
be defined, it is the least-square criterion that is generally
preferred.
Straight Line Fit
In case the system behaviour is such that a straight line
represents the input-output relation, this functional relation
may be expressed as
f(x) = a1 + a2·x
The process of weighted least-square fitting consists in
calculating the coefficients a1 and a2 of the polynomial
f(x) so as to minimise
E(a1, a2) = Σ(k=1..n) wk·{f(xk) − yk}²
          = Σ(k=1..n) wk·{a1 + a2·xk − yk}²
The necessary and sufficient conditions for a minimum are
∂E/∂a1 = 2·Σ(k=1..n) wk·(a1 + a2·xk − yk) = 0, and
∂E/∂a2 = 2·Σ(k=1..n) wk·(a1 + a2·xk − yk)·xk = 0
The above two equations may be written in matrix form as
(each summation extends over k = 1, ..., n):
| Σwk       Σwk·xk  |   | a1 |   | Σwk·yk    |
| Σwk·xk    Σwk·xk² | · | a2 | = | Σwk·xk·yk |
The coefficient matrix on the left-hand side depends on the
weighting factors and the independent variable, while the
right-hand side depends upon the weighting factors, the
independent variable and the dependent variable.

Example: Find a least-square straight-line fit to the data
set D = {(0,2.10), (1,2.85), (2,3.10), (3,3.20), (4,3.90)}
Soln:
The elements of the coefficient matrix have been
computed (assuming wk = 1, k = 1, 2, ..., 5) as:
c11 = Σwk = 1+1+1+1+1 = 5
c12 = c21 = Σwk·xk = (0+1+2+3+4) × 1 = 10
c22 = Σwk·xk² = (0+1+4+9+16) × 1 = 30
The elements of the right-hand-side vector B have the values:
b1 = Σwk·yk = (2.10+2.85+3.10+3.20+3.90) × 1 = 15.15
b2 = Σwk·xk·yk = (0×2.10 + 1×2.85 + 2×3.10 + 3×3.20 + 4×3.90) × 1 = 34.25
The linear system of equations to be solved to determine a1 and a2 is
|  5   10 |   | a1 |   | 15.15 |
| 10   30 | · | a2 | = | 34.25 |
The solution to the above set is a1 = 2.24 and a2 = 0.395
So the required least-square fit is y = f(x) = 2.24 + 0.395·x
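As a quick check of the arithmetic above, the normal equations can be assembled and solved numerically. The following is a minimal sketch (not part of the original notes; the function name fit_line and the default of unit weights are illustrative choices) using NumPy:

import numpy as np

def fit_line(x, y, w=None):
    # Weighted least-square straight-line fit f(x) = a1 + a2*x,
    # built from the 2x2 normal equations derived above.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    C = np.array([[np.sum(w),     np.sum(w * x)],
                  [np.sum(w * x), np.sum(w * x**2)]])
    b = np.array([np.sum(w * y), np.sum(w * x * y)])
    a1, a2 = np.linalg.solve(C, b)
    return a1, a2

x = [0, 1, 2, 3, 4]
y = [2.10, 2.85, 3.10, 3.20, 3.90]
print(fit_line(x, y))   # expected: approximately (2.24, 0.395)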

Polynomial Fit:
In case the underlying trend of the data is to be modelled with an
(m−1)th degree polynomial f(x), involving m coefficients, as
f(x) = a1 + a2·x + a3·x² + ... + aj·x^(j−1) + ... + am·x^(m−1)
suppose n > m measurements have been made to generate a set
of n data points. The method of straight-line fit may be
extended to fit this polynomial to the data set with least square
error. Using weight wk for the kth data point, the weighted
sum-squared error in this case may be written as
E = Σ(k=1..n) wk·{f(xk) − yk}²
  = Σ(k=1..n) wk·{a1 + a2·xk + ... + aj·xk^(j−1) + ... + am·xk^(m−1) − yk}²
For the optimum choice of the coefficients a1, a2, ..., aj, ..., am,
the following conditions need to be satisfied:
∂E/∂aj = 2·Σ(k=1..n) wk·{a1 + a2·xk + ... + aj·xk^(j−1) + ... + am·xk^(m−1) − yk}·xk^(j−1) = 0,
j = 1, 2, ..., m

The set of m equations written above may be presented
in matrix form as (each summation extends over k = 1, ..., n):
| Σwk            Σwk·xk        ...  Σwk·xk^(m−1)  |   | a1 |   | Σwk·yk          |
| Σwk·xk         Σwk·xk²       ...  Σwk·xk^m      |   | a2 |   | Σwk·xk·yk       |
| :              :                  :             | · | :  | = | :               |
| Σwk·xk^(m−1)   Σwk·xk^m      ...  Σwk·xk^(2m−2) |   | am |   | Σwk·xk^(m−1)·yk |
The above set of equations may now be solved for the m constants
a1, a2, ..., aj, ..., am.
Example: Find a polynomial of degree 2 to fit the data set
D = {(0,−3), (0.5,−2.5), (1,−1), (1.5,1.5), (2.0,5.0)}
with least square error; assume wk = 1, k = 1, 2, ..., 5.
Soln: The elements of the coefficient matrix C are
c11 = Σwk = 5;        c12 = Σwk·xk = 5;        c13 = Σwk·xk² = 7.5
c21 = Σwk·xk = 5;     c22 = Σwk·xk² = 7.5;     c23 = Σwk·xk³ = 12.5
c31 = Σwk·xk² = 7.5;  c32 = Σwk·xk³ = 12.5;    c33 = Σwk·xk⁴ = 22.125
The elements of the right-hand-side vector are
b1 = Σwk·yk = 0;   b2 = Σwk·xk·yk = 10.00;   b3 = Σwk·xk²·yk = 21.75
The solution of the equation C·A = B is
A = C⁻¹·B = [−3  0  2]
So the required 2nd-degree polynomial is
f(x) = −3 + 0·x + 2·x² = 2x² − 3
(which in fact passes through every point of this data set, so the least-square error is zero).
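The same normal-equation construction generalises to any polynomial degree. The following is a minimal sketch (illustrative only; the function name fit_poly is not from the notes, and numpy.polyfit could be used instead for the unweighted case):

import numpy as np

def fit_poly(x, y, degree, w=None):
    # Weighted least-square fit of f(x) = a1 + a2*x + ... + am*x^(m-1), m = degree+1.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    m = degree + 1
    # C[i, j] = sum_k w_k x_k^(i+j),  b[i] = sum_k w_k x_k^i y_k   (0-based i, j)
    C = np.array([[np.sum(w * x**(i + j)) for j in range(m)] for i in range(m)])
    b = np.array([np.sum(w * x**i * y) for i in range(m)])
    return np.linalg.solve(C, b)    # coefficients [a1, a2, ..., am]

x = [0, 0.5, 1, 1.5, 2.0]
y = [-3, -2.5, -1, 1.5, 5.0]
print(fit_poly(x, y, 2))    # expected: approximately [-3, 0, 2]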

Interpolation and Extrapolation
The problem basically consists in determining a function f(x)
such that it passes through the given set of data
D = {(xk, yk), k = 1, 2, ..., n}, and hence determining y for
any value of x. It is presumed that x1 < x2 < ... < xn.
In other words, the interpolating/extrapolating function
must satisfy the n constraints
yk = f(xk), k = 1, 2, ..., n          ...(i)
In case x1 < x < xn, the process is termed interpolation.
When either x < x1 or x > xn, the process is known as
extrapolation. Various methods are available for the
purpose; some are discussed below.
Piece-wise linear interpolation:
This is a simple procedure wherein straight-line segments
between adjacent data points are used for the purpose of
interpolation. Considering the data set
D = {(x1,y1), (x2,y2), ..., (xk,yk), (xk+1,yk+1), ..., (xn,yn)},
the terminal points of the kth line segment Lk are (xk, yk)
and (xk+1, yk+1), and the equation of the line passing through
these points is
(y − yk)/(yk+1 − yk) = (x − xk)/(xk+1 − xk)
or,  y = yk + (x − xk)·(yk+1 − yk)/(xk+1 − xk)          ...(ii)

To extrapolate for y beyond the range of the data set D, the
nearest line segment, L1 or Ln−1, is used according to whether
x < x1 or x > xn respectively.
Example:
Consider the data set D = {(0,6), (1,0), (2,2), (3,−1), (4,3), (5,4)}.
(i) In case 2 < x < 3, say x = 2.5, the equation for line segment L3 is
y = f(x) = y3 + {(x − x3)/(x4 − x3)}·(y4 − y3)
         = 2 + {(x − 2)/(3 − 2)}·(−1 − 2) = 8 − 3x
Therefore, for x = 2.5, y = 8 − 3×2.5 = 0.5
(ii) For x > 5, say x = 6.5, the equation for line segment L5 is
y = f(x) = y5 + {(x − x5)/(x6 − x5)}·(y6 − y5)
         = 3 + {(x − 4)/(5 − 4)}·(4 − 3) = x − 1
Therefore, for x = 6.5, y = 6.5 − 1 = 5.5
(iii) In case x < 0, say x = −0.5, the equation for line segment L1 is
y = f(x) = y1 + {(x − x1)/(x2 − x1)}·(y2 − y1)
         = 6 + {(x − 0)/(1 − 0)}·(0 − 6) = 6 − 6x
Therefore, for x = −0.5, y = 6 − 6×(−0.5) = 9
The method is very simple but has a number of demerits. The
interpolating function consists of a number of straight-line segments
(first-degree polynomials) and is non-smooth, having sharp
corners where the derivatives do not exist.
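For interpolation strictly inside the data range this piece-wise linear scheme is what numpy.interp implements; extrapolation with the nearest segment has to be added by hand. A minimal sketch (illustrative only; the function name interp_linear is not from the notes):

import numpy as np

def interp_linear(x, xs, ys):
    # Piece-wise linear interpolation/extrapolation over sorted nodes xs.
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # pick the segment index k; clamp to the first/last segment for extrapolation
    k = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    return ys[k] + (x - xs[k]) * (ys[k + 1] - ys[k]) / (xs[k + 1] - xs[k])

xs = [0, 1, 2, 3, 4, 5]
ys = [6, 0, 2, -1, 3, 4]
print(interp_linear(2.5, xs, ys), interp_linear(6.5, xs, ys), interp_linear(-0.5, xs, ys))
# expected: 0.5  5.5  9.0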

Polynomial interpolation: Polynomial interpolation is favoured
because of some of its nice properties, e.g.,
i) A polynomial f(x) = Σ(k=1..n) ak·x^(k−1), of degree (n−1), is
defined by exactly n coefficients a1, a2, ..., an.
ii) It is a smooth curve and possesses derivatives of all orders.
iii) It can represent any continuous function, say F(x),
defined over an interval a ≤ x ≤ b as accurately as is desired;
that is, given any ε, however small and positive, a polynomial
f(x) may be constructed such that |f(x) − F(x)| < ε for any x in
the same interval, as a consequence of the Weierstrass
Approximation Theorem.
Evaluation of a polynomial
The number of floating-point multiplications needed to evaluate
a polynomial of degree n may be reduced from (2n − 1) to n by the
use of Horner's rule, using the following algorithm:
Algorithm:  [for f(x) = a1 + a2·x + ... + an·x^(n−1)]
Set f = an
for k = (n − 1) down to 1 do
{
    f = f·x + ak
}
Example: Find f(x) = 8x⁴ − 3x³ + 5x² + 2x + 7 for x = 5.
Using the above algorithm, start with f = 8
for k = 4, f = 8x − 3
for k = 3, f = (8x − 3)x + 5
for k = 2, f = ((8x − 3)x + 5)x + 2
for k = 1, f = (((8x − 3)x + 5)x + 2)x + 7
Therefore, for x = 5, f = (((8×5 − 3)×5 + 5)×5 + 2)×5 + 7 = 4767
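A direct transcription of this algorithm (a minimal sketch; the function name horner is an arbitrary choice, with the coefficients supplied from the constant term upwards as a1, a2, ..., an):

def horner(coeffs, x):
    # Evaluate a1 + a2*x + ... + an*x^(n-1) with only (n-1) multiplications.
    f = coeffs[-1]                     # start with the leading coefficient an
    for a in reversed(coeffs[:-1]):    # k = n-1 down to 1
        f = f * x + a                  # one multiplication, one addition per step
    return f

# f(x) = 7 + 2x + 5x^2 - 3x^3 + 8x^4 evaluated at x = 5
print(horner([7, 2, 5, -3, 8], 5))     # expected: 4767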

Different methods for polynomial interpolation are
available; some are discussed below.
i) Polynomial fit: The simplest way to obtain a polynomial of
degree (n−1) is to make use of the n constraints
yk = f(xk) = Σ(j=1..n) aj·xk^(j−1), k = 1, 2, ..., n.
A set of linear equations may be written making use of these
constraints and solved by any method to determine the values
of the coefficients (a1, a2, ..., an), as shown below:
| 1   x1   x1²   x1³  ...  x1^(n−1) |   | a1 |   | y1 |
| 1   x2   x2²   x2³  ...  x2^(n−1) |   | a2 |   | y2 |
| :                                 | · | :  | = | :  |
| 1   xn   xn²   xn³  ...  xn^(n−1) |   | an |   | yn |

The coefficient matrix on the left-hand side is known as the
Vandermonde matrix and is non-singular. The solution to the above
equations defines the interpolating polynomial uniquely. The
main drawback of the method is that the Vandermonde matrix
becomes ill-conditioned for larger values of n, so the method is
not very suitable for higher-degree polynomials.

Example: Fit a 2nd-degree polynomial through the data
set D = {(0,6), (1,0), (2,2)}.
Soln. The coefficients of the second-degree polynomial
f(x) = a1 + a2·x + a3·x² may be evaluated from the equation
| 1  0  0 |   | a1 |   | 6 |
| 1  1  1 | · | a2 | = | 0 |
| 1  2  4 |   | a3 |   | 2 |
The solution to the above set of equations is a1 = 6, a2 = −10 and a3 = 4.
The required interpolating polynomial is f(x) = 4x² − 10x + 6
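The same computation can be carried out with NumPy's built-in Vandermonde helper. A minimal sketch (illustrative only; numpy.vander with increasing=True gives the column order 1, x, x², ... used above):

import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([6.0, 0.0, 2.0])
V = np.vander(x, N=len(x), increasing=True)   # rows: [1, xk, xk^2]
a = np.linalg.solve(V, y)                     # coefficients [a1, a2, a3]
print(a)                                      # expected: [ 6. -10.  4.]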

Lagrange Interpolating Polynomials:
The main demerit of the above polynomial-fit approach is
that the Vandermonde matrix becomes ill-conditioned for
larger values of n. An alternative formulation, called the
Lagrange interpolating polynomial, defined below,
overcomes this drawback.
The kth Lagrange polynomial Qk, defined on the nodes x1, x2, ..., xn, is
Qk(x) = [(x − x1)(x − x2)...(x − xk−1)(x − xk+1)...(x − xn)] / [(xk − x1)(xk − x2)...(xk − xk−1)(xk − xk+1)...(xk − xn)]
which has the property that Qk(xj) = 1 for j = k and 0 for j ≠ k.
However, Qk(x) has non-zero values in between the data points.
This makes it very simple to write down the interpolating
polynomial through the data set D as
f(x) = Σ(k=1..n) yk·Qk(x)
Example: Consider the data set D = {(0,6), (1,0), (2,2)}. Obtain an
interpolating polynomial of degree 2 using Lagrange
interpolating polynomials.
Here, Q1(x) = (x − 1)(x − 2) / [(0 − 1)(0 − 2)] = (x² − 3x + 2)/2
      Q2(x) = (x − 0)(x − 2) / [(1 − 0)(1 − 2)] = (x² − 2x)/(−1)
      Q3(x) = (x − 0)(x − 1) / [(2 − 0)(2 − 1)] = (x² − x)/2
The required polynomial is f(x) = 6·Q1(x) + 0·Q2(x) + 2·Q3(x)
= (6/2)·(x² − 3x + 2) + (2/2)·(x² − x) = 4x² − 10x + 6
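A direct implementation of the Lagrange formula is short. The sketch below is illustrative only (the function name lagrange_interp is not from the notes); for routine work, equivalent routines exist in scipy.interpolate:

def lagrange_interp(x, xs, ys):
    # Evaluate the Lagrange interpolating polynomial through (xs, ys) at x.
    n = len(xs)
    total = 0.0
    for k in range(n):
        qk = 1.0                      # Q_k(x): product over all nodes except xk
        for j in range(n):
            if j != k:
                qk *= (x - xs[j]) / (xs[k] - xs[j])
        total += ys[k] * qk
    return total

xs, ys = [0, 1, 2], [6, 0, 2]
print(lagrange_interp(0.5, xs, ys))   # expected: 4*0.25 - 10*0.5 + 6 = 2.0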

Cubic Splines:
Different methods for polynomial interpolation are available.
Though they solve the basic interpolation problem, the result is
not always satisfactory, especially when higher-degree
polynomials are employed to fit the data set. This is illustrated
through the example given below:
Consider the simple rational function f(x) = 1/(1 + x²).
The following data set is generated from f(x) in the range −4 ≤ x ≤ 4, in steps of 1:
x      −4      −3     −2     −1     0     1      2      3      4
f(x)   0.058   0.1    0.2    0.5    1     0.5    0.2    0.1    0.058
[Figure: f(x) together with the eighth-degree interpolating polynomial g(x), which oscillates between the sample points.]

An eighth-degree interpolating polynomial g(x) may be defined
using the above data set. The polynomial g(x) passes through all the
data points, but it does so by oscillating wildly between the
sample points, particularly towards the ends of the data set, as
shown in the figure.
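The behaviour described above (the Runge phenomenon) is easy to reproduce numerically. The sketch below is illustrative only: it fits the eighth-degree polynomial g(x) to the nine samples with numpy.polyfit and compares it with f(x) = 1/(1 + x²) midway between the sample points:

import numpy as np

xs = np.arange(-4.0, 5.0)            # -4, -3, ..., 4
ys = 1.0 / (1.0 + xs**2)
g = np.polyfit(xs, ys, 8)            # coefficients of the 8th-degree polynomial g(x)
for x in (-3.5, -0.5, 3.5):
    print(x, 1.0 / (1.0 + x**2), np.polyval(g, x))
# towards the ends of the range g(x) deviates markedly from f(x)
# (the oscillation shown in the figure); near the centre the two agree closely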
In order to reduce the excessive swings between the samples,
piece-wise cubic interpolation may be employed. A set of cubic
polynomials may be fit between each pair of adjacent data
points. The coefficients of the polynomials are so chosen that the
smoothness of the functions is preserved. These polynomials are
known as cubic splines.
In cubic spline interpolation a cubic polynomial segment is used
to interpolate between each adjacent pair of data points of
D = {(x1,y1), (x2,y2), ..., (xn,yn)}. A set of (n−1) such polynomials,
called cubic splines, is needed for interpolation over the
entire range of data in D. This approach has the advantage that
the extra degrees of freedom of the polynomial can be used to
impose additional constraints to ensure smoothness. The
polynomials are so chosen that continuity of the zeroth, first and
second derivatives is maintained at each interior data point. This
ensures smoothness of the interpolating function without the undue
oscillation which could result if a single polynomial of higher
degree were used to interpolate over the entire data range.

Let the polynomial representing the kth cubic spline
segment connecting points (xk, yk) and (xk+1, yk+1) of the
data set D be denoted by
sk(x) = ak·(x − xk) + bk·(xk+1 − x) + {(x − xk)³·ck+1 + (xk+1 − x)³·ck}/(6hk)          ...(i)
where hk = xk+1 − xk, and ak, bk, ck (k = 1, 2, ..., n−1) and cn denote
constants such that the functions sk satisfy
a) the interpolation constraints, sk(xk) = yk, k = 1, ..., (n−1)
b) the continuity constraint, sk(xk+1) = yk+1, k = 1, 2, ..., (n−1)
c) continuity of the first derivative, s′k(xk) = s′k−1(xk), k = 2, 3, ..., (n−1), and
d) continuity of the second derivative, s″k(xk) = s″k−1(xk), k = 2, 3, ..., (n−1)
From (i), by differentiation,
s′k(x) = ak − bk + {(x − xk)²·ck+1 − (xk+1 − x)²·ck}/(2hk)          ...(ii)
and s″k(x) = {(x − xk)·ck+1 + (xk+1 − x)·ck}/hk
From the last expression it may be verified that s″k(xk) = ck and
s″k−1(xk) = ck. This means that the very choice of sk(x) satisfies the
continuity-of-second-derivative constraint (d) above, and
also that ck itself is the second derivative of sk(x) at x = xk.
Now application of constraint (a), sk(xk) = yk, results in the condition
yk = bk·hk + hk²·ck/6
i.e., bk = (6yk − hk²·ck)/(6hk)          ...(iii)
Similarly, application of constraint (b), sk(xk+1) = yk+1, results in the condition
yk+1 = ak·hk + hk²·ck+1/6
i.e., ak = (6yk+1 − hk²·ck+1)/(6hk)          ...(iv)

Thus ak and bk are dependent coefficients which may be
expressed in terms of ck and ck+1 using (iii) and (iv). Therefore
all the ck, k = 1, 2, ..., n, have to be determined in order to
define the spline functions.
Making use of the constraint set (c), s′k(xk) = s′k−1(xk), the
following equation is obtained:
ak−1 − bk−1 + hk−1·ck/2 = ak − bk − hk·ck/2,  for k = 2, 3, ..., (n−1)
Substituting for ak−1, bk−1, ak and bk from (iv) and (iii),
(6yk − hk−1²·ck)/(6hk−1) − (6yk−1 − hk−1²·ck−1)/(6hk−1) + hk−1·ck/2
    = (6yk+1 − hk²·ck+1)/(6hk) − (6yk − hk²·ck)/(6hk) − hk·ck/2
Collecting the coefficients of ck−1, ck and ck+1 and simplifying,
hk−1·ck−1 + 2(hk−1 + hk)·ck + hk·ck+1
    = 6{(yk+1 − yk)/hk − (yk − yk−1)/hk−1}          ...(v)   [k = 2, 3, ..., (n−1)]

(n−2) equations of the form (v) may be written for the n
unknown coefficients c1, c2, ..., cn. The two remaining equations
needed to solve uniquely for the unknowns may be obtained by
setting c1 and cn to some arbitrary values.
The choice c1 = cn = 0          ...(vi)
means that the radius of curvature of the first and the last spline
segments at the terminal points x1 and xn has been chosen to be
infinity. This type of spline is known as a natural spline.
The system of linear equations (v) and (vi) may be recast in the
following matrix form and solved for the c-coefficients, after
which the a and b coefficients follow.

| 1     0          0          0              ...               0    |   | c1   |   | 0    |
| h1    2(h1+h2)   h2         0              ...               0    |   | c2   |   | p2   |
| 0     h2         2(h2+h3)   h3             ...               0    |   | c3   |   | p3   |
| 0     0          h3         2(h3+h4)   h4  ...               0    | · | c4   | = | p4   |
| :                                                            :    |   | :    |   | :    |
| 0     ...                   hn−2   2(hn−2+hn−1)           hn−1    |   | cn−1 |   | pn−1 |
| 0     ...                   0              0                 1    |   | cn   |   | 0    |
where pk = 6 × {(yk+1 − yk)/hk − (yk − yk−1)/hk−1}          ...(vii)

Computational Algorithm:
1. For k = 1 to (n−1) compute hk = (xk+1 − xk);
   for k = 2 to (n−1) compute pk = 6 × {(yk+1 − yk)/hk − (yk − yk−1)/hk−1}
2. Form eqn (vii) and solve for the c vector [use any suitable
   method for its solution]
3. For k = 1 to (n−1) compute
   ak = (6yk+1 − hk²·ck+1)/(6hk) and
   bk = (6yk − hk²·ck)/(6hk)
4. For interpolation in xk ≤ x ≤ xk+1, 1 ≤ k < n, use
   sk(x) = ak·(x − xk) + bk·(xk+1 − x) + {(x − xk)³·ck+1 + (xk+1 − x)³·ck}/(6hk)
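The algorithm translates almost line for line into code. The sketch below is illustrative only (the function name natural_spline is not from the notes); it builds the tridiagonal system (vii) as a dense matrix for clarity and returns the per-segment coefficients; in practice one would use a dedicated tridiagonal solver or scipy.interpolate.CubicSpline:

import numpy as np

def natural_spline(xs, ys):
    # Coefficients (a, b, c) of the natural cubic spline segments s_k.
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    n = len(xs)
    h = np.diff(xs)                               # h_k = x_{k+1} - x_k
    # Steps 1-2: assemble and solve the system (vii) for the c vector
    M = np.zeros((n, n))
    p = np.zeros(n)
    M[0, 0] = M[-1, -1] = 1.0                     # natural end conditions c_1 = c_n = 0
    for k in range(1, n - 1):
        M[k, k - 1:k + 2] = [h[k - 1], 2 * (h[k - 1] + h[k]), h[k]]
        p[k] = 6 * ((ys[k + 1] - ys[k]) / h[k] - (ys[k] - ys[k - 1]) / h[k - 1])
    c = np.linalg.solve(M, p)
    # Step 3: dependent coefficients a_k and b_k
    a = (6 * ys[1:] - h**2 * c[1:]) / (6 * h)
    b = (6 * ys[:-1] - h**2 * c[:-1]) / (6 * h)
    return a, b, c

# The worked example that follows: D = {(0,2), (1,3), (2,1)}
print(natural_spline([0, 1, 2], [2, 3, 1]))
# expected: a = [3.75, 1.0], b = [2.0, 3.75], c = [0.0, -4.5, 0.0]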

Example: Consider the data set D = {(0,2), (1,3), (2,1)}.
Determine the two cubic spline functions for the purpose
of interpolation.
Here the step sizes are h1 = h2 = 1.
Choosing c1 = c3 = 0 (natural spline), the only unknown to be
evaluated is c2.
The right-hand-side entries are p1 = p3 = 0, and
p2 = 6 × {(y3 − y2)/h2 − (y2 − y1)/h1} = 6 × {(1 − 3)/1 − (3 − 2)/1} = −18
The tridiagonal system of equations (vii) may be written as
| 1     0          0  |   | c1 |   |  0  |
| h1    2(h1+h2)   h2 | · | c2 | = | −18 |
| 0     0          1  |   | c3 |   |  0  |
Since 2(h1 + h2) = 4, the solution of the above set is c1 = 0,
c2 = −4.5 and c3 = 0.
The unknown coefficients a1, a2 and b1, b2 may be found
from (iv) and (iii) as:
a1 = (6y2 − h1²·c2)/(6h1) = {6×3 − 1×(−4.5)}/(6×1) = 3.75
a2 = (6y3 − h2²·c3)/(6h2) = {6×1 − 1×0}/(6×1) = 1
b1 = (6y1 − h1²·c1)/(6h1) = {6×2 − 1×0}/(6×1) = 2
b2 = (6y2 − h2²·c2)/(6h2) = {6×3 − 1×(−4.5)}/(6×1) = 3.75
The required spline functions are:
s1(x) = a1·(x − x1) + b1·(x2 − x) + {(x − x1)³·c2 + (x2 − x)³·c1}/(6×1)
      = 3.75(x − 0) + 2(1 − x) + {(x − 0)³·(−4.5) + (1 − x)³·0}/6
      = 3.75x + 2 − 2x − 0.75x³ = 2 + 1.75x − 0.75x³
and similarly, s2(x) = 6.5 − 2.75x − 0.75(2 − x)³

Finite differences and their application in interpolation:
For an evenly spaced data set
D = {(x1,y1), (x2,y2), ..., (xn,yn)},
where (xi+1 − xi) = h for i = 1, 2, ..., n−1, the concept of
differences, in conjunction with Taylor series and binomial
expansion techniques, may be applied to solve different
kinds of numerical problems, including curve fitting. This
is possible because of the strong relation between
derivatives and differences.
For a given set of equally spaced values of the
independent variable x, we define
the first forward difference at yi as Δyi = yi+1 − yi, and
the first backward difference at yi as ∇yi = yi − yi−1.
The higher-order differences are defined in a similar
manner, e.g.,
Δ²yi = Δyi+1 − Δyi = yi+2 − 2yi+1 + yi
∇²yi = ∇yi − ∇yi−1 = yi − 2yi−1 + yi−2
Δ³yi = Δ²yi+1 − Δ²yi = yi+3 − 3yi+2 + 3yi+1 − yi
∇³yi = ∇²yi − ∇²yi−1 = yi − 3yi−1 + 3yi−2 − yi−3
In a similar way,
Δⁿyi = Δⁿ⁻¹yi+1 − Δⁿ⁻¹yi = yi+n − nC1·yi+n−1 + nC2·yi+n−2 − ... + (−1)ʳ·nCr·yi+n−r + ... + (−1)ⁿ·yi
and
∇ⁿyi = ∇ⁿ⁻¹yi − ∇ⁿ⁻¹yi−1 = yi − nC1·yi−1 + nC2·yi−2 − ... + (−1)ʳ·nCr·yi−r + ... + (−1)ⁿ·yi−n
and so on. It may be seen that the coefficients of the y terms are
the same as those in the binomial expansion of (a − b)ⁿ.
Example: Consider the set of n = 5 data points
D = {(0,−8), (2,0), (4,8), (6,64), (8,216)}.
Here the data spacing is h = 2, and both difference tables
have been computed as shown below:
Forward difference table
x      y      Δy     Δ²y    Δ³y    Δ⁴y
0     −8       8      0      48     0
2      0       8     48      48
4      8      56     96
6     64     152
8    216
Backward difference table
x      y      ∇y     ∇²y    ∇³y    ∇⁴y
0     −8
2      0       8
4      8       8      0
6     64      56     48     48
8    216     152     96     48      0

It should be carefully noted that the contents of the two tables are the same, except for changes in the position of the data.
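Forward differences are just repeated applications of numpy.diff to the y values. A minimal sketch (illustrative only; the function name forward_differences is not from the notes) that reproduces the columns of the forward table above:

import numpy as np

def forward_differences(y):
    # Return the columns [y, dy, d2y, ...] of the forward difference table.
    cols = [np.asarray(y, dtype=float)]
    while len(cols[-1]) > 1:
        cols.append(np.diff(cols[-1]))   # each new column is one element shorter
    return cols

for col in forward_differences([-8, 0, 8, 64, 216]):
    print(col)
# columns: [-8 0 8 64 216], [8 8 56 152], [0 48 96], [48 48], [0]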
Both the differential operator D = d/dx(·) and the
difference operators Δ and ∇ may be used symbolically, as
if they were algebraic quantities. The fundamental laws of
algebra hold for all these operators, e.g.,
D(c·y) = c·D(y);   D(y+z) = D(y) + D(z);   Dᵐ(Dⁿy) = Dᵐ⁺ⁿ(y)
Δ(c·y) = c·Δ(y);   Δ(y+z) = Δ(y) + Δ(z);   Δᵐ(Δⁿy) = Δᵐ⁺ⁿ(y)
∇(c·y) = c·∇(y);   ∇(y+z) = ∇(y) + ∇(z);   ∇ᵐ(∇ⁿy) = ∇ᵐ⁺ⁿ(y)
These properties have been exploited to derive
relationships between the derivative operator and the
difference operators.
a) To show that e^(hD) = 1 + Δ
From the Taylor series expansion of f(x+h) about x, we have
f(x+h) = f(x) + h·f′(x) + h²/2!·f″(x) + h³/3!·f‴(x) + ...
       = (1 + hD/1! + h²D²/2! + h³D³/3! + ...) f(x)
       = e^(hD) f(x)
Let x = xi and therefore x + h = xi+1.
From the above, Δfi = f(xi+1) − f(xi) = e^(hD) f(xi) − f(xi)
                    = (e^(hD) − 1) f(xi) = (e^(hD) − 1) fi
Thus it is seen that Δ = e^(hD) − 1 and hence e^(hD) = 1 + Δ          ...(i)

b) To show that e^(−hD) = 1 − ∇
As before, from the Taylor series expansion of f(x−h) about
x, we have
f(x−h) = f(x) − h·f′(x) + h²/2!·f″(x) − h³/3!·f‴(x) + ...
       = (1 − hD/1! + h²D²/2! − h³D³/3! + ...) f(x)
       = e^(−hD) f(x)
Let x = xi and therefore x − h = xi−1.
From the above, ∇fi = f(xi) − f(xi−1) = f(xi) − e^(−hD) f(xi)
                    = (1 − e^(−hD)) f(xi) = (1 − e^(−hD)) fi
It is seen that ∇ = 1 − e^(−hD) and hence e^(−hD) = 1 − ∇          ...(ii)
Two important formulas, called the Gregory-Newton forward
and backward interpolation formulas, may now be derived
based on (i) and (ii) above.
As said earlier, there is a strong relationship between
derivatives and differences, and one may be expressed in
terms of the other. These relationships may be derived as
follows:

Differences in terms of derivatives:
a) Forward differences:
From (i), e^(hD) = 1 + Δ. Therefore,
Δ = e^(hD) − 1
  = (1 + hD/1! + (hD)²/2! + (hD)³/3! + (hD)⁴/4! + ...) − 1
  = hD + h²D²/2 + h³D³/6 + h⁴D⁴/24 + ...
Δ² = (e^(hD) − 1)² = e^(2hD) − 2e^(hD) + 1
   = (1 + 2hD/1! + 4h²D²/2! + 8h³D³/3! + 16h⁴D⁴/4! + ...) − 2(1 + hD/1! + (hD)²/2! + (hD)³/3! + (hD)⁴/4! + ...) + 1
   = h²D² + h³D³ + (7/12)·h⁴D⁴ + ...
Similarly, Δ³ = h³D³ + (3/2)·h⁴D⁴ + (5/4)·h⁵D⁵ + ... and so on.
b) Backward differences:
From (ii), e^(−hD) = 1 − ∇. Therefore,
∇ = 1 − e^(−hD)
Proceeding as before,
∇ = 1 − (1 − hD/1! + (hD)²/2! − (hD)³/3! + (hD)⁴/4! − ...)
  = hD − h²D²/2 + h³D³/6 − h⁴D⁴/24 + ...
∇² = h²D² − h³D³ + (7/12)·h⁴D⁴ − ...
∇³ = h³D³ − (3/2)·h⁴D⁴ + (5/4)·h⁵D⁵ − ... and so on.
Derivatives in terms of differences:
c) Forward differences:
From (i), e^(hD) = 1 + Δ. Therefore,
hD = ln(1 + Δ) = Δ − Δ²/2 + Δ³/3 − Δ⁴/4 + ...
D = (1/h)·(Δ − Δ²/2 + Δ³/3 − Δ⁴/4 + ...)
Similarly, D² = D·D = (1/h²)·(Δ² − Δ³ + (11/12)·Δ⁴ − ...)
D³ = D·D² = (1/h³)·(Δ³ − (3/2)·Δ⁴ + (7/4)·Δ⁵ − ...)
D⁴ = D·D³ = (1/h⁴)·(Δ⁴ − 2Δ⁵ + (17/6)·Δ⁶ − ...)
d) Backward differences:
From (ii), e^(−hD) = 1 − ∇. Therefore,
−hD = ln(1 − ∇) = −(∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ...)
D = (1/h)·(∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ...)
Similarly, D² = D·D = (1/h²)·(∇² + ∇³ + (11/12)·∇⁴ + (5/6)·∇⁵ + ...)
D³ = D·D² = (1/h³)·(∇³ + (3/2)·∇⁴ + (7/4)·∇⁵ + ...)
D⁴ = D·D³ = (1/h⁴)·(∇⁴ + 2∇⁵ + (17/6)·∇⁶ + ...)
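The forward-difference series for D gives a practical way of estimating a derivative from tabulated values. The sketch below is illustrative only (the function name derivative_forward is not from the notes); it truncates D = (1/h)(Δ − Δ²/2 + Δ³/3 − ...) after a few terms and applies it at the first tabulated point:

import numpy as np

def derivative_forward(y, h, terms=3):
    # Estimate f'(x0) from equally spaced samples y = [f(x0), f(x0+h), ...]
    # using D = (1/h)(d - d^2/2 + d^3/3 - ...), truncated after `terms` terms.
    y = np.asarray(y, dtype=float)
    est, diff = 0.0, y
    for m in range(1, terms + 1):
        diff = np.diff(diff)                  # m-th forward differences
        est += (-1)**(m + 1) * diff[0] / m    # +d/1 - d^2/2 + d^3/3 - ...
    return est / h

# samples of f(x) = x^3 at x = 1.0, 1.1, ...; exact f'(1) = 3
xs = 1.0 + 0.1 * np.arange(5)
print(derivative_forward(xs**3, 0.1))         # expected: approximately 3.0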
Gregory-Newton forward interpolation formula:
Let f(x) be a Taylor-series-expandable function whose
values are known at evenly spaced values of x, in the
form of the data set D = {(x1,y1), (x2,y2), ..., (xn,yn)}. Let f(a)
be the value of f(x) at one of the pivotal points, say x = a.
The function f(x) in the neighbourhood of x = a may be
expressed in terms of f(a) as
f(x) = f(a + p·h)
     = f(a) + ph·f′(a) + p²h²/2!·f″(a) + p³h³/3!·f‴(a) + ...
     = (1 + phD + p²h²D²/2! + p³h³D³/3! + ...) f(a)
     = e^(phD) f(a)
     = (e^(hD))^p f(a)
     = (1 + Δ)^p f(a)          using (i)
     = (1 + pC1·Δ + pC2·Δ² + pC3·Δ³ + ...) f(a)          by the binomial theorem,
which means
f(a + p·h) = f(a) + p·Δf(a) + p(p−1)/2!·Δ²f(a) + p(p−1)(p−2)/3!·Δ³f(a) + ...          ...(iii)
Thus a function f(x) may be expanded in an infinite series
about any of its pivotal points using the above relation (iii). If
the series is truncated after n terms, the error is
proportional to Δⁿf(a).
Gregory-Newton backward interpolation formula:
Similarly, let f(x) be a Taylor-series-expandable function whose
values are known at evenly spaced values of x, in the form
of the data set D = {(x1,y1), (x2,y2), ..., (xn,yn)}. Let f(a) be
the value of f(x) at one of the pivotal points, say x = a.
The function f(x) in the neighbourhood of x = a may be
expressed in terms of f(a) as
f(x) = f(a − p·h)
     = f(a) − ph·f′(a) + p²h²/2!·f″(a) − p³h³/3!·f‴(a) + ...
     = (1 − phD + p²h²D²/2! − p³h³D³/3! + ...) f(a)
     = e^(−phD) f(a)
     = (e^(−hD))^p f(a)
     = (1 − ∇)^p f(a)          using (ii)
     = (1 − pC1·∇ + pC2·∇² − pC3·∇³ + ...) f(a)          by the binomial theorem,
which means
f(a − p·h) = f(a) − p·∇f(a) + p(p−1)/2!·∇²f(a) − p(p−1)(p−2)/3!·∇³f(a) + ...          ...(iv)
Thus a function f(x) may be expanded in an infinite series
about any of its pivotal points using the above relation (iv). If
the series is truncated after n terms, the error is
proportional to ∇ⁿf(a).
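Both formulas need only the leading row of a difference table. The sketch below is illustrative only (the function name newton_forward is not from the notes); it implements the forward formula (iii) with the pivot a = x1 and is used below to reproduce the f(7) = 125 result of the worked example:

import numpy as np

def newton_forward(xs, ys, x):
    # Gregory-Newton forward interpolation from the pivot a = xs[0].
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    total, term, diff = ys[0], 1.0, ys
    for m in range(1, len(xs)):
        diff = np.diff(diff)                  # m-th forward differences
        term *= (p - (m - 1)) / m             # p(p-1)...(p-m+1)/m!
        total += term * diff[0]
    return total

xs = [0, 2, 4, 6, 8, 10]
ys = [-8, 0, 8, 64, 216, 512]                 # samples of (x-2)^3
print(newton_forward(xs, ys, 7))              # expected: 125.0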
Example: Given the set of n = 6 data points
D = {(0,−8), (2,0), (4,8), (6,64), (8,216), (10,512)},
generated from f(x) = (x − 2)³, find f(7) using the forward and
backward interpolation formulas, assuming a = (i) 0, (ii) 2,
(iii) 4, (iv) 6, (v) 8 and (vi) 10.
Soln: Here the data spacing is h = 2, and both the forward
and backward difference tables have been computed as
shown below:
Forward difference table
x       y      Δy     Δ²y    Δ³y    Δ⁴y    Δ⁵y
0      −8       8      0      48     0      0
2       0       8     48      48     0
4       8      56     96      48
6      64     152    144
8     216     296
10    512
Backward difference table
x       y      ∇y     ∇²y    ∇³y    ∇⁴y    ∇⁵y
0      −8
2       0       8
4       8       8      0
6      64      56     48     48
8     216     152     96     48      0
10    512     296    144     48      0      0
x = a + p·h = 7; here h = 2.
(i) When a = 0, 0 + 2p = 7; therefore p = 3.5, so using the
forward interpolation formula
f(7) = f(0 + 2×3.5)
     = f(0) + 3.5·Δf(0) + 3.5·(3.5−1)/2!·Δ²f(0) + 3.5·(3.5−1)(3.5−2)/3!·Δ³f(0) + ...
     = −8 + 3.5×8 + 0 + (3.5×2.5×1.5/6)×48 = −8 + 28 + 105 = 125
(ii) When a = 2, 2 + 2p = 7; therefore p = 2.5, so using the
forward interpolation formula
f(7) = f(2 + 2×2.5)
     = f(2) + 2.5·Δf(2) + 2.5·(2.5−1)/2!·Δ²f(2) + 2.5·(2.5−1)(2.5−2)/3!·Δ³f(2)
     = 0 + 2.5×8 + 2.5×1.5×48/2 + 2.5×1.5×0.5×48/6
     = 0 + 20 + 90 + 15 = 125
(iii) When a = 4, 4 + 2p = 7; therefore p = 1.5, so using the
forward interpolation formula
f(7) = f(4 + 2×1.5)
     = f(4) + 1.5·Δf(4) + 1.5·(1.5−1)/2!·Δ²f(4) + 1.5·(1.5−1)(1.5−2)/3!·Δ³f(4)
     = 8 + 1.5×56 + 1.5×0.5×96/2 − 1.5×0.5×0.5×48/6
     = 8 + 84 + 36 − 3 = 125
(iv) When a = 6, 6 + 2p = 7; therefore p = 0.5, so using the
forward interpolation formula
f(7) = f(6 + 0.5×2)
     = f(6) + 0.5·Δf(6) + 0.5·(0.5−1)/2!·Δ²f(6) + 0.5·(0.5−1)(0.5−2)/3!·Δ³f(6) + ...
     = 64 + 0.5×152 − 0.5×0.5×144/2 + (0.5×0.5×1.5/6)·Δ³f(6)
     = 64 + 76 − 18 + 0.0625·Δ³f(6)
     = 122 + 0.0625·Δ³f(6)
(erroneous, since Δ³f(6) is not available in the table)
If Δ³f(6) = 48 were included, f(7) could be corrected to 122 + 3 = 125.
(v) When a = 8, 8 − 2p = 7; therefore p = 0.5, so using the
backward interpolation formula
f(7) = f(8) − 0.5·∇f(8) + 0.5·(0.5−1)/2!·∇²f(8) − 0.5·(0.5−1)(0.5−2)/3!·∇³f(8) + ...
     = 216 − 0.5×152 − 0.5×0.5×96/2 − 0.5×0.5×1.5×48/6
     = 216 − 76 − 12 − 3 = 125
(vi) When a = 10, 10 − 2p = 7; therefore p = 1.5, so using the
backward interpolation formula
f(7) = f(10) − 1.5·∇f(10) + 1.5·(1.5−1)/2!·∇²f(10) − 1.5·(1.5−1)(1.5−2)/3!·∇³f(10) + ...
     = 512 − 1.5×296 + 1.5×0.5×144/2 + 1.5×0.5×0.5×48/6
     = 512 − 444 + 54 + 3 = 125
It should be noted that for an nth-degree polynomial, the (n+1)th derivative as
well as the (n+1)th forward and backward differences are zero. So the
interpolation is exact if all differences up to order n are available;
otherwise there is an error, which is of the order of the first omitted
difference.
These formulas may be employed for extrapolation also. Consider the
following examples:
(vii) To find f(12) in terms of f(0), note that f(12) = f(0 + 2×6), so p = 6.
Using the forward formula:
f(12) = f(0) + p·Δf(0) + p(p−1)/2!·Δ²f(0) + p(p−1)(p−2)/3!·Δ³f(0) + ...
      = −8 + 6×8 + (6×5/2)×0 + (6×5×4/6)×48 = 1000
(viii) Again, f(−3) = f(10 − 2×6.5), so p = 6.5. Using the backward formula:
f(−3) = f(10) − 6.5·∇f(10) + 6.5·(6.5−1)/2!·∇²f(10) − 6.5·(6.5−1)(6.5−2)/3!·∇³f(10) + ...
      = 512 − 6.5×296 + (6.5×5.5/2)×144 − (6.5×5.5×4.5/6)×48 = −125
