
Newton-Raphson Method


x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

Figure 1. Geometrical illustration of the Newton-Raphson method.


Derivation

From the figure, the slope of the tangent at the point B = (x_i, f(x_i)) is

\tan(\alpha) = \frac{AB}{AC}

that is,

f'(x_i) = \frac{f(x_i)}{x_i - x_{i+1}}

which on rearranging gives

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

Figure 2. Derivation of the Newton-Raphson method.


Algorithm for Newton-Raphson Method
1. Evaluate f(x).

2. Evaluate f'(x) symbolically.

3. Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as

   x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

4. Find the absolute relative approximate error

   |\epsilon_a| = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100
Step 5

Compare the absolute relative approximate error with


the pre-specified relative error tolerance .
s
Go to Step 2 using new
Yes
estimate of the root.
Is a s ?

No Stop the algorithm

Also, check if the number of iterations has exceeded the


maximum number of iterations allowed.
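For a concrete pass through these steps (a worked example, not in the original slides), take f(x) = x^2 - 5x + 4, the same function used in the MATLAB code later in this section, with an assumed initial guess x_0 = 0:

f'(x) = 2x - 5

x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 0 - \frac{4}{-5} = 0.8, \qquad |\epsilon_a| = \left| \frac{0.8 - 0}{0.8} \right| \times 100 = 100\%

A second iteration gives x_2 = 0.8 - \frac{0.64}{-3.4} \approx 0.98824 with |\epsilon_a| \approx 19.05\%, approaching the root at x = 1.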

Basis of Bisection Method

Theorem: An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between x_l and x_u if f(x_l) \cdot f(x_u) < 0.
Figure 1. At least one root exists between the two points if the function is real, continuous, and changes sign.

Basis of Bisection Method
Figure 2. If the function f(x) does not change sign between two points, roots of the equation f(x) = 0 may still exist between the two points.
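The theorem suggests the bisection algorithm: repeatedly halve a bracket [x_l, x_u] over which f changes sign, keeping the half that still brackets the root. A minimal MATLAB sketch, with the function, bracket, and tolerance all assumed for illustration:

% Minimal bisection sketch; f and the bracket [xl, xu] are assumed.
f = @(x) x.^2 - 5*x + 4;   % sample function (same as the code later in this section)
xl = 0; xu = 3;            % f(0) = 4 > 0 and f(3) = -2 < 0, so a root is bracketed
tol = 1e-6;                % stop when the bracket is this small
for i = 1:100
    xm = (xl + xu)/2;              % midpoint of the current bracket
    if f(xm) == 0
        break                      % landed exactly on a root
    elseif f(xl)*f(xm) < 0
        xu = xm;                   % sign change in [xl, xm]: keep the lower half
    else
        xl = xm;                   % sign change in [xm, xu]: keep the upper half
    end
    if (xu - xl) < tol
        break
    end
end
fprintf('Root approx: %f after %d iterations\n', (xl + xu)/2, i);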

Secant Method – Derivation

The secant method can also be derived from geometry. In the figure, triangles ABE and DCE are similar, so

\frac{AB}{AE} = \frac{DC}{DE}

which can be written as

\frac{f(x_i)}{x_i - x_{i+1}} = \frac{f(x_{i-1})}{x_{i-1} - x_{i+1}}

On rearranging, the secant method is given as

x_{i+1} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}

Figure 2. Geometrical representation of the secant method.
Step 1

Calculate the next estimate of the root from two initial guesses:

x_{i+1} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}

Find the absolute relative approximate error:

|\epsilon_a| = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100

Step 2

Find whether the absolute relative approximate error is greater than the pre-specified relative error tolerance. If so, go back to Step 1; otherwise, stop the algorithm.

Also check if the number of iterations has exceeded the maximum number of iterations.
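A minimal MATLAB sketch of Steps 1-2 (not in the original slides; the function, initial guesses, and tolerance are assumed for illustration):

% Minimal secant-method sketch following Steps 1-2 above.
f = @(x) x.^2 - 5*x + 4;   % sample function (same as the Newton-Raphson code below)
xold = 0; xcur = 2;        % two initial guesses
es = 0.01;                 % pre-specified relative error tolerance (percent)
for i = 1:100              % cap on the number of iterations
    xnew = xcur - f(xcur)*(xcur - xold)/(f(xcur) - f(xold));  % secant formula
    ea = abs((xnew - xcur)/xnew)*100;  % absolute relative approximate error (%)
    xold = xcur;
    xcur = xnew;
    if ea <= es
        break              % within tolerance: stop
    end
end
fprintf('Root approx: %f after %d iterations\n', xcur, i);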

% The Newton-Raphson Method
clc;
close all;
clear;
syms x;
f = x^2 - 5*x + 4            % Enter the function here
g = diff(f);                 % The derivative of the function
n = input('Enter the number of decimal places: ');
epsilon = 5*10^-(n+1);
x0 = input('Enter the initial approximation: ');
for i = 1:100
    f0 = vpa(subs(f, x, x0));       % Value of the function at x0
    f0_der = vpa(subs(g, x, x0));   % Value of the derivative at x0
    y = x0 - f0/f0_der;             % The Newton-Raphson formula
    err = abs(y - x0);
    if err < epsilon                % Check the error at each iteration
        break
    end
    x0 = y;
end
y = double(y);               % Convert from symbolic so rem/fprintf work
y = y - rem(y, 10^-n);       % Keep only the required decimal places
fprintf('The root is : %f \n', y);
fprintf('No. of iterations : %d\n', i);
Linear Regression
What is Regression?
Given n data points (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n), best fit y = f(x) to the data. The residual at each point is

E_i = y_i - f(x_i)

Figure. Basic model for regression.

Linear Regression – Criterion #1

Given n data points (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n), best fit y = a_0 + a_1 x to the data. Does minimizing

\sum_{i=1}^{n} E_i

work as a criterion, where E_i = y_i - (a_0 + a_1 x_i)?

Figure. Linear regression of y vs. x data showing residuals at a typical point, x_i.

Example for Criterion #1

Example: Given the data points (2,4), (3,6), (2,6) and (3,8), best fit the data to a straight line using Criterion #1: minimize

\sum_{i=1}^{n} E_i

Table. Data points

x      y
2.0    4.0
3.0    6.0
2.0    6.0
3.0    8.0

Figure. Data points for y vs. x data.
Linear Regression – Criterion #1

Using y = 4x - 4 as the regression curve:

Table. Residuals at each point for the regression model y = 4x - 4

x      y      y_predicted    E = y - y_predicted
2.0    4.0    4.0            0.0
3.0    6.0    8.0            -2.0
2.0    6.0    4.0            2.0
3.0    8.0    8.0            0.0

\sum_{i=1}^{4} E_i = 0

Figure. Regression curve y = 4x - 4 and y vs. x data.

Linear Regression – Criterion #1
\sum_{i=1}^{4} E_i = 0 for the regression model y = 4x - 4.

The sum of the residuals is minimized (in this case it is zero), but the regression model is not unique. Hence the criterion of minimizing the sum of the residuals is a bad criterion.

Linear Regression – Criterion #2

Will minimizing

\sum_{i=1}^{n} |E_i|

work any better?

Table. Residuals at each point for the regression model y = 4x - 4

x      y      y_predicted    E = y - y_predicted
2.0    4.0    4.0            0.0
3.0    6.0    8.0            -2.0
2.0    6.0    4.0            2.0
3.0    8.0    8.0            0.0

\sum_{i=1}^{4} |E_i| = 4

Other lines give the same value (for example, y = 6 also yields \sum |E_i| = 4 on these points), so this criterion does not produce a unique line either; see the check below.

Figure. Regression curve y = 4x - 4 and y vs. x data.

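A small MATLAB check (not in the original slides; the alternative line y = 6 is an assumed example) confirming that both criteria fail to distinguish the two lines:

% Compare sum(E) and sum(|E|) for two candidate lines on the example data.
x = [2 3 2 3]; y = [4 6 6 8];
E1 = y - (4*x - 4);               % residuals for the model y = 4x - 4
E2 = y - 6;                       % residuals for the alternative line y = 6
fprintf('sum E : %g and %g\n', sum(E1), sum(E2));            % both print 0
fprintf('sum|E|: %g and %g\n', sum(abs(E1)), sum(abs(E2)));  % both print 4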
Least Squares Criterion
The least squares criterion minimizes the sum of the squares of the residuals in the model, and also produces a unique line:

S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2

where E_i = y_i - a_0 - a_1 x_i.

Figure. Linear regression of y vs. x data showing residuals at a typical point, x_i.

Finding Constants of Linear Model
Minimize the sum of the squares of the residuals:

S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2

To find a_0 and a_1, we minimize S_r with respect to a_0 and a_1:

\frac{\partial S_r}{\partial a_0} = \sum_{i=1}^{n} 2 (y_i - a_0 - a_1 x_i)(-1) = 0

\frac{\partial S_r}{\partial a_1} = \sum_{i=1}^{n} 2 (y_i - a_0 - a_1 x_i)(-x_i) = 0

giving

n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i

a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i

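Written in matrix form (an equivalent restatement of the two equations above, not in the original slides), the normal equations are the 2-by-2 system

\begin{bmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \end{bmatrix}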
Finding Constants of Linear Model
Solving for a_0 and a_1:

a_1 = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}

and

a_0 = \frac{\sum_{i=1}^{n} x_i^2 \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}

or, more simply,

a_0 = \bar{y} - a_1 \bar{x}

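A small MATLAB sketch (not in the original slides) that evaluates these closed-form expressions on the four points from the earlier example:

% Least-squares line through the example data using the formulas above.
x = [2 3 2 3]; y = [4 6 6 8];
n = length(x);
a1 = (n*sum(x.*y) - sum(x)*sum(y)) / (n*sum(x.^2) - sum(x)^2);
a0 = mean(y) - a1*mean(x);           % a0 = ybar - a1*xbar
fprintf('y = %g + %g x\n', a0, a1);  % prints y = 1 + 2 x for this data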
Linear Regression (special case)
Given (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n), best fit

y = a_1 x

to the data. Is

a_1 = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}

correct?
Linear Regression (special case cont.)

The residual at each data point is

\epsilon_i = y_i - a_1 x_i

and the sum of the squares of the residuals is

S_r = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} (y_i - a_1 x_i)^2

Linear Regression (special case cont.)

Differentiate with respect to a_1:

\frac{dS_r}{da_1} = \sum_{i=1}^{n} 2 (y_i - a_1 x_i)(-x_i) = \sum_{i=1}^{n} (-2 x_i y_i + 2 a_1 x_i^2)

Setting \frac{dS_r}{da_1} = 0 gives

a_1 = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}
Linear Regression (special case cont.)

Does this value of a_1 correspond to a local minimum or a local maximum?

a_1 = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}, \qquad \frac{dS_r}{da_1} = \sum_{i=1}^{n} (-2 x_i y_i + 2 a_1 x_i^2)

The second derivative is

\frac{d^2 S_r}{da_1^2} = \sum_{i=1}^{n} 2 x_i^2 > 0

so yes, it corresponds to a local minimum.
Linear Regression (special case cont.)
Is this local minimum of S_r an absolute minimum of S_r? Yes: S_r is a quadratic function of a_1 with positive leading coefficient \sum x_i^2, so its graph is an upward-opening parabola and the local minimum is also the absolute minimum.

Figure. S_r versus a_1.

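In MATLAB, this special-case fit reduces to a one-liner. A small sketch evaluating the formula above, with the sample data assumed for illustration:

% Fit y = a1*x through the origin using the special-case formula.
x = [1 2 3]; y = [2.1 3.9 6.2];  % assumed sample data
a1 = sum(x.*y) / sum(x.^2);      % a1 = 28.5/14, approximately 2.0357
fprintf('a1 = %g\n', a1);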
Interpolation

• Problem Statement. Suppose that we want to determine the coefficients of the parabola f(x) = p_1 x^2 + p_2 x + p_3 that passes through the last three tabulated density values:

x_1 = 300,  f(x_1) = 0.616
x_2 = 400,  f(x_2) = 0.525
x_3 = 500,  f(x_3) = 0.457
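A minimal MATLAB sketch (not part of the original slides) that finds the coefficients by solving the 3-by-3 linear system the three points imply:

% Solve for the parabola coefficients p1, p2, p3.
xd = [300; 400; 500];          % x values at which the density is known
fd = [0.616; 0.525; 0.457];    % corresponding density values
A = [xd.^2, xd, ones(3,1)];    % row i enforces p1*x^2 + p2*x + p3 = f(x)
p = A \ fd;                    % p = [p1; p2; p3]
fprintf('p1 = %g, p2 = %g, p3 = %g\n', p(1), p(2), p(3));
% For these values: p1 = 1.15e-06, p2 = -0.001715, p3 = 1.027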
