Newton-Raphson Example:

Solve the linkage problem using Newton's method. The linkage equation and its derivative are

    f(θ) = R1 cos α − R2 cos θ + R3 − cos(α − θ)
    f′(θ) = R2 sin θ − sin(α − θ)

and the iterative form is

    θ_{i+1} = θ_i − f(θ_i) / f′(θ_i)
With R1 = 5/3, R2 = 5/2, R3 = 11/6, and α = 40°:

    f(θ) = (5/3) cos 40° − (5/2) cos θ + 11/6 − cos(40° − θ)
    f′(θ) = [(5/2) sin θ − sin(40° − θ)] · (π/180)    (θ measured in degrees)

Starting from θ = 30°:

    f(30°) = (5/3) cos 40° − (5/2) cos 30° + 11/6 − cos(40° − 30°) = −0.039797
    f′(30°) = [(5/2) sin 30° − sin(40° − 30°)] · (π/180) = 0.018786

    i    θ_i          f(θ_i)         f′(θ_i)
    1    30           −0.039797      0.018786
    2    32.118463    0.002144       0.020805
    3    32.015423    0.00000502     0.020708
    4    32.015180    0.00000000     0.020707
    5    32.015180    0.00000000

Convergence of Newton-Raphson:

No guarantee of convergence (unlike bisection), e.g., f(x) = tan⁻¹(x).

Conditions for convergence (qualitative):
    x0 close enough to x*
    f′(x) does not change sign near x*
    f(x) is not too nonlinear near x*
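The table above is easy to reproduce. A minimal Python sketch of the iteration (function and variable names are my own):

```python
import math

def f(theta_deg):
    """Linkage equation f(theta), theta in degrees."""
    t = math.radians(theta_deg)
    a = math.radians(40.0)
    return (5/3) * math.cos(a) - (5/2) * math.cos(t) + 11/6 - math.cos(a - t)

def fprime(theta_deg):
    """df/dtheta with theta measured in degrees (hence the pi/180 factor)."""
    t = math.radians(theta_deg)
    a = math.radians(40.0)
    return ((5/2) * math.sin(t) - math.sin(a - t)) * math.pi / 180.0

theta = 30.0
for i in range(1, 6):
    print(f"{i}  {theta:10.6f}  {f(theta):12.8f}  {fprime(theta):.6f}")
    theta -= f(theta) / fprime(theta)  # Newton update
```

Running this regenerates the rows of the table and converges to θ ≈ 32.015180.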
Rate of convergence:

    |x^{k+1} − x*| ≤ c |x^k − x*|^q,   with q = 2 for Newton-Raphson
Newton-Raphson Method: Convergence
x^0 = initial guess, k = 0
Repeat {
    Solve  f′(x^k)(x^{k+1} − x^k) = −f(x^k)  for x^{k+1}
    k = k + 1
} Until ?

    |x^{k+1} − x^k| < threshold ?
    |f(x^{k+1})| < threshold ?

i.e., stop when successive iterates agree, x^{k+1} ≈ x^k, or when f(x^{k+1}) ≈ 0.
Newton-Raphson Method: Convergence Checks

Need a "delta-x" check to avoid false convergence: where f(x) is nearly flat, |f(x^{k+1})| < ε_fa can hold even though

    |x^{k+1} − x^k| > ε_xa + ε_xr |x^{k+1}|

and x^{k+1} is still far from x*.

Also need an "f(x)" check to avoid false convergence: where f(x) is very steep, |x^{k+1} − x^k| < ε_xa + ε_xr |x^{k+1}| can hold even though

    |f(x^{k+1})| > ε_fa

(figures: two sketches of f(x) vs. x illustrating each false-convergence case)
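The two checks combine into one stopping test. A sketch in Python; the tolerance names eps_xa, eps_xr (absolute/relative delta-x tolerances) and eps_fa (f tolerance) are my own labels:

```python
def newton(f, fprime, x0, eps_xa=1e-10, eps_xr=1e-10, eps_fa=1e-10, max_iter=50):
    """Newton-Raphson with both a delta-x check and an f(x) check."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        dx_ok = abs(x_new - x) < eps_xa + eps_xr * abs(x_new)  # delta-x check
        f_ok = abs(f(x_new)) < eps_fa                          # f(x) check
        x = x_new
        if dx_ok and f_ok:  # require BOTH to avoid false convergence
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# example: root of x^2 - 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```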
Newton-Raphson Method: Local Convergence

Convergence depends on a good initial guess.

(figure: Newton iterates x0 → x1 → x2 on f(x), illustrating the dependence on the starting point)
Before applying the method, ask: Do I expect a unique solution? Approximately where?

After answering the above questions, Newton-Raphson gives an efficient means of converging to the root, if it exists, or of spectacularly failing to converge, indicating (though not proving) that your root does not exist nearby.

Newton's method can be extended to solve a system of nonlinear equations.
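The f(x) = tan⁻¹(x) case mentioned earlier is the classic "spectacular failure": started too far from the root, each Newton step overshoots to the other side with larger magnitude. A quick sketch (the starting point 1.5 is my choice; the critical starting value is near x ≈ 1.392):

```python
import math

# Newton on f(x) = arctan(x): x_{k+1} = x_k - atan(x_k) * (1 + x_k^2)
x = 1.5  # beyond the critical point, so the steps overshoot
history = [x]
for _ in range(6):
    x = x - math.atan(x) * (1.0 + x * x)
    history.append(x)
# iterates alternate in sign and grow in magnitude instead of finding x* = 0
```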
Newton-Raphson Method: Quadratic Convergence

Expand f about x^k and evaluate at the root x* (the mean value theorem truncates the Taylor series, for some x̃ ∈ [x^k, x*]):

    0 = f(x*) = f(x^k) + f′(x^k)(x* − x^k) + (1/2) f″(x̃)(x* − x^k)²

But by the Newton definition of x^{k+1}:

    0 = f(x^k) + f′(x^k)(x^{k+1} − x^k)

Subtracting:

    f′(x^k)(x^{k+1} − x*) = (1/2) f″(x̃)(x^k − x*)²

Dividing through by f′(x^k):

    (x^{k+1} − x*) = [f′(x^k)]⁻¹ (1/2) f″(x̃)(x^k − x*)²

Let K^k = [f′(x^k)]⁻¹ (1/2) f″(x̃). Then

    |x^{k+1} − x*| ≤ K^k |x^k − x*|²

Convergence is quadratic.
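The error recurrence shows up numerically: the number of correct digits roughly doubles per iteration. A sketch using f(x) = x² − 2 (my choice of example, root x* = √2):

```python
import math

x_star = math.sqrt(2.0)  # exact root of f(x) = x^2 - 2
x = 1.0
errors = []
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)  # Newton update
    errors.append(abs(x - x_star))
# each error is roughly K times the square of the previous one,
# with K = f''(x)/(2 f'(x)) = 1/(2x) near the root
```

Each entry of `errors` is smaller than the square of the one before it, as the bound |x^{k+1} − x*| ≤ K |x^k − x*|² predicts (here K < 1).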
Newton-Raphson Method: Local Convergence Theorem

If
    a) df/dx(x^k) is bounded away from zero, and
    b) d²f/dx² is bounded,
then K^k is bounded.

Example 1: f(x) = x² − 1 = 0, find x (x* = 1).

df/dx(x^k) = 2x^k, so the Newton step is

    2x^k (x^{k+1} − x^k) = −((x^k)² − 1)

Splitting x^{k+1} − x^k = (x^{k+1} − x*) + (x* − x^k) and using (x*)² = 1:

    2x^k (x^{k+1} − x*) + 2x^k (x* − x^k) = −((x^k)² − (x*)²)

which simplifies to (x^{k+1} − x*) = (x^k − x*)² / (2x^k): quadratic convergence, with K^k = 1/(2x^k) bounded near x* = 1.
Newton-Raphson Method: Example 2

f(x) = x² = 0, x* = 0.

df/dx(x^k) = 2x^k. Note: df/dx is not bounded away from zero near x* = 0.

The same manipulation as in Example 1 (now with x* = 0) gives

    2x^k (x^{k+1} − 0) = (x^k − 0)²

    x^{k+1} − 0 = (1/2)(x^k − 0)    for x^k ≠ x* = 0

or (x^{k+1} − x*) = (1/2)(x^k − x*).

Convergence is linear.
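The halving is exact and easy to confirm (a minimal sketch):

```python
x = 1.0
iterates = []
for _ in range(6):
    x = x - (x * x) / (2.0 * x)  # Newton for f(x) = x^2 reduces to x_{k+1} = x_k / 2
    iterates.append(x)
# 0.5, 0.25, 0.125, ...: the error shrinks by a constant factor 1/2 (linear convergence)
```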
Newton-Raphson vs. Secant

Newton converged to θ* = 32.015180 in four iterations (see the table in the first example). The secant method replaces f′(θ_i) with the slope through the last two iterates, and so needs 2 values for the 1st iteration:

    i    θ_i          f(θ_i)          slope        θ_{i+1}
    0    30           −0.03979719
    1    40           0.1949629       0.023476     31.695228
    2    31.695228    −0.00657688     0.024268     31.966238
    3    31.966238    −0.00101233     0.020533     32.015542
    4    32.015542    0.00000749      0.020684     32.015180
    5    32.015180    −0.00000001     0.020708     32.015180

System of Nonlinear Equations

Given continuous f(x,y) and g(x,y), find the values x = x* and y = y* such that f(x*,y*) = 0 and g(x*,y*) = 0, i.e., the intersections of the f(x,y) = 0 and g(x,y) = 0 contours.
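The secant column can be reproduced by replacing f′ with the finite-difference slope through the last two iterates. A sketch, reusing the linkage function from the first example:

```python
import math

def f(theta_deg):
    """Linkage equation from the first example, theta in degrees."""
    t, a = math.radians(theta_deg), math.radians(40.0)
    return (5/3) * math.cos(a) - (5/2) * math.cos(t) + 11/6 - math.cos(a - t)

t_prev, t = 30.0, 40.0  # the secant method needs two starting values
for i in range(1, 6):
    slope = (f(t) - f(t_prev)) / (t - t_prev)  # finite-difference slope replaces f'
    t_prev, t = t, t - f(t) / slope
    print(i, round(t, 6))
```

Note the trade-off: no derivative is needed, but convergence takes an extra iteration or two compared with Newton.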
Newton-Raphson Scheme Implemented in FE

The tangential stiffness matrix is formed and decomposed at each iteration within a particular step.

The rate of convergence is quadratic (relatively high).

However, forming and decomposing the tangential stiffness at each iteration can be prohibitively expensive for large systems.
NR Derivation (cont.)

After solving for Δx_i, Δy_i:

    x_{i+1} = x_i + Δx_i
    y_{i+1} = y_i + Δy_i

If the partial derivatives cannot be evaluated analytically, they can be approximated by finite differences:

    ∂f_i/∂x_i ≈ [f(x_i + Δx_i) − f(x_i)] / Δx_i

Example: Solve the following two equations by Newton's method, starting from θ1 = 30° and θ2 = 0°:

    f(θ1, θ2) = 6 cos θ1 + 8 cos θ2 − 13.064178 = 0
    g(θ1, θ2) = 6 sin θ1 + 8 sin θ2 − 2.571150 = 0

The derivatives are:

    ∂f/∂θ1 = −6 sin θ1        ∂f/∂θ2 = −8 sin θ2
    ∂g/∂θ1 = 6 cos θ1         ∂g/∂θ2 = 8 cos θ2

Evaluating the functions and derivatives at θ1 = 30°, θ2 = 0° (note the π/180 radian conversion in the derivatives):

    f(30°, 0°) = 6 cos 30° + 8 cos 0° − 13.064178 = 0.131975
    g(30°, 0°) = 6 sin 30° + 8 sin 0° − 2.571150 = 0.428850

    ∂f/∂θ1(θ1 = 30°) = −6 sin 30° · (π/180) = −0.052360    ∂f/∂θ2(θ2 = 0°) = −8 sin 0° = 0
    ∂g/∂θ1(θ1 = 30°) = 6 cos 30° · (π/180) = 0.090690      ∂g/∂θ2(θ2 = 0°) = 8 cos 0° · (π/180) = 0.139625

Substituting into the difference formulas and solving gives Δθ1 = 2.52053 and Δθ2 = −4.708541, so the second iterate for the angles θ1 and θ2 is found to be 32.52053° and −4.708541°. Repeat the above steps.

The following table presents the solution steps:

    i    θ1    θ2    f(θ1, θ2)    g(θ1, θ2)    Δθ1    Δθ2
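The full iteration for this system can be sketched in a few lines of Python (the hand-written Cramer's-rule solve of the 2×2 system and all names are my own; the constants 13.064178 and 2.571150 are taken from the slide):

```python
import math

D = math.pi / 180.0  # degrees -> radians

def F(t1, t2):
    """Residuals f and g, with the angles t1, t2 in degrees."""
    f = 6 * math.cos(t1 * D) + 8 * math.cos(t2 * D) - 13.064178
    g = 6 * math.sin(t1 * D) + 8 * math.sin(t2 * D) - 2.571150
    return f, g

def J(t1, t2):
    """Jacobian entries per degree (hence the extra factor D)."""
    return (-6 * math.sin(t1 * D) * D, -8 * math.sin(t2 * D) * D,
            6 * math.cos(t1 * D) * D, 8 * math.cos(t2 * D) * D)

t1, t2 = 30.0, 0.0
for i in range(8):
    f, g = F(t1, t2)
    a, b, c, d = J(t1, t2)
    det = a * d - b * c
    dt1 = (-f * d + g * b) / det  # Cramer's rule for the 2x2 linear system
    dt2 = (-g * a + f * c) / det
    t1, t2 = t1 + dt1, t2 + dt2
    print(i + 1, round(t1, 6), round(t2, 6))
```

The first iteration reproduces Δθ1 = 2.52053, Δθ2 = −4.708541 from above, and the residuals are driven to machine precision within a few more steps.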