
STR 613: Advanced Numerical Analysis
Instructor: Dr. Ahmed Amir Khalil

Fixed point iteration

- Start from f(x) = 0.
- Re-arrange into x = g(x).
- Iterative form:

    x_{i+1} = g(x_i)

Fixed point iteration

Example

    f(θ) = (5/3) cos 40 - (5/2) cos θ + 11/6 - cos(40 - θ) = 0

Re-arranged, with R1 = 5/3, R2 = 5/2, R3 = 11/6, and all angles in degrees:

    θ = cos^-1( [R1 cos 40 + R3 - cos(40 - θ)] / R2 ) = cos^-1(u(θ)) = g(θ)

 i   θ_i        u(θ_i)    θ_{i+1}    f(θ_i)       g'(θ_i)
 1   30.000000  0.850107  31.776742  -0.03979719  0.131899
 2   31.776742  0.848142  31.989810  -0.00491050  0.107995
 3   31.989810  0.847932  32.012517  -0.00052505  0.105148
 4   32.012517  0.847910  32.014901  -0.00005515  0.104845
 5   32.014901  0.847908  32.015151  -0.00000578  0.104814
 6   32.015151  0.847908  32.015177  -0.00000061  0.104810
 7   32.015177  0.847908  32.015180  -0.00000006  0.104810
 8   32.015180  0.847908  32.015180  -0.00000000  0.104810
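As a minimal sketch of the table above (variable names and the 8-iteration count are illustrative), the fixed-point update θ_{i+1} = g(θ_i) can be run directly:

```python
import math

# Fixed-point iteration for the linkage equation
# f(theta) = (5/3)cos(40) - (5/2)cos(theta) + 11/6 - cos(40 - theta) = 0,
# re-arranged as theta = g(theta), all angles in degrees.
R1, R2, R3 = 5/3, 5/2, 11/6
ALPHA = 40.0  # fixed input angle in degrees

def g(theta_deg):
    """One fixed-point update theta_{i+1} = g(theta_i)."""
    u = (R1 * math.cos(math.radians(ALPHA)) + R3
         - math.cos(math.radians(ALPHA - theta_deg))) / R2
    return math.degrees(math.acos(u))

theta = 30.0  # initial guess, as in the table
for i in range(8):
    theta = g(theta)
print(round(theta, 6))  # converges toward 32.015180, as in the table
```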

Fixed point iteration

Convergence

- If |g'(θ)| < 1, the method converges.
- If |g'(θ)| > 1, the method diverges or converges slowly.
- Convergence is dependent on the choice of g(x).

Newton Raphson

- Well-known
- Powerful
- Approximate f(x) by the tangent g(x)
- Find the solution for g(x) = 0
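The convergence condition |g'| < 1 can be checked numerically for the g(θ) used above; this sketch estimates the slope by a central difference (the step size h is illustrative):

```python
import math

# Finite-difference estimate of g'(theta) near the root: the slope is
# about 0.105 in magnitude, i.e. |g'| < 1, which is why the fixed-point
# iteration for this g converges.
R1, R2, R3 = 5/3, 5/2, 11/6

def g(theta_deg):
    u = (R1 * math.cos(math.radians(40.0)) + R3
         - math.cos(math.radians(40.0 - theta_deg))) / R2
    return math.degrees(math.acos(u))

h = 1e-6
theta = 32.015180  # the root found by the iteration
slope = (g(theta + h) - g(theta - h)) / (2 * h)
print(abs(slope) < 1.0)  # True: g is a contraction near the root
```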

Newton Raphson

- Iterative form:

    x_{i+1} = x_i - f(x_i) / f'(x_i)

Example: Solve the linkage problem using Newton's method

    f(θ) = R1 cos α - R2 cos θ + R3 - cos(α - θ)
    f'(θ) = R2 sin θ - sin(α - θ)

    θ_{i+1} = θ_i - f(θ_i) / f'(θ_i)

With R1 = 5/3, R2 = 5/2, R3 = 11/6, α = 40:

    f(θ) = (5/3) cos 40 - (5/2) cos θ + 11/6 - cos(40 - θ)
    f'(θ) = [(5/2) sin θ - sin(40 - θ)] (π/180)   (θ in degrees)

Evaluating at θ = 30:

    f(30) = (5/3) cos 40 - (5/2) cos 30 + 11/6 - cos(40 - 30) = -0.039797

    f'(30) = [(5/2) sin 30 - sin(40 - 30)] (π/180) = 0.018786

 i   θ_i        f(θ_i)       f'(θ_i)
 1   30.000000  -0.039797    0.018786
 2   32.118463   0.0021440   0.020805
 3   32.015423   0.00000502  0.020708
 4   32.015180   0.00000000  0.020707
 5   32.015180   0.00000000

Convergence of Newton Raphson

- No guarantee of convergence (unlike bisection), e.g., f(x) = tan^-1(x).
- Conditions for convergence (qualitative):
  - x_0 close enough to x*
  - f'(x) does not change sign near x*
  - f(x) is not too nonlinear near x*
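The Newton table above can be reproduced with a short sketch (the 5-iteration count mirrors the table; since f takes θ in degrees, f' carries the π/180 factor):

```python
import math

# Newton-Raphson for the linkage equation, angles in degrees.
def f(theta):
    return ((5/3) * math.cos(math.radians(40))
            - (5/2) * math.cos(math.radians(theta))
            + 11/6 - math.cos(math.radians(40 - theta)))

def fprime(theta):
    # d f / d theta with theta in degrees, hence the pi/180 factor
    return ((5/2) * math.sin(math.radians(theta))
            - math.sin(math.radians(40 - theta))) * math.pi / 180

theta = 30.0  # initial guess
for i in range(5):
    theta = theta - f(theta) / fprime(theta)
print(round(theta, 6))  # reaches 32.015180 in about 4 iterations
```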

Newton Raphson Newton--Raphson Method


Newton
f ( xi ) Algorithm
x i +1 = x i Define iteration
f ( x i ) Do k = 0 to .
1
df
x k +1 = x k ( x k ) f ( x k )
dx
until convergence

q
x k +1 x x k x

Newton-Raphson Method

Convergence

    x_0 = initial guess, k = 0
    Repeat {
        Solve f'(x_k) (x_{k+1} - x_k) = -f(x_k) for x_{k+1}
        k = k + 1
    } Until ?

    |x_{k+1} - x_k| < threshold ?
    |f(x_{k+1})| < threshold ?
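A minimal generic sketch of this loop, combining both stopping tests (the function name `newton`, the tolerance names, and the default values are illustrative):

```python
def newton(f, fprime, x0, xa=1e-10, xr=1e-8, fa=1e-10, kmax=50):
    """Newton-Raphson with both a step-size check and a residual check,
    mirroring the two threshold tests on the slide."""
    x = x0
    for k in range(kmax):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < xa + xr * abs(x) and abs(f(x)) < fa:
            return x, k + 1
    raise RuntimeError("no convergence within kmax iterations")

# Usage: solve x^2 - 2 = 0 from x0 = 1
root, iters = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0)
print(root)  # ~1.41421356, reached in a handful of iterations
```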

Convergence of Newton Raphson

Convergence criteria

    |x_{i+1} - x_i| <= ε   or   |f(x_{i+1}) - f(x_i)| <= ε

Newton-Raphson Method

Convergence Checks

Need a "delta-x" check to avoid false convergence: where f(x) is nearly flat,
|f(x_{k+1})| < ε_fa can hold while x_{k+1} is still far from x* and
|x_{k+1} - x_k| > ε_xa + ε_xr |x_{k+1}|, so require

    |x_{k+1} - x_k| < ε_xa + ε_xr |x_{k+1}|

Also need an "f(x)" check to avoid false convergence: where f(x) is steep,
|x_{k+1} - x_k| < ε_xa + ε_xr |x_{k+1}| can hold while |f(x_{k+1})| > ε_fa,
so also require

    |f(x_{k+1})| < ε_fa

[Figures: sketches of f(x) near x* showing a flat region (small residual, large
step) and a steep region (small step, large residual).]
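The two checks can be packaged in one helper; this sketch (function and tolerance names are illustrative) shows why either test alone can falsely report convergence:

```python
def converged(x_new, x_old, f_new, xa=1e-9, xr=1e-9, fa=1e-9):
    """Both checks from the slides: a small step AND a small residual.
    Either test alone can falsely signal convergence (flat or steep f)."""
    small_step = abs(x_new - x_old) < xa + xr * abs(x_new)
    small_residual = abs(f_new) < fa
    return small_step and small_residual

# A flat f passes the residual test far from the root, so the step test
# is what prevents a premature stop, and vice versa for a steep f.
print(converged(5.0, 4.0, 1e-12))         # False: residual tiny, step large
print(converged(5.0, 5.0 - 1e-12, 10.0))  # False: step tiny, residual large
```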

Newton-Raphson Method

Convergence

Local Convergence: Convergence Depends on a Good Initial Guess

[Figure: f(x) vs. x sketches; from a good initial guess x_0 the iterates
x_1, x_2, ... approach the root, while a poor x_0 sends the iterates away.]

- Do I expect a unique solution?
- Approximately where?
- After answering the above questions, Newton Raphson gives an efficient means
  - of converging to the root, if it exists, or
  - of spectacularly failing to converge, indicating (though not proving) that
    your root does not exist nearby.
- Newton's method can be extended to solve a system of nonlinear equations.

Newton-Raphson Method

Convergence

The mean value theorem truncates the Taylor series at some x~ in [x_k, x*]:

    0 = f(x*) = f(x_k) + f'(x_k)(x* - x_k) + (1/2) f''(x~)(x* - x_k)^2

But by the Newton definition,

    0 = f(x_k) + f'(x_k)(x_{k+1} - x_k)

Subtracting:

    f'(x_k)(x_{k+1} - x*) = (1/2) f''(x~)(x_k - x*)^2

Dividing through:

    x_{k+1} - x* = (1/2) [f'(x_k)]^-1 f''(x~) (x_k - x*)^2

Let K_k = (1/2) |[f'(x_k)]^-1 f''(x~)|; then

    |x_{k+1} - x*| <= K_k |x_k - x*|^2

Convergence is quadratic.

Newton-Raphson Method

Convergence

Local Convergence Theorem

If
  a) |df/dx| is bounded away from zero, and
  b) |d^2f/dx^2| is bounded (so that K is bounded),
then Newton's method converges given a sufficiently close initial guess
(and convergence is quadratic).

Example 1

    f(x) = x^2 - 1 = 0, find x (x* = 1)

    f'(x_k) = 2 x_k

Newton step:

    2 x_k (x_{k+1} - x_k) = -((x_k)^2 - 1)

Rewriting in terms of x*:

    2 x_k (x_{k+1} - x*) + 2 x_k (x* - x_k) = -((x_k)^2 - (x*)^2)

or

    x_{k+1} - x* = (1 / (2 x_k)) (x_k - x*)^2

Convergence is quadratic.
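The closed-form result of Example 1 can be checked numerically: the ratio of the new error to the square of the old error should match 1/(2 x_k) at every step (the starting point 2.0 and 4-step count are illustrative):

```python
# Verify x_{k+1} - x* = (x_k - x*)^2 / (2 x_k) for f(x) = x^2 - 1, x* = 1.
x, xstar = 2.0, 1.0
for k in range(4):
    x_new = x - (x*x - 1.0) / (2.0*x)   # Newton step
    e, e_new = x - xstar, x_new - xstar
    ratio = e_new / e**2
    # the two printed columns agree to ~6 decimals
    print(round(ratio, 6), round(1.0 / (2.0*x), 6))
    x = x_new
```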

Newton-Raphson Method

Convergence

Example 2

    f(x) = x^2 = 0, x* = 0

    f'(x_k) = 2 x_k     Note: f' is not bounded away from zero.

Newton step:

    2 x_k (x_{k+1} - 0) = (x_k - 0)^2   for x_k != x* = 0

or

    x_{k+1} - x* = (1/2) (x_k - x*)

Convergence is linear.
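A quick numerical check of Example 2 (starting point and step count are illustrative): the error is exactly halved at every Newton step, the signature of linear convergence with rate 1/2:

```python
# f(x) = x^2, root x* = 0; f'(x*) = 0, so f' is not bounded away from zero.
x = 1.0
for k in range(10):
    x = x - (x*x) / (2.0*x)   # Newton step reduces to x/2: error halves
print(x)  # 0.0009765625, i.e. 2**-10: linear convergence, rate 1/2
```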

Secant Method

- If f'(x) is not available or is difficult to evaluate, replace it with a
  finite-difference slope g(x_i) built from the last two iterates:

    x_{i+1} = x_i - f(x_i) / g(x_i)

    g(x_i) = [f(x_i) - f(x_{i-1})] / (x_i - x_{i-1})
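A minimal sketch of the secant method applied to the same linkage equation (the iteration cap and the small-residual guard are illustrative additions to keep the loop safe after convergence):

```python
import math

# Secant method: f'(x_i) is replaced by the slope g(x_i) through the
# last two iterates, so two starting values are needed.
def f(theta):
    return ((5/3) * math.cos(math.radians(40))
            - (5/2) * math.cos(math.radians(theta))
            + 11/6 - math.cos(math.radians(40 - theta)))

x_prev, x = 30.0, 40.0  # two starting values for the 1st iteration
for i in range(20):
    if x == x_prev or abs(f(x)) < 1e-12:
        break  # stop before the slope becomes 0/0
    slope = (f(x) - f(x_prev)) / (x - x_prev)
    x_prev, x = x, x - f(x) / slope
print(round(x, 6))  # approaches 32.015180, as in the comparison table
```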

Newton

 i   θ_i        f(θ_i)       f'(θ_i)
 1   30.000000  -0.039797    0.018786
 2   32.118463   0.0021440   0.020805
 3   32.015423   0.00000502  0.020708
 4   32.015180   0.00000000  0.020707
 5   32.015180   0.00000000

Secant (needs 2 values for the 1st iteration)

 i   θ_i        f(θ_i)        g(θ_i)    θ_{i+1}
 0   30.000000  -0.03979719
 1   40.000000   0.19496290   0.023476  31.695228
 2   31.695228  -0.00657688   0.024268  31.966238
 3   31.966238  -0.00101233   0.020533  32.015542
 4   32.015542   0.00000749   0.020684  32.015180
 5   32.015180  -0.00000001   0.020708  32.015180

System of Nonlinear Equations

- Given continuous f(x,y) and g(x,y),
- find the values x = x* and y = y* such that f(x*, y*) = 0 and g(x*, y*) = 0,
- i.e., the intersections of the f(x,y) = 0 and g(x,y) = 0 contours.

Newton-Raphson Scheme Implemented in FE

- The tangential stiffness matrix is formed and decomposed at each iteration
  within a particular step.
- The rate of convergence is quadratic (relatively high).
- However, forming and decomposing the tangential stiffness at each iteration
  can be prohibitively expensive for large systems.

NR derivation

- Assume initial values x_i and y_i are obtained.
- Taylor expansion:

    f(x*, y*) = f_i + f_x|_i (x* - x_i) + f_y|_i (y* - y_i) + ... = 0
    g(x*, y*) = g_i + g_x|_i (x* - x_i) + g_y|_i (y* - y_i) + ... = 0

NR derivation (cont.)

Neglecting the higher derivatives and rearranging:

    f_x|_i Δx_i + f_y|_i Δy_i = -f_i
    g_x|_i Δx_i + g_y|_i Δy_i = -g_i

Generally, [A]{Δ} = -{f_i}, where

    A = [ df_1/dx_1  df_1/dx_2  ...  df_1/dx_n ]        Δ = [ Δx_1 ]
        [ df_2/dx_1  df_2/dx_2  ...  df_2/dx_n ]            [ Δx_2 ]
        [     .          .      ...      .     ]            [   .  ]
        [ df_n/dx_1  df_n/dx_2  ...  df_n/dx_n ]            [ Δx_n ]

NR derivation (cont.)

After solving for Δx_i, Δy_i:

    x_{i+1} = x_i + Δx_i
    y_{i+1} = y_i + Δy_i

If the partial derivatives cannot be evaluated analytically, approximate them
by finite differences:

    df/dx_i ≈ [f(x_i + δx_i) - f(x_i)] / δx_i

Example: Solve the following two equations by Newton's Method, starting from
θ_1 = 30 and θ_2 = 0:

    f(θ_1, θ_2) = 6 cos θ_1 + 8 cos θ_2 - 13.064178 = 0
    g(θ_1, θ_2) = 6 sin θ_1 + 8 sin θ_2 - 2.571150 = 0

The derivatives are:

    df/dθ_1 = -6 sin θ_1      df/dθ_2 = -8 sin θ_2
    dg/dθ_1 =  6 cos θ_1      dg/dθ_2 =  8 cos θ_2
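The finite-difference formula can be sketched as a small helper and compared against one of the analytic partials above (the helper name `partial` and the step size h are illustrative; since θ is in degrees, the analytic partial carries a π/180 factor):

```python
import math

# Forward-difference approximation of a partial derivative:
# df/dx_i ~ [f(x_i + dx_i) - f(x_i)] / dx_i
def partial(f, x, i, h=1e-7):
    xp = list(x)
    xp[i] += h
    return (f(xp) - f(x)) / h

f = lambda t: (6*math.cos(math.radians(t[0]))
               + 8*math.cos(math.radians(t[1])) - 13.064178)

# Compare with the analytic partial df/dtheta_1 at theta_1 = 30, theta_2 = 0:
approx = partial(f, [30.0, 0.0], 0)
exact = -6 * math.sin(math.radians(30)) * math.pi / 180  # -0.052360 per degree
print(round(approx, 5), round(exact, 5))
```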

Example: Solving the two equations

Evaluating the functions and derivatives at θ_1 = 30 and θ_2 = 0 (note the
π/180 radian conversion in the derivatives):

    f(30, 0) = 6 cos 30 + 8 cos 0 - 13.064178 = 0.131975
    g(30, 0) = 6 sin 30 + 8 sin 0 - 2.571150 = 0.428850

    f_θ1(θ_1 = 30) = -6 sin 30 (π/180) = -0.052360
    f_θ2(θ_2 = 0)  = -8 sin 0  (π/180) =  0
    g_θ1(θ_1 = 30) =  6 cos 30 (π/180) =  0.090690
    g_θ2(θ_2 = 0)  =  8 cos 0  (π/180) =  0.139626

Substituting in the difference formulas:

    -0.052360 Δθ_1 + 0.0      Δθ_2 = -0.131975
     0.090690 Δθ_1 + 0.139626 Δθ_2 = -0.428850

Solving gives Δθ_1 = 2.52053 and Δθ_2 = -4.708541, so the second iteration
for the angles θ_1 and θ_2 is found to be 32.52053 and -4.708541.

Repeat the above steps. The following table presents the solution steps:

 i   θ_1        θ_2        f(θ_1, θ_2)   g(θ_1, θ_2)   Δθ_1       Δθ_2
 1   30.000000   0.000000   0.131975     0.428850      2.520530  -4.708541
 2   32.520530  -4.708541  -0.319833E-1  -0.223639E-2  -0.500219   0.333480
 3   32.020311  -4.375061  -0.328234E-3  -0.111507E-3  -0.005130   0.004073
 4   32.015181  -4.370988  -0.405454E-7  -0.112109E-7  -0.000001   0.000000
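The worked example above can be sketched end-to-end; this version solves the 2x2 linear system by Cramer's rule to avoid any dependencies (variable names and the 6-iteration count are illustrative):

```python
import math

# Newton's method for the two-equation example, angles in degrees.
D = math.pi / 180  # per-degree factor for the derivatives

def F(t1, t2):
    f = 6*math.cos(t1*D) + 8*math.cos(t2*D) - 13.064178
    g = 6*math.sin(t1*D) + 8*math.sin(t2*D) - 2.571150
    return f, g

t1, t2 = 30.0, 0.0  # initial guess
for i in range(6):
    f, g = F(t1, t2)
    # Jacobian entries (partials with respect to degrees)
    a11, a12 = -6*math.sin(t1*D)*D, -8*math.sin(t2*D)*D
    a21, a22 = 6*math.cos(t1*D)*D, 8*math.cos(t2*D)*D
    det = a11*a22 - a12*a21
    d1 = (-f*a22 + g*a12) / det   # Cramer's rule for A d = -(f, g)
    d2 = (-g*a11 + f*a21) / det
    t1, t2 = t1 + d1, t2 + d2
print(round(t1, 5), round(t2, 5))  # near (32.01518, -4.37099), as in the table
```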
