Taylor Polynomial: P_N(x) = Σ_{k=0}^{N} f^(k)(x_0)/k! · (x − x_0)^k = f(x_0) + f'(x_0)(x − x_0)/1! + f''(x_0)(x − x_0)²/2! + ...
Taylor Remainder (truncation error): R_N(x) = f^(N+1)(ξ)(x − x_0)^(N+1)/(N + 1)! where ξ ∈ [x_0, x]
Error Growth, two types:
Linear: ε_n ~ k·n·ε_0
Exponential: ε_n ~ k^n·ε_0
Mean Value Theorem: ∃ c ∈ (a, b) such that f'(c) = (f(b) − f(a))/(b − a) (average secant slope).
Bisection Method: Bracket root with p_0, p_1 then cut the interval in half and choose the correct half.
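A minimal Python sketch of the bisection loop (function name and tolerances are illustrative):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Halve the bracketing interval, keeping the half where f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("root not bracketed")
    while b - a > tol and max_iter > 0:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c        # sign change in the left half
        else:
            a = c        # sign change in the right half
        max_iter -= 1
    return (a + b) / 2

root = bisect(lambda x: x ** 2 - 2, 0.0, 2.0)   # approximates sqrt(2)
```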
FPI Convergence: |g'(c)| = |g(p_{n−1}) − g(p)| / |p_{n−1} − p| = ε_n/ε_{n−1} ⇒ ε_n = |g'(c)|·ε_{n−1} where c ∈ [p_{n−1}, p]
(i) If |g'(x)| < 1 for all x ∈ [a, b], converges for any p_0 ∈ [a, b]
(ii) If |g'(p)| < 1, converges for some p_0 ∈ [a, b]
Identify Convergence of FP method: Taylor expand g(p_n) about p in ε_{n+1} = |g(p_n) − g(p)|:
ε_{n+1} = g'(p)·ε_n + g''(p)·ε_n²/2! + g'''(p)·ε_n³/3! + ... and take the term with the lowest order derivative of g at p that is ≠ 0, discard the higher order derivatives (small as n → ∞)
Convergence Rates: lim_{n→∞} |ε_{n+1}|/|ε_n|^α = λ where {α: order of convergence, λ: asymptotic error constant}
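The fixed-point iteration above can be sketched in Python (names and tolerances are illustrative):

```python
import math

def fixed_point(g, p0, tol=1e-12, max_iter=500):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree to tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# |g'(x)| = |sin x| < 1 near the fixed point of g = cos, so FPI converges.
p = fixed_point(math.cos, 1.0)
```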
Newton's Method: Derived by letting g(x) = x − φ(x)·f(x) and setting g'(p) = 0:
p_{n+1} = p_n − f(p_n)/f'(p_n), guaranteed quadratic convergence for some p_0 ∈ [a, b] provided f'(p) ≠ 0. If f'(p) = 0 (root of multiplicity m), then convergence is linear because g'(p) = (m − 1)/m ≠ 0.
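A minimal sketch of the Newton update (names and tolerances are illustrative):

```python
def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """p_{n+1} = p_n - f(p_n)/f'(p_n); quadratic convergence when f'(p) != 0."""
    p = p0
    for _ in range(max_iter):
        step = f(p) / fprime(p)
        p -= step
        if abs(step) < tol:
            break
    return p

r = newton(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.0)   # sqrt(2)
```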
Modified Newton's Method: Let u(x) = f(x)/f'(x) where f(x) has multiplicity m at x = p and u'(p) ≠ 0. Apply Newton's Method to u(x), thus p_{n+1} = p_n − u(p_n)/u'(p_n)
Secant Method: Newton's Method with approximate derivative: p_{n+1} = p_n − f(p_n)(p_n − p_{n−1})/(f(p_n) − f(p_{n−1}))
Requires two starting values, doesn't need to bracket the root; super-linear convergence, α = (√5 + 1)/2 ≈ 1.618.
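A minimal sketch of the secant update (names and tolerances are illustrative):

```python
def secant(f, p0, p1, tol=1e-12, max_iter=100):
    """Newton step with f' replaced by the secant slope of the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(p0), f(p1)
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    return p1

r = secant(lambda x: x ** 2 - 2, 1.0, 2.0)   # two starting values, no bracket needed
```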
Aitken's Δ² Method: p̂_n = p_n − (Δp_n)²/(Δ²p_n) = p_n − (p_{n+1} − p_n)²/(p_{n+2} − 2p_{n+1} + p_n) where Δp_n = p_{n+1} − p_n.
Compute the Jacobian J = [∂f_i/∂x_j]:
J = [ ∂f_1/∂x_1  ⋯  ∂f_1/∂x_n ]
    [     ⋮      ⋱      ⋮     ]
    [ ∂f_n/∂x_1  ⋯  ∂f_n/∂x_n ]
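Aitken's Δ² acceleration can be sketched in Python (names are illustrative); it is applied here to the linearly convergent cos iteration:

```python
import math

def aitken(seq):
    """Aitken's delta-squared acceleration of a linearly convergent sequence."""
    out = []
    for n in range(len(seq) - 2):
        dp = seq[n + 1] - seq[n]                    # forward difference
        d2p = seq[n + 2] - 2 * seq[n + 1] + seq[n]  # second difference
        out.append(seq[n] - dp * dp / d2p)
    return out

# Linearly convergent FPI sequence p_{n+1} = cos(p_n):
p, seq = 1.0, []
for _ in range(8):
    seq.append(p)
    p = math.cos(p)
acc = aitken(seq)   # accelerated iterates approach the fixed point faster
```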
Kronecker Delta: δ_{ij} = { 1 if i = j; 0 if i ≠ j }
Polynomial Interpolation: P_N(x) = Σ_{j=0}^{N} f(x_j)·b_j(x) where b_j(x_i) = δ_{ij}
Lagrange Polynomial: P_N(x) = Σ_{j=0}^{N} f(x_j)·L_{N,j}(x) where L_{N,j}(x) = Π_{k=0, k≠j}^{N} (x − x_k)/(x_j − x_k) satisfies Kron. Delta
Easy to use, but inefficient: weighted sum of N+1 Nth order polynomials.
Error term: f(x) − P_N(x) = f^(N+1)(ξ)/(N + 1)! · Π_{i=0}^{N} (x − x_i)
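A minimal sketch of evaluating the Lagrange form directly (names are illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial P_N at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        L = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                L *= (x - xk) / (xj - xk)   # basis L_{N,j}(x)
        total += yj * L
    return total

# Quadratic through (0,1), (1,3), (2,2); reproduces the data at the nodes
# (Kronecker-delta property).
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
```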
Chebyshev Optimal Points: A minimal maximum error exists on the interval [−1, 1] if we use the roots of the (N+1)th order Chebyshev polynomial for {x_i}: x̃_i = cos((2i + 1)π / (2(N + 1))), i = 0...N, which can be mapped to the interval [a, b] by x_i = (a + b)/2 + ((b − a)/2)·x̃_i
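A short sketch generating the mapped nodes, assuming the standard Chebyshev-root formula above (function name is illustrative):

```python
import math

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Roots of the degree-(n+1) Chebyshev polynomial, mapped from [-1,1] to [a,b]."""
    nodes = []
    for i in range(n + 1):
        xi = math.cos((2 * i + 1) * math.pi / (2 * (n + 1)))  # root on [-1, 1]
        nodes.append((a + b) / 2 + (b - a) / 2 * xi)          # affine map to [a, b]
    return nodes
```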
Neville's Method: Iteratively find the interpolating polynomial by evaluating lower order polynomials:
p_{0,0} = f(x_0)    p_{1,1} = f(x_1)    p_{2,2} = f(x_2)
p_{0,1} = [p_{1,1}(x − x_0) − p_{0,0}(x − x_1)] / (x_1 − x_0)
p_{1,2} = [p_{2,2}(x − x_1) − p_{1,1}(x − x_2)] / (x_2 − x_1)
p_{0,2} = [p_{1,2}(x − x_0) − p_{0,1}(x − x_2)] / (x_2 − x_0)
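The tableau above can be computed in place (names are illustrative):

```python
def neville(xs, ys, x):
    """Neville's tableau evaluated in place: p[j] holds p_{j,j+k} after stage k."""
    p = list(ys)                              # p_{j,j} = f(x_j)
    n = len(xs)
    for k in range(1, n):
        for j in range(n - k):
            p[j] = (p[j + 1] * (x - xs[j]) - p[j] * (x - xs[j + k])) / (xs[j + k] - xs[j])
    return p[0]

# Quadratic through (0,1), (1,3), (2,2):
val = neville([0.0, 1.0, 2.0], [1.0, 3.0, 2.0], 0.5)
```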
Hermite Polynomial: Matches f(x) and f'(x) for {x_i}, i = 0...N. Solution is of order 2N + 1:
P_{2N+1}(x) = Σ_{j=0}^{N} [f(x_j)·H_{N,j}(x) + f'(x_j)·Ĥ_{N,j}(x)] where { H_{N,j}(x_i) = δ_{ij}, H'_{N,j}(x_i) = 0, Ĥ_{N,j}(x_i) = 0, Ĥ'_{N,j}(x_i) = δ_{ij} }
Cubic Splines: piecewise cubics s_j(x) on [x_j, x_{j+1}] satisfying:
s_j(x_j) = f(x_j) and s_j(x_{j+1}) = f(x_{j+1}),   j = 0,1,...,N−1  (so s_{N−1}(x_N) = f(x_N))
s'_j(x_{j+1}) = s'_{j+1}(x_{j+1}),   j = 0,1,...,N−2
s''_j(x_{j+1}) = s''_{j+1}(x_{j+1}),   j = 0,1,...,N−2
Natural boundary conditions: s''_0(x_0) = 0, s''_{N−1}(x_N) = 0
First Forward Difference: f'_j = (f_{j+1} − f_j)/h + O(h) | O(h) = −(h/2)·f''(ξ)
Second Order Forward Difference: f'_j = (−f_{j+2} + 4f_{j+1} − 3f_j)/(2h) + O(h²) | O(h²) = (h²/3)·f'''(ξ)
First Centered Difference: f'_j = (f_{j+1} − f_{j−1})/(2h) + O(h²) | O(h²) = −(h²/6)·f'''(ξ)
Second Forward Difference Approximation: f''_j = (f_{j+2} − 2f_{j+1} + f_j)/h² + O(h) | O(h) = −h·f'''(ξ)
Second Centered Difference: f''_j = (f_{j+1} − 2f_j + f_{j−1})/h² + O(h²) | O(h²) = −(h²/12)·f^(iv)(ξ)
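The difference quotients above, compared at a point where the exact derivatives are known (values chosen for illustration):

```python
import math

# Compare truncation orders at x = 1 for f = sin (f' = cos, f'' = -sin).
f, x, h = math.sin, 1.0, 1e-3
exact_d1, exact_d2 = math.cos(x), -math.sin(x)

fwd   = (f(x + h) - f(x)) / h                        # first forward, O(h)
cent  = (f(x + h) - f(x - h)) / (2 * h)              # first centered, O(h^2)
cent2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2    # second centered, O(h^2)
```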
Truncation and Round-Off Error: ε_total = ε_truncation + ε_roundoff = O(h²) + ε/h = (M·h²)/6 + ε/h (for first centered difference, with round-off |ε_{j±1}| ≤ ε in the stored f values and |f'''| ≤ M). lim_{h→0} ε_total = ∞ and lim_{h→∞} ε_total = ∞. Find h_optimal by setting dε_total/dh = 0.
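A short sketch of the balance above; the values of ε and M are illustrative assumptions:

```python
# For the first centered difference, total error ~ eps/h + M*h**2/6.
# Setting the derivative -eps/h**2 + M*h/3 to zero gives h_opt = (3*eps/M)**(1/3).
eps = 1e-16   # assumed round-off level in stored f values (illustrative)
M = 1.0       # assumed bound on |f'''| (illustrative)
h_opt = (3 * eps / M) ** (1 / 3)

total = lambda h: eps / h + M * h ** 2 / 6
```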
Least Squares: Minimize E = Σ_{i=1}^{m} [f(x_i) − p̃(x_i)]² for polynomial p̃(x) = Σ_{j=0}^{N} a_j·x^j by setting ∂E/∂a_0 = ∂E/∂a_1 = ... = ∂E/∂a_N = 0. N + 1 normal eqs:
Σ_{k=0}^{N} a_k Σ_{i=1}^{m} x_i^(j+k) = Σ_{i=1}^{m} y_i·x_i^j,   j = 0...N, matrix form:
[ m        Σx_i     ⋯  Σx_i^N  ] [a_0]   [ Σy_i       ]
[ Σx_i     Σx_i²    ⋯     ⋮    ] [a_1] = [ Σy_i·x_i   ]
[ ⋮                 ⋱     ⋮    ] [ ⋮ ]   [ ⋮          ]
[ Σx_i^N   ⋯        ⋯  Σx_i^2N ] [a_N]   [ Σy_i·x_i^N ]
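A hedged sketch that builds and solves the normal equations with naive elimination (fine for small N; the normal equations become ill-conditioned for large N; names are illustrative):

```python
def lsq_poly(xs, ys, N):
    """Fit a degree-N polynomial by solving the (N+1)x(N+1) normal equations."""
    A = [[sum(x ** (j + k) for x in xs) for k in range(N + 1)] for j in range(N + 1)]
    rhs = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(N + 1)]
    # Naive Gaussian elimination (no pivoting; illustrative only).
    for col in range(N + 1):
        piv = A[col][col]
        for r in range(col + 1, N + 1):
            fct = A[r][col] / piv
            A[r] = [a - fct * b for a, b in zip(A[r], A[col])]
            rhs[r] -= fct * rhs[col]
    coeffs = [0.0] * (N + 1)
    for r in range(N, -1, -1):
        coeffs[r] = (rhs[r] - sum(A[r][c] * coeffs[c]
                                  for c in range(r + 1, N + 1))) / A[r][r]
    return coeffs  # [a_0, ..., a_N]
```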
Non-polynomial Curve Fitting (ex): Linearize data y = b·e^(ax) → ln y = ln b + ax and create a linear fit, or solve the normal equations using a nonlinear system-of-equations root-finding method.
Numerical Integration: ∫_a^b f(x)dx = h·f_a + (h²/2!)·f'_a + (h³/3!)·f''_a + ... = Σ_{i=0}^{N} a_i·f_i + (error term). Look at the order of polynomial that can be integrated exactly to determine the order of error: P_N^(N+1)(x) = 0
Newton-Cotes Closed Formulas: h = (b − a)/N where N + 1 points are used
Trapezoidal Rule: ∫_a^b f(x)dx = h·f_a + (h²/2!)·f'_a + (h³/3!)·f''_a + ... = (h/2)(f_a + f_b) − (h³/12)·f''(ξ) | O(h³)
Simpson's Rule: Use centered difference approximation with first derivative eliminated: I(b) = (h/3)(f_0 + 4f_1 + f_2) − (h⁵/90)·f^(iv)(ξ)
Newton-Cotes Open Formulas: h = (b − a)/(N + 2) where N + 1 points are used
Midpoint Rule: I(b) = 2h·f_0 + (2h³/3!)·f''(ξ); can integrate linear poly. exactly
Other Rule: I(b) = (3h/2)[f_0 + f_1] + (3h³/4)·f''(ξ); can only integrate linear poly. exactly
2
P canbeapproximated exactly
N is even
N +1
Newton-Cotes Rule: !
N is odd
PN canbeapproximated exactly
Composite Trapezoidal error: summing the single-panel errors, Σ_{j=1}^{N} (h³/12)·f''(ξ_j) = (h³/12)·N·f''(ξ) = ((b − a)/12)·h²·f''(ξ) = O(h²); loses a factor of h
Composite Simpson's Rule: I(b) = (h/3)[f_0 + 2·Σ_{j=1}^{N/2−1} f_{2j} + 4·Σ_{j=1}^{N/2} f_{2j−1} + f_N] − ((b − a)/180)·h⁴·f^(iv)(ξ) | O(h⁴)
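A minimal sketch of the composite Simpson sum (names are illustrative):

```python
import math

def composite_simpson(f, a, b, N):
    """Composite Simpson's rule over N panels of width h; N must be even."""
    if N % 2:
        raise ValueError("N must be even")
    h = (b - a) / N
    s = f(a) + f(b)
    for j in range(1, N):
        s += (4 if j % 2 else 2) * f(a + j * h)   # weights 4,2,4,...,4
    return h * s / 3

approx = composite_simpson(math.sin, 0.0, math.pi, 64)   # exact value is 2
```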
Quadrature weights: Σ_{i=0}^{N} a_i = b − a, since integrating f ≡ 1 exactly gives ∫_a^b dx = b − a
Romberg Integration: Iteratively compute quadrature, each time combining quadratures of different panel sizes to cancel the leading error term:
R_{m,j} = (4^j·R_{m,j−1} − R_{m−1,j−1}) / (4^j − 1),   error: E ~ O(h_m^(2j+2))
h:    R_{0,0} = R_N
h/2:  R_{1,0} = R_{2N}   R_{1,1} = (4·R_{1,0} − R_{0,0})/(4 − 1)
h/4:  R_{2,0} = R_{4N}   R_{2,1} = (4·R_{2,0} − R_{1,0})/(4 − 1)   R_{2,2} = (4²·R_{2,1} − R_{1,1})/(4² − 1)
h/8:  R_{3,0} = R_{8N}   R_{3,1} = (4·R_{3,0} − R_{2,0})/(4 − 1)   R_{3,2} = (4²·R_{3,1} − R_{2,1})/(4² − 1)   R_{3,3} = (4³·R_{3,2} − R_{2,2})/(4³ − 1)
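The Romberg table can be built by halving the trapezoid step and extrapolating (names are illustrative):

```python
import math

def romberg(f, a, b, levels):
    """Trapezoid at h, h/2, h/4, ... plus Richardson extrapolation columns."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for m in range(1, levels):
        h /= 2
        # Halved trapezoid reuses the previous sum plus the new midpoints.
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (m - 1) + 1))
        R[m][0] = R[m - 1][0] / 2 + h * new
        for j in range(1, m + 1):
            R[m][j] = (4 ** j * R[m][j - 1] - R[m - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

val = romberg(math.sin, 0.0, math.pi, 5)
```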
Adaptive Quadrature: Continually subdivide an interval in half, computing a quadrature rule until the criterion is fulfilled (ex. Simpson's Rule):
|S_{i,j} − S_{i,j+1} − S_{i+1,j+1}| < (1/2^j)·M·δ where δ is the tolerance and M is specific to the quadrature rule. i is the section index and j is the refinement level.
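A recursive sketch of the scheme for Simpson's rule, assuming the standard Simpson acceptance constant M = 15 (names are illustrative):

```python
import math

def adaptive_simpson(f, a, b, tol):
    """Recursively halve [a, b] until the two-half Simpson estimate agrees."""
    def S(lo, hi):
        mid = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

    def rec(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = S(lo, mid), S(mid, hi)
        if abs(left + right - whole) < 15 * tol:   # M = 15 for Simpson's rule
            return left + right
        # Each half inherits half the tolerance.
        return rec(lo, mid, left, tol / 2) + rec(mid, hi, right, tol / 2)

    return rec(a, b, S(a, b), tol)

val = adaptive_simpson(math.sin, 0.0, math.pi, 1e-8)
```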
Gaussian Quadrature: Choose both points {x_i} and weights {a_i} to integrate exactly the max order polynomial with min # of points. Can integrate a (2N − 1) order polynomial exactly with N points:
∫_a^b f(x)dx = ((b − a)/2)·Σ_{i=1}^{N} a_i·f(x_i) where x_i = (a + b)/2 + ((b − a)/2)·x̃_i, with x̃_i the Gauss nodes on [−1, 1] and a_i the Gaussian weights
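A minimal sketch of the two-point rule; nodes ±1/√3 and weights 1 are the standard two-point Gauss-Legendre values (function name is illustrative):

```python
import math

def gauss_legendre_2(f, a, b):
    """Two-point Gauss-Legendre: exact for polynomials up to order 2N-1 = 3."""
    nodes = (-1 / math.sqrt(3), 1 / math.sqrt(3))   # roots of P_2 on [-1, 1]
    weights = (1.0, 1.0)
    mid, half = (a + b) / 2, (b - a) / 2            # affine map to [a, b]
    return half * sum(w * f(mid + half * x) for w, x in zip(weights, nodes))
```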