
Taylor Polynomial:
P_N(x) = Σ_{k=0}^{N} f^(k)(x_0)/k! · (x − x_0)^k = f(x_0) + f'(x_0)(x − x_0)/1! + f''(x_0)(x − x_0)²/2! + ...
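The truncated series can be evaluated directly from a list of derivatives at x_0; a minimal sketch (the function name and the e^x example are illustrative, not from the notes):

```python
import math

def taylor_poly(derivs, x0, x):
    """Evaluate P_N(x) = sum_k f^(k)(x0)/k! * (x - x0)^k from derivatives [f(x0), f'(x0), ...]."""
    return sum(d / math.factorial(k) * (x - x0) ** k for k, d in enumerate(derivs))

# e^x about x0 = 0: every derivative is 1, so pass N+1 ones (here N = 5).
approx = taylor_poly([1.0] * 6, 0.0, 0.5)
```

With N = 5 the remainder formula predicts an error of order (0.5)^6/6! ≈ 2·10^−4, and the computed value is within that of e^0.5.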

Taylor Remainder (truncation error): R_N(x) = f^(N+1)(ξ)·(x − x_0)^(N+1)/(N + 1)!, where ξ ∈ [x_0, x].
Error Growth, two types:
Linear: ε_n ≈ k·n·ε_0
Exponential: ε_n ≈ k^n·ε_0
(To classify a recursive equation as linear or exponential, check whether the error is multiplied by a constant factor each step.)


Intermediate Value Theorem: If f ∈ C[a,b] and k is between f(a) and f(b), then there exists a number c ∈ (a,b) for which f(c) = k.
Mean Value Theorem: If f ∈ C[a,b] and f is differentiable on (a,b), then there exists a number c ∈ (a,b) such that f'(c) = (f(b) − f(a))/(b − a) (the average secant slope).
Bisection Method: Bracket the root with p_0, p_1, then cut the interval in half and choose the half containing the sign change. Continue until within the accuracy limit.
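The halving loop can be sketched as follows (the function name and the x² − 2 example are illustrative, not from the notes):

```python
def bisect(f, a, b, tol=1e-10):
    """Halve [a, b], keeping the half that still brackets the sign change."""
    fa = f(a)
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m            # sign change is in the left half
        else:
            a, fa = m, fm    # sign change is in the right half
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)   # converges to sqrt(2)
```

Each pass halves the bracket, so the error after n steps is (b − a)/2^(n+1): guaranteed but only linear convergence.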


Fixed Point Iteration: p is a fixed point of g(x) if g(p) = p. A fixed point p is guaranteed to exist on [a,b] if:
(i) g(x) is continuous on [a,b]
(ii) g(x) maps [a,b] into [a,b]
Conditions (i) + (ii) together are sufficient but not necessary for existence (they do not give uniqueness).
(iii) |g'(x)| < 1 for all x ∈ [a,b] guarantees uniqueness.

FPI Convergence: g'(c) = (g(p_{n−1}) − g(p))/(p_{n−1} − p) = ε_n/ε_{n−1}  →  ε_n = |g'(c)|·ε_{n−1}, where c ∈ [p_{n−1}, p].
(i) If |g'(x)| < 1 for all x ∈ [a,b], FPI converges for any p_0 ∈ [a,b].
(ii) If |g'(p)| < 1, FPI converges for some p_0 ∈ [a,b] (close enough to p).
Identify convergence of the FP method: Taylor expand g(p_n) about p in ε_{n+1} = |g(p_n) − g(p)|:
ε_{n+1} = g'(p)ε_n + g''(p)ε_n²/2! + g'''(p)ε_n³/3! + ..., take the term with the lowest-order derivative of g at p that is ≠ 0 and discard the higher-order derivatives (small as n → ∞).
Convergence Rates: lim_{n→∞} |ε_{n+1}|/|ε_n|^α = λ, where α is the order of convergence and λ the asymptotic error constant.
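A minimal fixed-point loop; g(x) = cos(x) is a standard illustrative choice (not from the notes) with |g'| < 1 near its fixed point, so case (i) applies:

```python
import math

def fixed_point(g, p0, tol=1e-12, max_iter=200):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree to tol."""
    p = p0
    for _ in range(max_iter):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("did not converge")

# fixed point of cos(x) is ~0.739085; convergence is linear with lambda = |sin(p)| ~ 0.67
p = fixed_point(math.cos, 1.0)
```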
Newton's Method: Derived by letting g(x) = x − φ(x)f(x) and setting g'(p) = 0:
p_{n+1} = p_n − f(p_n)/f'(p_n), with guaranteed quadratic convergence for some p_0 ∈ [a,b] provided f'(p) ≠ 0. If f'(p) = 0 (root of multiplicity m), convergence is linear because g'(p) = (m − 1)/m ≠ 0.
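The iteration above in code; the f(x) = x² − 2 example is illustrative, not from the notes:

```python
def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """p_{n+1} = p_n - f(p_n)/f'(p_n); quadratic when f'(p) != 0."""
    p = p0
    for _ in range(max_iter):
        p_new = p - f(p) / fprime(p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("did not converge")

root = newton(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.0)   # converges to sqrt(2)
```

Quadratic convergence roughly doubles the number of correct digits per step, so a handful of iterations suffices from a reasonable p_0.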
Modified Newton's Method: Let u(x) = f(x)/f'(x), where f(x) has a root of multiplicity m at x = p; then u'(p) ≠ 0. Apply Newton's Method to u(x), thus p_{n+1} = p_n − u(p_n)/u'(p_n).
Secant Method: Newton's Method with an approximate derivative:
p_{n+1} = p_n − f(p_n)(p_n − p_{n−1})/(f(p_n) − f(p_{n−1}))
Requires two starting values, doesn't need to bracket the root; super-linear convergence, α = (√5 + 1)/2 ≈ 1.618.
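The same update in code, carrying the last two iterates (example function is illustrative):

```python
def secant(f, p0, p1, tol=1e-12, max_iter=100):
    """p_{n+1} = p_n - f(p_n)(p_n - p_{n-1}) / (f(p_n) - f(p_{n-1}))."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f(p2)   # shift the two-point window
    raise RuntimeError("did not converge")

root = secant(lambda x: x ** 2 - 2, 1.0, 2.0)   # converges to sqrt(2)
```

Note only one new function evaluation per iteration, versus two (f and f') for Newton.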

Aitken's Δ² Method: p̂_n = p_n − (p_{n+1} − p_n)²/(p_{n+2} − 2p_{n+1} + p_n) = p_n − (Δp_n)²/(Δ²p_n), where Δp_n = p_{n+1} − p_n. Compute p̂_0 from p_0, p_1, p_2; p̂_1 from p_1, p_2, p_3; etc. Faster convergence, λ = |g'(p)|², but still linear (α = 1).


Steffensen's Method: Same formula as Aitken's, but get p̂_0 from p_0, p_1, p_2, then set p_3 = g(p̂_0). Next get p̂_1 from p_3, p_4, p_5, then p_6 = g(p̂_1), and so on. Converges quadratically (α = 2).
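One Aitken step per cycle, restarting the iteration from the accelerated value (a sketch; the cos(x) example is illustrative):

```python
import math

def steffensen(g, p0, tol=1e-12, max_iter=50):
    """Each cycle: form p, g(p), g(g(p)), apply Aitken, restart from the result."""
    p = p0
    for _ in range(max_iter):
        p1 = g(p)
        p2 = g(p1)
        denom = p2 - 2 * p1 + p
        if denom == 0:           # already converged to machine precision
            return p2
        p_new = p - (p1 - p) ** 2 / denom
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("did not converge")

p = steffensen(math.cos, 1.0)
```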
Newton's Method for a System of Non-Linear Equations: x_{n+1} = g(x_n) = x_n − J^{−1}(x_n)·f(x_n), where J is the Jacobian:

J = [ ∂f_1/∂x_1  ⋯  ∂f_1/∂x_n ]
    [     ⋮      ⋱      ⋮     ]
    [ ∂f_n/∂x_1  ⋯  ∂f_n/∂x_n ]
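A sketch for a 2×2 system with a hand-coded Jacobian inverse (the example system x² + y² = 4, x = y is mine, not from the notes):

```python
def newton_system2(f, jac, x, tol=1e-12, max_iter=50):
    """x_{n+1} = x_n - J^{-1}(x_n) f(x_n), with the 2x2 inverse written out."""
    for _ in range(max_iter):
        f1, f2 = f(x)
        (a, b), (c, d) = jac(x)
        det = a * d - b * c
        dx1 = ( d * f1 - b * f2) / det   # first row of J^{-1} f
        dx2 = (-c * f1 + a * f2) / det   # second row of J^{-1} f
        x = (x[0] - dx1, x[1] - dx2)
        if abs(dx1) + abs(dx2) < tol:
            return x
    raise RuntimeError("did not converge")

f = lambda v: (v[0] ** 2 + v[1] ** 2 - 4, v[0] - v[1])
jac = lambda v: ((2 * v[0], 2 * v[1]), (1.0, -1.0))
sol = newton_system2(f, jac, (1.0, 0.5))   # converges to (sqrt(2), sqrt(2))
```

In practice the linear system J·Δx = f is solved by factorization rather than by explicitly inverting J.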

Kronecker Delta: δ_{ij} = 1 if i = j, 0 if i ≠ j.
Lagrange Polynomial: P_N(x) = Σ_{j=0}^{N} f(x_j)·L_{N,j}(x), where L_{N,j}(x) = Π_{k=0, k≠j}^{N} (x − x_k)/(x_j − x_k) satisfies the Kronecker delta: L_{N,j}(x_i) = δ_{ij}.
Polynomial Interpolation: P_N(x) = Σ_j f(x_j)·b_j(x), where b_j(x_i) = δ_{ij}.
Easy to use, but inefficient: a weighted sum of N + 1 polynomials of order N.
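The double product/sum translates directly to two nested loops (the x² data set is illustrative, not from the notes):

```python
def lagrange_eval(xs, ys, x):
    """P_N(x) = sum_j f(x_j) * prod_{k != j} (x - x_k)/(x_j - x_k)."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        Lj = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                Lj *= (x - xk) / (xj - xk)
        total += yj * Lj
    return total

# three points of f(x) = x^2; a quadratic interpolant reproduces it exactly
val = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)   # 2.25
```

The O(N²) work per evaluation is the inefficiency the note refers to; barycentric forms reduce it.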

Error term: f(x) − P_N(x) = f^(N+1)(ξ)/(N + 1)! · Π_{i=0}^{N} (x − x_i)
Chebyshev Optimal Points: A minimal maximum error exists on the interval [−1,1] if we use the roots of the order-(N+1) Chebyshev polynomial for {x_i}: x_i = cos((2i + 1)π/(2(N + 1))), i = 0...N, which can be mapped to the interval [a,b] by x̃_i = (a + b)/2 + ((b − a)/2)·x_i.
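The node formula in code; here n is the number of nodes, i.e. the roots of T_n (function name is illustrative):

```python
import math

def chebyshev_nodes(n, a, b):
    """Roots of T_n, cos((2i+1)*pi/(2n)), mapped from [-1, 1] onto [a, b]."""
    return [(a + b) / 2 + (b - a) / 2 * math.cos((2 * i + 1) * math.pi / (2 * n))
            for i in range(n)]

nodes = chebyshev_nodes(4, 0.0, 1.0)   # four nodes, clustered toward the endpoints
```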

Neville's Method: Iteratively find the interpolating polynomial by combining lower-order polynomials:
p_{i,i}(x) = f(x_i)
p_{i,j}(x) = [p_{i+1,j}(x)(x − x_i) − p_{i,j−1}(x)(x − x_j)]/(x_j − x_i)
The final solution is p_{0,N}(x). Ex:
p_{0,0} = f(x_0)
p_{1,1} = f(x_1)
p_{2,2} = f(x_2)
p_{0,1} = [p_{1,1}(x − x_0) − p_{0,0}(x − x_1)]/(x_1 − x_0)
p_{1,2} = [p_{2,2}(x − x_1) − p_{1,1}(x − x_2)]/(x_2 − x_1)
p_{0,2} = [p_{1,2}(x − x_0) − p_{0,1}(x − x_2)]/(x_2 − x_0)
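The table can be built in place, overwriting one array column by column (the x² data set is illustrative):

```python
def neville(xs, ys, x):
    """Build the p_{i,j} table in place; p[0] ends up as p_{0,N}(x)."""
    n = len(xs)
    p = list(ys)   # column j = 0: p_{i,i} = f(x_i)
    for j in range(1, n):
        for i in range(n - j):
            # p_{i,i+j} = [(x - x_i) p_{i+1,i+j} - (x - x_{i+j}) p_{i,i+j-1}] / (x_{i+j} - x_i)
            p[i] = ((x - xs[i]) * p[i + 1] - (x - xs[i + j]) * p[i]) / (xs[i + j] - xs[i])
    return p[0]

val = neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)   # 2.25, same as Lagrange form
```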

Hermite Polynomial: Matches f(x) and f'(x) at {x_i}, i = 0...N. The solution has order 2N + 1:
P_{2N+1}(x) = Σ_{j=0}^{N} f(x_j)·H_{2N+1,j}(x) + Σ_{j=0}^{N} f'(x_j)·Ĥ_{2N+1,j}(x), where:
H_{2N+1,j}(x) = [1 − 2(x − x_j)L'_{N,j}(x_j)]·L²_{N,j}(x), satisfying { H_{2N+1,j}(x_i) = δ_{ij}, H'_{2N+1,j}(x_i) = 0 }
Ĥ_{2N+1,j}(x) = (x − x_j)·L²_{N,j}(x), satisfying { Ĥ_{2N+1,j}(x_i) = 0, Ĥ'_{2N+1,j}(x_i) = δ_{ij} }
Error Term: f^(2N+2)(ξ)/(2N + 2)! · Π_{j=0}^{N} (x − x_j)²


Cubic Spline: Make the 1st and 2nd derivatives continuous across the interval. Conditions:
1. s_j(x_j) = f(x_j),                       j = 0, 1, ..., N − 1
2. s_j(x_{j+1}) = f(x_{j+1}),               j = 0, 1, ..., N − 1
3. s'_j(x_{j+1}) = s'_{j+1}(x_{j+1}),       j = 0, 1, ..., N − 2
4. s''_j(x_{j+1}) = s''_{j+1}(x_{j+1}),     j = 0, 1, ..., N − 2
Clamped Cubic Spline (most accurate): s'_0(x_0) = f'(x_0), s'_{N−1}(x_N) = f'(x_N)
Natural Cubic Spline (most common): s''_0(x_0) = 0, s''_{N−1}(x_N) = 0

First Forward Difference Approximation (2 pts): f'_j = (f_{j+1} − f_j)/h + O(h), O(h) = −(h/2)f''(ξ)
First Forward Difference Approximation (3 pts): f'_j = (−f_{j+2} + 4f_{j+1} − 3f_j)/(2h) + O(h²), O(h²) = (h²/3)f'''(ξ)
First Centered Difference Approximation: f'_j = (f_{j+1} − f_{j−1})/(2h) + O(h²), O(h²) = −(h²/6)f'''(ξ)
Second Forward Difference Approximation: f''_j = (f_{j+2} − 2f_{j+1} + f_j)/h² + O(h), O(h) = −h·f'''(ξ)
Second Centered Difference Approximation: f''_j = (f_{j+1} − 2f_j + f_{j−1})/h² + O(h²), O(h²) = −(h²/12)f^(iv)(ξ)
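The stencils above applied to f(x) = sin(x) at x = 1 (an illustrative test function, not from the notes):

```python
import math

f, h, x = math.sin, 1e-5, 1.0

forward  = (f(x + h) - f(x)) / h                     # O(h) estimate of f'(x)
centered = (f(x + h) - f(x - h)) / (2 * h)           # O(h^2) estimate of f'(x)
second   = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # O(h^2) estimate of f''(x)
```

The exact values are f'(1) = cos(1) and f''(1) = −sin(1); the centered first difference is several orders of magnitude more accurate than the forward one at the same h.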

Truncation and Round-Off Error: ε_total = ε_truncation + ε_roundoff = Mh²/6 + (ε_{j+1} + ε_{j−1})/(2h) ≤ Mh²/6 + ε/h (for the first centered difference, with M bounding |f'''| and ε the round-off in each f_j). lim_{h→0} ε_total = ∞ and lim_{h→∞} ε_total = ∞. Find h_optimal by setting dε_total/dh = 0.
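Carrying out that minimization for the centered-difference bound (the numeric values of ε and M below are illustrative assumptions, not from the notes):

```python
# epsilon_total(h) ~ M h^2 / 6 + eps / h
# d/dh: M h / 3 - eps / h^2 = 0  ->  h_opt = (3 eps / M) ** (1/3)
eps, M = 1e-16, 1.0          # assumed round-off bound and |f'''| bound
h_opt = (3 * eps / M) ** (1 / 3)   # ~ 6.7e-6 for these values
```

This is why shrinking h below roughly the cube root of machine epsilon makes a centered difference worse, not better.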

Polynomial Curve Fitting: Minimize the error E = Σ_{j=0}^{m} [f(x_j) − p̃(x_j)]² for the polynomial p̃(x) = p_N(x) = a_N x^N + a_{N−1}x^{N−1} + ... + a_1 x + a_0 by setting ∂E/∂a_0 = ∂E/∂a_1 = ... = ∂E/∂a_N = 0. This gives N + 1 normal equations:
Σ_{k=0}^{N} a_k Σ_{i=1}^{m} x_i^{j+k} = Σ_{i=1}^{m} y_i x_i^j,  j = 0...N, or in matrix form:

[ m          Σx_i         ⋯  Σx_i^N     ] [ a_0 ]   [ Σy_i       ]
[ Σx_i       Σx_i²        ⋯  Σx_i^{N+1} ] [ a_1 ] = [ Σy_i·x_i   ]
[ ⋮           ⋮           ⋱   ⋮         ] [  ⋮  ]   [  ⋮         ]
[ Σx_i^N     Σx_i^{N+1}   ⋯  Σx_i^{2N}  ] [ a_N ]   [ Σy_i·x_i^N ]
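For N = 1 the normal equations are a 2×2 system that can be solved by hand (the data set below is illustrative):

```python
def linear_fit(xs, ys):
    """Solve the N = 1 normal equations [[m, Sx], [Sx, Sxx]] [a0, a1] = [Sy, Sxy]."""
    m = len(xs)
    sx  = sum(xs)
    sxx = sum(x * x for x in xs)
    sy  = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = m * sxx - sx * sx
    a0 = (sxx * sy - sx * sxy) / det   # intercept
    a1 = (m * sxy - sx * sy) / det     # slope
    return a0, a1

a0, a1 = linear_fit([0, 1, 2, 3], [1, 3, 5, 7])   # data lie exactly on y = 1 + 2x
```

For larger N the normal-equation matrix becomes ill-conditioned, which is why QR-based least squares is preferred in practice.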

Non-polynomial Curve Fitting (example): Linearize the data, y = b·e^(ax) → ln y = ln b + ax, and create a linear fit; or solve the nonlinear normal equations with a root-finding method for systems of equations.
Numerical Quadrature: I(b) = ∫_a^b f(x)dx = h·f_a + (h²/2!)f'_a + (h³/3!)f''_a + ... ≈ Σ_{i=0}^{N} a_i·f_i + (error term). Look at the order of the polynomial that can be integrated exactly to determine the order of the error: P_N^(N+1)(x) = 0.
Newton-Cotes Closed Formulas: h = (b − a)/N, where N + 1 points are used.

Trapezoid Rule: Use the forward difference approximation to keep two terms:
I(b) = h·f_a + (h²/2!)f'_a + (h³/3!)f''_a = (h/2)(f_a + f_b) − (h³/12)f''(ξ), error O(h³).
Can integrate a linear polynomial exactly.

Simpson's Rule: Use the centered difference approximation, with the first derivative eliminated:
I(b) = I(c) + h·f_c + (h²/2!)f'_c + (h³/3!)f''_c + (h⁴/4!)f'''_c + (h⁵/5!)f_c^(iv)
I(a) = I(c) − h·f_c + (h²/2!)f'_c − (h³/3!)f''_c + (h⁴/4!)f'''_c − (h⁵/5!)f_c^(iv)
Subtracting:
I(b) = 2h·f_c + (2h³/3!)f''_c + (2h⁵/5!)f^(iv)(ξ) = (h/3)[f_a + 4f_c + f_b] − (h⁵/90)f^(iv)(ξ)

Newton-Cotes Open Formulas: h = (b − a)/(N + 2), where N + 1 points are used.
Midpoint Rule: I(b) = 2h·f_0 + (2h³/3!)f''(ξ); can integrate a linear polynomial exactly.
Other Rule: I(b) = (3h/2)[f_0 + f_1] + (3h³/4)f''(ξ); can only integrate a linear polynomial exactly.
Newton-Cotes Rule: N even → P_{N+1} can be integrated exactly; N odd → P_N can be integrated exactly.

Composite Integration: Use low-order Newton-Cotes formulas on subintervals ("panels").


Composite Trapezoid Rule: I(b) = (h/2)[f_0 + 2Σ_{j=1}^{N−1} f_j + f_N] − Σ_{j=1}^{N} (h³/12)f''(ξ_j), and Σ_{j=1}^{N} (h³/12)f''(ξ_j) = N·(h³/12)f''(ξ).
Error becomes: E_truncation = N·(h³/12)f''(ξ) = ((b − a)/12)·h²·f''(ξ) = O(h²); it loses a factor of h when the rule is applied N times (composite).
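The composite trapezoid sum in code (the sin(x) integral is an illustrative test case, not from the notes):

```python
import math

def composite_trapezoid(f, a, b, n):
    """(h/2) [f_0 + 2*sum of interior f_j + f_N] with h = (b - a)/n."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + j * h) for j in range(1, n))
    return h * s / 2

approx = composite_trapezoid(math.sin, 0.0, math.pi, 1000)   # exact value is 2
```

With n = 1000 the O(h²) error bound predicts roughly (π/12)(π/1000)² ≈ 3·10⁻⁶.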


Composite Simpson's Rule: I(b) = (h/3)[f_0 + 2Σ_{j=1}^{N/2−1} f_{2j} + 4Σ_{j=1}^{N/2} f_{2j−1} + f_N] − ((b − a)/180)·h⁴·f^(iv)(ξ), error O(h⁴).
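The same pattern with Simpson weights, requiring an even panel count (example integral is illustrative):

```python
import math

def composite_simpson(f, a, b, n):
    """(h/3) [f_0 + 4*odd-index points + 2*even interior points + f_N]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + j * h) for j in range(1, n, 2))   # odd-index nodes
    s += 2 * sum(f(a + j * h) for j in range(2, n, 2))   # even interior nodes
    return h * s / 3

approx = composite_simpson(math.sin, 0.0, math.pi, 100)   # exact value is 2
```

The O(h⁴) error means halving h cuts the error by about 16×, versus 4× for the trapezoid rule.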

Numerical Integration Round-Off Error: The optimal error always comes with the smallest h, since round-off stays bounded:
E_round = Σ_{i=0}^{N} a_i·ε_i ≤ ε·Σ_{i=0}^{N} a_i = ε(b − a), where ε = max|ε_i| and Σ a_i = ∫_a^b dx = b − a.

Romberg Integration: Iteratively compute the quadrature, each time combining quadratures of different panel sizes to cancel the leading error term:
R_{m,j} = (4^j·R_{m,j−1} − R_{m−1,j−1})/(4^j − 1), error: E ~ O((h/2^m)^(2j+2)), where m = # of panel doublings and j = # of error removals.


h:    R_{0,0} = R_N
h/2:  R_{1,0} = R_{2N}   R_{1,1} = (4R_{1,0} − R_{0,0})/(4 − 1)
h/4:  R_{2,0} = R_{4N}   R_{2,1} = (4R_{2,0} − R_{1,0})/(4 − 1)   R_{2,2} = (4²R_{2,1} − R_{1,1})/(4² − 1)
h/8:  R_{3,0} = R_{8N}   R_{3,1} = (4R_{3,0} − R_{2,0})/(4 − 1)   R_{3,2} = (4²R_{3,1} − R_{2,1})/(4² − 1)   R_{3,3} = (4³R_{3,2} − R_{2,2})/(4³ − 1)
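The triangular table in code, with the composite trapezoid rule as the j = 0 column (the sin(x) example is illustrative):

```python
import math

def romberg(f, a, b, m_max):
    """R[m][j] = (4^j R[m][j-1] - R[m-1][j-1]) / (4^j - 1); column 0 is trapezoid."""
    R = []
    for m in range(m_max + 1):
        n = 2 ** m                       # panels double each row
        h = (b - a) / n
        row = [h * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))) / 2]
        for j in range(1, m + 1):
            row.append((4 ** j * row[j - 1] - R[m - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
    return R[m_max][m_max]

approx = romberg(math.sin, 0.0, math.pi, 5)   # exact value is 2
```

A production version would reuse previously computed function values when doubling the panels rather than re-summing.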

Adaptive Quadrature: Continually subdivide an interval in half, computing a quadrature rule on each piece, until the criterion is fulfilled (ex. Simpson's Rule):
|S_{i,j} − S_{i,j+1} − S_{i+1,j+1}| < (1/2^j)·M·ε, where ε is the tolerance and M is specific to the quadrature rule. i is the section index and j is the refinement level.
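A recursive sketch with Simpson's rule, where M = 15 is the standard factor for Simpson-based error estimation (the example integral is illustrative):

```python
import math

def simpson(f, a, b):
    """Basic Simpson's rule on [a, b]."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol):
    """Split [a, b] until the whole-vs-halves discrepancy is below 15*tol."""
    c = (a + b) / 2
    whole = simpson(f, a, b)
    left, right = simpson(f, a, c), simpson(f, c, b)
    if abs(whole - left - right) < 15 * tol:
        return left + right
    # halve the tolerance for each half-interval (the 1/2^j factor in the notes)
    return adaptive_simpson(f, a, c, tol / 2) + adaptive_simpson(f, c, b, tol / 2)

approx = adaptive_simpson(math.sin, 0.0, math.pi, 1e-8)   # exact value is 2
```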
Gaussian Quadrature: The set of optimal points for approximating ∫_a^b f(x)dx, in that it can integrate exactly the maximum-order polynomial with the minimum number of points: a polynomial of order 2N − 1 can be integrated exactly with N points.
∫_a^b f(x)dx = ((b − a)/2)·Σ_{i=1}^{N} a_i·f(x̃_i), where x̃_i = (a + b)/2 + ((b − a)/2)·x_i and the a_i are the Gaussian weights.
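For N = 2 the nodes on [−1,1] are ±1/√3 with weights 1, which is exact through cubics (2N − 1 = 3); a minimal sketch with an illustrative cubic:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre: nodes +-1/sqrt(3), weights 1, mapped to [a, b]."""
    half, mid = (b - a) / 2, (a + b) / 2
    t = 1 / math.sqrt(3)
    return half * (f(mid - half * t) + f(mid + half * t))

# integral of x^3 + x over [0, 2] is 4 + 2 = 6; exact for a 2-point rule
val = gauss2(lambda x: x ** 3 + x, 0.0, 2.0)
```

Higher-order node/weight sets come from the roots of the Legendre polynomials and are tabulated or generated numerically.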
