
Chapter 5 Finite Difference Methods

Math6911 S08, HM Zhu

References

1. Chapters 5 and 9, Brandimarte
2. Section 17.8, Hull
3. Chapter 7, Burden and Faires, Numerical Analysis


Outline

- Finite difference (FD) approximation to the derivatives
- Explicit FD method
- Numerical issues
- Implicit FD method
- Crank-Nicolson method
- Dealing with American options
- Further comments


Chapter 5 Finite Difference Methods

5.1 Finite difference approximations


Finite-difference mesh

Aim: approximate the values of the continuous function f(t, S) on a set of discrete points in the (t, S) plane.
Divide the S-axis into equally spaced nodes a distance ΔS apart, and the t-axis into equally spaced nodes a distance Δt apart.
The (t, S) plane then becomes a mesh with mesh points (iΔt, jΔS).
We are interested in the values of f(t, S) at the mesh points, denoted

f_{i,j} = f(iΔt, jΔS)

The mesh for finite-difference approximation

[Figure: the (t, S) mesh — the t-axis runs from 0 to T = NΔt in steps of Δt, the S-axis from 0 to Smax = MΔS in steps of ΔS, and the mesh point (iΔt, jΔS) carries the value f_{i,j} = f(iΔt, jΔS).]

Black-Scholes Equation for a European option with value V(S, t)

∂V/∂t + ½σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0   (5.1)

where 0 < S < +∞ and 0 ≤ t < T, with proper final and boundary conditions.

Notes: among the second-order PDEs (hyperbolic, elliptic, or parabolic), this is a (backward) parabolic equation. Its solution is sufficiently well behaved, i.e. the problem is well-posed.

Finite difference approximations

The basic idea of FDM is to replace the partial derivatives by approximations obtained from Taylor expansions near the point of interest. For example,

∂f(S, t)/∂t = lim_{Δt→0} [f(S, t + Δt) − f(S, t)]/Δt ≈ [f(S, t + Δt) − f(S, t)]/Δt

for small Δt, using the Taylor expansion at the point (S, t):

f(S, t + Δt) = f(S, t) + (∂f(S, t)/∂t) Δt + O((Δt)²)

Forward-, Backward-, and Central-difference approximations to 1st order derivatives

Forward:  ∂f(t, S)/∂t = [f(t + Δt, S) − f(t, S)]/Δt + O(Δt)
Backward: ∂f(t, S)/∂t = [f(t, S) − f(t − Δt, S)]/Δt + O(Δt)
Central:  ∂f(t, S)/∂t = [f(t + Δt, S) − f(t − Δt, S)]/(2Δt) + O((Δt)²)

Symmetric Central-difference approximations to 2nd order derivatives

∂²f(t, S)/∂S² = [f(t, S + ΔS) − 2f(t, S) + f(t, S − ΔS)]/(ΔS)² + O((ΔS)²)

Exercise: verify this using Taylor expansions of f(t, S + ΔS) and f(t, S − ΔS) around the point (t, S):
f(t, S + ΔS) = ?
f(t, S − ΔS) = ?

Finite difference approximations

Forward difference:  ∂f/∂t ≈ (f_{i+1,j} − f_{i,j})/Δt;    ∂f/∂S ≈ (f_{i,j+1} − f_{i,j})/ΔS
Backward difference: ∂f/∂t ≈ (f_{i,j} − f_{i−1,j})/Δt;    ∂f/∂S ≈ (f_{i,j} − f_{i,j−1})/ΔS
Central difference:  ∂f/∂t ≈ (f_{i+1,j} − f_{i−1,j})/(2Δt);  ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1})/(2ΔS)

As to the second derivative, we have:

∂²f/∂S² ≈ [ (f_{i,j+1} − f_{i,j})/ΔS − (f_{i,j} − f_{i,j−1})/ΔS ] / ΔS
        = (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/(ΔS)²
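These orders of accuracy can be checked numerically (a Python sketch, not part of the original notes, using exp as a convenient smooth test function): halving the step should roughly halve the forward-difference error and quarter the central-difference errors.

```python
import math

def fwd(f, x, h):  return (f(x + h) - f(x)) / h            # first order
def cen(f, x, h):  return (f(x + h) - f(x - h)) / (2 * h)  # second order
def cen2(f, x, h): return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x, h = 1.0, 1e-3
exact = math.exp(x)   # e^x is its own first and second derivative

# errors at step h and at step h/2
e_f  = [abs(fwd(math.exp, x, s) - exact) for s in (h, h / 2)]
e_c  = [abs(cen(math.exp, x, s) - exact) for s in (h, h / 2)]
e_c2 = [abs(cen2(math.exp, x, s) - exact) for s in (h, h / 2)]
print(e_f[0] / e_f[1], e_c[0] / e_c[1], e_c2[0] / e_c2[1])
```

The printed ratios come out near 2, 4, and 4, matching O(Δt), O((Δt)²), and O((ΔS)²) respectively.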

Finite difference approximations

Depending on which combination of schemes we use in discretizing the equation, we will have explicit, implicit, or Crank-Nicolson methods. We also need to discretize the final and boundary conditions accordingly. For example, for a European call:

Final condition:     f_{N,j} = max(jΔS − K, 0),  for j = 0, 1, ..., M
Boundary conditions: f_{i,0} = 0  and  f_{i,M} = Smax − K e^{−r(N−i)Δt},  for i = 0, 1, ..., N

where Smax = MΔS.

Chapter 5 Finite Difference Methods

5.2.1 Explicit Finite-Difference Method


Explicit Finite Difference Methods


In ∂f/∂t + rS ∂f/∂S + ½σ²S² ∂²f/∂S² = rf, at the point (iΔt, jΔS) set

backward difference: ∂f/∂t ≈ (f_{i,j} − f_{i−1,j})/Δt
central difference:  ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1})/(2ΔS)

and

∂²f/∂S² ≈ (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/(ΔS)²,   rf = r f_{i,j},   S = jΔS

Explicit Finite Difference Methods


Rewriting the equation, we get an explicit scheme:

f_{i−1,j} = a*_j f_{i,j−1} + b*_j f_{i,j} + c*_j f_{i,j+1}   (5.2)

where

a*_j = ½Δt (σ²j² − rj)
b*_j = 1 − Δt (σ²j² + r)
c*_j = ½Δt (σ²j² + rj)

for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.

Numerical Computation Dependency

[Figure: explicit-scheme dependency on the (t, S) mesh — the value at ((i−1)Δt, jΔS) is computed from the three known values at time iΔt, at nodes (j−1)ΔS, jΔS, and (j+1)ΔS.]

Implementation
1. Starting with the final values f_{N,j}, we apply (5.2) to compute f_{N−1,j} for 1 ≤ j ≤ M−1. We use the boundary conditions to determine f_{N−1,0} and f_{N−1,M}.
2. Repeat the process to determine f_{N−2,j}, and so on.

Example
We compare the explicit finite difference (EFD) solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446
EFD method with Smax = $100, ΔS = 2, Δt = 5/1200: $2.8288
EFD method with Smax = $100, ΔS = 1, Δt = 5/4800: $2.8406
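The example above can be reproduced with a short script (a Python/NumPy sketch, not from the notes; the S = 0 boundary value for the put is taken here as the discounted strike, a choice the notes leave implicit):

```python
import numpy as np

def explicit_fd_put(S0=50.0, K=50.0, r=0.10, sigma=0.30,
                    T=5/12, Smax=100.0, dS=2.0, dt=5/1200):
    M, N = int(round(Smax / dS)), int(round(T / dt))
    j = np.arange(1, M)                        # interior nodes S = j*dS
    a = 0.5 * dt * (sigma**2 * j**2 - r * j)   # coefficients of scheme (5.2)
    b = 1.0 - dt * (sigma**2 * j**2 + r)
    c = 0.5 * dt * (sigma**2 * j**2 + r * j)
    f = np.maximum(K - np.arange(M + 1) * dS, 0.0)   # payoff at t = T
    for i in range(N, 0, -1):                  # march backward in time
        f[1:M] = a * f[0:M-1] + b * f[1:M] + c * f[2:M+1]
        f[0] = K * np.exp(-r * (N - (i - 1)) * dt)   # boundary at S = 0
        f[M] = 0.0                                   # boundary at S = Smax
    return f[int(round(S0 / dS))]

print(explicit_fd_put())
```

With the default grid (ΔS = 2, Δt = 5/1200) the result lands near the quoted $2.83; refining the grid moves it toward the Black-Scholes price.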

Example (Stability)
We compare the explicit finite difference (EFD) solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446
EFD method with Smax = $100, ΔS = 2, Δt = 5/1200: $2.8288
EFD method with Smax = $100, ΔS = 1.5, Δt = 5/1200: $3.1414
EFD method with Smax = $100, ΔS = 1, Δt = 5/1200: −$2.8271 × 10²²

Chapter 5 Finite Difference Methods

5.2.2 Numerical Stability


Numerical Accuracy
- The problem itself
- The discretization scheme used
- The numerical algorithm used

Conditioning Issue

Suppose we have a mathematically posed problem: y = f(x), where y is to be evaluated given an input x. Let x* = x + δx for a small change δx. If f(x*) is near f(x), then we call the problem well-conditioned. Otherwise, it is ill-posed/ill-conditioned.

Conditioning Issue
The conditioning issue is related to the problem itself, not to the specific numerical algorithm; the stability issue is related to the numerical algorithm. One cannot expect a good numerical algorithm to solve an ill-conditioned problem any more accurately than the data warrant. But a bad numerical algorithm can produce poor solutions even to well-conditioned problems.

Conditioning Issue

The concept "near" can be measured using further information about the particular problem:

|f(x) − f(x*)| / |f(x)|  ≤  C |δx| / |x|   (f(x) ≠ 0)

where C is called the condition number of this problem. If C is large, the problem is ill-conditioned.

Floating Point Number & Error

Let x be any real number, with infinite decimal expansion x = ±0.x1x2... × 10^e, where x1 ≠ 0, 0 ≤ xi ≤ 9, and e is a bounded integer.

Truncated floating point number: fl(x) = ±0.x1x2...xd × 10^e, where d (an integer) is the precision of the floating point system.

Floating point or roundoff error: |fl(x) − x|.

Error Propagation

When additional calculations are done, there is an accumulation of these floating point errors.

Example: let x = 0.6667 and fl(x) = 0.667 × 10⁰, where d = 3.
Floating point error: |fl(x) − x| = 0.0003
Error propagation: |fl(x)² − x²| = 0.00040011
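This propagation can be checked directly (a Python sketch; Decimal with a 3-digit context stands in for the truncated base-10 system):

```python
from decimal import Decimal, Context

# round x to d significant decimal digits, mimicking a base-10
# floating point system with precision d (an illustrative stand-in)
def fl(x, d=3):
    return float(Context(prec=d).create_decimal(Decimal(str(x))))

x = 0.6667
print(fl(x))                  # 0.667
print(abs(fl(x) - x))         # roundoff error, about 0.0003
print(abs(fl(x)**2 - x**2))   # propagated error, about 0.00040011
```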

Numerical Stability or Instability

Stability concerns whether the error between the numerical solution and the exact solution remains bounded as the numerical computation progresses; that is, whether f(x*) (the solution of a slightly perturbed problem) is near f*(x) (the computed solution). For finite difference methods, stability concerns the behavior of f_{i,j} − f(iΔt, jΔS) as the computation progresses for fixed discretization steps Δt and ΔS.

Convergence issue

Convergence of the numerical algorithm concerns the behavior of f_{i,j} − f(iΔt, jΔS) as Δt, ΔS → 0 for fixed values (iΔt, jΔS). For a well-posed linear initial value problem:

Stability ⇔ Convergence

(Lax's equivalence theorem; Richtmyer and Morton, Difference Methods for Initial Value Problems, 2nd ed., 1967)

Numerical Accuracy

These factors contribute to the accuracy of a numerical solution. We can find a good estimate if our problem is well-conditioned and the algorithm is stable:

stable: f*(x) ≈ f(x*);   well-conditioned: f(x*) ≈ f(x);   hence f*(x) ≈ f(x).

Chapter 5 Finite Difference Methods

5.2.3 Financial Interpretation of Numerical Instability


Financial Interpretation of Instability (Hull, pages 423-424)

If ∂f/∂S and ∂²f/∂S² are assumed to be the same at (i+1, j) as they are at (i, j), we obtain equations of the form:

f_{i,j} = â_j f_{i+1,j−1} + b̂_j f_{i+1,j} + ĉ_j f_{i+1,j+1}   (5.3)

where

â_j = (1/(1+rΔt)) (−½ rjΔt + ½ σ²j²Δt) = π_d /(1+rΔt)
b̂_j = (1/(1+rΔt)) (1 − σ²j²Δt)          = π_0 /(1+rΔt)
ĉ_j = (1/(1+rΔt)) (½ rjΔt + ½ σ²j²Δt)   = π_u /(1+rΔt)

for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.
Explicit Finite Difference Methods

[Figure: trinomial-tree view of the explicit scheme — node (i, j) is connected to (i+1, j+1) with weight π_u, to (i+1, j) with weight π_0, and to (i+1, j−1) with weight π_d.]

These coefficients can be interpreted as probabilities times a discount factor. If one of these probabilities is negative, instability occurs.
Explicit Finite Difference Method as Trinomial Tree

Check that the mean and variance of the increase in the asset price during Δt match geometric Brownian motion:

E[δS] = π_d (−ΔS) + π_0 · 0 + π_u (ΔS) = r j Δt ΔS = r S Δt
E[(δS)²] = π_d (ΔS)² + π_0 · 0 + π_u (ΔS)² = σ²j²Δt (ΔS)² = σ²S²Δt
Var[δS] = E[(δS)²] − (E[δS])² = σ²S²Δt − r²S²(Δt)²

which is coherent with geometric Brownian motion in a risk-neutral world.
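A quick numerical check of these identities (a Python sketch; the parameter values are arbitrary):

```python
dt, dS, r, sigma, j = 0.01, 1.0, 0.10, 0.30, 25
S = j * dS
p_d = 0.5 * (-r * j * dt + sigma**2 * j**2 * dt)   # down probability
p_0 = 1.0 - sigma**2 * j**2 * dt                   # middle probability
p_u = 0.5 * ( r * j * dt + sigma**2 * j**2 * dt)   # up probability

mean = p_d * (-dS) + p_u * dS          # E[dS]
second = (p_d + p_u) * dS**2           # E[dS^2]
var = second - mean**2                 # Var[dS]
print(abs(p_d + p_0 + p_u - 1.0))      # probabilities sum to 1
print(mean - r * S * dt)               # matches r*S*dt
print(var - (sigma**2 * S * S * dt - (r * S * dt)**2))
```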

Change of Variable

Define Z = ln S. The B-S equation becomes

∂f/∂t + (r − σ²/2) ∂f/∂Z + (σ²/2) ∂²f/∂Z² = rf

The corresponding difference equation is

(f_{i+1,j} − f_{i,j})/Δt + (r − σ²/2)(f_{i+1,j+1} − f_{i+1,j−1})/(2ΔZ)
  + (σ²/2)(f_{i+1,j+1} − 2f_{i+1,j} + f_{i+1,j−1})/(ΔZ)² = r f_{i,j}

or

f_{i,j} = α*_j f_{i+1,j−1} + β*_j f_{i+1,j} + γ*_j f_{i+1,j+1}   (5.4)

where

α*_j = (1/(1+rΔt)) [ −(Δt/(2ΔZ))(r − σ²/2) + Δt σ²/(2(ΔZ)²) ]
β*_j = (1/(1+rΔt)) [ 1 − Δt σ²/(ΔZ)² ]
γ*_j = (1/(1+rΔt)) [ (Δt/(2ΔZ))(r − σ²/2) + Δt σ²/(2(ΔZ)²) ]

It can be shown that it is numerically most efficient to take ΔZ = σ√(3Δt).

Reduced to Heat Equation

Get rid of the varying coefficients S and S² by the change of variables

S = E e^x,   t = T − τ/(½σ²),   V(S, t) = E e^{−½(k−1)x − ¼(k+1)²τ} u(x, τ),   k = r/(½σ²)

Equation (5.1) then becomes the heat equation

∂u/∂τ = ∂²u/∂x²   (5.5)

for −∞ < x < +∞ and τ > 0.

Explicit Finite Difference Method

With u_n^m = u(nδx, mδτ), this involves solving a system of finite difference equations of the form:

(u_n^{m+1} − u_n^m)/δτ + O(δτ) = (u_{n+1}^m − 2u_n^m + u_{n−1}^m)/(δx)² + O((δx)²)

Ignoring terms of O(δτ) and O((δx)²), we can approximate this by

u_n^{m+1} = α u_{n+1}^m + (1 − 2α) u_n^m + α u_{n−1}^m

where α = δτ/(δx)², for −N⁻ ≤ n ≤ N⁺ and m = 0, 1, ..., M.

Stability and Convergence (P. Wilmott et al., Option Pricing)

Stability: the solution of Eqn (5.5) is
i) stable if 0 < α = δτ/(δx)² ≤ ½;   ii) unstable if α > ½.

Convergence: if 0 < α ≤ ½, then the explicit finite-difference approximation converges to the exact solution as δτ, δx → 0 (in the sense that u_n^m → u(nδx, mδτ) as δτ, δx → 0).

The rate of convergence is O(δτ).
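The stability bound can be observed in a few lines (a Python/NumPy sketch; the grid, the initial data, and the tiny seeded perturbation are illustrative choices):

```python
import numpy as np

def run(alpha, steps=200, N=20):
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.sin(np.pi * x)          # smooth initial data, u = 0 at both ends
    u[N // 2] += 1e-8              # tiny perturbation, seeds all modes
    for _ in range(steps):         # explicit scheme for u_t = u_xx
        u[1:-1] = alpha * (u[2:] + u[:-2]) + (1 - 2 * alpha) * u[1:-1]
    return np.max(np.abs(u))

print(run(0.4))   # alpha <= 1/2: solution stays bounded and decays
print(run(0.6))   # alpha > 1/2: the perturbation is amplified enormously
```

For α ≤ ½ every Fourier mode is damped, so the perturbation dies out; for α > ½ the highest-frequency mode grows by a factor of about |1 − 4α| per step.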

Chapter 5 Finite Difference Methods

5.3.1 Implicit Finite-Difference Method


Implicit Finite Difference Methods


In ∂f/∂t + rS ∂f/∂S + ½σ²S² ∂²f/∂S² = rf, we use

forward difference: ∂f/∂t ≈ (f_{i+1,j} − f_{i,j})/Δt
central difference: ∂f/∂S ≈ (f_{i,j+1} − f_{i,j−1})/(2ΔS)

and

∂²f/∂S² ≈ (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/(ΔS)²,   rf = r f_{i,j}

Implicit Finite Difference Methods


Rewriting the equation, we get an implicit scheme:

a_j f_{i,j−1} + b_j f_{i,j} + c_j f_{i,j+1} = f_{i+1,j}   (5.6)

where

a_j = ½Δt (rj − σ²j²)
b_j = 1 + Δt (σ²j² + r)
c_j = −½Δt (rj + σ²j²)

for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.

Numerical Computation Dependency

[Figure: implicit-scheme dependency on the (t, S) mesh — the three unknown values f_{i,j−1}, f_{i,j}, f_{i,j+1} at time iΔt are determined jointly by the known value f_{i+1,j} at time (i+1)Δt, so a linear system must be solved at each time step.]

Implementation
Equation (5.6) can be rewritten in matrix form:

C f_i = f_{i+1} + b_i   (5.7)

where f_i and b_i are (M−1)-dimensional vectors

f_i = ( f_{i,1}, f_{i,2}, f_{i,3}, ..., f_{i,M−1} )ᵀ
b_i = ( −a_1 f_{i,0}, 0, ..., 0, −c_{M−1} f_{i,M} )ᵀ

and C is the (M−1) × (M−1) tridiagonal matrix

C = | b_1  c_1                           |
    | a_2  b_2  c_2                      |
    |      ⋱    ⋱       ⋱               |
    |          a_{M−2}  b_{M−2}  c_{M−2} |
    |                   a_{M−1}  b_{M−1} |
Implementation
1. Starting with the final values f_{N,j}, we solve the linear system (5.7) to obtain f_{N−1,j} for 1 ≤ j ≤ M−1, using LU factorization or iterative methods. We use the boundary conditions to determine f_{N−1,0} and f_{N−1,M}.
2. Repeat the process to determine f_{N−2,j}, and so on.
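One way to carry these steps out (a Python/NumPy sketch, not from the notes; the tridiagonal system is solved with the Thomas algorithm, and the S = 0 boundary for the put is taken as the discounted strike):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for k in range(1, n):                      # forward sweep
        m = b[k] - a[k] * cp[k-1]
        cp[k] = c[k] / m
        dp[k] = (d[k] - a[k] * dp[k-1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):             # back substitution
        x[k] = dp[k] - cp[k] * x[k+1]
    return x

def implicit_fd_put(S0=50.0, K=50.0, r=0.10, sigma=0.30,
                    T=5/12, Smax=100.0, dS=2.0, dt=5/1200):
    M, N = int(round(Smax / dS)), int(round(T / dt))
    j = np.arange(1, M)
    a = 0.5 * dt * (r * j - sigma**2 * j**2)   # coefficients of scheme (5.6)
    b = 1.0 + dt * (sigma**2 * j**2 + r)
    c = -0.5 * dt * (r * j + sigma**2 * j**2)
    f = np.maximum(K - np.arange(M + 1) * dS, 0.0)
    for i in range(N, 0, -1):
        f0 = K * np.exp(-r * (N - (i - 1)) * dt)   # boundary at S = 0
        d = f[1:M].copy()
        d[0] -= a[0] * f0            # fold boundary into rhs; f[M] = 0 anyway
        f[1:M] = thomas(a, b, c, d)
        f[0], f[M] = f0, 0.0
    return f[int(round(S0 / dS))]

print(implicit_fd_put())
```

On the coarse grid (ΔS = 2, Δt = 5/1200) this lands near the quoted $2.82, a little below the Black-Scholes price, as in the example that follows.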

Example
We compare the implicit finite difference (IFD) solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446
IFD method with Smax = $100, ΔS = 2, Δt = 5/1200: $2.8194
IFD method with Smax = $100, ΔS = 1, Δt = 5/4800: $2.8383

Example (Stability)
We compare the implicit finite difference (IFD) solution for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446
IFD method with Smax = $100, ΔS = 2, Δt = 5/1200: $2.8288
IFD method with Smax = $100, ΔS = 1.5, Δt = 5/1200: $3.1325
IFD method with Smax = $100, ΔS = 1, Δt = 5/1200: $2.8348

Implicit vs Explicit Finite Difference Methods

[Figure: stencils — explicit method: f_{i,j} is computed from f_{i+1,j−1}, f_{i+1,j}, f_{i+1,j+1}; implicit method (always stable): f_{i,j−1}, f_{i,j}, f_{i,j+1} are determined jointly from f_{i+1,j}.]

Implicit vs Explicit Finite Difference Method

The explicit finite difference method is equivalent to the trinomial tree approach:
  truncation error: O(Δt); stability: not always.
The implicit finite difference method is equivalent to a multinomial tree approach:
  truncation error: O(Δt); stability: always.

Other Points on Finite Difference Methods


It is better to have ln S rather than S as the underlying variable in general Improvements over the basic implicit and explicit methods: Crank-Nicolson method, average of explicit and implicit FD methods, trying to achieve
Truncation error: O((t)2 ) Stability: always
Math6911, S08, HM ZHU

Chapter 5 Finite Difference Methods

5.3.2 Solving a linear system using direct methods


Solve Ax=b

[Figure: A x = b, showing the various shapes of the matrix A — lower triangular, upper triangular, and general.]

5.3.2.A Triangular Solvers


Example: 3 × 3 upper triangular system

| 4 6 1 | | x1 |   | 100 |
| 0 1 1 | | x2 | = |  10 |
| 0 0 4 | | x3 |   |  20 |

x3 = 20/4 = 5
x2 = 10 − x3 = 5
x1 = (100 − x3 − 6x2)/4 = 65/4

Solve an upper triangular system Ax = b

Row i of the system reads

(0 ... 0 a_ii ... a_in)(x1, ..., xn)ᵀ = a_ii x_i + Σ_{j>i} a_ij x_j = b_i,   i = 1, ..., n

Hence, by back substitution:

x_n = b_n / a_nn
x_i = ( b_i − Σ_{j>i} a_ij x_j ) / a_ii,   i = n−1, ..., 1

Implementation (MATLAB):

function x = UpperTriSolve(A, b)
n = length(b);
x(n) = b(n)/A(n,n);
for i = n-1:-1:1
    sum = b(i);
    for j = i+1:n
        sum = sum - A(i,j)*x(j);
    end
    x(i) = sum/A(i,i);
end

5.3.2.B Gauss Elimination


To solve Ax = b for general form A

To solve Ax = b: suppose A = LU. Then Ax = L(Ux) = b, with z = Ux. Solve two triangular systems:
1) solve z from Lz = b
2) solve x from Ux = z

Gauss Elimination

Goal: make A an upper triangular matrix using fundamental row operations, zeroing the elements in the lower triangular part of A.

1) Zero a21:

          | 1        0  0 |  | a11 a12 a13 |     | a11  a12                 a13                |
E21 A  =  | -a21/a11 1  0 |  | a21 a22 a23 |  =  | 0    a22 - (a21/a11)a12  a23 - (a21/a11)a13 |
          | 0        0  1 |  | a31 a32 a33 |     | a31  a32                 a33                |

Gauss Elimination

2) Zero a31:

              | 1        0  0 |             | a11  a12   a13  |
E31 E21 A  =  | 0        1  0 | (E21 A)  =  | 0    a22'  a23' |
              | -a31/a11 0  1 |             | 0    a32'  a33' |

where a22' = a22 − (a21/a11)a12, a23' = a23 − (a21/a11)a13, a32' = a32 − (a31/a11)a12, and a33' = a33 − (a31/a11)a13.

Gauss Elimination

3) Zero a32':

                  | 1  0          0 |                 | a11  a12   a13   |
E32 E31 E21 A  =  | 0  1          0 | (E31 E21 A)  =  | 0    a22'  a23'  |  =  U
                  | 0  -a32'/a22' 1 |                 | 0    0     a33'' |

where a33'' = a33' − (a32'/a22')a23'. Each E is lower triangular, and U is upper triangular.

Gauss Elimination

Claim 1: E32 E31 E21 is unit lower triangular, built from the negated multipliers a21/a11, a31/a11, a32'/a22' (its (3,1) entry is −a31/a11 + (a32'/a22')(a21/a11)).

Claim 2:

                       | 1        0          0 |
(E32 E31 E21)^(-1)  =  | a21/a11  1          0 |
                       | a31/a11  a32'/a22'  1 |

i.e. in the inverse, the multipliers simply drop into place below the diagonal.

LU Factorization

Therefore, through Gauss elimination, we have

E32 E31 E21 A = U   ⇒   A = (E32 E31 E21)^(-1) U   ⇒   A = LU

This is called the LU factorization. When A is symmetric (positive definite), we can take

A = L Lᵀ

which is called the Cholesky decomposition.

To solve Ax = b for general form A

To solve Ax = b:
1) Use Gauss elimination to make A upper triangular, i.e. L⁻¹Ax = L⁻¹b, which gives Ux = z.
2) Solve x from Ux = z.
This suggests that when doing Gauss elimination, we can apply it directly to the augmented matrix [A b] associated with the linear system.

An exercise

Solve

|  2 -1  0 | | x1 |   | 0 |
| -1  2 -1 | | x2 | = | 0 |
|  0 -1  2 | | x3 |   | 4 |

% build the matrix A
A = [2, -1, 0; -1, 2, -1; 0, -1, 2]
% build the vector b
x_true = [1:3]';
b = A * x_true;
% lu decomposition of A
[l, u] = lu(A)
% solve z from lz = b where z = ux
z = l\b;
% solve x from ux = z
x = u\z

General Gauss Elimination

To zero the (j, i) entry a_ji, replace row j by

row j  −  (a_ji / a_ii) × row i

What if we have zero or very small diagonal elements?

Sometimes we need to permute the rows of A so that Gauss elimination can be computed, or computed stably, i.e. PA = LU. This is called partial pivoting. For example:

A = | 0 1 |,   PA = | 0 1 | | 0 1 |  =  | 2 3 |
    | 2 3 |         | 1 0 | | 2 3 |     | 0 1 |

A = | 0.0001 1 |  =  | 1      0 | | 0.0001   1     |  =  LU
    | 1      1 |     | 10,000 1 | | 0       -9,999 |

PA = | 1      1 |  =  | 1      0 | | 1  1      |  =  LU
     | 0.0001 1 |     | 0.0001 1 | | 0  0.9999 |
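The effect of a tiny pivot can be demonstrated directly (a Python/NumPy sketch; the 1e-20 pivot is an illustrative extreme):

```python
import numpy as np

def solve2(A, b, pivot):
    """Gauss elimination on a 2x2 system, with optional row pivoting."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    if pivot and abs(A[1, 0]) > abs(A[0, 0]):
        A[[0, 1]], b[[0, 1]] = A[[1, 0]], b[[1, 0]]   # swap rows
    m = A[1, 0] / A[0, 0]                 # elimination multiplier
    A[1] -= m * A[0]
    b[1] -= m * b[0]
    x2 = b[1] / A[1, 1]                   # back substitution
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    return np.array([x1, x2])

A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])                  # true solution is about (1, 1)
print(solve2(A, b, pivot=False))
print(solve2(A, b, pivot=True))
```

Without pivoting the huge multiplier swamps the second row and the computed x1 collapses to 0; with the row swap both components come out as 1, the correct answer.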

Can we always have A = LU?

No! But if det(A(1:k, 1:k)) ≠ 0 for k = 1, ..., n−1, then A ∈ ℝⁿˣⁿ has an LU factorization.
Proof: see Golub and Van Loan, Matrix Computations, 3rd edition.

Chapter 5 Finite Difference Methods

5.4.1 Crank-Nicolson Method


Explicit Finite Difference Methods

With ∂f/∂t + rS ∂f/∂S + ½σ²S² ∂²f/∂S² = rf, we can obtain an explicit form:

(f_{i+1,j} − f_{i,j})/Δt + rjΔS (f_{i+1,j+1} − f_{i+1,j−1})/(2ΔS)
  + ½σ²j²(ΔS)² (f_{i+1,j+1} − 2f_{i+1,j} + f_{i+1,j−1})/(ΔS)²
  = r f_{i+1,j} + O(Δt) + O((ΔS)²)

Implicit Finite Difference Methods

With ∂f/∂t + rS ∂f/∂S + ½σ²S² ∂²f/∂S² = rf, we can obtain an implicit form:

(f_{i+1,j} − f_{i,j})/Δt + rjΔS (f_{i,j+1} − f_{i,j−1})/(2ΔS)
  + ½σ²j²(ΔS)² (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/(ΔS)²
  = r f_{i,j} + O(Δt) + O((ΔS)²)

Crank-Nicolson Method: average of the explicit and implicit methods (Brandimarte, p. 485)

(f_{i+1,j} − f_{i,j})/Δt
  + (rjΔS/2) [ (f_{i+1,j+1} − f_{i+1,j−1})/(2ΔS) + (f_{i,j+1} − f_{i,j−1})/(2ΔS) ]
  + (σ²j²(ΔS)²/4) [ (f_{i+1,j+1} − 2f_{i+1,j} + f_{i+1,j−1})/(ΔS)² + (f_{i,j+1} − 2f_{i,j} + f_{i,j−1})/(ΔS)² ]
  = (r/2)(f_{i+1,j} + f_{i,j}) + O((Δt)²) + O((ΔS)²)

Crank-Nicolson Methods

Rewriting the equation, we get the Crank-Nicolson scheme:

−α_j f_{i,j−1} + (1 − β_j) f_{i,j} − γ_j f_{i,j+1} = α_j f_{i+1,j−1} + (1 + β_j) f_{i+1,j} + γ_j f_{i+1,j+1}   (5.5)

where

α_j = (Δt/4) (σ²j² − rj)
β_j = −(Δt/2) (σ²j² + r)
γ_j = (Δt/4) (σ²j² + rj)

for i = N−1, N−2, ..., 1, 0 and j = 1, 2, ..., M−1.

Numerical Computation Dependency

[Figure: Crank-Nicolson dependency on the (t, S) mesh — the three values at time iΔt and the three values at time (i+1)Δt, at nodes (j−1)ΔS, jΔS, and (j+1)ΔS, are all coupled.]

Implementation

Equation (5.5) can be rewritten in matrix form:

M1 f_i = M2 f_{i+1} + b

where f_i = ( f_{i,1}, f_{i,2}, f_{i,3}, ..., f_{i,M−1} )ᵀ and b is the (M−1)-dimensional vector

b = ( α_1 (f_{i,0} + f_{i+1,0}), 0, ..., 0, γ_{M−1} (f_{i,M} + f_{i+1,M}) )ᵀ

and M1 and M2 are the (M−1) × (M−1) tridiagonal matrices

M1 = | 1−β_1   −γ_1                            |
     | −α_2    1−β_2   −γ_2                    |
     |         ⋱       ⋱         ⋱            |
     |                 −α_{M−1}   1−β_{M−1}    |

M2 = | 1+β_1   γ_1                             |
     | α_2     1+β_2   γ_2                     |
     |         ⋱       ⋱         ⋱            |
     |                 α_{M−1}    1+β_{M−1}    |
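A compact way to set up and run this scheme (a Python/NumPy sketch, not from the notes; dense matrices and numpy.linalg.solve keep it short, and the S = 0 boundary of the put is again taken as the discounted strike):

```python
import numpy as np

def cn_put(S0=50.0, K=50.0, r=0.10, sigma=0.30,
           T=5/12, Smax=100.0, dS=2.0, dt=5/1200):
    M, N = int(round(Smax / dS)), int(round(T / dt))
    j = np.arange(1, M)
    alpha = dt * (sigma**2 * j**2 - r * j) / 4
    beta  = -dt * (sigma**2 * j**2 + r) / 2
    gamma = dt * (sigma**2 * j**2 + r * j) / 4
    # tridiagonal M1 (unknowns at time i) and M2 (knowns at time i+1)
    M1 = np.diag(1 - beta) + np.diag(-gamma[:-1], 1) + np.diag(-alpha[1:], -1)
    M2 = np.diag(1 + beta) + np.diag( gamma[:-1], 1) + np.diag( alpha[1:], -1)
    f = np.maximum(K - np.arange(M + 1) * dS, 0.0)   # payoff at t = T
    for i in range(N, 0, -1):
        f0_new = K * np.exp(-r * (N - (i - 1)) * dt)  # S = 0 boundary
        rhs = M2 @ f[1:M]
        rhs[0] += alpha[0] * (f[0] + f0_new)          # boundary vector b
        f[1:M] = np.linalg.solve(M1, rhs)
        f[0], f[M] = f0_new, 0.0                      # f at Smax stays 0
    return f[int(round(S0 / dS))]

print(cn_put())
```

On the coarse grid this lands near the quoted $2.82, between the explicit and implicit results, as the averaging suggests.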

Example
We compare the Crank-Nicolson (CN) method for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446
CN method with Smax = $100, ΔS = 2, Δt = 5/1200: $2.8241
CN method with Smax = $100, ΔS = 1, Δt = 5/4800: $2.8395

Example (Stability)
We compare the Crank-Nicolson (CN) method for a European put with the exact Black-Scholes formula, where T = 5/12 yr, S0 = $50, K = $50, σ = 30%, r = 10%.

Black-Scholes price: $2.8446
CN method with Smax = $100, ΔS = 1.5, Δt = 5/1200: $3.1370
CN method with Smax = $100, ΔS = 1, Δt = 5/1200: $2.8395

Example: Barrier Options


Barrier options are options where the payoff depends on whether the underlying asset's price reaches a certain level during a certain period of time.

Types:
- Knock-out: the option ceases to exist when the asset price reaches a barrier
- Knock-in: the option comes into existence when the asset price reaches a barrier

Example: Barrier Option


We apply the Crank-Nicolson method to a European down-and-out put, where T = 5/12 yr, S0 = $50, K = $50, Sb = $50, σ = 40%, r = 10%. What are the boundary conditions for S?

Example: Barrier Option


We apply the Crank-Nicolson method to a European down-and-out put, where T = 5/12 yr, S0 = $50, K = $50, Sb = $50, σ = 40%, r = 10%. The boundary conditions are

f(t, Smax) = 0 and f(t, Sb) = 0

Exact price (Hull, pp. 533-535): $0.5424
CN method with Smax = $100, ΔS = 0.5, Δt = 1/1200: $0.5414

Appendix A.

Matrix Norms


Vector Norms

- Norms serve as a way to measure the length of a vector or a matrix.
- A vector norm is a function mapping x ∈ ℝⁿ to a real number ‖x‖ s.t.
  ‖x‖ > 0 for any x ≠ 0; ‖x‖ = 0 iff x = 0
  ‖cx‖ = |c| ‖x‖ for any c ∈ ℝ
  ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any x, y ∈ ℝⁿ
- There are various ways to define a norm:
  ‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p}   (p = 2 is the Euclidean norm)
  ‖x‖_∞ = max_{1≤i≤n} |x_i|

For example, v = [2 4 −1 3]: ‖v‖_1 = ?, ‖v‖_∞ = ?, ‖v‖_2 = ?

Matrix Norms

- Similarly, a matrix norm is a function mapping A ∈ ℝ^{m×n} to a real number ‖A‖ s.t.
  ‖A‖ > 0 for any A ≠ 0; ‖A‖ = 0 iff A = 0
  ‖cA‖ = |c| ‖A‖ for any c ∈ ℝ
  ‖A + B‖ ≤ ‖A‖ + ‖B‖ for any A, B ∈ ℝ^{m×n}
- Various commonly used matrix norms:
  ‖A‖_p = sup_{x≠0} ‖Ax‖_p / ‖x‖_p
  ‖A‖_F = ( Σ_{i=1}^m Σ_{j=1}^n a_ij² )^{1/2}
  ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^m |a_ij|
  ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^n |a_ij|
  ‖A‖_2 = ( ρ(AᵀA) )^{1/2}, the spectral norm, where ρ(B) = max{ |λ_k| : λ_k is an eigenvalue of B }

An Example

A = | 2 4 1 |
    | 3 1 5 |
    | 2 3 1 |

‖A‖_1 = ?   ‖A‖_∞ = ?   ‖A‖_2 = ?   ‖A‖_F = ?
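These norms can be evaluated with NumPy (a sketch; the matrix is reproduced exactly as printed above):

```python
import numpy as np

A = np.array([[2.0, 4.0, 1.0],
              [3.0, 1.0, 5.0],
              [2.0, 3.0, 1.0]])

n1   = np.linalg.norm(A, 1)        # max absolute column sum
ninf = np.linalg.norm(A, np.inf)   # max absolute row sum
n2   = np.linalg.norm(A, 2)        # spectral norm, sqrt(rho(A^T A))
nF   = np.linalg.norm(A, 'fro')    # Frobenius norm
print(n1, ninf, n2, nF)
```

Here the column sums are 7, 8, 7 and the row sums are 7, 9, 6, so ‖A‖_1 = 8 and ‖A‖_∞ = 9, while ‖A‖_F = √70.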

Basic Properties of Norms

Let A, B ∈ ℝⁿˣⁿ and x, y ∈ ℝⁿ. Then

1. ‖x‖ ≥ 0; and ‖x‖ = 0 ⇔ x = 0
2. ‖x + y‖ ≤ ‖x‖ + ‖y‖
3. ‖αx‖ = |α| ‖x‖, where α is a real number
4. ‖Ax‖ ≤ ‖A‖ ‖x‖
5. ‖AB‖ ≤ ‖A‖ ‖B‖

Condition number of a square matrix

All norms on ℝⁿ are equivalent. That is, if ‖·‖_α and ‖·‖_β are norms on ℝⁿ, then there exist c1, c2 > 0 such that for all x ∈ ℝⁿ,

c1 ‖x‖_α ≤ ‖x‖_β ≤ c2 ‖x‖_α

Condition number of a matrix: C = ‖A‖ ‖A⁻¹‖, where A ∈ ℝⁿˣⁿ. The condition number gives a measure of how close a matrix is to singular: the bigger C is, the harder it is to solve Ax = b accurately.
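For instance (a NumPy sketch; the nearly singular matrix is an illustrative choice):

```python
import numpy as np

well = np.array([[2.0, 1.0], [1.0, 2.0]])     # comfortably nonsingular
near = np.array([[1.0, 1.0], [1.0, 1.0001]])  # nearly singular

# condition number C = ||A|| * ||A^-1|| in the 2-norm
print(np.linalg.cond(well))   # small: well-conditioned
print(np.linalg.cond(near))   # huge: ill-conditioned
```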

Convergence

- Vectors: x_k converges to x  ⇔  ‖x_k − x‖ converges to 0
- Matrices: A_k converges to A_0  ⇔  ‖A_k − A_0‖ converges to 0

Appendix B.

Basic Row Operations


Basic row operations

Three kinds of basic row operations:

1) Interchange the order of two rows (or equations):

| 0 1 0 | | a11 a12 a13 |     | a21 a22 a23 |
| 1 0 0 | | a21 a22 a23 |  =  | a11 a12 a13 |
| 0 0 1 | | a31 a32 a33 |     | a31 a32 a33 |

Basic row operations

2) Multiply a row by a nonzero constant:

| c 0 0 | | a11 a12 a13 |     | ca11 ca12 ca13 |
| 0 1 0 | | a21 a22 a23 |  =  | a21  a22  a23  |
| 0 0 1 | | a31 a32 a33 |     | a31  a32  a33  |

3) Add or subtract rows:

| 1  0 0 | | a11 a12 a13 |     | a11      a12      a13     |
| -1 1 0 | | a21 a22 a23 |  =  | a21−a11  a22−a12  a23−a13 |
| 0  0 1 | | a31 a32 a33 |     | a31      a32      a33     |
