
Numerical Methods in Finance

Stéphane CRÉPEY, Évry University

stephane.crepey@univ-evry.fr
CIMPA-Jordanie, September 2005
Figure 1: Local volatility on the DAX index
Contents
1 Random numbers
2 Pseudo-random generators
2.1 Definition
2.2 Properties required for a good pseudo-random number generator
2.3 Constructing pseudo-random number generators
3 Low-discrepancy sequences
3.1 Definition
3.2 General remarks on low-discrepancy sequences
3.3 Sobol sequences
4 Simulation of non-uniform random variables or vectors
4.1 Inverse method
4.2 Simulation of Gaussian standard variables
4.3 Simulation of Gaussian vectors
5 Principle of the Monte Carlo Simulation
5.1 Limit theorems
5.2 Estimation principle
5.3 Properties
6 Variance Reduction Techniques
6.1 Antithetic Variables
6.2 Control Variables
6.3 Importance Sampling
6.4 Efficiency of the Monte Carlo methods
7 Quasi Monte Carlo Simulation
7.1 General principle
7.2 Koksma-Hlawka inequality
7.3 Remarks
8 Computing the Greeks by (Quasi) Monte Carlo
8.1 Finite Differences
8.2 Derivation of the payoff
8.3 Payoff regularization
9 (Quasi) Monte Carlo algorithms for vanilla options
9.1 (Q)MC BS1D
9.2 (Q)MC BS2D
10 Simulation of processes
10.1 Brownian Motion
10.2 Black-Scholes model
10.3 General diffusions: Euler and Milshtein schemes
10.4 Heston model
10.5 Monte Carlo Simulation for Processes
11 (Quasi) Monte Carlo methods for Exotic Options
11.1 Lookback options
11.2 Andersen and Brotherton-Ratcliffe Algorithm for Lookback Options
11.3 Barrier options
11.4 Asian options
12 Trees for vanilla options
12.1 Cox-Ross-Rubinstein as an approximation to Black-Scholes
12.2 Algorithm (CRR)
12.3 Variants of the CRR tree
12.4 Trinomial trees
12.5 Algorithm (Kamrad-Ritchken)
12.6 Miscellaneous remarks
13 Trees for exotic options
13.1 Inaccuracy of the direct method for barrier options
13.2 The Ritchken algorithm for barrier options
13.3 Customization of trees
14 Finite Differences for European Vanilla Options
14.1 Localization and Discretization
14.2 The θ-scheme
14.3 Explicit Method
14.4 Implicit Methods
15 Finite Differences for American Vanilla Options
15.1 Variational inequality in finite dimension
15.2 Linear complementarity problem
15.3 Splitting methods
16 Finite Difference θ-scheme Algorithm for Vanilla Options
17 Finite Differences for 2D Vanilla Options
17.1 Numerical integration by an A.D.I. Method
17.2 American Options
17.3 Algorithm (A.D.I. BS2D)
18 Finite Differences for Exotic Options
18.1 Lookback Options
18.2 Barrier Options
18.3 Asian Options
19 Dynamic tests
19.1 Delta-hedging
19.2 Dynamic tests using Brownian Bridge in the Black-Scholes model
20 Conclusion
This lecture is largely based on the public releases of the option pricing software and documentation system PREMIA, developed since 1999 by the MATHFI project at INRIA (www.inria.fr) and CERMICS (cermics.enpc.fr).
In particular, all code samples, written in the C programming language, are extracted from PREMIA (except for minor modifications).
The aim of the Premia project is threefold: first, to assist the R&D professional teams in their day-to-day duty; second, to help the academics who wish to test a new algorithm or pricing method without starting from scratch; and finally, to provide graduate students in the field of numerical methods for finance with open-source examples of implementation of many of the algorithms found in the literature.
www-rocq.inria.fr/mathfi/Premia/index.html
1 Random numbers
Sample random variables or vectors in R^d, d ≥ 1.
First get sequences of uniform random variables or vectors u_n over [0, 1]^d. Then transform the u_n into x_n with specific distributions.
Sample i.i.d. random variables over [0, 1] and group them in buckets of size d.
Or use quasi-random generators (low-discrepancy sequences) over [0, 1]^d.
A bad generator can lead to false results for a considered simulation.
2 Pseudo-random generators
Simulation of pseudo-uniform variables over [0, 1] [31, 30, 29, 36, 34, 23]
2.1 Definition
A pseudo-random number generator is a structure (S, s_0, T, U, G) where S is a finite set of states, s_0 ∈ S is the initial state, the mapping T : S → S is the transition function, U is a finite set of output symbols, and G : S → U is the output function.
This definition was introduced by L'Ecuyer in [31] or [30] for instance.
Since S is finite, the sequence of states is ultimately periodic. The period is the smallest positive integer ρ such that for some integer τ ≥ 0 and for all n ≥ τ, s_{ρ+n} = s_n. The smallest τ with this property is called the transient. When τ = 0, the sequence is said to be purely periodic.
The resolution of a generator is the largest number x such that all output values are multiples of x. It determines the maximal number of different values we can obtain with the generator.
2.2 Properties required for a good pseudo-random number generator
- Large period length, at least 2^60
- Good equidistribution properties and statistical independence of successive pseudo-random numbers
The generator should pass statistical tests for uniformity and independence [23, 30]: general tests like the chi-square or Kolmogorov-Smirnov tests; specific tests like the equidistribution test, serial test, gap test, partition test, etc. Note that since generated sequences are deterministic, we can always find a test the sequence will fail.
- Little intrinsic structure
Successive values produced by some of the described generators have a lattice structure in any given dimension.
- Efficiency: a fast generation algorithm, requiring not too much memory space
Especially if we use many generators together or in parallel.
- Repeatability (fixing a given seed)
Very useful for practical applications. Otherwise we can use the current time (computer clock) to initialize the generators.
- Portability
It means that the generator will produce exactly the same sequence on different computers or with different compilers.
- Unpredictability
It means that we cannot predict the next value generated by the algorithm from the previous ones (though this is less important for finance than for other applications, cryptography in particular).
2.3 Constructing pseudo-random number generators
The simplest methods to construct random number generators are linear methods. Linear methods use a linear recurrence relation to compute the next value from the previous ones.
Linear Congruential Generators (LCG): The n-th random number is given by
u_n = U_n / m ∈ [0, 1]
where
U_n = (a U_{n-1} + c) mod m
and m > 0, a > 0 and c are fixed integers.
Such generators produce a lot of regularity in sequences and an unfavorable lattice structure.
Random number generator of L'Ecuyer with the Bays and Durham shuffling procedure: a combination of two short-period LCGs to obtain a longer-period generator.
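As an illustration, here is a minimal C sketch of an LCG, using the classical "minimal standard" constants a = 16807, m = 2^31 − 1, c = 0. This particular choice of constants, and the names lcg_state and lcg_next, are ours for illustration; this is not the generator used in PREMIA.

```c
#include <stdint.h>

/* Minimal-standard LCG: U_n = a * U_{n-1} mod m, u_n = U_n / m,
   with a = 16807, m = 2^31 - 1 and c = 0 (one classical choice). */
uint64_t lcg_state = 12345; /* seed: repeatability comes from fixing it */

double lcg_next(void)
{
    const uint64_t a = 16807, m = 2147483647ULL; /* m = 2^31 - 1 */
    lcg_state = (a * lcg_state) % m;
    return (double)lcg_state / (double)m; /* uniform draw in (0, 1) */
}
```

Note that the 64-bit intermediate product avoids the overflow that a naive 32-bit implementation of a * U_{n-1} would produce.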
3 Low-discrepancy sequences
Such sequences are neither random nor pseudo-random but deterministic, and successive values are not independent [37, 36, 35, 32].
However they satisfy good equidistribution properties on [0, 1]^d and we have
(1/N) Σ_{i=1}^N f(ξ_i) → ∫_{[0,1]^d} f(u) du
3.1 Definition
Notations:
We note [[0, x[[ = {y = (y_1, ..., y_d) ∈ [0, 1]^d, y ≤ x}; we consider that y ≤ x if and only if for all j = 1, ..., d: y_j ≤ x_j.
λ(x) denotes the volume of [[0, x[[: λ(x) = x_1 ··· x_d.
We note I^d = [0, 1]^d the closed d-dimensional unit cube.
For ξ = (ξ_n)_{n≥1} a sequence in I^d and x ∈ I^d, we note:
D_n(ξ, x) = (1/n) Σ_{i=1}^n 1_{[[0,x[[}(ξ_i) − λ(x)
Definitions:
A sequence (ξ_n)_{n≥1} is said to be equidistributed on [0, 1]^d if
∀ x ∈ [0, 1]^d, lim_{n→+∞} (1/n) Σ_{i=1}^n 1_{[[0,x[[}(ξ_i) = λ(x)
The value D*_n(ξ) defined as
D*_n(ξ) = sup_{x ∈ I^d} |D_n(ξ, x)|
is called the star discrepancy of the first n terms of the sequence.
The discrepancy is a very important notion for Quasi-Monte Carlo simulation. It measures how a given set of points is distributed in I^d = [0, 1]^d. It can be viewed as a quantitative measure of the deviation from the uniform distribution.
A sequence (ξ_n) is said to be a low-discrepancy sequence if its discrepancy satisfies D*_N = O((log N)^d / N), that is, if it is asymptotically better than the O((log log N / N)^{1/2}) of a random sequence obtained from the law of the iterated logarithm.
3.2 General remarks on low discrepancy sequences
Quasi-random numbers combine the advantage of a random sequence, that points can be added incrementally, with the advantage of a lattice.
For large dimension d, the theoretical bound (log N)^d / N may only be meaningful for extremely large values of N.
Low-discrepancy sequences are very useful in low dimension. In high dimension d, a lattice can only be refined by increasing the number of points by a factor 2^d.
Orthogonal projections: if a d-dimensional sequence is uniformly distributed in I^d, then the two-dimensional sequences formed by pairing coordinates should also be uniformly distributed. The appearance of non-uniformity in these projections is an indication of potential problems in using a quasi-random sequence for integration [32].
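As a one-dimensional illustration of a low-discrepancy construction, here is a sketch of the base-2 Van der Corput sequence, the building block of Halton-type sequences (the Sobol construction of the next subsection is more involved). The function name vdc is ours.

```c
/* Base-2 radical inverse: reflect the binary digits of n about the
   binary point. vdc(1) = 0.5, vdc(2) = 0.25, vdc(3) = 0.75, vdc(4) = 0.125. */
double vdc(unsigned int n)
{
    double x = 0.0, base = 0.5;
    while (n > 0) {
        x += (n & 1u) * base; /* least significant digit first */
        n >>= 1;
        base *= 0.5;
    }
    return x;
}
```

Successive terms fill [0, 1] by repeatedly splitting the largest remaining gaps, which is exactly the equidistribution behaviour the discrepancy D*_n quantifies.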
3.3 Sobol sequences
The Sobol sequence [42] is one of the most used sequences for Quasi-Monte Carlo simulation and one of the most successful for financial applications. Its construction is based on primitive polynomials over the field Z_2 and bitwise XOR (exclusive or) operations.
Figure 2: Orthogonal projection on the first two coordinates of the first 10000 points of the Sobol sequence in dimension 160
Figure 3: Orthogonal projection on the last two coordinates of the first 10000 points of the Sobol sequence in dimension 160
4 Simulation of non-uniform random variables or vectors
4.1 Inverse method
Simulation of a variable with known (or computable) inverse cumulative distribution function F^{-1}:
- First, simulate a variable u uniformly distributed on [0, 1], that is, a random number from one of the pseudo-random number generators or one of the quasi-random number generators
- Then take x = F^{-1}(u)
Example: To simulate an exponential random variable with parameter λ, take −(1/λ) ln(1 − U).
4.2 Simulation of Gaussian standard variables
One direct method to generate standard Gaussian variables: the Box-Muller transformation.
Box-Muller transformation: If (u, v) is uniformly distributed on [0, 1]^2 then x and y defined by
x = √(−2 log u) sin(2πv)
y = √(−2 log u) cos(2πv)
are distributed as independent standard Gaussians.
Proof: Let θ be uniform over [0, 2π] and ρ² be exponential with parameter 1/2, independent from θ. Then the pair (X, Y) = (ρ cos θ, ρ sin θ) is standard Gaussian. Indeed, for every measurable and bounded function f:
E f(X, Y) = ∫_0^{+∞} ∫_0^{2π} f(ρ cos θ, ρ sin θ) (1/2) e^{−ρ²/2} d(ρ²) dθ/(2π)
= ∫_0^{+∞} ∫_0^{2π} f(ρ cos θ, ρ sin θ) ρ e^{−ρ²/2} dρ dθ/(2π)
= ∫_R ∫_R f(x, y) e^{−(x²+y²)/2} dx dy/(2π)
Remarks: This method requires two independent random values to obtain two Gaussian variables.
- It must not be used when the random numbers u and v are generated from two successive values of a one-dimensional low-discrepancy sequence, because they are not independent in this case. To apply this algorithm with Quasi Monte Carlo simulation, you should generate u and v independently, that is, necessarily from two different one-dimensional sequences or from one two-dimensional sequence.
Simulating a Gaussian variable with the inverse method: The inverse cumulative distribution Φ^{-1} does not have an explicit form. Thus to use the inverse method to simulate a Gaussian variable we need an approximation of Φ^{-1}. A very good and quick approximation is given by Moro's algorithm.
4.3 Simulation of Gaussian vectors
Simulation of an N-dimensional Gaussian vector V with zero mean and covariance matrix Γ is done in the following way:
- First we compute the lower triangular matrix Σ obtained from the Cholesky decomposition of Γ, that is, such that Γ = Σ Σᵗ. We have:
Σ_ii = √(Γ_ii − (Σ_i1² + ... + Σ_{i,i−1}²))
Σ_ji = (Γ_ij − (Σ_i1 Σ_j1 + ... + Σ_{i,i−1} Σ_{j,i−1})) / Σ_ii   for j = i + 1, ..., N
- We generate N independent standard Gaussian variables g_i. We note G = (g_1, ..., g_N).
- We compute V = Σ G.
Then V is distributed as N(0, Γ).
Two-dimensional case:
Assume that the covariance matrix is expressed by:
Γ = ( σ_1²        ρ σ_1 σ_2 )
    ( ρ σ_1 σ_2   σ_2²      )
Then Σ is given by the following matrix:
( Σ_11  Σ_12 )   ( σ_1     0             )
( Σ_21  Σ_22 ) = ( ρ σ_2   σ_2 √(1 − ρ²) )
This will be used for the simulation of correlated two-dimensional Brownian motions, for instance to price options in a BS2D model
5 Principle of the Monte Carlo Simulation
A general method for evaluating an integral as an expected value, based on the Strong Law of Large Numbers (LLN) and the Central Limit Theorem.
It provides an unbiased estimator, and the error on the estimate is controlled within a confidence interval.
5.1 Limit theorems
Strong Law of Large Numbers:
(1/N) Σ_{i=1}^N φ(x_i) → E[φ(X)]   almost surely,
if the x_i are i.i.d. copies of X and E[|φ(X)|] < +∞.
Central Limit Theorem: The normalized estimation error converges in law to a standard Gaussian distribution:
(√N / σ) ((1/N) Σ_{i=1}^N φ(x_i) − E[φ(X)]) → N(0, 1)   in law,
where σ² = Var[φ(X)] and σ² < +∞.
5.2 Estimation principle
We want to estimate the following parameter I:
I = E[φ(X)]
where φ is some function from D ⊂ R^n to R and X = (X_1, ..., X_n) is an n-dimensional vector of random variables with law μ.
I can be expressed as an integral:
I = ∫_D φ(x) dμ(x)
An unbiased estimator of I for N trials with the standard Monte Carlo method is defined by:
Î_N = (1/N) Σ_{i=1}^N φ(x_i)
with the x_i i.i.d. with law μ.
The variance of the estimator is given as:
σ²_N = σ²/N
with unbiased estimator:
σ̂²_N = (1/(N − 1)) ((1/N) Σ_{i=1}^N φ²(x_i) − Î²_N)
The variance decreases to 0 when N → +∞. It means that the greater N is, the more accurate the estimator is. The speed of convergence of Î_N to I is σ/√N. It is independent of the dimension n.
A confidence interval IC = [A, B] for the threshold (confidence level) 1 − 2α is such that P(A < I < B) = 1 − 2α, and it is built as follows:
IC = [Î_N − z_α σ_N ; Î_N + z_α σ_N]
where z_α = Φ^{−1}(1 − α) and Φ^{−1} is the inverse cumulative distribution function of the standard Gaussian law.
For instance, if the threshold is chosen to be 95% then α = 2.5% and z_α ≈ 1.96.
5.3 Properties
We briefly summarize some advantages and disadvantages of the Standard Monte Carlo method.
Advantages:
- This method does not require regularity or differentiability properties of the function φ. Thus we can implement it very easily if we are able to generate the variable X according to μ.
- The estimator is unbiased.
- The error on the estimate can be controlled by the Central Limit Theorem, and we can build a confidence interval.
Disadvantages:
- We have to perform a lot of simulations to obtain an accurate estimator. Therefore the computing time can be very high.
6 Variance Reduction Techniques
We saw that a disadvantage of the standard Monte Carlo simulation is the computing time it requires. Thus we are now interested in accelerated Monte Carlo simulation.
To reduce computing time, we can use variance reduction techniques [38]. The principle is to rewrite the parameter I in order to express it as a function of a new random variable with smaller variance σ'². Then we need a smaller number of iterations to obtain the same accuracy on the estimate.
6.1 Antithetic Variables
The principle of antithetic variables is to introduce some correlation between the terms of the estimate.
When simulation is done by the inverse cumulative distribution function, we use uniform numbers u_i on [0, 1]. For this method, we use each u_i twice, as u_i and 1 − u_i. These two variables have the same law but are not independent. We note x_i and x'_i the variables generated from u_i and 1 − u_i respectively.
An unbiased estimator of I with N trials is defined by:
Î_N = (1/(2N)) Σ_{i=1}^N (φ(x_i) + φ(x'_i))
with the x_i i.i.d. with law μ.
The variance of the estimator is given as:
σ²_N = (1/(2N)) (Var[φ(X)] + Cov(φ(X), φ(X')))
The following theorem gives sufficient conditions to obtain a variance reduction with this method.
Theorem: If φ is a monotone, continuous, differentiable function then
(σ^{ant}_N)² ≤ (1/2) (σ^{std}_N)²
The factor 1/2 is due to the sample size of the antithetic method: the estimator in fact contains 2N terms.
6.2 Control Variables
The principle of this method is to introduce another model for which we have an explicit solution, and to estimate the difference between our first parameter I and the new one.
I = E[φ(X)]
  = E[φ(X) − ψ(X)] + E[ψ(X)]
where ψ is a function such that E[ψ(X)] = m is known.
An unbiased estimator of I with N trials is defined by:
Î_N = (1/N) Σ_{i=1}^N (φ(x_i) − ψ(x_i)) + m
with the x_i i.i.d. with law μ.
The variance of the estimator is given as:
σ²_N = (1/N) Var[φ(X) − ψ(X)]
     = (1/N) (Var[φ(X)] + Var[ψ(X)] − 2 Cov(φ(X), ψ(X)))
A variance reduction with regard to standard Monte Carlo simulation is not guaranteed by this method. To decrease the variance, the functions φ and ψ must have a large positive correlation. This implies a specific choice for the control variate ψ.
6.3 Importance Sampling
The basic idea of importance sampling consists in concentrating the distribution of the sample points in the most contributive parts of the space. For that, we introduce a new density ψ which changes the initial density μ of X.
I = ∫_D φ(x) (μ(x)/ψ(x)) ψ(x) dx = E_ψ[φ(X) μ(X)/ψ(X)]
We obtain the following estimator:
Î_N = (1/N) Σ_{i=1}^N φ(x_i) μ(x_i)/ψ(x_i)
with the x_i i.i.d. with law ψ.
The variance of the estimator is expressed as:
σ²_N = (1/N) Var_ψ[φ(X) μ(X)/ψ(X)]
ψ is named the importance function. It must verify that ψ(x) > 0 for all x ∈ E such that φ(x)μ(x) > 0.
A variance reduction with regard to standard Monte Carlo simulation is not guaranteed by this method. It depends on the choice of ψ, which is not an easy step. However, the minimum of the variance is reached for the following importance density ψ*, called the optimal density:
ψ* = |φ(x)| μ(x) / ∫ |φ(y)| μ(y) dy
Usually this density is unknown, and it contains the term I as soon as φ > 0.
6.4 Efficiency of the Monte Carlo methods
We introduce a criterion to compare the efficiency of the various simulation methods, standard simulation or with variance reduction techniques. This criterion takes into account the computing time required by the simulation for each method. It is independent of the sample size.
The efficiency of method j with regard to method i is defined by:
ε(i, j) = lim_{N_i, N_j → +∞} (σ_{N_i}(i) / σ_{N_j}(j)) √(t_{N_i}(i) / t_{N_j}(j))
Method j is considered to be more efficient than method i if ε(i, j) ≥ 1.
If we assume that the computing time is proportional to the sample size, that is t_{N_i}(i) = k_i N_i, then:
ε(i, j) = (σ_i / σ_j) √(k_i / k_j)
where k is a factor which expresses the complexity of the algorithm for the considered method. For instance, if ε(i, j) = 3, method i requires 9 times more time than method j to obtain the same accuracy. In other words, with the same computing time, the standard error of method j is 3 times smaller than that of method i.
7 Quasi Monte Carlo Simulation
7.1 General principle
Estimate
E[f(U)] = ∫_{[0,1]^d} f(u) du
by
(1/N) Σ_{i=1}^N f(ξ_i)
where ξ is a d-dimensional low-discrepancy sequence.
Special case: f = φ ∘ F_X^{-1}. Then φ(X) = f(U) in law, so E[f(U)] = E[φ(X)].
7.2 Koksma-Hlawka inequality
Definitions:
The variation of a function f on I^d = [0, 1]^d in the sense of Vitali is defined by:
V_d(f) = sup_{p ∈ P(I^d)} Σ_{A ∈ p} |Δ(f, A)|
where P(I^d) is the set of all partitions of I^d into subintervals, p ∈ P(I^d) denotes a partition and A ∈ p a subinterval. Δ(f, A) is the alternating sum of the values of f at the vertices of A.
V_d(f) = ∫_{[0,1]^d} |∂^d f / (∂u_1 ... ∂u_d)| du_1 ... du_d
if the partial derivatives exist and are continuous on I^d.
The variation of f on I^d in the sense of Hardy and Krause is defined by:
V(f) = Σ_{r=1}^d Σ_{1 ≤ i_1 < ... < i_r ≤ d} V_r(f; i_1, ..., i_r)
where V_r denotes the variation in the sense of Vitali on the restriction of f to the r-dimensional face
{(x_1, ..., x_d) ∈ I^d : x_k = 1 if k ∉ {i_1, ..., i_r}}
Theorem: If f has bounded variation V(f) on I^d in the sense of Hardy-Krause, then for any ξ_1, ..., ξ_n ∈ [0, 1]^d and all n ≥ 1, we have:
|(1/n) Σ_{i=1}^n f(ξ_i) − ∫_{[0,1]^d} f(x) dx| ≤ V(f) D*_n(ξ)
7.3 Remarks
Through the Koksma-Hlawka inequality, we understand the need for sequences with discrepancy D*_N as small as possible.
The Koksma-Hlawka inequality gives an a priori deterministic bound for the error in the approximation of ∫_{[0,1]^d} f(x) dx by the sum (1/n) Σ_{i=1}^n f(ξ_i). This error is expressed in terms of the discrepancy of the sequence and the variation of the function f. Nevertheless it is often difficult to calculate or even to estimate the variation of f. Moreover, since for large dimension d the asymptotic bound (log N)^d / N of low-discrepancy sequences may only be meaningful for extremely large values of N, and because (log N)^d / N increases exponentially with d, the bound in the Koksma-Hlawka inequality gives no relevant information until a very large number of points is used.
In contrast to Monte Carlo simulation, Quasi-Monte Carlo does not provide a confidence interval for the estimator. We cannot compute the empirical variance of the sample because successive terms are not independent. This is due to the construction of the low-discrepancy sequence.
Another difference in comparison with Monte Carlo is that the convergence rate for QMC simulation depends on the dimension d of the considered model, through the discrepancy.
8 Computing the Greeks by (Quasi) Monte Carlo
In risk-neutral diffusion models, the option price is given by:
P(x) = E_RN[φ(S^x_T)]
where S^x_t is the unique solution of
dS_t = b(S_t) dt + σ(S_t) dW_t,   S^x_0 = x.
Option theory states that the option seller must hold at every date t
Δ = (∂/∂x) E[φ(S^x_T)]
units of stock in his portfolio in order to hedge the option. The goal of this section is to propose various (Q)MC methods to compute Δ [13, 6, 14].
8.1 Finite Differences
For fixed h > 0, we approach Δ by decentered finite differences
(1/h) (E[φ(S^{x+h}_T)] − E[φ(S^x_T)])
or, preferably, by centered finite differences
(1/(2h)) (E[φ(S^{x+h}_T)] − E[φ(S^{x−h}_T)])
The two terms of these differences can be estimated by (Q)MC methods. In so doing, it is better to use common random numbers to estimate the two terms. For instance, in the special case of the Black-Scholes model S^x_t = x exp((r − σ²/2) t + σ W_t), Δ can be estimated by
(1/(2hM)) Σ_{i=1}^M [φ((x + h) e^{(r − σ²/2)T + σ√T g_i}) − φ((x − h) e^{(r − σ²/2)T + σ√T g_i})]
where the g_i are standard Gaussian (Q)MC draws.
8.2 Derivation of the payoff
In cases where φ is regular and where we know how to differentiate S^x_T with respect to x, Δ can be formally computed as
Δ = (∂/∂x) E[φ(S^x_T)] = E[(∂/∂x) φ(S^x_T)] = E[φ'(S^x_T) ∂S^x_T/∂x]
For instance, in the Black-Scholes model, ∂S^x_T/∂x = S^x_T / x, so
Δ = (1/x) E[φ'(S^x_T) S^x_T]
provided the derivative φ' of φ exists, for instance in the sense of distributions.
8.3 Payoff regularization
In the Black-Scholes model, we have
E[φ(S^x_T)] = ∫_R φ(y) p_T(x, y) dy
with
p_T(x, y) = (1/(yσ√(2πT))) exp(−(1/(2σ²T)) (log(y/x) − (r − σ²/2)T)²)
So, formally,
Δ = ∫_R φ(y) (∂/∂x) p_T(x, y) dy
  = ∫_R φ(y) ((∂/∂x) p_T(x, y) / p_T(x, y)) p_T(x, y) dy
  = E[φ(S^x_T) (∂/∂x) log(p_T(x, S^x_T))].
Straightforward computation gives
(∂/∂x) log(p_T(x, S^x_T)) = W_T / (xσT)
Thus
Δ = E[φ(S^x_T) W_T / (xσT)]
which can be easily computed by (Q)MC.
The following result extends this methodology to more general diffusions.
Theorem: Assuming b and σ of class C¹_b with σ positive, let S be the solution of
dS_t = b(S_t) dt + σ(S_t) dW_t,   S^x_0 = x.
Defining
Y^x_t = ∂S^x_t / ∂x
then Y is the unique solution of
dY_t / Y_t = b'(S_t) dt + σ'(S_t) dW_t,   Y^x_0 = 1
Moreover, for every function φ of class C⁰_b,
Δ = E[φ(S^x_T) (1/T) ∫_0^T (Y_t / σ(S_t)) dW_t].
9 (Quasi) Monte Carlo algorithms for vanilla options
9.1 (Q)MC BS1D
Description: Computation, for a Call, Put, CallSpread or Digit European option, of its Price and its Delta with the standard Monte Carlo simulation. In the case of Monte Carlo simulation, the program provides estimations for the price and the delta with a confidence interval. In the case of Quasi-Monte Carlo simulation, the program just provides estimations for the price and the delta. For a Call, the implementation is based on the Call-Put parity relationship.
Input parameters:
- StepNumber N
- Generator_Type
- Increment inc
- Confidence Value
Output parameters:
- Price P
- Error Price σ_P
- Delta Δ
- Error Delta σ_Δ
- Price Confidence Interval: IC_P = [Inf Price, Sup Price]
- Delta Confidence Interval: IC_Δ = [Inf Delta, Sup Delta]
The underlying asset price evolves according to the Black and Scholes model, that is:
dS_u = S_u ((r − d) du + σ dB_u),   S_{T−t} = s
then
S_T = s exp((r − d − σ²/2) t) exp(σ B_t)
where S_T denotes the spot at maturity T, s is the initial spot, and t is the time to maturity.
The price of an option at T − t is:
P = E[exp(−rt) f(K, S_T, R)]
where f denotes the payoff of the option, K the strike and R the rebate (for the Digit option only).
The Delta is given by:
Δ = (∂/∂s) E[exp(−rt) f(K, S_T, R)]
The estimators are expressed as:
P̂ = (1/N) exp(−rt) Σ_{i=1}^N P(i),   where P(i) = f(K, S_T(i), R)
Δ̂ = (1/N) exp(−rt) Σ_{i=1}^N (∂/∂s) P(i) = (1/N) exp(−rt) Σ_{i=1}^N Δ(i)
The values of P(i) and Δ(i) are detailed for each option.
Put: The payoff is (K − S_T)^+. We have:
P(i) = (K − S_T(i))^+
Δ(i) = −∂S_T(i)/∂s = −S_T(i)/s if P(i) > 0, and 0 otherwise
Call: The payoff is (S_T − K)^+.
The Call-Put parity relations for price and delta are expressed by:
C = P + s exp(−dt) − K exp(−rt)
Δ_C = Δ_P + exp(−dt)
where C and P respectively denote the Call and the Put prices. They will be used for the Call simulation (in order to limit the variance).
CallSpread: The payoff is (S_T − K_1)^+ − (S_T − K_2)^+.
We have:
P(i) = (S_T(i) − K_1)^+ − (S_T(i) − K_2)^+
Δ(i) = S_T(i)/s if S_T(i) ≥ K_1 and S_T(i) ≤ K_2,
       −S_T(i)/s if S_T(i) ≥ K_2 and S_T(i) ≤ K_1,
       0 otherwise
Digit: The payoff is R 1_{S_T − K ≥ 0}.
We have:
P(i) = R 1_{S_T(i) − K ≥ 0}
To obtain an estimation of the Delta in the case of a Digit option, we use the increment value inc at each iteration i, as:
Δ(i) = (1/(2 s inc)) [f(K, U_T(i) s(1 + inc), R) − f(K, U_T(i) s(1 − inc), R)]
where U_T(i) = S_T(i)/s.
Code Sample:
static int MCStandard(...)
{
  ...
  /* Value to construct the confidence interval */
  alpha= (1.- confidence)/2.;
  z_alpha= Inverse_erf(1.- alpha);

  /* Initialisation */
  flag= 0;
  s_plus= s*(1.+inc);
  s_minus= s*(1.-inc);
  mean_price= 0.0;
  mean_delta= 0.0;
  var_price= 0.0;
  var_delta= 0.0;

  /* CallSpread */
  K1= p->Par[0].Val.V_PDOUBLE;
  K2= p->Par[1].Val.V_PDOUBLE;

  /* Median forward stock and delta values */
  sigma_sqrt= sigma*sqrt(t);
  forward= exp(((r-divid)-SQR(sigma)/2.0)*t);
  forward_stock= s*forward;
  forward_delta= exp(-SQR(sigma)/2.0*t);

  /* Change a Call into a Put to apply the Call-Put parity */
  if((p->Compute) == &Call)
    {
      (p->Compute) = &Put;
      flag= 1;
    }

  /* MC sampling */
  init_mc= InitGenerator(generator, simulation_dim, N);

  /* Test after initialization of the generator */
  if(init_mc == OK)
    {
      mc_or_qmc= Rand_Or_Quasi(generator);

      /* Begin N iterations */
      for(i=1; i<=N; i++)
        {
          /* Simulation of a gaussian variable according to the generator
             type, that is Monte Carlo or Quasi Monte Carlo */
          g= Gaussians[mc_or_qmc](1, CREATE, 0, generator);
          exp_sigmaxwt= exp(sigma_sqrt*g);
          S_T= forward_stock*exp_sigmaxwt;
          U_T= forward*exp_sigmaxwt;

          /* Price */
          price_sample= (p->Compute)(p->Par, S_T);

          /* Delta */
          /* Digit */
          if ((p->Compute) == &Digit)
            {
              price_sample_plus= (p->Compute)(p->Par, U_T*s_plus);
              price_sample_minus= (p->Compute)(p->Par, U_T*s_minus);
              delta_sample= (price_sample_plus-price_sample_minus)/(2.*s*inc);
            }
          /* CallSpread */
          else if ((p->Compute) == &CallSpread)
            {
              delta_sample= 0;
              if(S_T > K1)
                delta_sample += U_T;
              if(S_T > K2)
                delta_sample -= U_T;
            }
          /* Call-Put */
          else if ((p->Compute) == &Put)
            {
              if (price_sample > 0.)
                delta_sample= -U_T;
              else
                delta_sample= 0.0;
            }

          /* Sum */
          mean_price+= price_sample;
          mean_delta+= delta_sample;

          /* Sum of squares */
          var_price+= SQR(price_sample);
          var_delta+= SQR(delta_sample);
        }
      /* End N iterations */

      /* Price */
      *ptprice= exp(-r*t)*(mean_price/(double)N);
      *pterror_price= sqrt(exp(-2.0*r*t)*var_price/(double)N
                           - SQR(*ptprice))/sqrt(N-1);

      /* Delta */
      *ptdelta= exp(-r*t)*mean_delta/(double)N;
      *pterror_delta= sqrt(exp(-2.0*r*t)*(var_delta/(double)N
                           - SQR(*ptdelta)))/sqrt((double)N-1);

      /* Call Price and Delta with the Call-Put Parity */
      if(flag == 1)
        {
          *ptprice+= s*exp(-divid*t) - p->Par[0].Val.V_DOUBLE*exp(-r*t);
          *ptdelta+= exp(-divid*t);
          (p->Compute)= &Call;
          flag= 0;
        }

      /* Price Confidence Interval */
      *inf_price= *ptprice - z_alpha*(*pterror_price);
      *sup_price= *ptprice + z_alpha*(*pterror_price);

      /* Delta Confidence Interval */
      *inf_delta= *ptdelta - z_alpha*(*pterror_delta);
      *sup_delta= *ptdelta + z_alpha*(*pterror_delta);
    }
  return init_mc;
}
Further Comments:
/* Value to construct the confidence interval */
For example, if the confidence value is equal to 95% then the value $z_\alpha$
used to construct the confidence interval is 1.96.
/*Initialization*/
/*Median forward stock and delta values*/
Computation of intermediate values we use several times in the program.
/* Change a Call into a Put to apply the Call-Put parity */
We modify the parameters of the option; they will be reinitialized at the
end of the simulation program.
/*MC sampling*/
/* Begin N iterations */
/*Price*/
At the iteration $i$, we obtain
$$S_T(i) = s \exp\Big(\big(r - d - \tfrac{\sigma^2}{2}\big)t\Big)\exp\big(\sigma B_t(i)\big), \qquad P(i) = \mathrm{Payoff}\big(S_T(i), K\big)$$
from a simulation of $B_t(i)$ with the selected generator as $\sqrt{t}\,g_i$, where $g_i$ is
a standard gaussian variable.
/*Delta*/
Calculation of the Delta $\Delta_i$ with the formula previously detailed for each option.
/*Digit*/
/*CallSpread*/
/*Call-Put*/
/*Sum*/
Computation of the sums $\sum_i P(i)$ and $\sum_i \Delta_i$ for the mean price and the mean
delta.
/*Sum of squares*/
Computation of the sums $\sum_i P(i)^2$ and $\sum_i \Delta_i^2$ necessary for the variance
price and the variance delta estimations.
/* End N iterations */
/*Price*/
The price estimator is:
$$\bar{P} = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N} P(i)$$
The error estimator is $\sigma_P$ with:
$$\sigma_P^2 = \frac{1}{N-1}\left(\frac{1}{N}\exp(-2rt)\sum_{i=1}^{N} P(i)^2 - \bar{P}^2\right)$$
/*Delta*/
The delta estimator is:
$$\bar{\Delta} = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N} \Delta(i)$$
The error estimator is $\sigma_\Delta$ with:
$$\sigma_\Delta^2 = \frac{1}{N-1}\left(\frac{1}{N}\exp(-2rt)\sum_{i=1}^{N} \Delta(i)^2 - \bar{\Delta}^2\right)$$
/* Call Price and Delta with the Call-Put Parity */
We now compute the price and the delta for a call.
Parameters of the option are reinitialized.
/* Price Confidence Interval */
The confidence interval is given as:
$$IC_P = \big[\bar{P} - z_\alpha\,\sigma_P \,;\ \bar{P} + z_\alpha\,\sigma_P\big]$$
with $z_\alpha$ computed from the confidence value.
/* Delta Confidence Interval */
The confidence interval is given as:
$$IC_\Delta = \big[\bar{\Delta} - z_\alpha\,\sigma_\Delta \,;\ \bar{\Delta} + z_\alpha\,\sigma_\Delta\big]$$
with $z_\alpha$ computed from the confidence value.
9.2 (Q)MC BS2D
Description: Computation, for a Call on Maximum, Put on Minimum,
Exchange or BestOf European Option, of its Price and its Delta with the
Standard Monte Carlo or Quasi-Monte Carlo simulation. In the case of
Monte Carlo simulation, this method also provides an estimation for the
integration error and a confidence interval.
Input parameters:
StepNumber N
Generator_Type
Increment inc
Confidence Value
Output parameters:
Price P
Error Price $\sigma_P$
Deltas $\Delta_1$, $\Delta_2$
Errors delta $\sigma_{\Delta_1}$, $\sigma_{\Delta_2}$
Price Confidence Interval: $IC_P$ = [Inf Price, Sup Price]
Delta Confidence Intervals: $IC_{\Delta_j}$ = [Inf Delta, Sup Delta]
The underlying asset prices evolve according to the two-dimensional
Black–Scholes risk-neutral dynamics, that is:
$$\begin{cases} dS^1_u = S^1_u\big((r-d_1)\,du + \sigma_1\,dB^1_u\big), & S^1_{T-t} = s_1 \\ dS^2_u = S^2_u\big((r-d_2)\,du + \sigma_2\,dB^2_u\big), & S^2_{T-t} = s_2 \end{cases}$$
where $S^j_T$ denotes the spot at maturity $T$, $s_j$ is the initial spot and
$(B^1_u, u \ge 0)$ and $(B^2_u, u \ge 0)$ denote two real-valued Brownian motions
with instantaneous correlation $\rho$.
Then we have:
$$\begin{cases} S^1_T = s_1 \exp\big((r - d_1 - \tfrac{\sigma_1^2}{2})t\big)\exp\big(\sigma_{11} B^1_t\big) \\ S^2_T = s_2 \exp\big((r - d_2 - \tfrac{\sigma_2^2}{2})t\big)\exp\big(\sigma_{21} B^1_t + \sigma_{22} B^2_t\big) \end{cases}$$
where, in this representation, $B^1$ and $B^2$ denote independent Brownian
motions and the parameters $\sigma_{11}, \sigma_{12}, \sigma_{21}, \sigma_{22}$ are given in the following matrix $\Sigma$:
$$\Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix} = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sigma_2\sqrt{1-\rho^2} \end{pmatrix}$$
such that $\Sigma\,{}^t\Sigma = \Gamma$, where $\Gamma$ is the covariance matrix expressed by:
$$\Gamma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$
The price of an option is
$$P = E\big[\exp(-rt)\,f(K, S^1_T, S^2_T)\big]$$
where $f$ denotes the payoff of the option, $K$ denotes the strike and $t$ the time to
maturity.
The Deltas are given by:
$$\Delta_1 = \frac{\partial}{\partial s_1} E\big[\exp(-rt)\,f(K, S^1_T, S^2_T)\big], \qquad \Delta_2 = \frac{\partial}{\partial s_2} E\big[\exp(-rt)\,f(K, S^1_T, S^2_T)\big]$$
The estimators are expressed as:
$$\bar{P} = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N} P(i), \qquad \bar{\Delta}_j = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N}\frac{\partial}{\partial s_j}P(i) = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N}\Delta_j(i)$$
The values for $P(i)$ and $\Delta_j(i)$ are detailed for each option.
Put on the Minimum: The payoff is $\big(K - \min(S^1_T, S^2_T)\big)^+$.
$$P(i) = \big(K - \min(S^1_T(i), S^2_T(i))\big)^+$$
If $P(i) \ne 0$ then:
$$\Delta_1(i) = \begin{cases} -\exp\big((r - d_1 - \tfrac{\sigma_1^2}{2})t + \sigma_{11}B^1_t\big) & \text{if } S^1_T(i) \le S^2_T(i) \\ 0 & \text{otherwise} \end{cases}$$
$$\Delta_2(i) = \begin{cases} -\exp\big((r - d_2 - \tfrac{\sigma_2^2}{2})t + \sigma_{21}B^1_t + \sigma_{22}B^2_t\big) & \text{if } S^1_T(i) \ge S^2_T(i) \\ 0 & \text{otherwise} \end{cases}$$
Call on the Maximum: The payoff is $\big(\max(S^1_T, S^2_T) - K\big)^+$.
$$P(i) = \big(\max(S^1_T(i), S^2_T(i)) - K\big)^+$$
If $P(i) \ne 0$ then:
$$\Delta_1(i) = \begin{cases} \exp\big((r - d_1 - \tfrac{\sigma_1^2}{2})t + \sigma_{11}B^1_t\big) & \text{if } S^1_T(i) \ge S^2_T(i) \\ 0 & \text{otherwise} \end{cases}$$
$$\Delta_2(i) = \begin{cases} \exp\big((r - d_2 - \tfrac{\sigma_2^2}{2})t + \sigma_{21}B^1_t + \sigma_{22}B^2_t\big) & \text{if } S^1_T(i) \le S^2_T(i) \\ 0 & \text{otherwise} \end{cases}$$
Exchange Option: The payoff is $\big(S^1_T - \mathrm{ratio}\cdot S^2_T\big)^+$.
$$P(i) = \big(S^1_T(i) - \mathrm{ratio}\cdot S^2_T(i)\big)^+$$
$$\Delta_1(i) = \begin{cases} \exp\big((r - d_1 - \tfrac{\sigma_1^2}{2})t + \sigma_{11}B^1_t\big) & \text{if } P(i) \ne 0 \\ 0 & \text{otherwise} \end{cases}$$
$$\Delta_2(i) = \begin{cases} -\mathrm{ratio}\cdot\exp\big((r - d_2 - \tfrac{\sigma_2^2}{2})t + \sigma_{21}B^1_t + \sigma_{22}B^2_t\big) & \text{if } P(i) \ne 0 \\ 0 & \text{otherwise} \end{cases}$$
BestOf Option: The payoff is $\big[\max(S^1_T - K_1, S^2_T - K_2)\big]^+$.
$$P(i) = \big(\max(S^1_T(i) - K_1, S^2_T(i) - K_2)\big)^+$$
If $P(i) \ne 0$ then:
$$\Delta_1(i) = \begin{cases} \exp\big((r - d_1 - \tfrac{\sigma_1^2}{2})t + \sigma_{11}B^1_t\big) & \text{if } S^1_T(i) - K_1 \ge S^2_T(i) - K_2 \\ 0 & \text{otherwise} \end{cases}$$
$$\Delta_2(i) = \begin{cases} \exp\big((r - d_2 - \tfrac{\sigma_2^2}{2})t + \sigma_{21}B^1_t + \sigma_{22}B^2_t\big) & \text{if } S^1_T(i) - K_1 \le S^2_T(i) - K_2 \\ 0 & \text{otherwise} \end{cases}$$
Code Sample:
static int Standard2DMC(...)
{
...
/* Value to construct the confidence interval */
alpha= (1.- confidence)/2.;
z_alpha= Inverse_erf(1.- alpha);
/* Initialisation */
mean_price= 0.0;
mean_delta1= 0.0;
mean_delta2= 0.0;
var_price= 0.0;
var_delta1= 0.0;
var_delta2= 0.0;
/* Covariance Matrix */
/* Coefficients of the matrix Sigma such that
   Sigma(tSigma)=Gamma */
sigma11= sigma1;
sigma12= 0.0;
sigma21= rho*sigma2;
sigma22= sigma2*sqrt(1.0-SQR(rho));
sigma11_sqrt= sigma11*sqrt(t);
sigma12_sqrt= sigma12*sqrt(t);
sigma21_sqrt= sigma21*sqrt(t);
sigma22_sqrt= sigma22*sqrt(t);
/* Median forward stock and delta values */
forward1= exp(((r-divid1)-SQR(sigma1)/2.0)*t);
forward_stock1= s1*forward1;
forward_delta1= exp(-SQR(sigma1)/2.0*t);
forward2= exp(((r-divid2)-SQR(sigma2)/2.0)*t);
forward_stock2= s2*forward2;
forward_delta2= exp(-SQR(sigma2)/2.0*t);
if((p->Compute) == &BestOf)
{
K1= (p->Par[0].Val.V_PDOUBLE);
K2= (p->Par[1].Val.V_PDOUBLE);
}
if((p->Compute) == &Exchange)
ratio= (p->Par[0].Val.V_PDOUBLE);
/* MC sampling */
init_mc= InitGenerator(generator, simulation_dim, N);
/* Test after initialization for the generator */
if(init_mc == OK)
{
mc_or_qmc= Rand_Or_Quasi(generator);
/* Begin N iterations */
for(i=1; i<=N; i++)
{
/* Gaussian Random Variables */
g_1= Gaussians[mc_or_qmc](simulation_dim, CREATE, 0, generator);
g_2= Gaussians[mc_or_qmc](simulation_dim, RETRIEVE, 1, generator);
exp_sigmaxwt1= exp(sigma11_sqrt*g_1);
S_T1= forward_stock1*exp_sigmaxwt1;
U_T1= forward1*exp_sigmaxwt1;
exp_sigmaxwt2= exp(sigma21_sqrt*g_1 + sigma22_sqrt*g_2);
S_T2= forward_stock2*exp_sigmaxwt2;
U_T2= forward2*exp_sigmaxwt2;
/* Price */
price_sample= (p->Compute)(p->Par, S_T1, S_T2);
/* Delta */
if(price_sample > 0)
{
/* Call on the Maximum */
if(p->Compute == &CallMax)
{
if(S_T1 > S_T2)
{
delta2_sample= 0.;
delta1_sample= U_T1;
}
else
{
delta1_sample= 0.0;
delta2_sample= U_T2;
}
}
/* Put on the Minimum */
if((p->Compute) == &PutMin)
{
if(S_T1 < S_T2)
{
delta2_sample= 0.;
delta1_sample= -U_T1;
}
else
{
delta1_sample= 0.0;
delta2_sample= -U_T2;
}
}
/* Best of */
if((p->Compute) == &BestOf)
{
if(S_T1 - K1 > S_T2 - K2)
{
delta2_sample= 0.;
delta1_sample= U_T1;
}
else
{
delta1_sample= 0.0;
delta2_sample= U_T2;
}
}
/* Exchange */
if((p->Compute) == &Exchange)
{
delta1_sample= U_T1;
delta2_sample= -ratio*U_T2;
}
}
else
{
delta1_sample= 0.0;
delta2_sample= 0.0;
}
/* Sum */
mean_price+= price_sample;
mean_delta1+= delta1_sample;
mean_delta2+= delta2_sample;
/* Sum of squares */
var_price+= SQR(price_sample);
var_delta1+= SQR(delta1_sample);
var_delta2+= SQR(delta2_sample);
}
/* End N iterations */
/* Price estimator */
*ptprice= exp(-r*t)*(mean_price/(double)N);
*pterror_price= sqrt(exp(-2.0*r*t)*var_price/(double)N - SQR(*ptprice))/sqrt((double)N-1);
*inf_price= *ptprice - z_alpha*(*pterror_price);
*sup_price= *ptprice + z_alpha*(*pterror_price);
/* Delta1 estimator */
*ptdelta1= exp(-r*t)*mean_delta1/(double)N;
*pterror_delta1= sqrt(exp(-2.0*r*t)*var_delta1/(double)N - SQR(*ptdelta1))/sqrt((double)N-1);
*inf_delta1= *ptdelta1 - z_alpha*(*pterror_delta1);
*sup_delta1= *ptdelta1 + z_alpha*(*pterror_delta1);
/* Delta2 estimator */
*ptdelta2= exp(-r*t)*mean_delta2/(double)N;
*pterror_delta2= sqrt(exp(-2.0*r*t)*var_delta2/(double)N - SQR(*ptdelta2))/sqrt((double)N-1);
*inf_delta2= *ptdelta2 - z_alpha*(*pterror_delta2);
*sup_delta2= *ptdelta2 + z_alpha*(*pterror_delta2);
}
return init_mc;
}
Further Comments:
/* Value to construct the confidence interval */
For example, if the confidence value is equal to 95% then the value $z_\alpha$
used to construct the confidence interval is 1.96.
/*Initialization*/
/* Covariance Matrix */
/* Coefficients of the matrix $\Sigma$ such that $\Sigma\,{}^t\Sigma = \Gamma$ */
/*Median forward stock and delta values*/
Computation of intermediate values we use several times in the program.
/*MC sampling*/
/* Begin N iterations */
/*Gaussian Random Variables*/
Generation of 2 gaussian variables $g_1$ and $g_2$ used for the Brownian
motions $\sqrt{t}\,g_j$.
/*Price*/
At the iteration $i$, we obtain
$$P(i) = \mathrm{payoff}\big(K, S^1_T(i), S^2_T(i)\big)$$
/*Delta*/
Calculation of the Deltas $\Delta_1(i)$ and $\Delta_2(i)$ for the different cases with the formulas
given previously.
/*Call on the Maximum*/
/*Put on the Minimum*/
/*Best of*/
/*Exchange*/
/*Sum*/
Computation of the sums $\sum_i P(i)$ and $\sum_i \Delta_j(i)$ for the mean price and the
mean deltas.
/*Sum of squares*/
Computation of the sums $\sum_i P(i)^2$ and $\sum_i \big(\Delta_j(i)\big)^2$ necessary for the
variance price and the variance delta estimations.
/* End N iterations */
/*Price*/
The price estimator is:
$$\bar{P} = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N} P(i)$$
The error estimator is $\sigma_P$ with:
$$\sigma_P^2 = \frac{1}{N-1}\left(\frac{1}{N}\exp(-2rt)\sum_{i=1}^{N} P(i)^2 - \bar{P}^2\right)$$
The confidence interval is
$$IC_P = \big[\bar{P} - z_\alpha\,\sigma_P \,;\ \bar{P} + z_\alpha\,\sigma_P\big]$$
with $z_\alpha$ computed from the confidence value.
/*Delta*/
/* Delta1 estimator */
The delta estimator is:
$$\bar{\Delta}_1 = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N} \Delta_1(i)$$
The error estimator is $\sigma_{\Delta_1}$ with:
$$\sigma_{\Delta_1}^2 = \frac{1}{N-1}\left(\frac{1}{N}\exp(-2rt)\sum_{i=1}^{N} \Delta_1(i)^2 - \bar{\Delta}_1^2\right)$$
The confidence interval is given as:
$$IC_{\Delta_1} = \big[\bar{\Delta}_1 - z_\alpha\,\sigma_{\Delta_1} \,;\ \bar{\Delta}_1 + z_\alpha\,\sigma_{\Delta_1}\big]$$
with $z_\alpha$ computed from the confidence value.
/* Delta2 estimator */
The delta estimator is:
$$\bar{\Delta}_2 = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N} \Delta_2(i)$$
The error estimator is $\sigma_{\Delta_2}$ with:
$$\sigma_{\Delta_2}^2 = \frac{1}{N-1}\left(\frac{1}{N}\exp(-2rt)\sum_{i=1}^{N} \Delta_2(i)^2 - \bar{\Delta}_2^2\right)$$
The confidence interval is given as:
$$IC_{\Delta_2} = \big[\bar{\Delta}_2 - z_\alpha\,\sigma_{\Delta_2} \,;\ \bar{\Delta}_2 + z_\alpha\,\sigma_{\Delta_2}\big]$$
with $z_\alpha$ computed from the confidence value.
10 Simulation of processes
10.1 Brownian Motion
Definitions: A Brownian motion is a continuous adapted process
$B = \{B_t, \mathcal{F}_t;\ 0 \le t < \infty\}$, defined on some probability space $(\Omega, \mathcal{F}, P)$,
with the properties that $B_0 = 0$ a.s. and, for $0 \le s < t$, the increment
$B_t - B_s$ is independent of $\mathcal{F}_s$ and is normally distributed with mean zero
and variance $t - s$.
Simulation of $B_t$: Simulation of $B_t$ is an easy step because we have that
$\mathcal{L}(B_t) = \mathcal{N}(0, t)$:
- first generate a gaussian standard variable $g$
- and then compute $B_t$ as $\sqrt{t}\,g$.
Simulation (discretization) of a Brownian trajectory, $0 \le t \le T$: We
now detail two approaches for simulating a Brownian path: the Forward
one and the Backward one. Typically, for path-dependent options we have
to simulate $B$ over $\Pi = \{t_k;\ k = 0, \ldots, M\}$, $t_0 = 0$, $t_M = T$.
Forward Simulation of $B_t$ over $\Pi$ is given by:
$$B(0) = 0, \qquad B(t_{k+1}) = B(t_k) + \sqrt{t_{k+1} - t_k}\; g_k$$
where $(g_1, \ldots, g_M)$ are independent gaussian standard variables. If we use
a discretization with evenly spaced intervals of size $h = \frac{T}{M}$, we have:
$$B(0) = 0, \qquad B(t_{k+1}) = B(t_k) + \sqrt{h}\; g_k$$
Backward simulation with Brownian Bridge:
This other method is based on the following property of the Brownian
bridge:
$$\mathcal{L}\big(B_u,\ s < u < t \mid B_s = x, B_t = y\big) = \mathcal{N}\Big(\frac{t-u}{t-s}\,x + \frac{u-s}{t-s}\,y,\ \frac{(t-u)(u-s)}{t-s}\Big)$$
and in particular
$$\mathcal{L}\Big(B_{\frac{t+s}{2}} \mid B_s = x, B_t = y\Big) = \mathcal{N}\Big(\frac{x+y}{2},\ \frac{t-s}{4}\Big)$$
This scheme consists in simulating $B$ as
$$B(0) = 0, \qquad B(T) = \sqrt{T}\,g_1, \qquad B\Big(\frac{T}{2}\Big) = \frac{B(0) + B(T)}{2} + \sqrt{\frac{T}{4}}\,g_2,$$
$$B\Big(\frac{T}{4}\Big) = \frac{B(0) + B(T/2)}{2} + \sqrt{\frac{T}{8}}\,g_3, \qquad B\Big(\frac{3T}{4}\Big) = \frac{B(T/2) + B(T)}{2} + \sqrt{\frac{T}{8}}\,g_4, \quad \ldots$$
where $(g_1, \ldots, g_M)$ are independent gaussian standard variables.
For this algorithm, we have to choose $M$ as a power of 2. The first step
goes directly from 0 to $T$. Intermediate steps are filled by taking successive
subdivisions of the time intervals into halves. The scheme can be adapted to
subdivisions of different lengths by considering the conditional law of the
Brownian bridge between $s$ and $t$.
Remark on these two schemes for MC and QMC simulations: We
need a vector of size $M$ of independent gaussian variables. For MC, these
$M$ variables can be simulated from the same pseudo random numbers
generator. However, for a QMC simulation we need to use an
$M$-dimensional low-discrepancy sequence to keep the independence
property.
10.2 Black–Scholes model
In the Black and Scholes model [3], the underlying asset price $S_t$ follows
the diffusion:
$$dS_t = \mu S_t\,dt + \sigma S_t\,dB_t$$
and then the price is a geometric Brownian process:
$$S_t = S_0 \exp\Big(\big(\mu - \tfrac{\sigma^2}{2}\big)t + \sigma B_t\Big)$$
In this particular case, for which we have an explicit solution of the
diffusion process, simulation of price paths is based on the simulation of
Brownian motion described in the last section. As for Brownian path
simulation, we present the forward and backward approaches.
Forward simulation: We have:
$$S(t_{k+1}) = S(t_k)\exp\Big(\big(\mu - \tfrac{\sigma^2}{2}\big)(t_{k+1} - t_k) + \sigma\big(B(t_{k+1}) - B(t_k)\big)\Big)$$
and for a discretization with evenly spaced intervals of size $h$, we simply
have:
$$S(t_{k+1}) = S(t_k)\exp\Big(\big(\mu - \tfrac{\sigma^2}{2}\big)h + \sigma\sqrt{h}\,g_k\Big)$$
Backward simulation: To construct this scheme, we use the Backward
simulation of the Brownian path described in the previous point. To express
this scheme, we write $y_t = \log(S_t)$, that is $y_t = y_0 + \big(\mu - \tfrac{\sigma^2}{2}\big)t + \sigma B_t$:
$$y_T = y_0 + \big(\mu - \tfrac{\sigma^2}{2}\big)T + \sigma\sqrt{T}\,g_1, \qquad y_{\frac{T}{2}} = \frac{y_0 + y_T}{2} + \sigma\sqrt{\frac{T}{4}}\,g_2,$$
$$y_{\frac{T}{4}} = \frac{y_0 + y_{T/2}}{2} + \sigma\sqrt{\frac{T}{8}}\,g_3, \qquad y_{\frac{3T}{4}} = \frac{y_{T/2} + y_T}{2} + \sigma\sqrt{\frac{T}{8}}\,g_4, \quad \ldots$$
We finally take $S_{t_k} = \exp(y_{t_k})$.
10.3 General diffusions: Euler and Milshtein schemes
We consider the general diffusion process:
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dB_t$$
If we don't have any explicit solution for $X_t$ (unlike in the Black and Scholes
model), we have to use approximation schemes based on a discretization of
the process. The two best known schemes are the Euler and the Milshtein
schemes. They both rely on a time discretization of step $h$.
The Euler approximation scheme for this diffusion is expressed as
$$X_{t_{k+1}} = X_{t_k} + b(X_{t_k})\,h + \sigma(X_{t_k})\big(B_{t_{k+1}} - B_{t_k}\big)$$
Simulation is obtained with a forward algorithm by:
$$X_{t_{k+1}} = X_{t_k} + b(X_{t_k})\,h + \sigma(X_{t_k})\sqrt{h}\,g_k$$
for $k = 0, \ldots, M-1$.
The Milshtein approximation scheme for this diffusion is given by
(space dimension = 1)
$$X_{t_{k+1}} = X_{t_k} + \Big(b(X_{t_k}) - \tfrac{1}{2}\sigma'(X_{t_k})\sigma(X_{t_k})\Big)h + \sigma(X_{t_k})\big(B_{t_{k+1}} - B_{t_k}\big) + \tfrac{1}{2}\sigma'(X_{t_k})\sigma(X_{t_k})\big(B_{t_{k+1}} - B_{t_k}\big)^2$$
Simulation is obtained with a forward algorithm by:
$$X_{t_{k+1}} = X_{t_k} + \Big(b(X_{t_k}) - \tfrac{1}{2}\sigma'(X_{t_k})\sigma(X_{t_k})\Big)h + \sigma(X_{t_k})\sqrt{h}\,g_k + \tfrac{1}{2}\sigma'(X_{t_k})\sigma(X_{t_k})\,h\,g_k^2$$
for $k = 0, \ldots, M-1$.
10.4 Heston model
The best known stochastic volatility model is the Heston model [16],
which postulates, for the instantaneous variance $v_t$, a CIR Bessel process
(square root process) correlated with the driver of $S$, namely, in
risk-neutral form
$$\begin{cases} dS_t = S_t\big(r\,dt + \sqrt{v_t}\,dW_t\big) \\ dv_t = -\lambda(v_t - \bar{v})\,dt + \eta\sqrt{v_t}\,dZ_t \end{cases} \qquad (1)$$
where
- $d\langle W, Z\rangle_t = \rho\,dt$
- $\lambda$ is the speed of mean-reversion of the instantaneous variance
- $\bar{v}$ is the long-term variance mean
- $\eta$ is the volatility of the instantaneous volatility ("VolOfVol").
The Heston model is complete if we include a vanilla option in the
replication portfolio.
If the Bessel process $v_t$ eventually reaches 0 (which is possible under
some sets of parameter values, and will typically be the case for values of
the model parameters calibrated on realistic equity index skews [18]) then
the embedded drift pushes it back to the positive side. So $v_t$ is always
nonnegative, as it should be. Mean reversion of $v_t$ is intended to lead to
realistic properties for the term structure of the implied volatility. Vanilla
option prices and Greeks are available in this model by Fourier transform.
A classic discretization for process (1) consists in discretizing
$\ln(S)$ by the Euler scheme and $v$ by the Milshtein scheme, as
$$\begin{cases} \Delta\ln(S) = \big(r - \tfrac{v}{2}\big)h + \sqrt{vh}\,\big(\rho\,G + \sqrt{1-\rho^2}\,G'\big) \\ \Delta v = \big(-\lambda(v - \bar{v}) - \tfrac{\eta^2}{4}\big)h + \eta\sqrt{vh}\,G + \tfrac{\eta^2 h\,G^2}{4} \end{cases}$$
where $(G, G')$ is a standard Gaussian pair. The interest of using the
Milshtein scheme for the $v$ component is to benefit from the increased
rate of trajectorial convergence of the Milshtein scheme with respect to
the Euler scheme. So the Milshtein scheme is better suited to reproduce
the trajectorial properties of $v_t$
(like nonnegativity).
10.5 Monte Carlo Simulation for Processes
In the case of Monte Carlo simulation for processes, we have
$$E[\phi(X)] - \frac{1}{N}\sum_{i=1}^{N}\phi(x^h_i) = \Big(E[\phi(X)] - E[\phi(X^h)]\Big) + \Big(E[\phi(X^h)] - \frac{1}{N}\sum_{i=1}^{N}\phi(x^h_i)\Big)$$
So the error is the sum of two terms: the discretization error and the
Monte Carlo error. For usual discretization schemes such as the Euler or
the Milshtein scheme, the weak convergence rate is linear in $h$, so that the
discretization error is $O(h)$ (the strong convergence rate of the
Milshtein scheme is quadratic in $h$, but this is irrelevant here). As for the
Monte Carlo error, scaled by $\sqrt{N}$ it is asymptotically distributed
as $\mathcal{N}\big(0, \mathrm{Var}[\phi(X^h)]\big)$. Therefore, in order to balance the two terms in the
error, a natural choice is to take $N$ of the order of $M^2$ (where $h = T/M$).
11 (Quasi) Monte Carlo methods for Exotic Options
11.1 Lookback options
We consider an option with payoff $\phi(S_T, m_T)$ where $(S_t, t \ge 0)$ is the
solution of the following diffusion equation
$$dS_t = b(S_t)\,dt + \sigma(S_t)\,dW_t$$
and $m_t = \sup_{0 \le s \le t} S_s$. The idea to (Q)MC estimate this option is to
evaluate the maximum along the trajectory of the continuous Euler
scheme given by
$$\bar{m}_t = \max_{0 \le s \le t} \bar{S}_s$$
where, for $kh \le t \le (k+1)h$,
$$\bar{S}_t = \bar{S}_{kh} + b(\bar{S}_{kh})(t - kh) + \sigma(\bar{S}_{kh})\big(W_t - W_{kh}\big)$$
with $W_t$ conditioned on both ends of each time interval (Brownian
Bridges). The following result gives a way of simulating the law of $\bar{m}_t$
conditionally on $(\bar{S}_{kh},\ 0 \le k \le M)$.
Theorem: The law of $\bar{m}_{k+1} = \max_{kh \le t \le (k+1)h} \bar{S}_t$ conditionally on $\bar{S}_{kh}$
and $\bar{S}_{(k+1)h}$ can be simulated by
$$\frac{1}{2}\Big(\bar{S}_{kh} + \bar{S}_{(k+1)h} + \sqrt{\big(\bar{S}_{kh} - \bar{S}_{(k+1)h}\big)^2 - 2\sigma^2(\bar{S}_{kh})\,h\,\log(U_k)}\Big)$$
where $(U_k, k \ge 0)$ is a sequence of iid uniform random variables on $[0, 1]$.
Proof: Let $m_t = \sup_{0 \le s \le t} W_s$. It is well known [21] that the law of the
pair $(W_t, m_t)$ has density
$$p(x, y) = \mathbf{1}_{y \ge 0}\,\mathbf{1}_{y \ge x}\,\frac{2(2y - x)}{\sqrt{2\pi t^3}}\exp\Big(-\frac{(2y - x)^2}{2t}\Big).$$
Denoting $\widetilde{W}_t = W_t + bt$, one can show by the Cameron–Martin formula
that the law of the pair $\big(\widetilde{W}_t, \sup_{0 \le s \le t}\widetilde{W}_s\big)$ has density
$$\widetilde{p}(x, y) = p(x, y)\exp\Big(bx - \frac{b^2}{2}t\Big).$$
So the law of $\sup_{0 \le s \le t}\widetilde{W}_s$ conditional on $\widetilde{W}_t = x$ does not depend on $b$.
By the change of variables $z = (2y - x)^2 - x^2$, one then shows that the
variable
$$\Big(2\sup_{0 \le s \le t}\widetilde{W}_s - \widetilde{W}_t\Big)^2 - \widetilde{W}_t^2$$
is, conditionally on $\widetilde{W}_t$, $\frac{1}{2t}$-exponentially distributed.
Therefore, conditionally on $\bar{S}_{kh}$ and $\bar{S}_{(k+1)h}$,
$$\sup_{kh \le t \le (k+1)h}\bar{S}_t \overset{(law)}{=} \frac{1}{2}\Big(\bar{S}_{kh} + \bar{S}_{(k+1)h} + \sqrt{\big(\bar{S}_{(k+1)h} - \bar{S}_{kh}\big)^2 + \sigma(\bar{S}_{kh})^2\,Y}\Big)$$
where $Y$ is a $\frac{1}{2h}$-exponential variable. $\Box$
So each simulation run consists in generating by the Euler scheme a
trajectory $\bar{S}_{kh}$, $1 \le k \le M$, and a trajectory $\bar{m}_{kh}$, $1 \le k \le M$, by using
$2M$ random draws. Note that if QMC was used, one should use a
$2M$-dimensional low-discrepancy sequence. Except for special cases such
as Black–Scholes (see below), this would lead to the use of high-dimensional
low-discrepancy sequences, which is not recommended in general.
In the case of the Black–Scholes model, the Euler discretization is exact,
provided one works in the returns variable $y = \ln(S)$. So one can take $M$
equal to one and use a 2-dimensional low-discrepancy sequence.
11.2 Andersen and Brotherton-Ratcliffe Algorithm for
Lookback Options
Description: Computation, for a Lookback Option on Maximum, of
its Price and its Delta with the Standard Monte Carlo Simulation [1]. This
method also provides an estimation for the integration error and a
confidence interval.
Input parameters:
Number of iterations N
Generator_Type
Confidence Value
Output parameters:
Price P
Error Price $\sigma_P$
Delta $\Delta$
Error delta $\sigma_\Delta$
Price Confidence Interval: [Inf Price, Sup Price]
Delta Confidence Interval: [Inf Delta, Sup Delta]
The underlying asset price evolves according to the Black and Scholes
model, that is:
$$dS_u = S_u\big((r - d)\,du + \sigma\,dB_u\big), \qquad S_{T-t} = s$$
then
$$S_T = s\exp\Big(\big(r - d - \tfrac{\sigma^2}{2}\big)t\Big)\exp(\sigma B_t)$$
$S_T$ denotes the spot at maturity $T$, $s$ is the initial spot, $t$ the time to
maturity.
We write $m_T = \max_{[T-t, T]}(S_u)$ for the maximum reached before maturity.
The price of an option is:
$$P = E\big[\exp(-rt)\,f(K, S_T, m_T)\big]$$
where $f$ denotes the payoff of the option and $K$ the strike.
The Delta is given by:
$$\Delta = \frac{\partial}{\partial s}E\big[\exp(-rt)\,f(K, S_T, m_T)\big]$$
The estimators are expressed as:
$$\bar{P} = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N}P(i), \qquad \bar{\Delta} = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N}\frac{\partial}{\partial s}P(i) = \frac{1}{N}\exp(-rt)\sum_{i=1}^{N}\Delta(i)$$
The values for $P(i)$ and $\Delta(i)$ are detailed for each option.
Call Fixed Euro: The payoff is $\big(\max_{[T-t,T]}(S_u) - K\big)^+$.
$$P(i) = \big(m_T(i) - K\big)^+$$
$$\Delta_i = \begin{cases} \dfrac{\partial m_T(i)}{\partial s} = \dfrac{m_T(i)}{s} & \text{if } P(i) \ne 0 \\ 0 & \text{otherwise} \end{cases}$$
Put Floating Euro: The payoff is $\big(\max_{[T-t,T]}(S_u) - S_T\big)$.
$$P(i) = m_T(i) - S_T(i)$$
$$\Delta_i = \frac{m_T(i)}{s} - \frac{S_T(i)}{s} = \frac{m_T(i) - S_T(i)}{s} = \frac{P(i)}{s}$$
Simulation of the maximum $m_T$: The conditional law of $m_T$,
given the observation $S_T$, has a very simple form, so that we can easily
simulate the maximum of $S$ over the time interval $[T-t, T]$.
Setting $W_u = \ln S_u$, then obviously $m_T$ is the exponential of the
maximum of $W$, and the conditional probability distribution function of
the maximum of $W$, given $W_{T-t} = w_1$ and $W_T = w_2$, is:
$$G_{\max}(w; w_1, w_2) = \begin{cases} 1 - \exp\Big(-\dfrac{2}{\sigma^2 t}(w - w_1)(w - w_2)\Big) & \text{if } w \ge \max(w_1, w_2) \\ 0 & \text{otherwise} \end{cases}$$
In the Monte Carlo algorithm, the associated inverse function $G^{-1}_{\max}$ is
used in order to simulate values for $m_T$. We have the following
expression:
$$G^{-1}_{\max}(y; w_1, w_2) = \frac{1}{2}\Big(w_1 + w_2 + \sqrt{(w_1 - w_2)^2 - 2\sigma^2 t\ln(1 - y)}\Big)$$
where $y$ is uniform on $[0, 1]$.
Then, at step $i$, $m_T(i)$ is simulated as follows:
- $S_T(i)$ is generated as $s\exp\big((r - d - \tfrac{\sigma^2}{2})t\big)\exp(\sigma B_t(i))$ with
$B_t(i) = \sqrt{t}\,g_i$, where $g_i$ is a standard gaussian variable;
- $y(i)$ is generated as a uniform variable on $[0, 1]$;
- $w_{T-t} = \ln s$ and $w_T(i) = \ln S_T(i)$;
- $W^{\max}_T(i)$ is computed as $G^{-1}_{\max}\big(y(i); w_{T-t}, w_T(i)\big)$;
- Finally, $m_T(i) = \exp\big(W^{\max}_T(i)\big)$.
Code Sample
static double inverse_max(double s1, double s2,
                          double h, double sigma, double un)
{
return ((s1+s2)+sqrt(SQR(s1-s2)-2*SQR(sigma)*h*log(1.-un)))/2.;
}
static int LookBackSup_AndersenMontecarlo(...)
{
...
/* Monte Carlo sampling */
init_mc= InitGenerator(generator, simulation_dim, N);
/* Test after initialization for the generator */
if(init_mc == OK)
{
/* Initialization of the model just allows to use the Monte Carlo method */
mc_or_qmc= Rand_Or_Quasi(generator);
/* Begin N iterations */
for(i=1; i<=N; i++)
{
/* Simulation of a gaussian variable according to the generator type */
gs= Gaussians[mc_or_qmc](1, CREATE, 0, generator);
/* Simulation of the maximum */
un= Uniform(generator);
exp_sigmaxwt= exp(sigma_sqrt*gs);
S_T= forward_stock*exp_sigmaxwt;
max_log_norm= inverse_max(log_s, log(S_T), t, sigma, un);
S_max= exp(max_log_norm);
/* Price and Delta */
/* CallFixedEuro */
if(p->Compute == &Call_OverSpot2)
{
price_sample= (p->Compute)(p->Par, strike, S_max);
if(log_strike > max_log_norm)
delta_sample= 0.;
else
delta_sample= S_max/s;
}
else
/* PutFloatingEuro */
if(p->Compute == &Put_StrikeSpot2)
{
price_sample= (p->Compute)(p->Par, S_T, S_max);
delta_sample= S_T/s;
}
/*Sum*/
mean_price+= price_sample;
mean_delta+= delta_sample;
/*Sum of squares*/
var_price+= SQR(price_sample);
var_delta+= SQR(delta_sample);
}
/* End N iterations */
...
}
return init_mc;
}
Further Comments
/* Test after initialization for the generator */
Test if the dimension of the simulation is compatible with the selected
generator. In this implementation, initialization of the model only allows
the use of the Monte Carlo method. Note that if QMC was used, one should use a
2D low-discrepancy sequence.
/* Begin N iterations */
/* Simulation of a gaussian variable according to the generator type,
that is Monte Carlo or Quasi Monte Carlo. */
Call to the appropriate function to generate a standard gaussian variable.
/* Simulation of the maximum */
Computation of $S_T(s)$ and $\max_{[T-t,T]} S_u(s)$ from the initial spot value $s$.
The maximum value is simulated according to the conditional law
induced by $s$ and $S_T$.
/* End N iterations */
11.3 Barrier options
Barrier options are a special case of lookback options in which the
quantity
$$E\big[f(S_T, m_T) \mid \bar{S}_{kh},\ 0 \le k \le M\big]$$
can be computed explicitly.
A barrier option is defined by the following payoff
$$f(S_T)\,\mathbf{1}_{\{m_T \le L\}}.$$
An approximation of the option price is given by
$$E\big[f(\bar{S}_T)\,\mathbf{1}_{\{\bar{m}_T \le L\}}\big]$$
where
$$\bar{m}_T = \max_{0 \le k \le M-1}\bar{m}_k \quad \text{with} \quad \bar{m}_k = \sup_{kh \le t \le (k+1)h}\bar{S}_t$$
Then
$$E\big[f(\bar{S}_T)\,\mathbf{1}_{\{\bar{m}_T \le L\}} \mid \bar{S}_{lh},\ 0 \le l \le M\big] = f(\bar{S}_T)\prod_{k=0}^{M-1}E\big(\mathbf{1}_{\{\bar{m}_k \le L\}} \mid \bar{S}_{lh},\ 0 \le l \le M\big)$$
$$= f(\bar{S}_T)\prod_{k=0}^{M-1}E\big(\mathbf{1}_{\{\bar{m}_k \le L\}} \mid \bar{S}_{kh}, \bar{S}_{(k+1)h}\big) = f(\bar{S}_T)\prod_{k=0}^{M-1}\psi_h\big(\bar{S}_{kh}, \bar{S}_{(k+1)h}\big)$$
where
$$\psi_h(x, y) = \Big(1 - \exp\Big(-\frac{2}{\sigma(x)^2 h}(L - x)(L - y)\Big)\Big)\mathbf{1}_{L > x \vee y}$$
So
$$E\big[f(\bar{S}_T)\,\mathbf{1}_{\{\bar{m}_T \le L\}}\big] = E\Big[f(\bar{S}_T)\prod_{k=0}^{M-1}\psi_h\big(\bar{S}_{kh}, \bar{S}_{(k+1)h}\big)\Big].$$
Note that, in this case, no random draws are needed other than those used
for simulating $\bar{S}_T$, namely $M$ random draws per simulation run, where
$M$ can be taken equal to one in the special case of the Black–Scholes
model.
11.4 Asian options
Asian options have payoffs of the following kind
$$\phi\Big(S_T,\ \frac{1}{T}\int_0^T S_u\,du\Big).$$
A standard example is given by $\phi(x, y) = (y - x)^+$. Putting
$A_t = \int_0^t S_u\,du$, the pair $(S_t, A_t)$ is Markovian.
Straightforward application of the Euler scheme suggests to approximate
$A_T$ by $A^h_T = h\big(S_0 + S_h + \ldots + S_{(M-1)h}\big)$, which corresponds to a
Riemann sum for the integral $A_T$. Nevertheless, this discretization works
poorly in practice. There do exist better discretizations of the integral, in
particular in the Black–Scholes model, by using the law of the Brownian
Bridge between 0 and $h$:
$$W_u = \frac{u}{h}W_h + Z_u$$
where $Z_u$ is a centered Gaussian variable with variance $\frac{u}{h}(h - u)$. These
discretizations can be used in association with appropriate variance
reduction techniques [28]. For instance, in the Black–Scholes model and
for $\phi(x, y) = (y - K)^+$, an idea by Kemna and Vorst [22] is the
following one: if $r$ and $\sigma$ are small, then $\frac{1}{T}\int_0^T S_t\,dt$ is close to $\exp\big(\frac{1}{T}\int_0^T \log(S_t)\,dt\big)$.
This suggests to choose
$$e^{-rT}\big(\exp(Z) - K\big)^+ \quad \text{where } Z = \frac{1}{T}\int_0^T \log(S_t)\,dt,$$
as a control variable. $Z$ is Gaussian, so one knows $E\,e^{-rT}(\exp(Z) - K)^+$
explicitly.
12 Trees for vanilla options
12.1 Cox-Ross-Rubinstein as an approximation to
Black-Scholes
The Cox-Ross-Rubinstein model is the Markov chain obtained by
replacing the Black-Scholes dynamics under the risk-neutral probability
by the CRR dynamics
$$S_{n+1}(h) = \begin{cases} u\,S_n(h) & \text{with proba } p^* \\ d\,S_n(h) & \text{with proba } 1 - p^* \end{cases}$$
with $h = \frac{T}{N}$, where $T$ is the time to maturity of an option (i.e. the current
date is taken to be zero), $N$ is the number of time steps in the tree, and
$$u = e^{\sigma\sqrt{h}}, \qquad d = e^{-\sigma\sqrt{h}}, \qquad p^* = \frac{e^{\mu h} - d}{u - d}$$
with $\mu = r - \delta$. Note that $p^*$ is the risk-neutral probability in the
generalized CRR scheme.
Convergence of the marginal law at maturity
Without loss of generality assume $S_0 = 1$. For $\theta \in \mathbb{R}$,
$$E^{P^*}\big[\exp\big(i\theta\ln S_N(h)\big)\big] = E^{P^*}\Big[\exp\Big(i\theta\ln\prod_{n=0}^{N-1}\frac{S_{n+1}(h)}{S_n(h)}\Big)\Big] = \Big(E^{P^*}\Big[\exp\Big(i\theta\ln\frac{S_1(h)}{S_0}\Big)\Big]\Big)^N$$
$$= \Big(p^*\exp\big(i\theta\sigma\sqrt{h}\big) + (1 - p^*)\exp\big(-i\theta\sigma\sqrt{h}\big)\Big)^N$$
and since
$$p^* = \frac{e^{\mu h} - d}{u - d} \approx \frac{1}{2} + \frac{\mu - \frac{\sigma^2}{2}}{2\sigma}\sqrt{h} + O(h),$$
$$E^{P^*}\big[\exp\big(i\theta\ln S_N(h)\big)\big] \approx \Big(1 + \Big(i\theta\big(\mu - \tfrac{\sigma^2}{2}\big) - \tfrac{\theta^2\sigma^2}{2}\Big)\frac{T}{N}\Big)^N \to \exp\Big(\Big(i\theta\big(\mu - \tfrac{\sigma^2}{2}\big) - \tfrac{\theta^2\sigma^2}{2}\Big)T\Big)$$
$$= E\Big[\exp\Big(i\theta\Big(\big(\mu - \tfrac{\sigma^2}{2}\big)T + \sigma B_T\Big)\Big)\Big] = E^{P^*_{BS}}\big[\exp\big(i\theta\ln S_T\big)\big]$$
where $P^*_{BS}$ is the risk-neutral Black-Scholes probability. Therefore, under
the risk neutral measure,
$$S_N(h) \to S^{BS}_T \qquad (2)$$
in law as $N \to \infty$.
This grants the convergence of the price of standard european options
with payoffs continuous and bounded, e.g. Put options. The convergence
of the Call prices follows by Call-Put parity, since the CRR scheme
satisfies the Call-Put parity relationship.
Some remarks about the limit $N \to \infty$
The limiting law depends on $p^*\,e^{i\theta\ln(u)} + (1 - p^*)\,e^{i\theta\ln(d)}$ only through
its Taylor expansion up to $o(h)$. Thus $u$, $d$ or/and $p^*$ could be altered as
long as the involved terms of the expansion are not modified.
The upper and lower values of the spot at maturity are $u^N = e^{\sigma\sqrt{TN}}$ and
$d^N = e^{-\sigma\sqrt{TN}}$, whereas the ratio of 2 successive points is
$\frac{u}{d} = e^{2\sigma\sqrt{h}} = e^{2\sigma\sqrt{T/N}}$. Thus, at the same time, the range of the law of
$S_N(h)$ spreads over all of $\mathbb{R}^*_+$ and the grid gets more and more dense. It is easy to
show that the points visited by the process $S(h)$ eventually become
dense in $[0, T] \times \mathbb{R}^*_+$.
Convergence of the delta

Observe that

$\Delta_0 S_0 = \frac{P_{u,N-1} - P_{d,N-1}}{u - d} = e^{-r(N-1)h}\,\frac{\mathbb{E}_{P^*}[\phi(uS_0X)] - \mathbb{E}_{P^*}[\phi(dS_0X)]}{u - d}$

where $X$ is a random variable independent from $S_0$, so that

$\Delta_0 S_0 = e^{-r(N-1)h}\,\mathbb{E}_{P^*}\left[\frac{\phi(uS_0X) - \phi(dS_0X)}{u - d}\right]$

Assume now that $\phi$ is a $C^1$ function. Then

$\phi(uS_0X) - \phi(dS_0X) = \int_d^u S_0X\,\phi'(aS_0X)\,da$
so

$\Delta_0 S_0 = e^{-r(N-1)h}\,\frac{1}{u - d}\int_d^u \mathbb{E}_{P^*}\left[S_0X\,\phi'(aS_0X)\right]da
= e^{-r(N-1)h}\,\mathbb{E}_{P^*}\left[S_0X\,\phi'(a(h)S_0X)\right]$

for some point $a(h)$ in $[d,u]$, by the mean value property. Assuming now that $\psi(y) := y\phi'(y)$ is a Lipschitz function, we know that so is the CRR price associated to the payoff $\psi$, with the same Lipschitz constant, which doesn't depend on $h$. Therefore, since $[d,u]$ shrinks to $\{1\}$ as $h \to 0$,

$\lim_{h\to 0}\mathbb{E}_{P^*}\left[S_0X\,\phi'(a(h)S_0X)\right] = \lim_{h\to 0}\mathbb{E}_{P^*}\left[\psi(S_0X)\right] \qquad (3)$

Now, by the standard convergence result, assuming that $y\phi'(y)$ is bounded,

$\lim_{h\to 0}\mathbb{E}_{P^*}\left[\psi(S_0X)\right] = \mathbb{E}_{P^*_{BS}}\left[\psi(S_T)\right] = e^{rT}S_0\,\Delta^{BS}_0$

Therefore the delta does converge towards the Black–Scholes delta, at least for sufficiently smooth payoffs. For a Call (or Put) option the argument leading to (3) must be slightly modified, but it works.
12.2 Algorithm (CRR)

Description: This is the archetype of a Tree routine. It is described in [9]. The dynamics of the underlying under the risk-neutral probability in the BS1D model,

$S_t = S_0\,e^{(r - \delta - \sigma^2/2)t + \sigma W_t},$

is replaced by the following: for $i = 0, 1, \ldots, N-1$,

$S_{(i+1)} = S_i\,e^{\epsilon\sigma\sqrt{h}}$

where $h = \frac{T}{N}$, $T$ is the maturity of the option, $\epsilon = 1$ with probability $p^*$, $\epsilon = -1$ with probability $1 - p^*$, where the (conditional) risk-neutral probability $p^*$ is chosen so that

$\mathbb{E}\left[e^{-(r-\delta)h}S_{(i+1)} \,\middle|\, S_i\right] = S_i \qquad (4)$

where $\delta$ is the dividend rate.
Input parameters:
StepNumber N
Output parameters:
Price
Delta

The price of the European option $P$ satisfies

$\mathbb{E}\left[e^{-rh}P_{(i+1)} \,\middle|\, P_i\right] = P_i \qquad (5)$

whereas the price of the American option is given in any state $j$ at time $i$ by

$P_i = \max\left(\phi,\ \mathbb{E}\left[e^{-rh}P_{i+1} \,\middle|\, P_i\right]\right) \qquad (6)$

where $\phi$ is the payoff of the option.

The algorithm is a backward computation of the option price, based on the Dynamic Programming equations (5) or (6), after a forward computation of the possible values of the underlying at maturity $S^j_N$ for $j = 0, 1, \ldots, N$. Since the tree is a flat tree, it is easily seen that the value of the underlying at time $i$ and level $j$ (starting from below) is the same as that at time $i+2$ and level $j+1$. In particular there are only $2N + 1$ possible values of the underlying between time 0 and time $N$. For computational purposes it is clever, in the American case, to compute the corresponding intrinsic values of the option only once, at the beginning of the algorithm.
Code Sample:

static int CoxRossRubinstein_79(...)
{
  ...
  /*Price, intrinsic value arrays*/
  P=(double *)malloc((N+1)*sizeof(double));
  iv=(double *)malloc((2*N+1)*sizeof(double));

  /*Up and Down factors*/
  h=t/(double)N;
  a1=exp(h*(r-divid));
  u=exp(sigma*sqrt(h));
  d=1./u;

  /*Risk-Neutral Probability*/
  pu=(a1-d)/(u-d);
  pd=1.-pu;
  if ((pd>=1.) || (pd<=0.))
    return NEGATIVE_PROBABILITY;
  pu*=exp(-r*h);
  pd*=exp(-r*h);

  /*Intrinsic value initialisation*/
  upperstock=s;
  for (i=0;i<N;i++)
    upperstock*=u;
  stock=upperstock;
  for (i=0;i<2*N+1;i++)
  {
    iv[i]=(p->Compute)(p->Par,stock);
    stock*=d;
  }

  /*Terminal Values*/
  for (j=0;j<=N;j++)
    P[j]=iv[2*j];

  /*Backward Resolution*/
  for (i=1;i<=N-1;i++)
    for (j=0;j<=N-i;j++)
    {
      P[j]=pu*P[j]+pd*P[j+1];
      if (am)
        P[j]=MAX(iv[i+2*j],P[j]);
    }

  /*Delta*/
  *ptdelta=(P[0]-P[1])/(s*u-s*d);

  /*First time step*/
  P[0]=pu*P[0]+pd*P[1];
  if (am)
    P[0]=MAX(iv[N],P[0]);

  /*Price*/
  *ptprice=P[0];
  ...
  return OK;
}
Further Comments:

/*Up and Down factors*/
Here $u = e^{\sigma\sqrt{h}}$, $d = e^{-\sigma\sqrt{h}}$.

/*Risk-Neutral Probability*/
Computation of $p^* = \frac{e^{(r-\delta)h} - d}{u - d}$ (the value computed from (4) on page 116).

/*Intrinsic Value computation*/
Storage of the $2N + 1$ possible values of the intrinsic value.

/*Price initialization*/
The price of the option at maturity. It involves only the values iv[2j].

/*Backward Resolution*/
Note that we don't re-compute the intrinsic value.

/*Delta*/
The delta here is the right hedging delta in the binomial model, namely

$\Delta_n = \frac{P_{n+1}(uS_n) - P_{n+1}(dS_n)}{uS_n - dS_n}.$

There may be a more clever way to approximate the continuous-time Black&Scholes delta.

/*First time step*/
/*Price*/
12.3 Variants of the CRR tree

To achieve the convergence in law (2 on page 112), many other choices of $u$, $d$ and $q$ (denoting the probability inside the tree) may be made, regardless of any arbitrage or financial consideration: the tree algorithm becomes a numerical approximation algorithm among others, whose only purpose is to get a good convergence to the limiting price and delta [26].

All the following trees remain recombining trees, since this is true as soon as $u$ and $d$ remain constant within the tree. Only the choice of $u$, $d$ and the probability is at hand here.

The Random Walk scheme  As long as

$S_T = S_0\exp\left(\left(r - \frac{\sigma^2}{2}\right)T + \sigma B_T\right)$

a very natural choice is to approximate the Brownian motion $B$ by the standard Random Walk. This leads to

$u = e^{(r - \sigma^2/2)h + \sigma\sqrt{h}},\qquad d = e^{(r - \sigma^2/2)h - \sigma\sqrt{h}}\qquad\text{and}\qquad q = \frac{1}{2}.$
The algorithm to get the option price is the straightforward discretized version of the risk-neutral expectation:

$P_n = e^{-rh}\left(\frac{1}{2}P_{u,n+1} + \frac{1}{2}P_{d,n+1}\right)$

The convergence may be proved in the same way as before.

Notice that the discretized process is not a martingale.
The matching-3-moments scheme

An alternative route to convergence is the Central Limit Theorem. This leads to the idea of matching the mean and variance of the conditional laws of the approximating chain with those of the continuous process. These are called the local consistency conditions [24, 25]. The equations that $u$, $d$, $q$ should satisfy are

$qu + (1-q)d = e^{rh}$
$qu^2 + (1-q)d^2 - e^{2rh} = e^{2rh}\left(e^{\sigma^2h} - 1\right)$

Since one degree of freedom remains, a natural idea is to match also the third moment, which gives the equation

$qu^3 + (1-q)d^3 = e^{3rh}e^{3\sigma^2h}$

The solution of this system is

$u = \frac{e^{rh}Q}{2}\left(1 + Q + \sqrt{Q^2 + 2Q - 3}\right)$
$d = \frac{e^{rh}Q}{2}\left(1 + Q - \sqrt{Q^2 + 2Q - 3}\right)$
$q = \frac{e^{rh} - d}{u - d}$

with $Q = e^{\sigma^2h}$.

Notice that $ud = e^{2rh}Q^2 > 1$: this tree is not symmetric.
12.4 Trinomial trees

Along this line there is no need any longer to remain stuck with the discrete-time no-arbitrage constraint "one node, two sons". We may well choose a 3-point scheme or a $p$-point scheme, or even a number of points depending on $N$ (this is useful for other kinds of limiting continuous-time dynamics, like Lévy processes for instance [8]). From the previous calculation it's easy to see that the points and probabilities of the chosen scheme should be constrained by:

$\sum_j p_j\exp\left(i\lambda\ln u_j\right) = 1 + \left(i\lambda\left(r - \frac{\sigma^2}{2}\right) - \frac{\lambda^2\sigma^2}{2}\right)h + o(h)$

in the sense that these conditions ensure the convergence (2 on page 112). We'll see later that these conditions are equivalent to the local consistency conditions, and also that they ensure convergence of a much more general type.

Note that from a computational point of view, in order to get a recombining tree, a condition like $u_{j+1}/u_j$ independent of $j$ should be imposed.

A feature common to all trinomial trees is that they allow a more precise computation of the delta, gamma and theta of the option, in a natural finite-difference-like manner.
Trinomial schemes matching the two first moments

Let $u > m > d$ be the possible values of $\frac{S_{n+1}(h)}{S_n(h)}$, with probabilities $q_1$, $q_2$, $q_3$ respectively.

In order to get a recombining tree we only need

$ud = m^2$

The two first moment matching conditions give

$q_1u + q_2m + q_3d = e^{rh}$
$q_1u^2 + q_2m^2 + q_3d^2 = e^{2rh}Q$

with $Q$ as before. Since

$q_1 + q_2 + q_3 = 1$

it is seen that two unknowns remain.

The solution corresponding to the additional constraint $q_1 = q_2 = q_3 = \frac{1}{3}$ is

$u = V + \sqrt{V^2 - m^2}$
$d = V - \sqrt{V^2 - m^2}$
$m = \frac{e^{rh}(3 - Q)}{2}$

with $V = \frac{e^{rh}(3 + Q)}{4}$.

Many other choices are possible.
The Kamrad–Ritchken tree

Kamrad and Ritchken [20] choose to take a symmetric 3-point approximation to $\ln\left(\frac{S_h}{S_0}\right)$ and to match the two first moments of this quantity. More precisely, if $v$ denotes the upper state, this leads to:

$v(q_1 - q_3) = \left(r - \frac{\sigma^2}{2}\right)h$
$v^2(q_1 + q_3) - v^2(q_1 - q_3)^2 = \sigma^2h$

They further simplify the last equality (still maintaining an $o(h)$ matching of the variance) into

$v^2(q_1 + q_3) = \sigma^2h$

Note that it can be checked by the calculation of the characteristic function that this $o(h)$ matching property is enough.

By replacing $v$ by $\lambda\sigma\sqrt{h}$ this leads to

$q_1 = \frac{1}{2\lambda^2} + \frac{\left(r - \frac{\sigma^2}{2}\right)\sqrt{h}}{2\lambda\sigma}$
$q_2 = 1 - \frac{1}{\lambda^2}$
$q_3 = \frac{1}{2\lambda^2} - \frac{\left(r - \frac{\sigma^2}{2}\right)\sqrt{h}}{2\lambda\sigma}$

The parameter $\lambda$ appears as a free parameter of the geometry of the tree, which may be useful for some purposes. It is called the stretch parameter. The value $\lambda = 1.22474$, which corresponds to $q_2 = \frac{1}{3}$, is reported to be a good choice for an ATM Call (or Put).
12.5 Algorithm (Kamrad–Ritchken)

Description: This is taken from [20]. It is a 3-node tree which is the archetype of a trinomial tree. This is a flat tree with $2N + 1$ possible values of the underlying $S$ throughout the option's life.

Input parameters:
StepNumber N
StretchParameter λ (should be greater than 1)
Output parameters:
Price
Delta

Code Sample:

static int KamradRitchken_91(...)
{
  ...
  npoints=2*N+1;

  /*Price, intrinsic value arrays*/
  P=(double *)malloc(npoints*sizeof(double));
  iv=(double *)malloc(npoints*sizeof(double));

  /*Up and Down factors*/
  h=t/(double)N;
  u=exp(lambda*sigma*sqrt(h));
  d=1./u;

  /*Discounted Probability*/
  z=(r-divid)-SQR(sigma)/2.;
  pu=(1./(2.*SQR(lambda))+z*sqrt(h)/(2.*lambda*sigma));
  pm=(1.-1./SQR(lambda));
  pd=(1.-pu-pm);
  pu*=exp(-r*h);
  pm*=exp(-r*h);
  pd*=exp(-r*h);

  /*Intrinsic value initialisation and terminal values*/
  upperstock=s;
  for (i=0;i<N;i++)
    upperstock*=u;
  stock=upperstock;
  for (i=0;i<npoints;i++)
  {
    iv[i]=(p->Compute)(p->Par,stock);
    P[i]=iv[i];
    stock*=d;
  }

  /*Backward Resolution*/
  for (i=1;i<=N-1;i++)
  {
    npoints-=2;
    for (j=0;j<npoints;j++)
    {
      P[j]=pu*P[j]+pm*P[j+1]+pd*P[j+2];
      if (am)
        P[j]=MAX(iv[j+i],P[j]);
    }
  }

  /*Delta*/
  *ptdelta=(P[0]-P[2])/(s*u-s*d);

  /*First time step*/
  P[0]=pu*P[0]+pm*P[1]+pd*P[2];
  if (am)
    P[0]=MAX(iv[N],P[0]);

  /*Price*/
  *ptprice=P[0];
  ...
  return OK;
}
Further Comments:

/*Price, intrinsic value arrays*/

/*Up and Down factors*/
Here $u = e^{\lambda\sigma\sqrt{h}}$, $d = e^{-\lambda\sigma\sqrt{h}}$. The third node is $m = 1$.

/*Discounted Probability*/
These are computed by matching the two first moment conditions, with a simplifying trick: the second moment condition is replaced by the equality of the second moment of the conditional random walk in the tree with the variance of the logarithm of the continuous-limit Black–Scholes diffusion; the variances still match at order $o(h)$, so that convergence follows from Kushner's theorem. The stretch parameter $\lambda$ is free, with the following restrictions: it should be greater than 1 for the center-node probability to be positive, and smaller than $\frac{\sigma}{\left|r - \delta - \sigma^2/2\right|\sqrt{h}}$ for the up and down probabilities to remain positive.

/*Intrinsic value initialisation and terminal values*/
Since this is a flat tree we store the intrinsic values in an array.

/*Backward Resolution*/
Notice that the indexing of the price array P is relative to the lower of the underlying values at a fixed time, whereas the intrinsic value array indexing iv is absolute. This accounts for the shift by i in the index in

P[j]= MAX(iv[j+i],P[j])

/*Delta*/
We keep the formula of the CRR delta. Here it is no longer a perfect-hedging delta in the discrete-time scheme, since this is an incomplete market. The convergence can be proved in the same manner as for the CRR delta. There may be other more clever choices using the center node.

/*First Time Step*/
/*Price*/
12.6 Miscellaneous remarks

Local consistency and convergence in law

Let's come back to the equality ensuring convergence in law:

$\sum_j p_j\exp\left(i\lambda\ln u_j\right) = 1 + \left(i\lambda\left(r - \frac{\sigma^2}{2}\right) - \frac{\lambda^2\sigma^2}{2}\right)h + o(h) \qquad (7)$
and assume:

$p_j = p_{j,0} + p_{j,1}\sqrt{h} + p_{j,2}h + o(h)$
$u_j = 1 + u_{j,1}\sqrt{h} + u_{j,2}h + o(h) \qquad (8)$

Then obviously

$\sum_j p_{j,0} = 1,\qquad \sum_j p_{j,1} = \sum_j p_{j,2} = 0$

We have

$\exp\left(i\lambda\ln u_j\right) = \exp\left(i\lambda\left(u_{j,1}\sqrt{h} + \left(u_{j,2} - \frac{u_{j,1}^2}{2}\right)h\right) + o(h)\right)
= 1 + i\lambda u_{j,1}\sqrt{h} + \left(i\lambda\left(u_{j,2} - \frac{u_{j,1}^2}{2}\right) - \frac{\lambda^2u_{j,1}^2}{2}\right)h + o(h)$
and (7 on the page before) is equivalent to:

$\sum_j p_{j,0}\,i\lambda u_{j,1} = 0$
$\sum_j p_{j,0}\left(i\lambda\left(u_{j,2} - \frac{u_{j,1}^2}{2}\right) - \frac{\lambda^2u_{j,1}^2}{2}\right) + \sum_j p_{j,1}\,i\lambda u_{j,1} = i\lambda\left(r - \frac{\sigma^2}{2}\right) - \frac{\lambda^2\sigma^2}{2}$

and, as long as the $p$ and $u$ are real-valued:

$\sum_j p_{j,0}u_{j,1} = 0$
$\sum_j p_{j,0}\left(u_{j,2} - \frac{u_{j,1}^2}{2}\right) + \sum_j p_{j,1}u_{j,1} = r - \frac{\sigma^2}{2}$
$\sum_j p_{j,0}u_{j,1}^2 = \sigma^2$

or

$\sum_j p_{j,0}u_{j,1} = 0$
$\sum_j p_{j,0}u_{j,2} + \sum_j p_{j,1}u_{j,1} = r$
$\sum_j p_{j,0}u_{j,1}^2 = \sigma^2 \qquad (9)$
Let's look now at the local consistency equations for $S$:

$\sum_j p_ju_j = \exp(rh) + o(h)$
$\sum_j p_ju_j^2 = \exp\left(\left(2r + \sigma^2\right)h\right) + o(h)$

which may be written down the following way:

$\sum_j p_{j,0}u_{j,1} = 0$
$\sum_j p_{j,0}u_{j,2} + \sum_j p_{j,1}u_{j,1} = r$
$\sum_j p_{j,0}u_{j,1} = 0$
$\sum_j p_{j,0}u_{j,1}^2 + 2\sum_j p_{j,0}u_{j,2} + 2\sum_j p_{j,1}u_{j,1} = 2r + \sigma^2$
or alternatively

$\sum_j p_{j,0}u_{j,1} = 0$
$\sum_j p_{j,0}u_{j,2} + \sum_j p_{j,1}u_{j,1} = r$
$\sum_j p_{j,0}u_{j,1}^2 = \sigma^2$

which is (9 on page 140). So, in the special case (8 on page 138), local consistency is equivalent to convergence in law. This is consistent with Kushner's theorem [24, 25], which says that the local consistency conditions — that is, the matching of the first and second conditional moments of the increments of the approximating chain with those of the continuous-time limit with accuracy $o(h)$ — grant the convergence of the expectations of usual functionals. In fact the setting of Kushner's theorem is a jump-diffusion stochastic control setting, and Kushner's result deals with the convergence of the optimal controlled chain to the optimal controlled process. It is by far the most powerful tool for dealing with the convergence of Markov chain based algorithms.
Flat trees and American options

The algorithm for pricing American options is the natural backward scheme:

$P_n = \max\left(\phi(S_n),\ e^{-rh}\sum_i q_iP_{n+1,u_i}\right) \qquad (10)$

It gives the exact price in the CRR model in the case of the CRR tree.

The algorithm requires the computation of the intrinsic value at each node. A computational interest of a flat tree (like Kamrad–Ritchken or CRR) is therefore that it allows the computation of the intrinsic values across all the possible values of the underlying (either $2N + 1$ or $N + 1$ for the above trees) before performing the backward scheme. This may dramatically reduce the computational cost of the algorithm.
13 Trees for exotic options

13.1 Inaccuracy of the direct method for barrier options

Let's consider only the case of a Down-and-Out Call with a constant rebate $R$ attached to the barrier $L$. The first idea to price this option within the CRR scheme is to apply the backward recurrence scheme directly. In fact it is possible to show by calculus (although it is a bit tedious) that the obtained price converges to the right Black–Scholes limit. Nevertheless, it is observed that the convergence is very bad compared with that for vanilla options. The reason is clear: let $n_L$ denote the index such that

$S_0d^{n_L} \geq L > S_0d^{n_L+1}$

Then the algorithm, $N$ being fixed, yields the same result for any value of the barrier between $S_0d^{n_L}$ and $S_0d^{n_L+1}$. Therefore the convergence cannot be faster than, roughly speaking,

$P^{BS}\!\left(S_0d^{n_L}\right) - P^{BS}\!\left(S_0d^{n_L+1}\right) \approx \frac{c}{\sqrt{N}},$

(where $P^{BS}$ denotes the Black–Scholes price of the option, viewed as a function of the barrier level), whereas the convergence for vanilla European options is known to be of order $\frac{1}{N}$. An alternative method is to feed the algorithm with the right value of the barrier.
13.2 The Ritchken algorithm for barrier options

The idea here is to choose the stretch parameter $\lambda$ such that the barrier is hit exactly. We know that $\lambda$ should be greater than one; intuitively, among the many possibilities for $\lambda$, the closer $\lambda$ is to one, the better.

The natural way to choose $\lambda$ is the following: compute the value $n_L$ above, i.e.

$n_L = \left[\frac{\ln\left(\frac{S_0}{L}\right)}{\sigma\sqrt{h}}\right]$

where $[t]$ denotes the greatest integer less than or equal to $t$. Then take

$\lambda = \frac{1}{n_L}\,\frac{\ln\left(\frac{S_0}{L}\right)}{\sigma\sqrt{h}}.$

Here convergence is reported to be like that for vanilla options.
13.3 Customization of trees

Trees are easy to customize, due to the visualization of the paths of the underlying in the Markov chain approximation. So it is often straightforward to design a tree algorithm for the pricing of a somewhat involved contingent claim.

This first version of the algorithm may be improved later. Often in practice it is fruitful, at this second stage, to consider the tree algorithm as a finite-difference scheme (cf. later sections) in order to figure out the weaknesses of the approximation of the delta or of the price in a tricky region.
Example: pricing time-dependent American options

The natural way to modify the basic algorithm is to replace the backward formula (10 on page 143) by

$P_n = \max\left(\phi(nh, S_n),\ e^{-rh}\sum_i q_iP_{n+1,u_i}\right)$

where $\phi(t,x)$ is the time-dependent payoff of the option.

A particular case is that of the so-called Bermudan options, where the American right is in force only at a set of prescribed periods: for instance between a fixed date $T_1$ (excluded) and maturity (included). In case the current date is prior to $T_1$, a natural idea is to apply the backward formula (10 on page 143) between step $N-1$ and $n_1$, where

$(n_1 - 1)h \leq T_1 < n_1h$

and then the standard CRR scheme.

This first algorithm is very crude since it gives the same price, $N$ being fixed, for any value of $T_1$ between $(n_1 - 1)h$ and $n_1h$. A way to feed the algorithm with the right value is to use a Kamrad–Ritchken trinomial tree with a stretch parameter $\lambda_1$ and a number of steps $n_1$ between times 0 and $T_1$, and $\lambda_2$ and $n_2$ between $T_1$ and $T$. In order to get a recombining tree we get the following pasting condition at time $T_1$:

$\lambda_1\sqrt{\frac{T_1}{n_1}} = \lambda_2\sqrt{\frac{T - T_1}{n_2}}$

Do not forget that $\lambda_1$ and $\lambda_2$ should be greater than one. A possible way to choose the parameters is: fix first $\lambda_1 \geq 1$ (for instance $\lambda_1 = 1.22474$), choose $n_1$, then take

$n_2 = \left[\frac{n_1(T - T_1)}{T_1}\right] + 1$

Then $n_2T_1 \geq n_1(T - T_1)$, thus

$\lambda_2^2 = \lambda_1^2\,\frac{T_1n_2}{n_1(T - T_1)} \geq \lambda_1^2 \geq 1.$
14 Finite Differences for European Vanilla Options

For this family of options, the payoff is given by $\phi(S_T)$, which is paid at the maturity $T$.

14.1 Localization and Discretization

We recall that the price of a European option in the Black and Scholes model

$\frac{dS_t}{S_t} = (r - \delta)dt + \sigma dW_t$

can be formulated in terms of the solution to a Partial Differential Equation. After the logarithmic transformation $X_t = \log(S_t)$, the price at time $t$ of the option is $V_t = u(t, X_t)$, where $u$ solves the parabolic equation

$\begin{cases}\dfrac{\partial u}{\partial t}(t,x) + \dfrac{\sigma^2}{2}\dfrac{\partial^2u}{\partial x^2}(t,x) + \left(r - \delta - \dfrac{\sigma^2}{2}\right)\dfrac{\partial u}{\partial x}(t,x) - ru(t,x) = 0 & \text{in } [0,T)\times\mathbb{R},\\ u(T,x) = \phi(x), & x\in\mathbb{R}.\end{cases} \qquad (11)$

The notations are:
- $x$ the logarithm of the stock price
- $\sigma$ the volatility
- $r$ the interest rate
- $\delta$ the instantaneous rate of dividend
- $\phi$ the pay-off
- $T$ the maturity
- $\mathbb{R}$ the real line $(-\infty, +\infty)$
The canonical divergence form for a parabolic PDE (the one found in classic numerical analysis books) is

$\frac{\partial v}{\partial t} - \frac{\partial}{\partial x}\left(\alpha\frac{\partial v}{\partial x}\right) + b\frac{\partial v}{\partial x} + av = f \quad\text{in } (0,T)\times\Omega$

with initial conditions ($v(x,0)$ given) and boundary conditions on $\partial\Omega$ (for instance $v(x,t)$ given for $x \in \partial\Omega$).

For smooth data, and if $\alpha > 0$ and $a \geq 0$, this equation has one and only one solution, which depends continuously on the data. The condition on $a$ is not essential, but the solution $v$ may grow exponentially in time if $a$ is not positive.

The change of variable $v(\tau, x) = u(T - \tau, x)$ brings the BS equation into canonical divergence form with $\alpha = \sigma^2/2$, $b = -(r - \delta - \sigma^2/2)$, and $a = r$.

Let $x = \log(S_0)$. We start by limiting the integration domain in space: the problem will be solved in a finite interval $D := [x - l, x + l]$. One chooses $l$ so that

$\mathbb{P}\left(\exists s\in[0,T],\ \left|X^x_s - x\right| \geq l\right) \leq \epsilon \qquad (12)$
Once $D$ is chosen, one discretizes in space and constructs the uniform grid $x_i$ with

$x_i := x - l + \frac{2il}{M},\quad\text{for } 1\leq i\leq M-1.$

Let $V_M$ denote the space generated by the indicator functions of $[x_i, x_{i+1}[$ for $0\leq i\leq M-1$.

One approximates the differential operator

$A := \frac{\sigma^2}{2}\frac{\partial^2}{\partial x^2} + \left(r - \delta - \frac{\sigma^2}{2}\right)\frac{\partial}{\partial x} - r$

by a discrete operator $A_h$ acting on functions $u_h(t,\cdot)$ defined on $V_M$. The easiest and most natural choice is:

$A_hu_h(t,x_i) = \frac{\sigma^2}{2}\frac{\partial^2u_h}{\partial x^2}(t,x_i) + \left(r - \delta - \frac{\sigma^2}{2}\right)\frac{\partial u_h}{\partial x}(t,x_i) - ru_h(t,x_i)$

with

$\frac{\partial^2u_h}{\partial x^2}(t,x_i) = \frac{1}{h^2}\left(u_h(t,x_{i+1}) - 2u_h(t,x_i) + u_h(t,x_{i-1})\right)$
$\frac{\partial u_h}{\partial x}(t,x_i) = \frac{1}{2h}\left(u_h(t,x_{i+1}) - u_h(t,x_{i-1})\right).$
Remark 14.1  If $\left|r - \delta - \sigma^2/2\right|/\sigma^2$ is not small, then a less precise but more stable finite difference approximation for the first-order term is

$\frac{\partial u_h}{\partial x}(t,x_i) = \begin{cases}\dfrac{1}{h}\left(u_h(t,x_i) - u_h(t,x_{i-1})\right) & \text{if } r - \delta - \dfrac{\sigma^2}{2} < 0\\[1ex] \dfrac{1}{h}\left(u_h(t,x_{i+1}) - u_h(t,x_i)\right) & \text{if } r - \delta - \dfrac{\sigma^2}{2} > 0\end{cases}$

One then seeks the vector $(u_h(t,x_i),\ 0\leq i\leq M)$ such that the following holds.
In the case of natural Dirichlet boundary conditions:

$\begin{cases}\text{for } 0\leq t\leq T,\ 1\leq i\leq M-1,\\ \dfrac{d}{dt}u_h(t,x_i) + A_hu_h(t,x_i) = 0\\ u_h(T,x_i) = \phi(x_i)\\ u_h(t,x-l) = \phi(x-l),\\ u_h(t,x+l) = \phi(x+l).\end{cases} \qquad (13)$

In the case of Neumann boundary conditions:

$\begin{cases}\text{for } 0\leq t\leq T,\ 1\leq i\leq M-1,\\ \dfrac{d}{dt}u_h(t,x_i) + A_hu_h(t,x_i) = 0,\\ u_h(T,x_i) = \phi(x_i),\\ u_h(t,x_1) = u_h(t,x-l) + h\dfrac{\partial\phi}{\partial x}(x-l),\\ u_h(t,x_{M-1}) = u_h(t,x+l) - h\dfrac{\partial\phi}{\partial x}(x+l).\end{cases} \qquad (14)$
Set $u_h(t) := (u_h(t,x_1),\ldots,u_h(t,x_{M-1}))$ and

$\alpha := \frac{\sigma^2}{2h^2} - \frac{1}{2h}\left(r - \delta - \frac{\sigma^2}{2}\right),$
$\beta := -\frac{\sigma^2}{h^2} - r,$
$\gamma := \frac{\sigma^2}{2h^2} + \frac{1}{2h}\left(r - \delta - \frac{\sigma^2}{2}\right).$

With this new notation, the operator $A_h$ applied to $u_h(t,\cdot)$ can be described as follows:

$A_hu_h(t,\cdot) = M_hu_h(t) + v_h,$

with
In the case of natural Dirichlet boundary conditions,

$M_h = \begin{pmatrix}\beta & \gamma & 0 & \cdots & 0\\ \alpha & \beta & \gamma & \ddots & \vdots\\ 0 & \ddots & \ddots & \ddots & 0\\ \vdots & \ddots & \alpha & \beta & \gamma\\ 0 & \cdots & 0 & \alpha & \beta\end{pmatrix} \qquad (15)$

and

$v_h = \begin{pmatrix}\alpha\,\phi(x-l)\\ 0\\ \vdots\\ 0\\ \gamma\,\phi(x+l)\end{pmatrix};$
In the case of artificial Neumann boundary conditions,

$M_h = \begin{pmatrix}\alpha+\beta & \gamma & 0 & \cdots & 0\\ \alpha & \beta & \gamma & \ddots & \vdots\\ 0 & \ddots & \ddots & \ddots & 0\\ \vdots & \ddots & \alpha & \beta & \gamma\\ 0 & \cdots & 0 & \alpha & \beta+\gamma\end{pmatrix} \qquad (16)$

and

$v_h = \begin{pmatrix}-\alpha h\,\dfrac{\partial\phi}{\partial x}(x-l)\\ 0\\ \vdots\\ 0\\ \gamma h\,\dfrac{\partial\phi}{\partial x}(x+l)\end{pmatrix}.$
Remark 14.2  When $M = 2p + 1$, $x$ doesn't belong to $\{x_i;\ 1\leq i\leq M-1\}$. Thus, at each time step, we use linear interpolation to compute the option value corresponding to the initial stock price: $u_h(t,x)$ is approximated by $\frac{1}{2}\left(u_h(t,x_p) + u_h(t,x_{p+1})\right)$.

Now, let us discuss the discretization in time.
14.2 The θ-scheme

The standard θ-scheme ($\theta\in[0,1]$) for the parabolic equation (11 on page 149) may be summarized as follows: fix a discretization step $k$ such that $T = Nk$ and construct an approximation

$u_{h,k}(t,x) = \sum_{n=0}^Nu^n_h(x)\,\mathbf{1}_{[nk,(n+1)k[}(t)$

where $u^0_h,\ldots,u^N_h$ are the elements of $V_M$ satisfying

$\begin{cases}u^N_h = \phi_h\\ \text{for } 0\leq n\leq N-1\\ \dfrac{u^{n+1}_h - u^n_h}{k} + A_h\left(u^{n+1}_h + \theta\left(u^n_h - u^{n+1}_h\right)\right) = 0.\end{cases} \qquad (17)$
Besides, one must add the appropriate boundary conditions:

$\begin{cases}u^n_h(x-l) = \phi(x-l),\\ u^n_h(x+l) = \phi(x+l),\end{cases}$

for Dirichlet boundary conditions, and

$\begin{cases}u^n_h(x_1) = u^n_h(x-l) + \dfrac{\partial\phi}{\partial x}(x-l)h\\ u^n_h(x_{M-1}) = u^n_h(x+l) - \dfrac{\partial\phi}{\partial x}(x+l)h\end{cases}$

for Neumann boundary conditions.

For $\theta = 0$, we recover the explicit Euler scheme. Similarly, for $\theta = 1$ the scheme is the fully implicit Euler scheme, and for $\theta = \frac{1}{2}$ it is the Crank–Nicolson scheme.
Once we have computed $u_{h,k}$, we recover the delta-hedging ratio $\Delta = \frac{1}{e^x}\frac{\partial u(t,x)}{\partial x}$ by its approximation on $[nk,(n+1)k[\times]x-l,x+l[$ given by

$\Delta_h = \frac{1}{e^x}\,\frac{u^n_h(x+h) - u^n_h(x-h)}{2h}.$
14.3 Explicit Method

First, let us discuss the case $\theta = 0$. Using the definition of $A_h$, the approximating scheme (17 on page 158) reduces to

$\begin{cases}u^N_h = \phi\\ \text{for } 0\leq n\leq N-1,\ 1\leq i\leq M-1\\ u^n_h(x_i) = p_1u^{n+1}_h(x_{i-1}) + p_2u^{n+1}_h(x_i) + p_3u^{n+1}_h(x_{i+1}) + kv^i_h\end{cases}$

where

$p_1 = k\left(\frac{\sigma^2}{2h^2} - \frac{b}{2h}\right),\qquad p_2 = 1 - k\left(r + \frac{\sigma^2}{h^2}\right),\qquad p_3 = k\left(\frac{\sigma^2}{2h^2} + \frac{b}{2h}\right)$

with $b = r - \frac{1}{2}\sigma^2$.

This scheme is stable if $k \leq \frac{h^2}{\sigma^2 + rh^2}$ (and $\sigma^2 \geq |b|h$, but this is always satisfied for $h$ small enough).
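These weights satisfy $p_1 + p_2 + p_3 = 1 - kr$, and the stability bound is precisely what makes $p_2$ non-negative (so the scheme is a discounted convex combination). Both facts are easy to verify with an illustrative sketch:

```c
#include <math.h>

/* Weights of the explicit (theta = 0) scheme (illustrative sketch). */
static void explicit_weights(double r, double sigma, double h, double k,
                             double p[3])
{
    double b = r - 0.5 * sigma * sigma;
    double s2 = sigma * sigma;
    p[0] = k * (s2 / (2. * h * h) - b / (2. * h));
    p[1] = 1. - k * (r + s2 / (h * h));
    p[2] = k * (s2 / (2. * h * h) + b / (2. * h));
}
```

With $r = 0.05$, $\sigma = 0.2$, $h = 0.05$, the stability bound is $k \leq h^2/(\sigma^2 + rh^2) \approx 0.062$, so $k = 0.05$ gives non-negative weights.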
14.4 Implicit Methods

When we choose $1 \geq \theta > 0$, we have to solve at each time step a linear system of the type

$\mathcal{M}u_{k,h}(jk,\cdot) = \mathcal{N}u_{k,h}((j+1)k) + kv_h$

where $\mathcal{M}$ and $\mathcal{N}$ are tridiagonal matrices of the type
$\begin{pmatrix}b_1 & c_1 & 0 & \cdots & \cdots & 0\\ a_2 & b_2 & c_2 & 0 & \cdots & 0\\ 0 & a_3 & b_3 & c_3 & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & a_{M-1} & b_{M-1} & c_{M-1}\\ 0 & \cdots & \cdots & 0 & a_M & b_M\end{pmatrix}.$
For example, in the case of natural Dirichlet boundary conditions, $\mathcal{M}$ is given by

$a_i = k\theta\left(\frac{b}{2h} - \frac{\sigma^2}{2h^2}\right),\qquad b_i = 1 + k\theta\left(r + \frac{\sigma^2}{h^2}\right),\qquad c_i = -k\theta\left(\frac{b}{2h} + \frac{\sigma^2}{2h^2}\right)$

for every $i$, and $\mathcal{N}$ is given by

$a_i = (1-\theta)k\left(\frac{\sigma^2}{2h^2} - \frac{b}{2h}\right),\qquad b_i = 1 - (1-\theta)k\left(r + \frac{\sigma^2}{h^2}\right),\qquad c_i = (1-\theta)k\left(\frac{b}{2h} + \frac{\sigma^2}{2h^2}\right) \qquad (18)$
The fully implicit Euler and Crank–Nicolson methods, and all those with $\theta \neq 0$, require the resolution of a linear system

$\mathcal{M}u = v,$

where $u$ and $v$ are $(M-1)$-dimensional vectors. Let us describe two algorithms for the resolution of such a linear system.

Gauss Factorization  To solve such a system, the following Gauss factorization is often used; it is based on the fact that a regular matrix can be factorized into

$\mathcal{M} = \mathcal{L}\,\mathcal{U}$

where $\mathcal{L}$ is a lower triangular matrix (all elements above the diagonal are zero) and $\mathcal{U}$ is an upper triangular matrix with all ones on its diagonal. The solution of the linear system $\mathcal{L}\mathcal{U}z = v$ is decomposed into $\mathcal{L}y = v$, $\mathcal{U}z = y$; the first one is solved by a loop from $M$ to 1 and the second one by a loop from 1 to $M$. It is easy to see that $\mathcal{M}$ tridiagonal implies that $\mathcal{L}$ and $\mathcal{U}$ are also tridiagonal, so only the upper diagonal of $\mathcal{U}$ and the two diagonals of $\mathcal{L}$ need to be found. The resulting procedure is known as the Thomas algorithm [33, 27]:

- $\tilde{b}_M := b_M$, $y_M := v_M$; for $1\leq i\leq M-1$, $i$ decreasing:
  $\tilde{b}_i = b_i - c_ia_{i+1}/\tilde{b}_{i+1},\qquad y_i = v_i - c_iy_{i+1}/\tilde{b}_{i+1}.$
- $z_1 = y_1/\tilde{b}_1$; for $2\leq i\leq M$, $i$ increasing:
  $z_i = \left(y_i - a_iz_{i-1}\right)/\tilde{b}_i.$
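The two sweeps above translate directly into code; the sketch below (illustrative, in-place on the diagonal and right-hand side) solves a small tridiagonal system, and the result can be checked by plugging the solution back into the equations:

```c
/* Thomas algorithm as described above: decreasing elimination sweep,
   then increasing substitution sweep. Arrays are 1-based up to n;
   a = sub-diagonal, b = diagonal, c = super-diagonal; b and v are
   modified in place. Illustrative sketch; assumes non-zero pivots. */
static void thomas(int n, const double *a, double *b, const double *c,
                   double *v, double *z)
{
    int i;
    for (i = n - 1; i >= 1; i--) {           /* elimination, i decreasing */
        b[i] -= c[i] * a[i + 1] / b[i + 1];
        v[i] -= c[i] * v[i + 1] / b[i + 1];
    }
    z[1] = v[1] / b[1];
    for (i = 2; i <= n; i++)                 /* substitution, i increasing */
        z[i] = (v[i] - a[i] * z[i - 1]) / b[i];
}
```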
Remark 14.3  Note that it is necessary that all the $\tilde{b}_i$ (called the pivots) be non-zero.
SOR Iterative Methods  An alternative, which in the case of tridiagonal systems is justified only by its programming simplicity, is to use the Successive Over-Relaxation scheme for solving the linear system

$\mathcal{M}u = v$

The solution is computed as the limit of a converging sequence, $u = \lim_{p\to\infty}u^p$. The basic steps are:

Step 0: Choose $u^0$. Choose $\epsilon > 0$ and $1 < \omega < 2$. Set $p = 0$.

Step 1: Form an intermediate vector $h^{p+1} = (h^{p+1}_i)_{1\leq i\leq M-1}$ by

$h^{p+1}_i = \frac{1}{\mathcal{M}_{ii}}\left(v_i - \sum_{j=1}^{i-1}\mathcal{M}_{ij}u^{p+1}_j - \sum_{j=i+1}^{M-1}\mathcal{M}_{ij}u^p_j\right).$

Step 2: Define $u^{p+1}$ by:

$u^{p+1}_i = u^p_i + \omega\left(h^{p+1}_i - u^p_i\right)$

Step 3: Set $p = p + 1$ and repeat until $|u^{p+1} - u^p| < \epsilon$, where $\epsilon$ is the prescribed precision.

In practice one stores all the $u^p$ in the same computer memory, so the exponent $p$ does not appear in the computer program except as a loop index.
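A direct transcription of these steps for a tridiagonal matrix stored by diagonals (illustrative sketch; the in-place update on u implements the remark about storing all the $u^p$ in the same memory):

```c
#include <math.h>

/* SOR for a tridiagonal system a[i]u[i-1]+b[i]u[i]+c[i]u[i+1]=v[i],
   1-based indices, omega in (1,2). Returns the number of sweeps used,
   or -1 if no convergence within maxit. Illustrative sketch. */
static int sor_tridiag(int n, const double *a, const double *b,
                       const double *c, const double *v, double *u,
                       double omega, double eps, int maxit)
{
    int i, p;
    for (p = 0; p < maxit; p++) {
        double diff = 0.;
        for (i = 1; i <= n; i++) {
            double s = v[i];
            if (i > 1) s -= a[i] * u[i - 1];   /* already updated: u^{p+1} */
            if (i < n) s -= c[i] * u[i + 1];   /* not yet updated: u^p    */
            double hh = s / b[i];              /* intermediate value h_i  */
            double du = omega * (hh - u[i]);
            u[i] += du;                        /* over-relaxed update     */
            if (fabs(du) > diff) diff = fabs(du);
        }
        if (diff < eps) return p + 1;
    }
    return -1;
}
```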
15 Finite Differences for American Vanilla Options

15.1 Variational inequality in finite dimension

Consider the following approximating obstacle problem on $Q_l = [0,T]\times\Omega_l$, where $\Omega_l = ]x-l,x+l[$:

$\begin{cases}\max\left(\dfrac{\partial u}{\partial t} + Au,\ \phi - u\right) = 0\\ u(T,\cdot) = \phi\end{cases} \qquad (19)$

with a Dirichlet boundary condition $u = \phi$ on $]0,T[\times\partial\Omega_l$.

In order to perform the numerical analysis of the obstacle problem (19), we introduce a finite difference grid in space similar to the European case and construct an approximation

$u_{h,k}(t,x) = \sum_{n=0}^Nu^n_h(x)\,\mathbf{1}_{[nk,(n+1)k[}(t)$

where $u^0_h,\ldots,u^N_h$ are the elements of $V_M$ satisfying

$\begin{cases}u^N_h = \phi_h\\ u^n_h \geq \phi_h \text{ and } \left(\dfrac{u^{n+1}_h - u^n_h}{k} + A_h\left(u^{n+1}_h + \theta\left(u^n_h - u^{n+1}_h\right)\right),\ v_h - u^n_h\right)_{\Omega_l} \leq 0\quad\forall v_h \geq \phi_h\end{cases} \qquad (20)$

Let us describe in the next two sections the computational treatment of variational inequalities in finite dimension (20). We refer to [19, 15] for a detailed presentation and a better understanding of the numerical analysis of variational inequalities.
15.2 Linear complementarity problem

It is well known that the variational inequality in finite dimension (20) can be expressed as a linear complementarity problem.

At each time step $n$, we have to solve

$\begin{cases}MX \geq G\\ X \geq \Phi\\ (MX - G,\ X - \Phi) = 0\end{cases} \qquad (21)$

with

$\begin{cases}M = I - k\theta A_h\\ X = u^n\\ G = \left(I + k(1-\theta)A_h\right)u^{n+1}\\ \Phi = \phi_h\end{cases}$
Remark that $M$ is a tridiagonal matrix:

$M = \begin{pmatrix}b & c & & & 0\\ a & b & c & & \\ & \ddots & \ddots & \ddots & \\ & & a & b & c\\ 0 & & & a & b\end{pmatrix}$

with

$\begin{cases}a = k\theta\left(-\dfrac{\sigma^2}{2h^2} + \dfrac{1}{2h}\left(r - \dfrac{\sigma^2}{2}\right)\right)\\ b = 1 + k\theta\left(\dfrac{\sigma^2}{h^2} + r\right)\\ c = -k\theta\left(\dfrac{\sigma^2}{2h^2} + \dfrac{1}{2h}\left(r - \dfrac{\sigma^2}{2}\right)\right)\end{cases}$

There exist three classic algorithms which solve the linear complementarity problem (21 on the page before):
A method by Brennan and Schwartz [5]  This method consists in solving an auxiliary linear complementarity problem. We refer to Jaillet et al. [19] for a rigorous justification of the convergence of this algorithm in the case of the American Put.
PSOR Method  The linear complementarity problem (21 on page 169) can be written as follows: find vectors $W = (w_i)_{0\leq i\leq M-1}$ and $Z = (z_i)_{0\leq i\leq M-1}$ in $\mathbb{R}^M$ such that

$\begin{cases}W = MZ + V & (22.1)\\ W \geq 0,\ Z \geq 0 & (22.2)\\ (W, Z) = 0 & (22.3)\end{cases} \qquad (22)$

where we have set $Z = X - \Phi$ and $V = M\Phi - G$.

Such a linear complementarity problem can be solved with a Projected-SOR scheme in the following manner:

Step 0: Choose $z^0 \geq 0$ and $1 < \omega < 2$. Then set $p = 0$.
Step 1: Form:

$y^{p+1}_i = \frac{1}{M_{ii}}\left(v_i - \sum_{j=1}^{i-1}M_{ij}z^{p+1}_j - \sum_{j=i+1}^{M}M_{ij}z^p_j\right)$

Step 2: Define the new vector $z^{p+1}$ by:

$z^{p+1}_i = \max\left(\phi(x_i),\ z^p_i + \omega\left(y^{p+1}_i - z^p_i\right)\right)$

Step 3: Set $p = p + 1$ and repeat until $|z^{p+1} - z^p| < \epsilon$, where $\epsilon$ is the prescribed precision.

Convergence has been established by Cryer [11].
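The projection in Step 2 is the only change relative to the European SOR loop. The sketch below is illustrative (tridiagonal $M$ stored by diagonals, iterating on the price vector with obstacle phi); its output can be checked against the complementarity conditions on a small example:

```c
#include <math.h>

/* Projected SOR for the LCP: M x >= g, x >= phi, (Mx - g, x - phi) = 0,
   with tridiagonal M (1-based diagonals a, b, c). Returns the number of
   sweeps used, or -1 if no convergence. Illustrative sketch. */
static int psor(int n, const double *a, const double *b, const double *c,
                const double *g, const double *phi, double *x,
                double omega, double eps, int maxit)
{
    int i, p;
    for (p = 0; p < maxit; p++) {
        double diff = 0.;
        for (i = 1; i <= n; i++) {
            double s = g[i];
            if (i > 1) s -= a[i] * x[i - 1];
            if (i < n) s -= c[i] * x[i + 1];
            double y = x[i] + omega * (s / b[i] - x[i]);
            double xn = y > phi[i] ? y : phi[i];   /* projection step */
            if (fabs(xn - x[i]) > diff) diff = fabs(xn - x[i]);
            x[i] = xn;
        }
        if (diff < eps) return p + 1;
    }
    return -1;
}
```

In the test, the unconstrained solution of the 3-by-3 system is (1.5, 2, 1.5); with obstacle 1.9 everywhere the LCP solution pins the boundary entries at the obstacle and raises the middle one to 2.4.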
An Algorithm of Cryer  This algorithm is based on a direct method and is a modification of Saigal's (1970) algorithm. The basic idea of this kind of algorithm is: choose an initial value which satisfies both (22.1) and (22.2), maintain the two conditions during all steps, and make the iterate gradually satisfy the complementarity condition (22.3). The solution of problem (22) is then obtained. Note that the matrix $M$ is a Minkowski matrix, namely a matrix with positive principal minors, positive diagonal entries and non-positive off-diagonal entries. This implies in particular that $M^{-1} \geq 0$. The Cryer algorithm is valid for any Minkowski matrix. In the particular case, such as ours, where $M$ is a tridiagonal Minkowski matrix, an implementation of this basic method which minimizes the amount of computation can be found in [12].
15.3 Splitting methods

We give an alternative method to solve variational inequalities in finite dimension (20 on page 168) which is not related to linear complementarity problems.

The splitting methods can be viewed as an analytic version of dynamic programming. The idea contained in such a scheme is to split the American problem into two steps: we construct recursively the approximate solution u_{h,k}, starting from u^N = φ and computing u_h^n for 0 ≤ n ≤ N in two steps as follows:

Step 1: We solve the following Cauchy problem on [nk, (n+1)k[ × ]x−l, x+l[ with Dirichlet or Neumann boundary conditions:

    ∂w/∂t + Aw = 0,  (t, x) ∈ [nk, (n+1)k[ × ]x−l, x+l[
    w((n+1)k, ·) = u_h^{n+1}(·)

Denote by S_k[u_h^{n+1}](·) the solution w.

Step 2:

    u_h^n(·) = max( φ_h(·), S_k[u_h^{n+1}](·) )

Barles–Daher–Romano [4] prove the convergence of this scheme towards the option price, characterized as the unique viscosity solution of 19 on page 167 satisfying suitable growth conditions [10]. As described for the European case, we solve the first step using θ-schemes. Moreover, one can prove that the approximate solutions obtained by splitting methods are bounded above by those obtained by methods related to linear complementarity problems.
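A minimal sketch of one backward splitting step, u^n = max(φ, S_k[u^{n+1}]), with S_k approximated here by a fully explicit diffusion step for brevity (the notes solve Step 1 with a θ-scheme instead; the function name, grid handling and the omission of drift and discounting are illustrative simplifications):

```c
#include <stddef.h>

/* One backward step u^n = max(phi, S_k[u^{n+1}]) of the splitting
   scheme. S_k is approximated by an explicit step of the heat
   operator (sigma^2/2) d^2/dx^2; drift and discounting are omitted.
   Arrays have N+1 points with spacing h; the boundary values are
   simply carried over (Dirichlet-type). */
static void splitting_step(size_t N, double k, double h, double sig,
                           const double *phi, const double *unext,
                           double *un)
{
    double lam = 0.5 * sig * sig * k / (h * h);
    un[0] = unext[0];
    un[N] = unext[N];
    for (size_t i = 1; i < N; i++) {
        /* Step 1: explicit diffusion step, (S_k[u^{n+1}])_i */
        double w = unext[i]
                 + lam * (unext[i-1] - 2.0*unext[i] + unext[i+1]);
        /* Step 2: projection onto the obstacle phi */
        un[i] = (w > phi[i]) ? w : phi[i];
    }
}
```

When the obstacle is inactive this is just one explicit time step; when it binds, the projection enforces the early-exercise constraint node by node.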
16 Finite Difference θ-scheme Algorithm for Vanilla Options

Description:
Input parameters:
SpaceStepNumber N
TimeStepNumber M
Theta θ, 1/2 ≤ θ ≤ 1
Boundary Condition Dirichlet(0) or Neumann(1) bound
Output parameters:
Price
Delta
Code Sample:
static int boundary(...)
{
  /*Natural Dirichlet Boundary Conditions*/
  if (bound==0) {
    *ptbound1=(p->Compute)(p->Par,exp(y-l));
    *ptbound2=(p->Compute)(p->Par,exp(y+l));
  }
  /*Natural Neumann Boundary Conditions*/
  else {
    if ((p->Compute) == &Call) {
      *ptbound1=0.;
      *ptbound2=-exp(y+l)*h;
    }
    if ((p->Compute) == &Put) {
      *ptbound1=-exp(y-l)*h;
      *ptbound2=0.;
    }
    if (((p->Compute) == &CallSpread)||((p->Compute) == &Digit)) {
      *ptbound1=0.;
      *ptbound2=0.;
    }
  }
  return OK;
}

static int GaussThetaSchema(...)
{
  ...
  /*Time Step*/
  k=t/(double)M;

  /*Space Localisation*/
  vv=0.5*SQR(sigma);
  z=(r-divid)-vv;
  l=sigma*sqrt(t)*sqrt(log(1.0/PRECISION))+fabs(z*t);

  /*Space Step*/
  h=2.0*l/(double)N;

  /*Peclet Condition - coefficient of diffusion augmented*/
  if ((h*fabs(z))<=vv)
    upwind_alphacoef=0.5;
  else {
    if (z>0.) upwind_alphacoef=0.0;
    else upwind_alphacoef=1.0;
  }
  vv-=z*h*(upwind_alphacoef-0.5);

  /*Lhs Factor of theta-schema*/
  alpha=k*(-vv/(h*h)+z/(2.0*h));
  beta=k*(r+2.*vv/(h*h));
  gamma=k*(-vv/(h*h)-z/(2.0*h));
  for(PriceIndex=1;PriceIndex<=N-1;PriceIndex++) {
    A[PriceIndex]=theta*alpha;
    B[PriceIndex]=1.0+theta*beta;
    C[PriceIndex]=theta*gamma;
  }
  /*Neumann Boundary Condition*/
  if (bound==1) {
    B[1]=1.0+theta*beta+theta*alpha;
    B[N-1]=1.0+theta*beta+theta*gamma;
  }

  /*Rhs Factor of theta-schema*/
  alpha1=k*(vv/(h*h)-z/(2.0*h));
  beta1=-k*(r+2.*vv/(h*h));
  gamma1=k*(vv/(h*h)+z/(2.0*h));
  for(PriceIndex=1;PriceIndex<=N-1;PriceIndex++) {
    A1[PriceIndex]=(1.0-theta)*alpha1;
    B1[PriceIndex]=1.0+(1.0-theta)*beta1;
    C1[PriceIndex]=(1.0-theta)*gamma1;
  }
  /*Neumann Boundary Condition*/
  if (bound==1) {
    B1[1]=1.0+(1.0-theta)*beta1+(1.0-theta)*alpha1;
    B1[N-1]=1.0+(1.0-theta)*beta1+(1.0-theta)*gamma1;
  }

  /*Set Gauss*/
  for(PriceIndex=N-2;PriceIndex>=1;PriceIndex--)
    B[PriceIndex]=B[PriceIndex]-C[PriceIndex]*A[PriceIndex+1]/B[PriceIndex+1];
  for(PriceIndex=1;PriceIndex<N;PriceIndex++)
    A[PriceIndex]=A[PriceIndex]/B[PriceIndex];
  for(PriceIndex=1;PriceIndex<N-1;PriceIndex++)
    C[PriceIndex]=C[PriceIndex]/B[PriceIndex+1];

  /*Terminal Values*/
  y=log(s);
  for(PriceIndex=1;PriceIndex<N;PriceIndex++) {
    Obst[PriceIndex]=(p->Compute)(p->Par,exp(y-l+(double)PriceIndex*h));
    P[PriceIndex]=Obst[PriceIndex];
  }
  dummy=boundary(bound,p,y,l,h,&bound1,&bound2);

  /*Finite Difference Cycle*/
  for(TimeIndex=1;TimeIndex<=M;TimeIndex++)
  {
    /*Set rhs*/
    S[1]=B1[1]*P[1]+C1[1]*P[2]+A1[1]*bound1+alpha1*bound1;
    for(PriceIndex=2;PriceIndex<N-1;PriceIndex++)
      S[PriceIndex]=A1[PriceIndex]*P[PriceIndex-1]+
        B1[PriceIndex]*P[PriceIndex]+
        C1[PriceIndex]*P[PriceIndex+1];
    S[N-1]=A1[N-1]*P[N-2]+B1[N-1]*P[N-1]+C1[N-1]*bound2+gamma1*bound2;

    /*Solve the system*/
    for(PriceIndex=N-2;PriceIndex>=1;PriceIndex--)
      S[PriceIndex]=S[PriceIndex]-C[PriceIndex]*S[PriceIndex+1];
    P[1]=S[1]/B[1];
    for(PriceIndex=2;PriceIndex<N;PriceIndex++)
      P[PriceIndex]=S[PriceIndex]/B[PriceIndex]-A[PriceIndex]*P[PriceIndex-1];

    /*Splitting for the american case*/
    if (am)
      for(PriceIndex=1;PriceIndex<N;PriceIndex++)
        P[PriceIndex]=MAX(Obst[PriceIndex],P[PriceIndex]);
  }
  Index=(int)floor((double)N/2.0);

  /*Price*/
  *ptprice=P[Index];

  /*Delta*/
  *ptdelta=(P[Index+1]-P[Index-1])/(2.0*s*h);
  ...
  return OK;
}
Further Comments:

/*Natural Dirichlet Boundary Conditions*/
/*Natural Neumann Boundary Conditions*/

/*Time Step*/
Define the time step k = T/M.

/*Space Localisation*/
Define the integration domain D = [−l, l] using the probabilistic estimate 12 on page 151.

/*Space Step*/
Define the space step h = 2l/N.

/*Peclet Condition*/
If |r − δ − σ²/2| h / σ² is not small, then a more stable (upwind) finite difference approximation is used.

/*Lhs Factor of theta-schema*/
Initialize the matrix M issued from the implicit method in the cases of Dirichlet and Neumann boundary conditions.

/*Neumann Boundary Condition*/
The value of the variable bound indicates the user's choice of boundary condition. If bound = 1 then we modify M to take the Neumann boundary condition into account.

/*Rhs Factor of theta-schema*/
Initialize the matrix N issued from the explicit method in the cases of Dirichlet and Neumann boundary conditions.

/*Neumann Boundary Condition*/
If bound = 1 then we modify N to take the Neumann boundary condition into account.

/*Set Gauss*/
This part concerns the LU factorization of the tridiagonal matrix M. The first loop initializes U, whereas the others initialize L.

/*Terminal Values*/
Put the values of the payoff, saved in Obst, into a vector P which will be used to store the option values.

/*Finite Difference Cycle*/
At each time step, described by the loop in the variable TimeIndex, we have to solve the system Mu = NP + k v_h.

/*Set rhs*/
Compute NP + k v_h and save the result in the vector S.

/*Solve the system*/
We solve the system Mu = S in two steps by the Gauss method. The result is saved in P.
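The tridiagonal LU factorization and the two-sweep solve used here amount to the classical Thomas algorithm; a standalone sketch (the array names a, b, c, d are generic, not those of the code sample):

```c
#include <stddef.h>

/* Solve a tridiagonal system T x = d in place, where T has
   sub-diagonal a[1..n-1], diagonal b[0..n-1] and super-diagonal
   c[0..n-2]. This is the Thomas algorithm: one forward
   elimination sweep, then back-substitution. b and d are
   overwritten; the solution ends up in d. */
static void thomas(size_t n, const double *a, double *b,
                   const double *c, double *d)
{
    for (size_t i = 1; i < n; i++) {
        double m = a[i] / b[i-1];
        b[i] -= m * c[i-1];
        d[i] -= m * d[i-1];
    }
    d[n-1] /= b[n-1];
    for (size_t i = n - 1; i-- > 0; )
        d[i] = (d[i] - c[i] * d[i+1]) / b[i];
}
```

The cost is O(n) per time step, which is what makes implicit θ-schemes competitive with explicit ones.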
/*Splitting for the american case*/
For American options, we compare at each time step the solution of Mu = NP + k v_h, saved in P, with the payoff function saved in Obst. We save the result in P.

/*Price*/
/*Delta*/
17 Finite Differences for 2D Vanilla Options

The purpose of this section is to describe various algorithms for pricing options in the two-dimensional Black-Scholes setting, based upon the A.D.I. methods of Peaceman and Rachford [39, 33].

The stock prices at time t satisfy the following stochastic differential equations:

    dS¹_t = S¹_t ( r dt + σ11 dW¹_t + σ12 dW²_t ),  log(S¹_0) = x¹_0
    dS²_t = S²_t ( r dt + σ21 dW¹_t + σ22 dW²_t ),  log(S²_0) = x²_0

where

    ( σ11  σ12 )   ( σ1     0           )
    ( σ21  σ22 ) = ( ρ σ2   σ2 √(1−ρ²)  )

In order to adapt the A.D.I. algorithm to the discretization of the parabolic equation related to the pricing of European options, we'd rather work with the underlying two-dimensional Brownian motion.
We introduce some notation:

μ is the vector with components ( r − δ1 − σ11²/2 − σ12²/2 , r − δ2 − σ21²/2 − σ22²/2 )

for x = (x1, x2), exp(x) = (e^{x1}, e^{x2})

ψ(t, w) = φ(exp(x0 + μt + σw)), where x0 = (x¹_0, x²_0) and σ is the matrix above, so that the payoff of the option is given by ψ(t, W_t)
The price of a 2D European option is given by

    u(0, 0, 0) = E[ e^{−rT} ψ(T, W¹_T, W²_T) ]

and can be formulated in terms of the solution to the two-dimensional equation

    ∂u/∂t (t,x,y) + (1/2) ∂²u/∂x² (t,x,y) + (1/2) ∂²u/∂y² (t,x,y) − r u(t,x,y) = 0  in [0, T[ × Ω_l
    u(T, x, y) = ψ(T, x, y)

with localization Ω_l = ]−l, l[² and with Dirichlet boundary condition u = ψ on ]0, T[ × ∂Ω_l.

For the numerical solution of the problem by the finite difference method, we introduce a grid of mesh points (t, w1, w2) = (nk, ih, jh), where h, k are mesh parameters (which are thought of as tending to zero) and n, i, j are integers. The approximate solutions of the problem will be denoted by u^n_h.
The delta-hedging ratios

    δ1 = (1/σ11) ( (1/e^{x1}) ∂u(0,0,0)/∂w1 − σ21 δ2 )
    δ2 = (1/(e^{x2} σ22)) ∂u(0,0,0)/∂w2

are approximated, respectively, by

    δ1 ≈ (1/σ11) ( (1/e^{x1}) (u_{i+1,j} − u_{i−1,j})/(2h) − σ21 δ2 )
    δ2 ≈ (1/(e^{x2} σ22)) (u_{i,j+1} − u_{i,j−1})/(2h)
17.1 Numerical integration by an A.D.I. Method

Alternating Direction Implicit methods were proposed by Peaceman and Rachford [39]. At each time step, one can integrate in each direction by using the usual finite difference method for 1D problems, in two steps, the first one in x, the second in y:

    (u^{n+1}_h − u^{n+1/2}_h)/(k/2) + (1/2) Δ¹_h u^{n+1/2}_h + (1/2) Δ²_h u^{n+1}_h − (1/2) r u^{n+1/2}_h − (1/2) r u^{n+1}_h = 0
    (u^{n+1/2}_h − u^n_h)/(k/2) + (1/2) Δ¹_h u^{n+1/2}_h + (1/2) Δ²_h u^n_h − (1/2) r u^{n+1/2}_h − (1/2) r u^n_h = 0

with

    (Δ¹_h u)_{i,j} = (u_{i−1,j} − 2 u_{i,j} + u_{i+1,j}) / h²
    (Δ²_h u)_{i,j} = (u_{i,j−1} − 2 u_{i,j} + u_{i,j+1}) / h²

Because each time step is an implicit 1D problem, it requires the solution of a linear system with a tridiagonal matrix, so that one can use the Gauss method.
17.2 American Options

The price of a 2D American option, given by the Snell envelope

    u(t, x, y) = sup_{τ ∈ T_{t,T}} E[ e^{−rτ} ψ(τ, W¹_τ, W²_τ) ]

can be formulated in terms of the solution to the following variational inequality

    max( ψ − u, ∂u/∂t + (1/2) u_xx + (1/2) u_yy − ru ) = 0,  (t, x, y) in [0, T[ × Ω_l
    u(T, x, y) = ψ(T, x, y)

with a Dirichlet boundary condition u = ψ on ]0, T[ × ∂Ω_l.

For the numerical solution of the problem by the finite difference method, we introduce again a grid of mesh points (t, x, y) = (nk, ih, jh). The approximate solutions of the problem will be denoted by u^n_h. To solve the inequality one combines the projection step of the splitting scheme with the A.D.I. finite difference method [43].
17.3 Algorithm (A.D.I. BS2D)

Description: Alternating Direction Implicit methods were proposed by Peaceman and Rachford [39]. At each time step, one can integrate in each direction.

In the American case, to solve the inequality one combines the projection step of the splitting scheme with the A.D.I. finite difference method. The idea of this scheme [43] is to split the American problem into two steps.

Input parameters:
TimeStepNumber M
SpaceStepNumber N
Output parameters:
Price
Delta1
Delta2
Code Sample:
static void init(double **g,double **u,double alpha,double beta,double gam,int flag,int N)
{
  int i,j;
  if (flag==1)
  {
    for(i=1;i<N;i++)
      for(j=1;j<N;j++)
        g[i][j]=alpha*u[i][j-1]+beta*u[i][j]+gam*u[i][j+1];
  }
  else
  {
    for(i=1;i<N;i++)
      for(j=1;j<N;j++)
        g[j][i]=alpha*u[j-1][i]+beta*u[j][i]+gam*u[j+1][i];
  }
  return;
}

static void swap(double **u,double **g,int N)
{
  int i,j;
  for(i=1;i<N;i++)
    for(j=1;j<N;j++)
      u[i][j]=g[i][j];
  return;
}

static int tri(...)
{
  ...
  /*Gauss Algorithm*/
  b[N-1]=beta;
  for(i=N-2;i>=1;i--) b[i]=beta-gam*alpha/b[i+1];
  for(i=1;i<=N-1;i++) a[i]=alpha/b[i];
  for(i=1;i<N-1;i++) c[i]=gam/b[i+1];
  if (flag==1) {
    for(j=1;j<N;j++)
    {
      for(z=N-2;z>=1;z--)
        g[z][j]=g[z][j]-g[z+1][j]*c[z];
      g[1][j]=g[1][j]/b[1];
      for (z=2;z<N;z++)
        g[z][j]=(g[z][j]/b[z]-g[z-1][j]*a[z]);
    }
  } else {
    for(j=1;j<N;j++)
    {
      for(z=N-2;z>=1;z--)
        g[j][z]=g[j][z]-g[j][z+1]*c[z];
      g[j][1]=g[j][1]/b[1];
      for (z=2;z<N;z++)
        g[j][z]=(g[j][z]/b[z]-g[j][z-1]*a[z]);
    }
  }
  ...
  return OK;
}

static int Adi(...)
{
  ...
  /*Covariance Matrix*/
  sigma11=sigma1;
  sigma12=0.0;
  sigma21=rho*sigma2;
  sigma22=sigma2*sqrt(1.0-SQR(rho));
  m1=(r-divid1)-SQR(sigma11)/2.0;
  m2=(r-divid2)-(SQR(sigma21)+SQR(sigma22))/2.0;

  /*Space Localisation*/
  limit=sqrt(t)*sqrt(log(1/PRECISION));
  h=2.*limit/(double)N;

  /*Time Step*/
  k=t/(2.*(double)M);

  /*Rhs Factor*/
  b1=1.-k*(1./SQR(h)+r/2.0);
  a1=k/(2.0*SQR(h));

  /*Lhs Factor*/
  b2=1.+k*(1./SQR(h)+r/2.0);
  a2=-k/(2.0*SQR(h));

  /*Terminal Values*/
  x1=log(s1);
  x2=log(s2);
  trend1=exp(x1+m1*t);
  trend2=exp(x2+m2*t);
  for (j=0;j<=N;j++)
    temp1[j]=exp(sigma11*(-limit+h*(double)j));
  for(i=1;i<N;i++)
  {
    for (j=1;j<N;j++)
    {
      temp2[i][j]=exp(sigma21*(-limit+h*(double)j)+sigma22*(limit-h*(double)i));
      P[i][j]=(p->Compute)(p->Par,trend1*temp1[j],trend2*temp2[i][j]);
    }
  }

  /*Homogeneous Dirichlet Conditions*/
  for(i=0;i<=N;i++)
  {
    P[i][0]=0.;
    P[i][N]=0.;
    P[0][i]=0.;
    P[N][i]=0.;
  }

  /*Finite Difference Cycle*/
  scan1=exp(-m1*2.*k);
  scan2=exp(-m2*2.*k);
  for (TimeIndex=1;TimeIndex<=M;TimeIndex++)
  {
    trend1*=scan1;
    trend2*=scan2;

    /*First Step*/
    flag=1;
    /*Init Rhs*/
    init(G,P,a1,b1,a1,flag,N);
    /*Gauss Algorithm*/
    tri(G,a2,b2,a2,flag,N);
    swap(P,G,N);

    /*Second Step*/
    flag=2;
    /*Init Rhs*/
    init(G,P,a1,b1,a1,flag,N);
    /*Gauss Algorithm*/
    tri(G,a2,b2,a2,flag,N);
    swap(P,G,N);

    /*Splitting for the american case*/
    if (am)
    {
      for(i=1;i<N;i++)
      {
        for(j=1;j<N;j++)
        {
          iv=(p->Compute)(p->Par,trend1*temp1[j],trend2*temp2[i][j]);
          P[i][j]=MAX(iv,P[i][j]);
        }
      }
    }
  }
  Index=(int)((double)N/2.0);

  /*Price*/
  *ptprice=P[Index][Index];

  /*Deltas*/
  *ptdelta2=(P[Index-1][Index]-P[Index+1][Index])/(2.*s2*h*sigma22);
  *ptdelta1=((P[Index][Index+1]-P[Index][Index-1])/(2.*s1*h)-sigma21*(*ptdelta2))/sigma11;
  ...
  return OK;
}
Further Comments:

/*Memory Allocation*/
/*Covariance Matrix*/

/*Space Localisation*/
Define the integration domain D = [−l, l]² using the probabilistic estimate.

/*Space Step*/
Define the space step h = 2l/N.

/*Time Step*/

/*Rhs Factor*/
The right-hand side factor of each step of the A.D.I. scheme.

/*Lhs Factor*/
The left-hand side factor of each step of the A.D.I. scheme.

/*Terminal Values*/
Put the values of the payoff into the array P.

/*Homogeneous Dirichlet Conditions*/

/*Finite Difference Cycle*/
At each time step, described by the loop in the variable TimeIndex, we have to solve the two tridiagonal systems of the A.D.I. scheme.

/*First Step*/
First step of the A.D.I. scheme.
/*Init Rhs*/
Compute the right-hand side.
/*Gauss Algorithm*/
Solution of the linear system by the Gauss method.

/*Second Step*/
Second step of the A.D.I. scheme.
/*Init Rhs*/
Compute the right-hand side.
/*Gauss Algorithm*/
Solution of the linear system by the Gauss method.

/*Splitting for the american case*/
For American options, we compare at each time step the solution in P with the payoff function saved in iv. We save the result in P.

/*Price*/
/*Delta*/
18 Finite Differences for Exotic Options

18.1 Lookback Options

The price of a Lookback option, with payoff φ(S_T, m_T) where m_t = sup_{0≤s≤t} S_s, satisfies the usual PDE in the (t, S) variables, but in the subdomain S ≤ m of the three-dimensional state space (t, S, m). Moreover, on the boundary S = m, the price P satisfies the homogeneous oblique Neumann condition ∂P/∂m = 0 [44].

This can be derived formally by writing the PDE satisfied by the price of the option with approximating payoff φ(S_T, m_T(n)), where

    m_t(n) = ( ∫_0^t S_s^n ds )^{1/n},  dm_t(n) = (1/n) S_t^n m_t(n)^{1−n} dt

and letting n tend to +∞.

An intuitive insight into the homogeneous oblique Neumann boundary condition is given by the fact that when S_t gets close to its current maximum m_t, it becomes rather sure that this value of m_t will not stay the maximum up to expiry. It follows that the option value must be insensitive to small changes in this value of m_t, namely ∂P/∂m = 0 if m = S.
18.2 Barrier Options

We gather under the generic term barrier every option whose value solves the following linear parabolic partial differential equation:

    ∂u/∂t + (1/2) σ² ∂²u/∂x² + (r − σ²/2) ∂u/∂x − ru = 0  in [0, T) × Ω,
    u(T, y) = φ(y),  y ∈ Ω,
    u(t, y) = R(t, y),  (t, y) ∈ [0, T] × ∂Ω.        (23)

Let us give some examples:

Out Options. We consider only the case of a down barrier L; the discussion for an upper barrier U is similar. For this option, Ω = ]L, x+l[ and R(t, L) = R.

In Options. For this option, Ω = ]L, x+l[, φ(y) = R and R(t, L) = C(T − t, L), where C is the price of a vanilla European call with maturity T − t.

Double Barrier Out Options. For this option, Ω = ]L, U[ and R(t, L) = R(t, U) = R.

Double Barrier In Options. For this option, Ω = ]L, U[, φ(y) = R, R(t, L) = C(T − t, L) and R(t, U) = C(T − t, U).

Algorithm

For barrier options, one also discretizes in space and time with a θ-scheme, and one solves with the Gauss method in the case of European options and with the PSOR method or the splitting method in the American case. To obtain accurate prices, grid points are located on the barrier, where we impose Dirichlet boundary conditions. In the case of curved barriers in particular, it can be useful to apply implicit Dirichlet boundary conditions at fictitious grid points that would be located on the barrier [33].

One uses linear interpolation to find the price and delta values corresponding to the initial stock price. If the initial stock price is close to the barrier, one uses for the delta a one-sided second-order difference approximation.
18.3 Asian Options

Rogers-Shi Fixed-Strike Asian Options

European-style Asian options may be valued using one-dimensional PDEs, based on a scaling property of geometric Brownian motion [41].

Let y = K/x and b = r − δ. The price of a fixed-strike Asian call,

    C_a(0, x) = E[ e^{−rT} ( (1/T) ∫_0^T S_s ds − K )_+ ],

can be formulated as

    C_a(0, x) = e^{−δT} x u(0, y)

where u is the solution of the following PDE:

    ∂u/∂t + (1/2) σ² x² ∂²u/∂x² − (1/T + bx) ∂u/∂x = 0  in [0, T) × R_+
    u(t, 0) = (1 − e^{−b(T−t)}) / (bT)
    u(T, x) = 0 in R_+,  u(t, ∞) = 0.

The delta of the fixed-strike Asian call option is given by:

    Δ_a = e^{−δT} ( u(0, y) − y ∂u(0, y)/∂y )

The price of a fixed-strike Asian put,

    P_a(0, x) = E[ e^{−rT} ( K − (1/T) ∫_0^T S_s ds )_+ ],

can be formulated as

    P_a(0, x) = e^{−δT} x u(0, K/x)

with u the solution of the PDE

    ∂u/∂t + (1/2) σ² x² ∂²u/∂x² − (1/T + bx) ∂u/∂x = 0  in [0, T) × R_+
    u(T, x) = x_+  in R.

The delta of the fixed-strike Asian put option is given by:

    Δ_a = e^{−δT} ( u(0, y) − y ∂u(0, y)/∂y )

The PDEs are solved with a finite difference time-implicit scheme. One discretizes in space and time and solves the linear system with the Gauss method. If necessary, one uses linear interpolation to find the option value corresponding to the initial stock price.
19 Dynamic tests

19.1 Delta-hedging

Dynamic hedging is a technique of portfolio insurance or position risk management in which an option-like return pattern is created by increasing or reducing the position in the underlying to simulate the delta change in value of an option position. Dynamic hedging relies on liquid and reasonably continuous markets. It addresses the question of hedging options discretely in time. The Black-Scholes analysis requires continuous hedging, which is possible in theory but impossible, and even undesirable, in practice. The simplest model for discrete hedging is to rehedge at fixed intervals of time h, a strategy commonly used with h ranging from one day to one week.

To fix ideas, we shall take the point of view of the seller of the option. Consider for example the case of a European call option with maturity T.
The picture is the following: we sell the option at time 0 at price C_0, that is, we receive at time 0 this amount of money, but in turn it is mandatory for us to pay at time T the pay-off (S_T − K)_+, which may be very high depending on the movements of the underlying.

The idea is to go to the underlying market to buy some shares in order to hedge the possibility of a high increase of the underlying. So the very possibility to trade options, at least in a safe or quite safe manner, is closely related to the access to the underlying market, or more generally to hedging instruments. The hedge may be performed dynamically to balance in a required way the option pay-off.

We have at our disposal the following hedging formula, giving the expression of our Profit & Loss:

    P&L = C_0 + Σ_{n=0}^{N−1} δ_n(S_{nh}) ( S̃_{(n+1)h} − S̃_{nh} ) − φ̃(S_T)        (24)

where C_0 is the selling price, δ_n is the hedge ratio, S̃_{nh} is the discounted value of the spot at time nh, φ̃(S_T) the discounted pay-off, and N the number of hedges before exercise. It means that we sell the option at time 0 at price C_0, we hedge N times at regular time step h, and we pay the pay-off at time T = Nh.
To fix ideas, let us assume that the value of the spot is the solution of the Black-Scholes dynamics: dS_t = S_t (μ dt + σ dB_t). If the market maker was able to hedge continuously, he could perfectly replicate the option and the P&L would equal 0, provided he uses the Black-Scholes delta of the option as control δ. But in practice the market maker is not able to hedge continuously: it is impossible, and even if it were possible, it would be undesirable anyway, because of transaction costs. In practice the market maker hedges at discrete times (for example every day at closing price). By doing this, he produces a hedging error, so the P&L never equals 0.

More generally, assuming no arbitrage and working under the market risk-neutral probability, discrete hedging can also be interpreted as a control variate method [7] in which the quantity to compute is an option price C_0 = E φ̃(S_T), and the control variate is

    Σ_{n=0}^{N−1} δ_n(S_{nh}) ( S̃_{(n+1)h} − S̃_{nh} )

for a suitably-chosen control function δ.
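Formula (24) on page 216 can be sketched as follows. For a deterministic sanity check, note that hedging a forward contract (pay-off S_T − K, hedge ratio identically 1, zero interest rate) with C_0 = S_0 − K yields a P&L of exactly zero on every path; all names below are illustrative:

```c
#include <stddef.h>

/* Discrete-hedging P&L of formula (24), with zero interest rate so
   that discounted and undiscounted quantities coincide:
   P&L = C0 + sum_n delta_n(S_{nh}) (S_{(n+1)h} - S_{nh}) - payoff.
   path holds S_0, S_h, ..., S_{Nh}; deltas[n] is the hedge ratio
   held over [nh, (n+1)h); payoff is already evaluated at S_T. */
static double pnl(double C0, size_t N, const double *path,
                  const double *deltas, double payoff)
{
    double acc = C0;
    for (size_t n = 0; n < N; n++)
        acc += deltas[n] * (path[n+1] - path[n]);
    return acc - payoff;
}
```

For an option hedged with its Black-Scholes delta, the same function returns the (nonzero) discrete-hedging error discussed above.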
19.2 Dynamic tests using the Brownian Bridge in the Black-Scholes model

The solution of the Black-Scholes SDE is:

    S_t = s_0 exp( (μ − σ²/2) t + σ B_t )

so:

    S_{t+h} = S_t exp( (μ − σ²/2) h + σ (B_{t+h} − B_t) )        (25)

where B_{t+h} − B_t = √h X and X ∼ N(0, 1). Thus we can simulate the new value of the spot, step by step, multiplying the older spot by exp( (μ − σ²/2) h + σ √h X ) with X ∼ N(0, 1). Moreover it is interesting to be able to filter some noteworthy paths of the spot, selecting for example paths passing through a wanted spot value at a wanted moment. The aim of this effort is to observe the behaviour of the pricing routines in extreme situations of the spot's trajectory.
A Brownian Bridge is a centered Gaussian process defined on T = [0, 1] and with covariance Γ(s, t) = s(1 − t) for s ≤ t. The easiest way to prove that Γ is a covariance is to observe that for the process X_t = B_t − t B_1, where B is a Brownian motion, E[X_s X_t] = s(1 − t) for s ≤ t (indeed E[X_s X_t] = s − st − st + st = s(1 − t)). This also gives us immediately a continuous version of the Brownian Bridge. We observe that X_1 = 0 a.s., hence all the paths go a.s. from 0 at time 0 to 0 at time 1; this is the reason for the name given to this process. Naturally, the notion of Bridge may be extended to higher dimensions and to intervals other than [0, 1].
In our case, we want the path of the spot to pass through S_{T_1} at time T_1. For this we replace the Brownian motion in the last formula by a Brownian Bridge, so that the path passes through the wanted target point. So we set

    X^{0,T_1}_{x,y}(t) = (t/T_1)(y − x) + x + B_t − (t/T_1) B_{T_1}

with T_1 the time target, y the Brownian Bridge's target value, x the Brownian Bridge's starting value, and B a Brownian motion. This gives a Brownian Bridge whose path goes from x at time 0 to y at time T_1.

We set

    X^{0,T_1}_{x,y}(0) = x = 0
    X^{0,T_1}_{x,y}(T_1) = y = (1/σ) ( ln(S_{T_1}/S_0) − (μ − σ²/2) T_1 )

because we want:

    S_{T_1} = S_0 exp( (μ − σ²/2) T_1 + σ X^{0,T_1}_{x,y}(T_1) )
We obtain finally

    S_{t+h} = S_t exp( (μ − σ²/2) h + σ ( X^{0,T_1}_{x,y}(t + h) − X^{0,T_1}_{x,y}(t) ) )

Now, we would like to simulate the increment Δ = X^{0,T_1}_{x,y}(t + h) − X^{0,T_1}_{x,y}(t). After some calculations we find that

    Δ ∼ N( (y − X^{0,T_1}_{x,y}(t)) h/(T_1 − t) ,  h (1 − h/(T_1 − t)) )

This gives us the formula delivering the new value of the spot:

    S_{t+h} = S_t exp( (μ − σ²/2) h + σ ( (y − X^{0,T_1}_{x,y}(t)) h/(T_1 − t) + √( h (1 − h/(T_1 − t)) ) Z ) )        (26)

where Z ∼ N(0, 1). We note that this last formula is valid only for t + h ≤ T_1; for t > T_1 we use formula 25 on page 218.
The Dynamic Tests simulate several spot trajectories with these formulas at discrete times, and calculate for each path the corresponding P&L with the discrete hedging formula 24 on page 216. After this, we calculate some statistics on these P&Ls, like the mean and the standard deviation, and we determine the maximum and the minimum of the P&Ls found. We also keep some relevant spot trajectories generating extremal P&L positions, in order to output some graphics. With these graphic outputs, we can observe the behaviour of the hedging error made by the market maker hedging at discrete times with continuous models.

Using the Brownian Bridge, we can focus on some critical situations: we can decide to observe the behaviour of the pricing routines when the spot's trajectory passes through a critical point; for example we can force the spot to reach the barrier, in the case of a barrier option, at an interesting date like the maturity, because we know that this can cause numerical problems due to the expression of the hedge ratio at this position.

Dynamic tests give two kinds of information: they evaluate the hedging error made by the market maker by hedging at discrete times on continuous models (as it is in reality), and they detect problems in the behaviour of the pricing routines.
20 Conclusion

           Number of Operations   Convergence Rate   Memory Cost
    MC     N                      1/√N               N
    PDE    M^d                    1/M²               M^d

Table 1: Stochastic (N simulation runs) versus Deterministic (M meshes per space dimension) Numerical Schemes; space dimension d
When the space dimension of a model is three or less, deterministic methods are recommended. They give the price and Greeks at the same time and for a whole range of values of the spot, and they can easily deal with all the early exercise features.

Otherwise, or if one wants to be able to deal with any kind of path-dependency, simulation methods may be considered.

Always try to get estimates of the constants involved in the convergence rates before resorting to a particular method. Such estimates are often easier to get in the case of deterministic methods, which is another incentive to use deterministic methods when both kinds of methods are available.
Low-discrepancy sequences (Sobol sequences in particular) are often successful for financial applications. This is because the effective dimension of financial problems is often much lower than their nominal dimension. In such cases one can benefit from all the power of low-discrepancy sequences by assigning the main risk factors of the problem (by decreasing amount of variance explained) to the successive components of the points of a multi-dimensional low-discrepancy sequence. Though this requires a low-discrepancy sequence in dimension equal to the nominal dimension of the problem, which may be high, the fact that the first coordinates of the quasi-random points are assigned to the main risk factors of the problem allows one to avoid much of the drawbacks generally associated with high-dimensional low-discrepancy sequences.
References
[1] L.ANDERSEN R.BROTHERTON-RATCLIFFE. Exact
exotics, Risk, 9:8589, Oct 1996.
[2] M. AVELLANEDA, P. LAURENCE. Quantitative Modeling of
Derivative Securities from Theory to Practice, Chapman & Hall,
2000.
[3] F.BLACK M.SCHOLES. The pricing of Options and Corporate
Liabilities, Journal of Political Economy, 81:635654, 1973.
[4] G.BARLES, C. DAHER AND M. ROMANO. Convergence of
numerical schemes for parabolic equations arising in nance
theory, Math. Models Methodes App. Sci., 5, n
o
1, pp. 125143,
1995.
[5] M.J.BRENNAN E.S.SCHWARTZ. The valuation of the
American put option, J. of Finance, 32:449462, 1977.
S. Crpey Page 227
CIMPAJordanie September 2005
[6] M.BROADIE P.GLASSERMANN. Pricing american-style
securities using simulation, J.J.of Economic Dynamics and
Control, 21:13231352, 1997.
[7] L.CLEWLOW A.CARVEHILL. On the simulation of contingent
claims, Journal of Derivatives, 6673, Winter 1994.
[8] R. CONT, P. TANKOV. Financial Modellin with Jump Processes,
Chapman & Hall/CRC, 2003.
[9] J.COX S.ROSS M.RUBINSTEIN. Option pricing: a simplied
approach, J. of Economics,January 1978.
[10] M. CRANDALL, H. ISHII AND P.-L. LIONS. Users guide to
viscosity solutions of second order partial differential equations,
Bull. Amer. Math. Soc., 1992.
[11] C.W.CRYER. The solution of a quadratic programming problem
using systematic overrelaxation, SIAM J. Control,(9):385392,
1971.
S. Crpey Page 228
CIMPAJordanie September 2005
[12] C.W.CRYER. The efcient solution of linear complementarity
problems for tridiagonal minkowski matrices, ACM Trans. Math.
Softwave, (9):199214, 1983.
[13] E. FOURNI, J-M. LASRY, J. LEBUCHOUX, P-L. LIONS,
N. TOUZI. An application of Malliavin calculus to Monte Carlo
methods in Finance, Finance & Stochastics, vol. 4, no 3, 391412,
19993.
[14] P. GLASSERMAN. Monte Carlo Methods in Financial
Engineering, Springer , 2004.
[15] R. GLOWINSKI J-L. LIONS R. TREMOLIERES. Analyse
Numrique des Inquations Variationnelles, Dunod , 1976.
[16] S. HESTON. A Closed-Form Solution for Options with Stochastic
Volatility with Applications to Bond and Currency Options, Review
of Financial Studies, 6(2) 32743, 1993.
[17] J. HULL. Options, futures, & other derivatives, Prentice Hall , 5th
S. Crpey Page 229
CIMPAJordanie September 2005
edition, 2002.
[18] P. JCKEL. Stochastic Volatility Models: Past, Present and
Future, The Best of Wilmott 1, Chapter 23, 2005.
[19] P. JAILLET D. LAMBERTON B. LAPEYRE. Variational
Inequalities and the Pricing of American Options, Acta
Applicandae Mathematicae 21, pp. 263289, 1990.
[20] B.KAMRAD P.RITCHKEN. Multinomial approximating models
for options with k state variables, Management Science,
37:16401652, 1991.
S. Crpey Page 230
CIMPAJordanie September 2005
[21] I. KARATZAS AND S. SHREVE. Brownian Motion and
Stochastic Calculus, Springer, 1988.
[22] G.Z KEMNA AND A.C.F.VORST. A pricing method for options
based on average asset values, J. Banking Finan., 113129, March
1990.
[23] D.E.KNUTH. The Art of Computer programming, Seminumerical
Algorithms, volume 2, Addison-Wesley, 1981.
[24] H.KUSHNER. Probability Methods for Approximations in
Stochastic Control and for Elliptic Equations, Academic Press,
1977.
[25] H.KUSHNER P.G.DUPUIS. Numerical Methods for Stochastic
Control Problems in Continous Time, Springer-Verlag, 1992.
[26] Y.W.KWOK. Mathematical models of nancial derivatives,
Springer Finance, 1998.
S. Crpey Page 231
CIMPAJordanie September 2005
[27] D. LAMBERTON, B. LAPEYRE. Introduction au calcul
stochastique appliqu la nance, Ellipses, 1997.
[28] B. LAPEYRE, E. TEMAM.. Competitive Monte Carlo methods
for the pricing of Asian options , Journal of Computational
Finance, Volume 5 / Number 1, Fall 2001
[29] P. LECUYER.. Random numbers for simulation,
Communications of the ACM, 33(10), Octobre 1990.
S. Crpey Page 232
CIMPAJordanie September 2005
[30] P. LECUYER.. Uniform random number generation, The Annals
of Operations Research, 53:77120, 1994.
[31] P. LECUYER.. Random number generation, In The Hanbook of
Simulation, 1998.
[32] W.J. MOROKOFF AND R.E. CAFLISH. Quasi-random
sequences and their discrepancies, SIAM, Journal of Scientic
Computin, 5(6):12511279, nov 1994.
[33] K. MORTON AND D. MAYERS. Numerical Solution of Partial
Differential Equations, Cambridge university press, 1994.
[34] H. NIEDERREITER. New developments in uniform
pseudorandom number and vector generation, in Monte Carlo and
Quasi-Monte Carlo Methods in Scientific Computing, Lecture
Notes in Statistics, volume 106, Springer, pages 87–120, 1994.
[35] A.B. OWEN. Randomly permuted (t,m,s)-nets and
(t,s)-sequences, in H. Niederreiter and J. Shiue, editors,
Monte Carlo and Quasi-Monte Carlo Methods in Scientific
Computing, Springer, New York, 1995.
[36] H. NIEDERREITER. Random Number Generation and Quasi-Monte
Carlo Methods, Society for Industrial and Applied
Mathematics, 1992.
[37] H. NIEDERREITER. Point sets and sequences with small
discrepancy, Monatsh. Math., 104:273–337, 1987.
[38] E. PARDOUX, B. LAPEYRE AND R. SENTIS. Méthodes de
Monte Carlo pour les équations de transport et de diffusion,
Springer, 1998.
[39] D.W. PEACEMAN AND H.H. RACHFORD, JR. The numerical
solution of parabolic and elliptic differential equations, Journal
of SIAM, 3:28–42, 1955.
[40] P. RITCHKEN. On pricing barrier options, Journal of
Derivatives, pages 19–28, Winter 1995.
[41] L.C.G. ROGERS AND Z. SHI. The value of an Asian option, J.
Appl. Probab., 32(4):1077–1088, 1995.
[42] I.M. SOBOL'. The distribution of points in a cube and the
approximate evaluation of integrals, U.S.S.R. Comput. Math.
Math. Phys., 16:236–242, 1976.
[43] S. VILLENEUVE AND A. ZANETTE. Parabolic A.D.I. methods for
pricing American options on two stocks, Mathematics of Operations
Research, Vol. 27, Issue 1, 121–149, 2002.
[44] P. WILMOTT. Derivatives: The Theory and Practice of Financial
Engineering, Wiley, 1998.
[45] R. ZVAN, P.A. FORSYTH AND K.R. VETZAL. Robust numerical
methods for PDE models of Asian options, Journal of Computational
Finance, 1:39–78, 1998.
Websites
www-rocq.inria.fr/mathfi/Premia/index.html
screpey.free.fr
www.mathfinance.de
www.mathfinance.de/frontoffice.html
quantlib.org
www.nr.com
www.dri-ccf.com/CCF/driccf.nsf
www.gro.creditlyonnais.fr
www.iro.umontreal.ca/~lecuyer
www.mindview.net/Books/TICPP/ThinkingInCPP2e.html