
MSc Financial Mathematics - SMM302

3 Brownian Motions
Timeline:
1828 Brownian motion is introduced by the Scottish botanist Robert Brown in an attempt to describe the irregular motion of pollen grains suspended in liquid
1900 Louis Bachelier considers Brownian motion as a possible model for stock market values
1905 Albert Einstein considers Brownian motion as a model of the motion of a
particle in suspension and uses it to estimate Avogadro’s number
1923 Norbert Wiener defines and constructs Brownian motion rigorously for the
first time. The resulting stochastic process is often called the Wiener process
in his honour.

Definition 1 (Brownian motion) The process W := {Wt : t ≥ 0} is called standard Brownian motion if:

1. W0 = 0
2. for s ≤ t, Wt − Ws is independent of the past history of W until time s, i.e. the
Brownian motion has increments which are independent of the natural filtration
Fs = σ(Wτ : 0 ≤ τ ≤ s)
3. for 0 ≤ s ≤ t, Wt − Ws and Wt−s have the same distribution, which is Gaussian with mean zero and variance (t − s), i.e. Wt − Ws =ᵈ Wt−s ∼ N(0, t − s)
4. W has continuous sample paths

Properties 1 and 3 imply that Wt ∼ N(0, t).

Proposition 2 Cov(Wt , Ws ) = min(s, t) = s ∧ t

Proof. Assume that s ≤ t. Then

Cov(Ws , Wt ) = E [(Wt − EWt ) (Ws − EWs )]
             = E [Wt Ws ]
             = E [Ws (Wt − Ws )] + E [Ws²]
             = s.
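As a sanity check, Proposition 2 can also be verified numerically. The following sketch (plain Python, standard library only; all variable names are illustrative) estimates Cov(Ws, Wt) by Monte Carlo for s = 0.4, t = 1:

```python
import math
import random

rng = random.Random(0)
s, t, n = 0.4, 1.0, 200_000

acc = 0.0
for _ in range(n):
    ws = rng.gauss(0.0, math.sqrt(s))            # W_s ~ N(0, s)
    wt = ws + rng.gauss(0.0, math.sqrt(t - s))   # W_t = W_s + independent N(0, t - s) increment
    acc += ws * wt                               # both means are zero, so E[Ws Wt] estimates the covariance

cov_hat = acc / n
print(cov_hat)  # ≈ min(s, t) = 0.4
```

The estimate settles near min(s, t) = 0.4, matching the proposition.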


Exercise 1 a) Define a process by Xt := tξ where ξ is a standard Gaussian random
variable. Explain why X is not a Wiener process.
b) Define a process by Xt := 10Wt +100t where Wt is a standard one-dimensional Wiener
process. Compute the probability of the event that X1 < −900.
© Laura Ballotta - Do not reproduce without permission.


[Figure 1 appeared here: three panels showing one sample path each of Wt, Bt = µt + σWt, and Xt = e^((µ − σ²/2)t + σWt) over 364 daily steps.]

Figure 1: Sample trajectories of the Wiener process, the arithmetic Brownian motion and the geometric Brownian motion. Parameter set: T = 1 year; µ = 0.1 p.a.; σ = 0.2 p.a.


c) Define a process by Xt := √t ξ where ξ is a standard Gaussian random variable.
Explain why X is not a Wiener process.

The process defined by Bt = µt + σWt is a Brownian motion with drift, i.e.

Bt ∼ N(µt, σ²t).
An important process based on the Brownian motion, which is the basis of modern mathematical finance, is the so-called geometric Brownian motion (or Doléans-Dade exponential):

Xt = X0 e^(µt+σWt),  X0 > 0.

Sample trajectories of the processes Wt, Bt and Xt are shown in Figure 1.

Exercise 2 Let W be a one-dimensional standard Brownian motion. Show that the


following processes are Brownian motions.

• Bt = −Wt , for t ≥ 0.

• Bt = (1/√c) W_ct, for t ≥ 0 and c > 0.

• Bt = tW_(1/t) for t > 0, with B0 = 0.

Exercise 3 Let X, Y, Z be stochastic processes defined by


Xt = e^(σWt),  Yt = e^(σWt − σ²t/2),  Zt = e^(µt+σWt)

where W is a standard one-dimensional Wiener process and µ, σ are constants. Let Ft


denote the history of the Wiener process until time t.

i) Calculate E [XT ], E [YT ] and E [ZT ] for a given T ≥ 0.

ii) Calculate E [XT | Ft ], E [YT | Ft ] and E [ZT | Ft ] for 0 ≤ t ≤ T. Are X, Y and Z


martingales? Justify your answer.

Exercise 4 Consider the processes X, Y and Z given in Exercise 3. Let

Y = dP̂/dP;

then compute Ê [Xt | Fs ]; Ê [Yt | Fs ] and Ê [Zt | Fs ] for 0 ≤ s ≤ t.

3.1 The Martingale Property


Proposition 3 A standard Brownian motion is a martingale.
Proof. E|Wt| = √t E|Z|, where Z ∼ N(0, 1); this is finite (in fact, it is √(2t/π)). For s ≤ t,

E[Wt |Fs ] = E[Wt − Ws + Ws |Fs ] = E[Wt − Ws ] + Ws = Ws
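The key step in the proof is that the increment Wt − Ws has mean zero and is independent of Fs. Both facts can be checked numerically; the sketch below (standard library Python; names are illustrative) draws Ws and the increment separately, as the definition allows, and estimates E[Wt − Ws] and Cov(Ws, Wt − Ws):

```python
import math
import random

rng = random.Random(1)
s, t, n = 0.5, 1.0, 200_000

sum_inc, sum_cross = 0.0, 0.0
for _ in range(n):
    ws = rng.gauss(0.0, math.sqrt(s))        # W_s
    inc = rng.gauss(0.0, math.sqrt(t - s))   # W_t - W_s, drawn independently of W_s
    sum_inc += inc
    sum_cross += ws * inc

mean_inc = sum_inc / n   # ≈ E[W_t - W_s] = 0
cross = sum_cross / n    # ≈ Cov(W_s, W_t - W_s) = 0
print(mean_inc, cross)
```

Both estimates are close to zero, in line with the two properties used in the proof.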

Exercise 5 Let W be a one-dimensional standard Brownian motion. Show that the process Wt² − t is a martingale with respect to the natural filtration Ft = σ (Ws : 0 ≤ s ≤ t).

Exercise 6 An analyst wishes to use a model which is based on Brownian motion, but
which does not become too large and positive for large t. The model proposed is

Xt = Wt e−cWt ,

where Wt is a standard one-dimensional Brownian motion and c is a positive constant.


Verify that there is an upper bound which X never exceeds.

Exercise 7 The evolution of a stock price S is modelled by

St = eµt+σWt

where Wt is a standard one-dimensional Brownian motion, µ and σ are fixed parameters


and the initial value of the stock is S0 = 1.

1. Derive an expression for P (St ≤ x) .

2. Derive an expression for the median and the expectation of St .

3. Determine an expression for the conditional expectation E (St |Fu ), where u < t and
F denotes the filtration associated with the process S.

4. Find conditions on µ and σ under which the process S is a martingale.

3.2 Construction of a Brownian motion


There are many constructions of Brownian motion, none easy. The approach used here takes the limit of random walks as the step interval is made finer and finer.
Consider a sequence {Xi }i∈N of i.i.d. random variables with

EXi = 0,  Var(Xi) = 1.

Consider the random walk

Sn = Σᵢ₌₁ⁿ Xi .

Now define the process

ZN (t) = S_[N t] / √N ,

where [Nt] denotes the largest integer which is less than or equal to Nt. Thus, the process ZN (t) stays constant over the interval [k/N, (k + 1)/N) and jumps at k/N by an amount Xk /√N .

Theorem 4 As N tends to infinity, the distribution of {ZN (t) : t ≥ 0} converges to that


of {Wt : t ≥ 0}.

Proof. Clearly EZN (t) = 0. For s ≤ t, we have

Cov(ZN (t), ZN (s)) = Var( Σᵢ₌₁^[N s] Xi /√N ) = [N s]/N ,

which converges to s as N → ∞.
Finally we note that the Central Limit Theorem guarantees that the limiting distribution
of ZN (t) for each t is N(0, t). Therefore the limiting process is a Gaussian process with
the same expectation and covariance function as Brownian motion, which is enough to
prove that it is a Brownian motion.
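Theorem 4 can be illustrated by simulation. In the sketch below (standard library Python; an assumption of ±1 steps is made, which satisfies EXi = 0, Var(Xi) = 1), the empirical mean and variance of ZN(1) are compared with those of W1 ∼ N(0, 1):

```python
import math
import random

def Z_N(t, N, rng):
    """Scaled random walk Z_N(t) = S_[Nt] / sqrt(N), built from +/-1 steps."""
    s = sum(rng.choice((-1, 1)) for _ in range(int(N * t)))
    return s / math.sqrt(N)

rng = random.Random(2)
t, N, n = 1.0, 200, 20_000
samples = [Z_N(t, N, rng) for _ in range(n)]

mean = sum(samples) / n
var = sum(x * x for x in samples) / n
print(mean, var)  # ≈ 0 and ≈ t = 1, matching W_t ~ N(0, t)
```

A histogram of `samples` would also show the bell shape promised by the Central Limit Theorem.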

3.3 The variation process of a Brownian motion


Definition 5 Consider a time interval [0, T ], partitioned into n smaller sub-intervals. For p > 0, let

V⁽ᵖ⁾ = Σᵢ₌₁ⁿ |W_ti − W_ti−1 |ᵖ .

Then V⁽ᵖ⁾ is called the pth variation process of W . In particular:

• for p = 1, V⁽¹⁾ is the total variation process;

• for p = 2, V⁽²⁾ is the quadratic variation process.
V⁽¹⁾ and V⁽²⁾ represent different measures of how much Wt varies over time. From here we can gather a few pieces of crucial information about the inner structure of the Wiener process, which are described in the following.
Theorem 6 Let {Wt : t ≥ 0} be a standard Brownian motion. Consider a time interval [0, T ] and its partition 0 = t0 < t1 < t2 < · · · < tn = T in such a way that τn := sup₁≤ᵢ≤ₙ (ti − ti−1 ) → 0. Then

(i) Σᵢ₌₁ⁿ (W_ti − W_ti−1 )² →ᵖ T as n → ∞;

(ii) Σᵢ₌₁ⁿ |W_ti − W_ti−1 | →a.s. ∞ as n → ∞, as long as τn ≤ Cn^(−α) for some C, α > 0.

Proof.
(i) Convergence in probability is hard to check directly. But convergence in mean square works and implies convergence in probability. Hence we want to show that E|V⁽²⁾ − T |² → 0 as n → ∞, as long as τn → 0. Let

∆W_ti = W_ti − W_ti−1 , so that V⁽²⁾ = Σᵢ₌₁ⁿ (∆W_ti )² ,
∆ti = ti − ti−1 , so that T = Σᵢ₌₁ⁿ ∆ti .

Then

E[ ( Σᵢ₌₁ⁿ (∆W_ti )² − T )² ] = E[ ( Σᵢ₌₁ⁿ ( (∆W_ti )² − ∆ti ) )² ]
                             = E[ ( Σᵢ₌₁ⁿ Zi )² ]
                             = Σᵢ₌₁ⁿ E(Zi²) + Σᵢ Σ_j≠i E(Zi Zj ),

where Zi = (∆W_ti )² − ∆ti .

Note that, by the independent increment property of the Brownian motion, E(Zi Zj ) = E(Zi )E(Zj ) = 0 for i ≠ j. Now,

E[ (V⁽²⁾ − T )² ] = Σᵢ E(Zi²) = 2 Σᵢ (∆ti )² ≤ 2τn Σᵢ ∆ti = 2τn T,

which by hypothesis decreases to 0 as n → ∞.


(ii) By contradiction: consider the set

A := { ω : Σᵢ₌₁ⁿ |W_ti − W_ti−1 | < ∞ }.

Now, consider the quadratic variation process of W , V⁽²⁾. By construction, it follows that

V⁽²⁾ = Σᵢ₌₁ⁿ (W_ti − W_ti−1 )² ≤ maxᵢ |W_ti − W_ti−1 | Σᵢ₌₁ⁿ |W_ti − W_ti−1 | .

Since W is a continuous process, maxᵢ |W_ti − W_ti−1 | → 0 as the partition is refined; hence on the set A we would obtain V⁽²⁾ = 0, which contradicts the previous result. Therefore, Σᵢ₌₁ⁿ |W_ti − W_ti−1 | →a.s. ∞.
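Both conclusions of Theorem 6 can be observed on simulated increments. The following sketch (standard library Python; uniform partitions of [0, 1] are assumed for simplicity) computes V⁽¹⁾ and V⁽²⁾ for finer and finer grids:

```python
import math
import random

rng = random.Random(3)
T = 1.0
results = {}
for n in (1_000, 10_000, 100_000):
    dt = T / n
    incs = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]  # Brownian increments on a uniform grid
    V1 = sum(abs(dw) for dw in incs)   # total variation over this partition
    V2 = sum(dw * dw for dw in incs)   # quadratic variation over this partition
    results[n] = (V1, V2)
    print(n, V1, V2)
# V2 stays close to T = 1 for every n, while V1 keeps growing as the grid is refined
```

As the partition is refined, V⁽²⁾ hovers near T while V⁽¹⁾ blows up, in line with parts (i) and (ii).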

Theorem 6.(i) simply tells us that the Brownian motion accumulates quadratic variation at rate 1 per unit of time, i.e.

(W_ti − W_ti−1 )² / (ti − ti−1 ) ≈ 1;
informally, we can then write (dWt ) (dWt ) = 1 × dt. Note that most functions which are
continuous (and have continuous derivatives) have zero quadratic variation. The paths
of the Brownian motion are unusual in that their quadratic variation is not zero. This
is one of the reasons why ordinary calculus does not work for Brownian motions. The
other important reason is in the property of the total variation. Consider some ordinary
function, continuous and differentiable, then the Mean Value Theorem says that in each
subinterval [ti−1 , ti ] of our partition, there exists a t∗i such that
f (ti ) − f (ti−1 )
= f ′ (t∗i ) .
ti − ti−1
This implies that the total variation of the function f can be written as

V (f ) = Σᵢ₌₁ⁿ |f (ti ) − f (ti−1 )| = Σᵢ₌₁ⁿ |f ′ (t*ᵢ )| (ti − ti−1 ) → ∫₀ᵀ |f ′ (t)| dt,

where the limit follows by recognizing the Riemann sum of the function |f ′ |. However, for the case of the Brownian motion, we have just shown that V⁽¹⁾ → ∞, which essentially implies that there are some problems with the quantity “|Wt′ |”. In fact, the Brownian motion is not differentiable, as we will show in more detail in Unit 4. This is consistent with intuition: the derivative of a function gives the slope of the tangent to the curve at any given point, so if you were able to calculate such a quantity, you would know where the function is going to be in the next dt period of time. However, the function in our case is represented by a random process; since it is random, you cannot predict its outcome over the next period of time.
Other variation processes that are of interest for us are the following:

• Cross variation: limₙ→∞ Σᵢ₌₁ⁿ (W_ti − W_ti−1 )(ti − ti−1 ) = 0, due to the fact that the Brownian motion is a continuous process. Informally, we can write (dWt ) (dt) = 0;

• limₙ→∞ Σᵢ₌₁ⁿ |ti − ti−1 |² = 0, which is trivial, and implies (dt) (dt) = 0.

3.4 The Reflection Principle and Functionals of a Brownian Motion

Definition 7 (First passage time) For a Brownian motion we define the first passage
time to a ∈ R to be the stopping time Ta given by

Ta = inf{t ≥ 0 : Wt = a},

with the condition Ta = ∞ if Wt never hits a.

Proposition 8 (The Reflection Principle) Let W be a standard Brownian motion


and let Ta be as above. Then, for any x ≤ a,

P[Ta < t ∩ Wt < x] = P[Ta < t ∩ Wt > 2a − x]

[Figure 2 appeared here: a sample path Wt crossing the level a, together with its reflection Bt about a after the first hitting time.]

Figure 2: The reflection principle.

Proof. By conditioning on the value of Ta . Let fa denote the probability density function of Ta . Then
P[Ta < t ∩ Wt < x] = ∫₀ᵗ fa (s) P[Wt < x | Ta = s] ds
                  = ∫₀ᵗ fa (s) P[Wt < x | Ws = a] ds
                  = ∫₀ᵗ fa (s) P[Wt − Ws < x − a] ds
                  = ∫₀ᵗ fa (s) P[Wt − Ws > a − x] ds
                  = ∫₀ᵗ fa (s) P[Wt > 2a − x | Ws = a] ds
                  = ∫₀ᵗ fa (s) P[Wt > 2a − x | Ta = s] ds
                  = P[Ta < t ∩ Wt > 2a − x].

Proposition 9 Let {Wt : t ≥ 0} be a standard Brownian motion and Ta the first passage
time to a. Then
P[Ta < t] = 2P[Wt > a]
and Ta is almost surely finite.

Proof. For a > 0,

P[Ta < t] = P[Ta < t ∩ Wt ≥ a] + P[Ta < t ∩ Wt < a].



Since {Wt ≥ a} implies that {Ta < t}, it follows that


P[Ta < t ∩ Wt ≥ a] = P[Wt ≥ a].
In addition, the Reflection Principle tells us that
P[Ta < t ∩ Wt < a] = P[Ta < t ∩ 2a − Wt < a] = P[Ta < t ∩ Wt > a],
which is also equal to P[Wt ≥ a]. Therefore
P[Ta < t] = 2P[Wt ≥ a].
Now Wt ∼ N(0, t), from which it follows that

P[Ta < t] = 2 ∫_{a/√t}^∞ (1/√(2π)) e^(−y²/2) dy → 2 ∫₀^∞ (1/√(2π)) e^(−y²/2) dy = 1

as t → ∞.
Lemma 10 Let {Wt : t ≥ 0} be a standard Brownian motion and define the running maximum of W to be

M₀ᵗ = sup₀≤s≤t Ws .

Then

P[M₀ᵗ ≤ a] = 2N(a/√t) − 1,

where N denotes the distribution function of the standard normal distribution.

Proof. {M₀ᵗ ≥ a} ⇐⇒ {Ta ≤ t}. Therefore

P[M₀ᵗ ≤ a] = 1 − P[Ta ≤ t] = 1 − 2 (1 − N(a/√t)) = 2N(a/√t) − 1.
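Lemma 10 can be checked by brute force. The sketch below (standard library Python; the path is discretized on a grid, so the simulated maximum is slightly biased downward because excursions between grid dates are missed) estimates P[M₀ᵗ ≤ a] for t = 1, a = 0.5 and compares it with 2N(a/√t) − 1:

```python
import math
import random
from statistics import NormalDist

rng = random.Random(4)
t, a = 1.0, 0.5
n_paths, n_steps = 10_000, 500
dt = t / n_steps

below = 0
for _ in range(n_paths):
    w, m = 0.0, 0.0
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))
        m = max(m, w)                      # running maximum on the grid
    if m <= a:
        below += 1

p_hat = below / n_paths
p_theory = 2 * NormalDist().cdf(a / math.sqrt(t)) - 1   # Lemma 10
print(p_hat, p_theory)
# p_hat sits slightly above p_theory: the grid maximum understates the true maximum
```

Refining the grid (larger `n_steps`) shrinks the discretization gap between the two values.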

Exercise 8 Let Wt be a standard one-dimensional Brownian motion and define Bt = tW_(1/t) for t > 0, with B0 = 0.

1. Calculate EBt , Var Bt , Cov (Bs , Bt ) for s < t.

2. Show that

P (Wt < ct for all t ≥ 1) = P (Wt < c for all 0 ≤ t ≤ 1) ,

where c is a constant.

3. Find an expression for the value of these probabilities by stating the probability density function of M1 = sup₀≤t≤1 Wt .
Exercise 9 Determine the distribution function of the running minimum of a standard
one-dimensional Brownian motion.
The results concerning the distribution of the hitting time and of the maximum/minimum are relevant for financial applications, as there are quite a few exotic options traded on the OTC markets which depend on these particular functionals of the Brownian motion. Examples you can think of: barrier options and lookback options.

3.5 Correlated Brownian motions


Proposition 11 In order to construct two Brownian motions, Wt and Xt , such that

Corr(Wt , Xt ) = ρ, i.e. Cov(Wt , Xt ) = ρt,

we may take

Wt = ρXt + √(1 − ρ²) Zt ,

where Zt is another Brownian motion independent of Xt .

Proof. Any process of the form Wt = aXt + bZt possesses the stationary, independent increment property, has continuous sample paths and starts from 0 at time 0. The distribution of Wt = aXt + bZt is N (0, (a² + b²)t). Choosing a = ρ, b = √(1 − ρ²) implies that Var(Wt ) = t, so that Wt is a standard Brownian motion, and that

Cov(Wt , Xt ) = Cov(ρXt + √(1 − ρ²) Zt , Xt ) = ρ Var(Xt ) = ρt.
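The construction of Proposition 11 is easy to test by simulation. The sketch below (standard library Python; ρ = 0.7 is an illustrative choice) builds W from independent X and Z at a fixed date t and estimates Cov(Wt, Xt) and Var(Wt):

```python
import math
import random

rng = random.Random(5)
rho, t, n = 0.7, 1.0, 200_000

sum_wx, sum_ww = 0.0, 0.0
for _ in range(n):
    x = rng.gauss(0.0, math.sqrt(t))            # X_t
    z = rng.gauss(0.0, math.sqrt(t))            # Z_t, independent of X_t
    w = rho * x + math.sqrt(1 - rho**2) * z     # the construction of Proposition 11
    sum_wx += w * x
    sum_ww += w * w

cov = sum_wx / n      # ≈ rho * t
var_w = sum_ww / n    # ≈ t, so W is again a standard Brownian motion at time t
print(cov, var_w)
```

The estimates land near ρt = 0.7 and t = 1, as the proof predicts.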

Exercise 10 Let Xt be a one-dimensional standard Brownian motion.

a) Consider the process

Wt = X⁽¹⁾_ρt + X⁽²⁾_(1−ρ)t ,

where X⁽¹⁾ and X⁽²⁾ are independent copies of the given process X.

i) Show that the process Wt can be represented as

Wt = √ρ Bt + √(1 − ρ) Zt ,

where Bt and Zt are independent standard Brownian motions.

ii) Show that the process Wt is a Brownian motion.

iii) Calculate Cov (Wt , Bt ) and Cov (Wt , Bs ).

b) Consider two Brownian motions Wt⁽¹⁾ and Wt⁽²⁾ with representation as in part (a.i), i.e.

Wt⁽ⁱ⁾ = √ρ Bt + √(1 − ρ) Zt⁽ⁱ⁾ , i = 1, 2.

Calculate Cov( Wt⁽¹⁾ , Wt⁽²⁾ ).

3.6 Simulating trajectories of the Brownian motion - part 1


At this point you know a few of the many properties of the Wiener process; but what does it look like? A sample path was presented in Figure 1; how can you obtain such a trajectory? A simple way of generating numerical samples of the Brownian motion is to use the property that its increments are independent and follow a normal distribution, with mean 0 and variance equal to the length of the time period over which you observe those increments. So, even though this only produces the path on a discrete grid of dates, we can generate the increment of the Wiener process between time s and time t by drawing a sample variate from the standardized normal distribution and then multiplying it by √(t − s). This approach is known as the sequential algorithm.
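A minimal sketch of the sequential algorithm in Python (standard library only) is given below; it produces one path of Wt on a daily grid and derives the corresponding arithmetic and geometric Brownian motions, using the (µ − σ²/2) drift in the exponent as in Figures 1 and 3:

```python
import math
import random

def brownian_path(T, n_steps, rng):
    """Sequential algorithm: cumulate independent N(0, dt) increments."""
    dt = T / n_steps
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return path

rng = random.Random(6)
mu, sigma, T, n_steps = 0.1, 0.2, 1.0, 364
times = [i * T / n_steps for i in range(n_steps + 1)]

W = brownian_path(T, n_steps, rng)
B = [mu * t + sigma * w for t, w in zip(times, W)]        # arithmetic Brownian motion
X = [math.exp((mu - sigma**2 / 2) * t + sigma * w)
     for t, w in zip(times, W)]                           # geometric Brownian motion, X_0 = 1

print(len(W), W[0], X[0])  # 365 0.0 1.0
```

Plotting `W`, `B` and `X` against `times` reproduces pictures of the kind shown in Figure 1.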
This is the basis of Monte Carlo simulation; a very simple example is shown in the Excel file available on the module page in CitySpace. A more serious attempt at generating paths of the Wiener process (which in any case is based on the same principle) can be made in Matlab or in C++, and you will see how to implement this in your Numerical Methods modules.
The drawback of the proposed simulation procedure is that you need to generate fairly many paths if you want to cover the probability space evenly and, consequently, reduce the variance of your Monte Carlo estimate. In Figure 3 you can find 10 sample paths of the Wiener process, and the corresponding sample paths of the arithmetic Brownian motion and the geometric Brownian motion. If you observe carefully, you can see that these 10 trajectories are not evenly spread, but concentrate around the mean. Depending on the final task of your simulation code (for example, approximating the price of some derivative security), the generation of a large number of paths might prove time inefficient.
An alternative approach, which solves the problem noted above, makes use of stratification. The idea here is to subdivide the probability space into K strata, and then “force” your sample deviate to be in a specific stratum. The advantage of stratified sampling is
shown in Figure 4 for the case of the normal distribution.
Using stratification efficiently for Monte Carlo purposes, however, is not so straight-
forward, as it requires the knowledge of the Brownian bridge, which we will meet in the
next unit. For this reason, our discussion will have to wait till then.
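Even without the Brownian bridge, the basic stratification idea behind Figure 4 can be sketched now (standard library Python; a minimal version with K equal-probability strata, one uniform draw per stratum, mapped through the inverse normal CDF via `statistics.NormalDist`):

```python
import random
from statistics import NormalDist

def stratified_normals(K, rng):
    """One N(0,1) variate per stratum: stratum k covers probabilities
    (k/K, (k+1)/K), and the inverse CDF maps a uniform draw inside it
    back to the real line."""
    nd = NormalDist()
    return [nd.inv_cdf((k + rng.random()) / K) for k in range(K)]

rng = random.Random(7)
samples = stratified_normals(1_000, rng)

mean = sum(samples) / len(samples)
print(mean)  # pinned very close to 0: every probability stratum is hit exactly once
```

Because every stratum is sampled exactly once, the empirical distribution covers the tails and the centre evenly, which is what the bottom histogram of Figure 4 illustrates.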
[Figure 3 appeared here: three panels showing 10 sample paths each of Wt, Bt = µt + σWt, and Xt = e^((µ − σ²/2)t + σWt) over 364 daily steps.]

Figure 3: 10 sample trajectories of the Wiener process, the arithmetic Brownian motion and the geometric Brownian motion. Parameter set: T = 1 year; µ = 0.1 p.a.; σ = 0.2 p.a.

[Figure 4 appeared here: two histograms of 100,000 simulated N(0,1) variates, the top panel generated via inversion, the bottom panel via stratification.]

Figure 4: Alternative generation methods of random numbers.
