2 Stochastic Processes
Stochastic processes are, roughly speaking, nothing but random variables evolving over time; for this reason we need the whole machinery presented in the previous chapter. However, since we now have an additional dimension to take care of, namely time, we need to adjust our probability space slightly. This is the subject of the next section, after which we introduce the most common processes you will deal with over the rest of the year.
We can consider the filtration Ft as the set of information available at time t. In other words, we can consider (Ft)t≥0 as describing the flow of information over time, where we suppose that we do not lose information as time passes (this is why we say Fs ⊂ Ft for s < t).
Example 1 Consider Example 1.3.3. The sets F(1) and F(2) contain the information learned by observing the first one and the first two movements of the stock price, respectively. Alternatively, you can consider these two sets as containing the information gained after 1 period and 2 periods of time have elapsed, respectively. If so, then we can index those σ-algebras by time, and {F0, F1, F2} is an example of a filtration.
Hence, a filtration tells us the information we will have at future times; more precisely,
when we get to time t, we will know for each set in Ft whether the true ω lies in that set.
c Laura Ballotta - Do not reproduce without permission.
Definition 3 (Natural filtration) Let (Ω, F, P, (Ft)t≥0) be a filtered probability space and X be a stochastic process adapted to the filtration (Ft)t≥0. The natural filtration of X is defined by the set of σ-algebras Ft = σ(Xs : 0 ≤ s ≤ t), 0 ≤ t < ∞.
Example 3 The natural filtration of the process S defined in Example 2.2 is given by
the sequence {F0 , F1 , F2 , ...} discussed in Example 2.1.
2. E[Xt | Fs] = Xs ∀ s ≤ t.
• E[Xt − Xs | Fs] = 0 ∀ s ≤ t
The martingale properties show that a martingale is a random process whose future
variations are completely unpredictable given the current information set. For this reason,
the best forecast of the change in X over an arbitrary interval is zero. A martingale
represents a fair game: given the knowledge we have, on average the return produced by
the bet is what we invested in it.
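The fair-game property can be checked empirically. The following sketch (illustrative Python; all parameters and names are chosen here, not taken from the notes) simulates a simple symmetric random walk and verifies that, given the present value, the best forecast of a future value is the present value itself:

```python
import random

random.seed(0)
n_paths = 100_000

# Simulate simple symmetric random walks X_t = sum of +/-1 steps,
# recording the position after 5 steps and after 10 steps.
xs, xt = [], []
for _ in range(n_paths):
    pos = 0
    for step in range(10):
        pos += random.choice((-1, 1))
        if step == 4:          # position after 5 steps
            x5 = pos
    xs.append(x5)
    xt.append(pos)

# Martingale property: E[X_10 | X_5 = x] should equal x.
# Check it on the sub-sample of paths with X_5 = 1.
sel = [b for a, b in zip(xs, xt) if a == 1]
cond_mean = sum(sel) / len(sel)
print(round(cond_mean, 2))   # close to 1
```

The conditional average of the later value stays at the conditioning value, i.e. the expected future change is zero.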
E [Xt |F s ] = E [E(Y |F t ) |F s ]
= E (Y |F s ) = Xs
where the last but one equality follows from the Tower property.
Example 6 In a simple discrete time model for the price of a share, the change in price
at time t, Xt , is assumed to be independent of anything that has happened before time t
and to have distribution
$$X_t = \begin{cases} 1 & \text{with probability } p, \\ -1 & \text{with probability } q, \\ 0 & \text{with probability } r = 1 - p - q. \end{cases}$$
Let $S_0 = n \in \mathbb{N}$ be the original price of the share, let $S_m = S_0 + \sum_{t=1}^{m} X_t$ be the price after $m$ periods, and define
$$Y_m = \left(\frac{q}{p}\right)^{S_m}.$$
Then $Y_m$ is a martingale. To check integrability, note that since $p, q > 0$ and $Y_m \geq 0$,
$$\mathbb{E}|Y_m| = \mathbb{E}\left[\left(\frac{q}{p}\right)^{S_m}\right] = \mathbb{E}\left[\left(\frac{q}{p}\right)^{S_0 + \sum_{t=1}^{m} X_t}\right] = \left(\frac{q}{p}\right)^{n} \left(\mathbb{E}\left[\left(\frac{q}{p}\right)^{X_t}\right]\right)^{m}$$
$$= \left(\frac{q}{p}\right)^{n} \left[\left(\frac{q}{p}\right)^{1} p + \left(\frac{q}{p}\right)^{-1} q + \left(\frac{q}{p}\right)^{0} r\right]^{m} = \left(\frac{q}{p}\right)^{n} (q + p + r)^{m} = \left(\frac{q}{p}\right)^{n} < \infty,$$
using the independence of the $X_t$'s.
Further, using that $X_m$ is independent of $\mathcal{F}_{m-1}$,
$$\mathbb{E}\left[Y_m \mid \mathcal{F}_{m-1}\right] = \mathbb{E}\left[\left(\frac{q}{p}\right)^{S_m} \,\middle|\, \mathcal{F}_{m-1}\right] = \mathbb{E}\left[\left(\frac{q}{p}\right)^{S_{m-1}+X_m} \,\middle|\, \mathcal{F}_{m-1}\right] = Y_{m-1}\,\mathbb{E}\left[\left(\frac{q}{p}\right)^{X_m}\right] = Y_{m-1},$$
since $\mathbb{E}\left[(q/p)^{X_m}\right] = (q/p)\,p + (p/q)\,q + r = q + p + r = 1$.
Further remark: a martingale is always defined with respect to some information set and with respect to some probability measure. If we change the information content and/or the probabilities associated with the process, the process under consideration may cease to be a martingale. The opposite is also true: given a process that does not behave like a martingale, we may be able to modify the relevant probability measure and convert the process into a martingale. This will be illustrated later on in the course, when we talk about the martingale problem and the Girsanov theorem.
Exercise 1 Let (Ft)t≥0 be an increasing family of σ-algebras. Show that for every random variable h with E|h| < ∞, Mt := E[h | Ft] is an Ft-martingale for t ≥ 0.
Exercise 2 Define vt = V ar(Mt ), where {Mt : t ≥ 0} is a zero-mean martingale. Show
that vt is non-decreasing.
such that X^(τn)_t := X_(τn ∧ t) is a martingale, i.e. E[X_(τn ∧ t) | Fs] = X_(τn ∧ s) for s ≤ t, then we say that X is a local martingale.
In other words, if you know where you are at time t, how you got there does not matter as far as predicting the future is concerned. Equivalently, past and future are conditionally independent given the present.
Any process with independent increments is Markov. In fact, for $B \in \sigma(X_\tau : \tau \leq s)$,
$$\mathbb{P}\left(X_t \in E \,\middle|\, \{X_s = x\} \cap B\right) = \mathbb{P}\left(X_t - X_s + x \in E \,\middle|\, \{X_s = x\} \cap B\right) = \mathbb{P}\left(X_t - X_s + x \in E \mid X_s = x\right),$$
where the last equality holds because the increment $X_t - X_s$ is independent of the past.
We distinguish Markov processes according to the nature of the state space, i.e. the
set of values that the random variables are capable of taking, and to whether we are
observing the process discretely in time or continuously in time. The classification works
as represented in the following table.
Definition 6 A Markov chain is a sequence of random variables Xt1 , Xt2 , ..., Xtn . . . with
the following property:
$$\mathbb{P}\left(X_n = j \mid X_0 = i_0, X_1 = i_1, \ldots, X_m = i\right) = \mathbb{P}\left(X_n = j \mid X_m = i\right) = p_{ij}^{(m,n)}.$$
$p_{ij}^{(m,n)}$ is called the transition probability, as it denotes the probability of a transition from state $i$ at time $m$ to state $j$ at time $n$.
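For a time-homogeneous chain on a finite state space (where $p_{ij}^{(m,n)}$ depends only on $n - m$), the one-step transition probabilities can be collected in a matrix, and multi-step probabilities follow by matrix multiplication. A minimal sketch in Python, with an illustrative two-state matrix chosen here:

```python
import numpy as np

# Illustrative one-step transition matrix for a two-state chain:
# row i is the distribution of the next state given current state i.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two-step transition probabilities: (P @ P)_ij is the probability of
# moving from state i to state j over two periods.
P2 = np.linalg.matrix_power(P, 2)
print(P2)   # [[0.86 0.14], [0.7 0.3]]
```

Each row of P2 still sums to one, as any matrix of transition probabilities must.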
$$\mathbb{P}\left[N_t = n\right] = \frac{e^{-\nu_t}\,(\nu_t)^n}{n!}$$
and it is continuous in probability.
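In the homogeneous case ν_t = λt, the counting process can be simulated from its independent, exponentially distributed inter-arrival times. A sketch in Python (the rate and horizon are illustrative values chosen here):

```python
import random

random.seed(2)
lam, t, n_sims = 2.0, 3.0, 100_000

counts = []
for _ in range(n_sims):
    arrival, n = 0.0, 0
    while True:
        # Inter-arrival times are Exponential(lam).
        arrival += random.expovariate(lam)
        if arrival > t:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / n_sims
var = sum((c - mean) ** 2 for c in counts) / n_sims
print(round(mean, 2), round(var, 2))   # both close to lam * t = 6
```

Both the sample mean and the sample variance sit near λt, the mean and variance of a Poisson distribution with parameter λt.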
[Figure 1: a sample path of the counting process Nt, which increases by one at each of the arrival times T1, T2, T3, T4.]
[Figure 2: the points X1, X2, X3, X4 of a marked point process in the space S, attached to the arrival times, with a set A × B highlighted.]
Note that νt is a non-decreasing function and generally is given in terms of a rate or intensity (rate if S has dimension 1, intensity for multidimensional S). This is a positive function λ on S such that
$$\nu_t = \int_0^t \lambda_\tau \, d\tau.$$
3. for any s < t and u < v such that t − s = v − u, the distribution of Nt − Ns is the same as that of Nv − Nu, i.e. N has stationary increments.
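When the intensity λτ varies in time, sample paths can be generated by thinning (the Lewis-Shedler method, mentioned here as one standard approach): simulate a homogeneous Poisson process at a dominating rate λmax and keep each candidate point at time τ with probability λτ/λmax. A sketch in Python, with an illustrative intensity λτ = 2τ chosen here, so that νt = t²:

```python
import random

random.seed(3)

def intensity(tau):
    # Illustrative time-varying rate (an assumption, not from the notes).
    return 2.0 * tau

T, lam_max, n_sims = 2.0, 4.0, 50_000   # lam_max bounds intensity on [0, T]

totals = 0
for _ in range(n_sims):
    tau, n = 0.0, 0
    while True:
        tau += random.expovariate(lam_max)           # candidate point
        if tau > T:
            break
        if random.random() < intensity(tau) / lam_max:  # thinning step
            n += 1
    totals += n

mean = totals / n_sims
print(round(mean, 2))   # close to nu_T = T**2 = 4
```

The average count over [0, T] matches ν_T = ∫₀ᵀ λτ dτ, consistent with the formula above.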
MGF You go through the calculations in a very similar fashion to the case of the Poisson random variable:
$$M_N(k; t) = \mathbb{E}\left[e^{kN(t)}\right] = \sum_{n=0}^{\infty} e^{kn}\, e^{-\lambda t} \frac{(\lambda t)^n}{n!} = e^{-\lambda t} \sum_{n=0}^{\infty} \frac{\left(e^{k}\lambda t\right)^n}{n!} = e^{-\lambda t}\, e^{\lambda t e^{k}} = e^{\lambda t\left(e^{k}-1\right)}.$$
Note that you can differentiate $M_N(k; t)$ with respect to $k$ to obtain the mean and the variance of the process:
$$\left.\frac{\partial}{\partial k} M_N(k; t)\right|_{k=0} = \left.\lambda t\, e^{k+\lambda t\left(e^{k}-1\right)}\right|_{k=0} = \lambda t,$$
$$\left.\frac{\partial^2}{\partial k^2} M_N(k; t)\right|_{k=0} = \left.\lambda t\, e^{k+\lambda t\left(e^{k}-1\right)}\left(1+\lambda t e^{k}\right)\right|_{k=0} = \lambda t + (\lambda t)^2,$$
so that $\mathbb{E}[N_t] = \lambda t$ and $\operatorname{Var}(N_t) = \lambda t + (\lambda t)^2 - (\lambda t)^2 = \lambda t$.
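As a quick sanity check on the closed form e^{λt(e^k − 1)}, one can compare it against the sample average of e^{kN_t}. A sketch in Python (parameter values and the sampler are illustrative choices made here):

```python
import math
import random

random.seed(4)
lam, t, k, n_sims = 1.0, 2.0, 0.3, 200_000

def poisson_sample(mean):
    # Knuth's method: count uniforms until their product drops below e^{-mean}.
    limit, prod, n = math.exp(-mean), random.random(), 0
    while prod > limit:
        prod *= random.random()
        n += 1
    return n

empirical = sum(math.exp(k * poisson_sample(lam * t))
                for _ in range(n_sims)) / n_sims
exact = math.exp(lam * t * (math.exp(k) - 1))
print(round(empirical, 2), round(exact, 2))
```

The Monte Carlo estimate of E[e^{kN_t}] agrees with the MGF derived above.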
As just seen, the Poisson process counts the number of points in a certain subset of
S. Often each point may be labelled with some quantity, (the size of a market crash, for
example), random itself, say X. This gives what is called a marked Poisson process or
compound Poisson process
$$\bar{N}_t = \sum_{k=1}^{N_t} X_k,$$
where {Xk }k∈N is a sequence of independent and identically distributed random variables.
A possible representation of the compound Poisson process is given in Figure 2.
Also in this case, you can calculate the MGF from which you can extract mean and
variance. Assume that EX = µ and V ar (X) = σ 2 .
MGF In this case, the conditional expectation will be of great help:
$$M_{\bar N}(h; t) = \mathbb{E}\left[e^{h \bar N_t}\right] = \mathbb{E}\left[\mathbb{E}\left(e^{h \sum_{k=1}^{N(t)} X_k} \,\middle|\, N(t)\right)\right] = \mathbb{E}\left[\mathbb{E}\left(\prod_{k=1}^{N(t)} e^{h X_k} \,\middle|\, N(t)\right)\right] = \mathbb{E}\left[\left(M_X(h)\right)^{N(t)}\right] = e^{\lambda t \left(M_X(h) - 1\right)},$$
where the last equality follows by evaluating the MGF of the Poisson process at $\ln M_X(h)$.
Mean Just for the sake of doing something different, instead of differentiating the MGF, we can use the cumulant generating function
$$K_{\bar N}(h; t) = \ln M_{\bar N}(h; t) = \lambda t \left(M_X(h) - 1\right).$$
Then
$$\mathbb{E}\left[\bar N_t\right] = \left.\frac{\partial}{\partial h} K_{\bar N}(h; t)\right|_{h=0} = \left.\lambda t\, \frac{\partial}{\partial h} M_X(h)\right|_{h=0} = \lambda \mu t.$$
Variance Similarly,
$$\operatorname{Var}\left(\bar N_t\right) = \left.\frac{\partial^2}{\partial h^2} K_{\bar N}(h; t)\right|_{h=0} = \left.\lambda t\, \frac{\partial^2}{\partial h^2} M_X(h)\right|_{h=0} = \lambda \left(\mu^2 + \sigma^2\right) t.$$
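These two moments are easy to confirm by simulation. A sketch in Python, with all choices made here for illustration: Gaussian marks with µ = 1 and σ = 0.5, rate λ = 2 and horizon t = 3, so the predicted mean is λµt = 6 and the predicted variance is λ(µ² + σ²)t = 7.5:

```python
import random

random.seed(5)
lam, t, mu, sigma, n_sims = 2.0, 3.0, 1.0, 0.5, 100_000

samples = []
for _ in range(n_sims):
    total, arrival = 0.0, 0.0
    while True:
        arrival += random.expovariate(lam)   # next jump time
        if arrival > t:
            break
        total += random.gauss(mu, sigma)     # i.i.d. mark X_k
    samples.append(total)

mean = sum(samples) / n_sims
var = sum((x - mean) ** 2 for x in samples) / n_sims
print(round(mean, 2), round(var, 2))   # close to 6 and 7.5
```

Note that the variance involves the second moment µ² + σ² of the marks, not just σ², because the random number of jumps contributes variability of its own.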
Exercise 6 1. Calculate the characteristic function of the Poisson process and the
compound Poisson process.
2. Use the previous results and the properties of the characteristic function to derive
the mean and the variance of the Poisson process.
3. Repeat the same exercise for the compound Poisson process.
Exercise 7 Let N be a Poisson process with rate λ and Ft the associated filtration.
a) Calculate the mean of the process N and its variance.
b) Let MN be the moment generating function of the process N, i.e. MN(k) = E[e^{kN(t)}],
Lévy processes can be thought of as a mixture of a jump process, such as the Poisson process (but not only), and a diffusion (Gaussian) process called Brownian motion or Wiener process. Lévy processes actually represent a generalization of both the Poisson process and the Brownian motion, since the distribution followed by the increments is not specified; we will study Lévy processes in more detail next term. The Brownian motion is instead the subject of the next unit, and indeed of the remainder of this module.
We conclude this section by briefly mentioning that the most general class of stochastic processes for which rules of calculus have been developed so far is the class of semimartingales. Again, we will encounter these processes in Term 2.