Note: The primary reference for these notes is Mittelhammer (1999). Other treatments of probability theory include Gallant (1997), Casella & Berger (2001) and Grimmett & Stirzaker (2001).
Definition 1.1 (Sample Space). The sample space is a set, Ω, that contains all possible outcomes.
Example 1.2. Suppose interest is in a standard 6-sided die. The sample space is 1-dot, 2-dots,
. . ., 6-dots.
Example 1.3. Suppose interest is in a standard 52-card deck. The sample space is then {A♣, 2♣, 3♣, . . . , J♣, Q♣, K♣, A♦, . . . , K♦, A♥, . . . , K♥, A♠, . . . , K♠}.
2 Probability, Random Variables and Expectations
An event may be any subset of the sample space (including the entire sample space), and
the set of all events is known as the event space.
Definition 1.6 (Event Space). The set of all events in the sample space is called the event space,
and is denoted F .
Event spaces are a somewhat more difficult concept. For finite event spaces, the event space
is usually the power set of the outcomes, that is, the set of all possible unique sets that can be
constructed from the elements. When variables can take infinitely many outcomes, then a more
nuanced definition is needed, although the main idea is to define the event space to be all non-
empty intervals (so that each interval has infinitely many points in it).
Example 1.7. Suppose interest lies in the outcome of a coin flip. Then the sample space is {H , T }
and the event space is {∅, {H}, {T}, {H, T}} where ∅ is the empty set.
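For a finite sample space, the event space can be built directly as the power set. A minimal Python sketch (the helper name `event_space` is illustrative, not from the notes):

```python
from itertools import chain, combinations

def event_space(sample_space):
    """Power set of a finite sample space: every subset of outcomes is an event."""
    outcomes = list(sample_space)
    subsets = chain.from_iterable(
        combinations(outcomes, k) for k in range(len(outcomes) + 1)
    )
    return [frozenset(s) for s in subsets]

coin_events = event_space({"H", "T"})  # 2**2 = 4 events, including the empty set
```

For n outcomes the event space contains 2ⁿ events, which is one reason a more nuanced definition is needed when the sample space is infinite.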
The first two axioms of probability are simple: all probabilities must be non-negative and the
total probability of all events is one.
Axiom 1.9. The probability of all events in the sample space is unity, i.e.

Pr(Ω) = 1. (1.2)
The second axiom is a normalization that states that the probability of the entire sample space is 1 and ensures that the sample space must contain all events that may occur. Pr(·) is a set-valued function, that is, Pr(A) returns the probability, a number between 0 and 1, of observing an event A.
Before proceeding, it is useful to refresh four concepts from set theory.
Definition 1.10 (Set Union). Let A and B be two sets, then the union is defined
A ∪ B = {x : x ∈ A or x ∈ B}.
A union of two sets contains all elements that are in either set.
Definition 1.11 (Set Intersection). Let A and B be two sets, then the intersection is defined
A ∩ B = {x : x ∈ A and x ∈ B}.
1.1 Axiomatic Probability

Figure 1.1: The four set definitions shown in R². The upper left panel shows a set and its complement. The upper right shows two disjoint sets. The lower left shows the intersection of two sets (darkened region) and the lower right shows the union of two sets (darkened region). In all diagrams, the outer box represents the entire space.
The intersection contains only the elements that are in both sets.
Definition 1.12 (Set Complement). Let A be a set, then the complement set, denoted Aᶜ, is defined as

Aᶜ = {x : x ∉ A}.
The complement of a set contains all elements which are not contained in the set.
Definition 1.13 (Disjoint Sets). Let A and B be sets, then A and B are disjoint if and only if A ∩ B = ∅.
Figure 1.1 provides a graphical representation of the four set operations in a 2-dimensional
space.
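The four set operations in Definitions 1.10-1.13 map directly onto Python's built-in set type. A small sketch (the sets A, B and the space omega are arbitrary illustrations):

```python
# Arbitrary finite sets inside a small "entire space" omega
omega = set(range(1, 11))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

union = A | B                 # Definition 1.10: {x : x in A or x in B}
intersection = A & B          # Definition 1.11: {x : x in A and x in B}
complement_A = omega - A      # Definition 1.12: {x : x not in A}
disjoint = (A & B) == set()   # Definition 1.13: disjoint iff the intersection is empty
```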
The third and final axiom states that probability is additive when sets are disjoint.
Axiom 1.14. Let {Aᵢ}, i = 1, 2, . . . be a finite or countably infinite set of disjoint events.¹ Then

Pr(⋃_{i=1}^∞ Aᵢ) = Σ_{i=1}^∞ Pr(Aᵢ). (1.3)
Assembling a sample space, event space and a probability measure into a tuple produces what is known as a probability space. Throughout the course, and in virtually all statistics, a complete probability space is assumed (typically without explicitly stating this assumption).²
Definition 1.16 (Probability Space). A probability space is denoted using the tuple (Ω, F, Pr) where Ω is the sample space, F is the event space and Pr is the probability set function which has domain F.
The three axioms of modern probability are very powerful, and a large number of theorems can be proven using only these axioms. A few simple examples are provided, and selected proofs appear in the Appendix.
Theorem 1.17. Let A be an event in the sample space Ω, and let Aᶜ be the complement of A so that Ω = A ∪ Aᶜ. Then Pr(A) = 1 − Pr(Aᶜ).
Since A and Aᶜ are disjoint, and by definition Aᶜ contains everything not in A, the probabilities of the two must sum to unity.
Theorem 1.18. Let A and B be events in the sample space Ω. Then Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B).
This theorem shows that for any two sets, the probability of the union of the two sets is equal to the sum of the probabilities of the two sets minus the probability of the intersection of the sets.
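Theorem 1.18 can be checked numerically on a finite sample space such as the die of Example 1.2, where the probability of an event is the number of outcomes it contains divided by 6 (the particular events chosen here are illustrative):

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}  # fair die: every outcome is equally likely

def pr(event):
    """Probability of an event as (number of outcomes in event) / 6."""
    return Fraction(len(event), len(omega))

A = {2, 4, 6}  # an even roll
B = {4, 5, 6}  # a roll of at least 4

lhs = pr(A | B)
rhs = pr(A) + pr(B) - pr(A & B)  # Theorem 1.18
```

Using exact rational arithmetic avoids any floating-point ambiguity in the comparison.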
Definition 1.19 (Conditional Probability). Let A and B be two events in the sample space Ω. If Pr(B) ≠ 0, then the conditional probability of the event A, given event B, is given by

Pr(A|B) = Pr(A ∩ B)/Pr(B). (1.4)
¹ Definition 1.15. A set S is countably infinite if there exists a bijective (one-to-one) function from the elements of S to the natural numbers N = {1, 2, . . .}. Common sets that are countably infinite include the integers (Z) and the rational numbers (Q).
² A probability space is complete if and only if for any B ∈ F with Pr(B) = 0 and A ⊂ B, then A ∈ F.
1.1.2 Independence
Independence of two measurable sets means that any information about an event occurring in one set provides no information about whether an event occurs in the other set.
Definition 1.21. Let A and B be two events in the sample space . Then A and B are independent
if and only if
Pr(A ∩ B) = Pr(A) Pr(B). (1.5)
Bayes rule restates the previous theorem so that the probability of observing an event in B_j, given that an event in A is observed, can be related to the conditional probability of A given B_j.
Corollary 1.23 (Bayes Rule). Let Bᵢ, i = 1, 2, . . . be a finite or countably infinite partition of the sample space Ω so that Bⱼ ∩ Bₖ = ∅ for j ≠ k and ⋃_{i=1}^∞ Bᵢ = Ω. Let Pr(Bᵢ) > 0 for all i, then for any set A where Pr(A) > 0,

Pr(Bⱼ|A) = Pr(A|Bⱼ) Pr(Bⱼ) / Σ_{i=1}^∞ Pr(A|Bᵢ) Pr(Bᵢ)
= Pr(A|Bⱼ) Pr(Bⱼ) / Pr(A).
Rearranging the definition of conditional probability,

Pr(A ∩ B) = Pr(A|B) Pr(B),

which is referred to as the multiplication rule. Also notice that the order of the two sets is arbitrary, so that the rule can be equivalently stated as Pr(A ∩ B) = Pr(B|A) Pr(A). Combining these two gives

Pr(A|B) Pr(B) = Pr(B|A) Pr(A),

so that

Pr(B|A) = Pr(A|B) Pr(B) / Pr(A). (1.7)
Example 1.24. Suppose a family has 2 children and one is a boy, and that the probability of having
a child of either sex is equal and independent across children. What is the probability that they
have 2 boys?
Before learning that one child is a boy, there are 4 equally probable possibilities: {B , B },
{B , G }, {G , B } and {G , G }. Using Bayes rule,
Pr({B, B}|B ≥ 1) = Pr(B ≥ 1|{B, B}) Pr({B, B}) / Σ_{S ∈ {{B,B},{B,G},{G,B},{G,G}}} Pr(B ≥ 1|S) Pr(S)
= (1 × 1/4) / (1 × 1/4 + 1 × 1/4 + 1 × 1/4 + 0 × 1/4)
= 1/3

so that knowing one child is a boy increases the probability of 2 boys from 1/4 to 1/3. Note that

Σ_S Pr(B ≥ 1|S) Pr(S) = Pr(B ≥ 1),

where B ≥ 1 denotes the event that the family has at least one boy.
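The same 1/3 answer can be obtained by direct enumeration of the equally likely outcomes, which is a useful sanity check on a Bayes rule calculation (a sketch; the variable names are illustrative):

```python
from fractions import Fraction

# The four equally likely sex orderings for two children
families = [("B", "B"), ("B", "G"), ("G", "B"), ("G", "G")]

# Condition on "at least one boy" by discarding incompatible outcomes
at_least_one_boy = [fam for fam in families if "B" in fam]
two_boys = [fam for fam in at_least_one_boy if fam == ("B", "B")]

p = Fraction(len(two_boys), len(at_least_one_boy))  # Pr(2 boys | at least one boy)
```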
Example 1.25. The famous Monty Hall Let's Make a Deal television program is an example of Bayes rule. Contestants competed for one of three prizes, a large one (e.g. a car) and two uninteresting ones (duds). The prizes were hidden behind doors numbered 1, 2 and 3. Ex ante, the contestant has no information about which door has the large prize, and so the initial probabilities are all 1/3. During the negotiations with the host, it is revealed that one of the non-selected
doors does not contain the large prize. The host then gives the contestant the chance to switch
from the door initially chosen to the one remaining door. For example, suppose the contestant chose door 1 initially, and that the host revealed that the large prize is not behind door 3. The
contestant then has the chance to choose door 2 or to stay with door 1. In this example, B is the
event where the contestant chooses the door which hides the large prize, and A is the event that
the large prize is not behind door 2.
Initially there are three equally likely outcomes (from the contestants point of view), where
D indicates dud, L indicates the large prize, and the order corresponds to the door number.
{D , D , L } , {D , L , D } , {L , D , D }
The contestant has a 1/3 chance of having the large prize behind door 1. The host will never reveal the large prize, and so applying Bayes rule we have
Pr(L = 2|H = 3, S = 1) = Pr(H = 3|S = 1, L = 2) Pr(L = 2|S = 1) / Σ_{i=1}^{3} Pr(H = 3|S = 1, L = i) Pr(L = i|S = 1)
= (1 × 1/3) / (1/2 × 1/3 + 1 × 1/3 + 0 × 1/3)
= (1/3) / (1/2)
= 2/3,

where H is the door the host reveals, S is the initial door selected, and L is the door containing the large prize. This shows that the probability the large prize is behind door 2, given that the player initially selected door 1 and the host revealed door 3, can be computed using Bayes rule.
Pr(H = 3|S = 1, L = 2) is the probability that the host shows door 3 given the contestant selected door 1 and the large prize is behind door 2, which always happens since the host will never reveal the large prize. Pr(L = 2|S = 1) is the probability that the large prize is behind door 2 given the contestant selected door 1, which is 1/3. Pr(H = 3|S = 1, L = 1) is the probability that the host reveals door 3 given that door 1 was selected and contained the large prize, which is 1/2, and Pr(H = 3|S = 1, L = 3) is the probability that the host reveals door 3 given door 3 contains the large prize, which is 0.
Bayes rule shows that it is always optimal to switch doors. This is a counter-intuitive result and occurs since the host's action reveals information about the location of the large prize. Essentially, the two doors not selected by the contestant have combined probability 2/3 of containing the large prize before the doors are opened; opening the third assigns its probability to the door not opened.
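The 2/3 probability of winning by switching can also be verified by simulation (a sketch, assuming the host behaves as described above; the function name `play` is illustrative):

```python
import random

def play(switch, rng):
    """One round of the Monty Hall game; returns True if the contestant wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

n = 100_000
rng = random.Random(0)
win_switch = sum(play(True, rng) for _ in range(n)) / n
rng = random.Random(0)
win_stay = sum(play(False, rng) for _ in range(n)) / n
# win_switch is close to 2/3 and win_stay close to 1/3
```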
Definition 1.27 (Discrete Random Variable). A random variable is called discrete if its range consists of a countable (possibly infinite) number of elements.
While discrete random variables are less useful than continuous random variables, they are
still commonly encountered.
Example 1.28. A random variable which takes on values in {0, 1} is known as a Bernoulli random
variable, and is the simplest non-degenerate random variable (see Section 1.2.3.1).3 Bernoulli
random variables are often used to model success or failure, where success is loosely defined: a large negative return, the existence of a bull market or a corporate default.
The distinguishing characteristic of a discrete random variable is not that it takes only finitely many values, but that the values it takes are distinct in the sense that it is possible to fit small intervals around each point without overlap.
Example 1.29. Poisson random variables take values in {0, 1, 2, 3, . . .} (an infinite range), and are
commonly used to model hazard rates (i.e. the number of occurrences of an event in an interval).
They are especially useful in modeling trading activity (see Section 1.2.3.2).
Definition 1.30 (Probability Mass Function). The probability mass function, f, for a discrete random variable X is defined as f(x) = Pr(X = x) for all x ∈ R(X), and f(x) = 0 for all x ∉ R(X) where R(X) is the range of X (i.e. the values for which X is defined).
Example 1.31. The probability mass function of a Bernoulli random variable takes the form
f(x; p) = p^x (1 − p)^(1−x)
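This pmf is simple to evaluate directly (a sketch; the function name is illustrative):

```python
def bernoulli_pmf(x, p):
    """f(x; p) = p**x * (1 - p)**(1 - x) on the support {0, 1}, else 0."""
    if x not in (0, 1):
        return 0.0
    return p ** x * (1 - p) ** (1 - x)
```

So `bernoulli_pmf(1, p)` returns p, the probability of success, and `bernoulli_pmf(0, p)` returns 1 − p.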
Figure 1.2 contains a few examples of Bernoulli pmfs using data from the FTSE 100 and S&P 500 over the period 1984-2012. Both weekly returns, using Friday-to-Friday prices, and monthly returns, using end-of-month prices, were constructed. Log returns were used (rt = ln(Pt/Pt−1)) in both examples. Two of the pmfs defined success as the return being positive. The other two define the probability of success as a return larger than -1% (weekly) or larger than -4% (monthly). These show that the probability of a positive return is much larger for monthly horizons than for weekly.
Example 1.32. The probability mass function of a Poisson random variable takes the form

f(x; λ) = λ^x exp(−λ)/x!
where λ ∈ [0, ∞) determines the intensity of arrival (the average value of the random variable). The pmf of the Poisson distribution can be evaluated for every value of x ≥ 0, which is the support of a Poisson random variable. Figure 1.4 shows an empirical distribution tabulated using a histogram of the time elapsed for .1% of the daily volume to trade in the S&P 500 tracking ETF SPY on May 31, 2012. This data series is a good candidate for modeling using a Poisson distribution.
Continuous random variables, on the other hand, take a continuum of values, technically an uncountable infinity of values.
Definition 1.33 (Continuous Random Variable). A random variable is called continuous if its range is uncountably infinite and there exists a non-negative-valued function f(x) defined for all x ∈ (−∞, ∞) such that for any event B ⊂ R(X), Pr(B) = ∫_{x∈B} f(x) dx and f(x) = 0 for all x ∉ R(X).
The pmf of a discrete random variable is replaced with the probability density function (pdf)
for continuous random variables. This change in naming reflects that the probability of a single
point of a continuous random variable is 0, although the probability of observing a value inside
an arbitrarily small interval in R (X ) is not.
Definition 1.34 (Probability Density Function). For a continuous random variable, the function
f is called the probability density function (pdf).
Figure 1.2: These four charts show examples of Bernoulli random variables using returns on the FTSE 100 and S&P 500. In the top two, a success was defined as a positive return. In the bottom two, a success was a return above -1% (weekly) or -4% (monthly).
Before providing some examples of pdfs, it is useful to characterize the properties that any
pdf should have.
1.2 Univariate Random Variables

Example 1.36. A simple continuous random variable can be defined on [0, 1] using the probability density function

f(x) = 12 (x − 1/2)².

This simple pdf has peaks near 0 and 1 and a trough at 1/2. More realistic pdfs allow for values
in (, ), such as in the density of a normal random variable.
Example 1.37. The pdf of a normal random variable with parameters μ and σ² is given by

f(x) = 1/√(2πσ²) exp(−(x − μ)²/(2σ²)). (1.8)
N(μ, σ²) is used as a shorthand notation for a random variable with this pdf. When μ = 0 and σ² = 1, the distribution is known as a standard normal. Figure 1.3 contains a plot of the standard
normal pdf along with two other parameterizations.
For large values of x (in the absolute sense), the pdf of a standard normal takes very small values, and it peaks at x = 0 with a value of 0.3989. The shape of the normal distribution is that of a bell (and it is occasionally referred to as a bell curve).
A closely related function to the pdf is the cumulative distribution function, which returns
the total probability of observing a value of the random variable less than its input.
Definition 1.38 (Cumulative Distribution Function). The cumulative distribution function (cdf) for a random variable X is defined as F(c) = Pr(X ≤ c) for all c ∈ (−∞, ∞).
The cumulative distribution function is used for both discrete and continuous random variables.
Definition 1.39 (Discrete CDF). When X is a discrete random variable, the cdf is

F(x) = Σ_{s≤x} f(s) (1.9)

for x ∈ (−∞, ∞).
Example 1.40. The cdf of a Bernoulli random variable is

F(x; p) = 0 if x < 0, 1 − p if 0 ≤ x < 1, and 1 if x ≥ 1.
The Bernoulli cdf is simple since it only takes 3 values. The cdf of a Poisson random variable is also relatively simple since it is defined as the sum of the probability mass function for all values less than or equal to the function's argument.
Example 1.41. The cdf of a Poisson(λ) random variable is

F(x; λ) = exp(−λ) Σ_{i=0}^{⌊x⌋} λ^i/i!, x ≥ 0,

where ⌊·⌋ returns the largest integer smaller than or equal to the input (the floor operator).
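This cdf can be evaluated directly from the definition (a sketch; the function name is illustrative):

```python
import math

def poisson_cdf(x, lam):
    """F(x; lam) = exp(-lam) * sum_{i=0}^{floor(x)} lam**i / i! for x >= 0, else 0."""
    if x < 0:
        return 0.0
    return math.exp(-lam) * sum(
        lam ** i / math.factorial(i) for i in range(math.floor(x) + 1)
    )
```

Note that the floor in the summation limit makes the cdf a step function: every x in [2, 3), for example, yields the same value.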
Continuous cdfs operate much like discrete cdfs, only the summation is replaced by an integral since there is a continuum of values possible for X.
Definition 1.42 (Continuous CDF). When X is a continuous random variable, the cdf is

F(x) = ∫_{−∞}^{x} f(s) ds (1.10)

for x ∈ (−∞, ∞).
The integral computes the total area under the pdf starting from −∞ up to x.
Example 1.43. The cdf of the random variable with pdf given by 12(x − 1/2)² is

F(x) = 4x³ − 6x² + 3x.

This cdf is the integral of the pdf, and checking shows that F(0) = 0, F(1/2) = 1/2 (since it is symmetric around 1/2) and F(1) = 1, which must be 1 since the random variable is only defined on [0, 1].
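The relationship between this pdf and cdf can be verified numerically by integrating the pdf with a simple midpoint rule and comparing against the closed form (illustrative code, not from the notes):

```python
# pdf and cdf from the running example on [0, 1]
def f(x):
    return 12.0 * (x - 0.5) ** 2

def F(x):
    return 4.0 * x ** 3 - 6.0 * x ** 2 + 3.0 * x

def midpoint_integral(g, a, b, n=100_000):
    """Approximate the integral of g over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# F(x) should match the integral of f from 0 to x at every point
max_err = max(abs(midpoint_integral(f, 0.0, x) - F(x)) for x in (0.25, 0.5, 0.9, 1.0))
```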
Example 1.44. The cdf of a normally distributed random variable with parameters μ and σ² is given by

F(x) = ∫_{−∞}^{x} 1/√(2πσ²) exp(−(s − μ)²/(2σ²)) ds. (1.11)
Figure 1.3 contains a plot of the standard normal cdf along with two other parameterizations.
In the case of a standard normal random variable, the cdf is not available in closed form, and
so when computed using a computer (i.e. in Excel or MATLAB), fast, accurate numeric approxi-
mations based on polynomial expansions are used (Abramowitz & Stegun 1964).
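In practice the normal cdf is typically computed from the closely related error function, using F(x) = (1 + erf((x − μ)/√(2σ²)))/2; a sketch using Python's standard library (the function name is illustrative):

```python
import math

def norm_cdf(x, mu=0.0, sigma2=1.0):
    """F(x) = (1 + erf((x - mu) / sqrt(2 * sigma2))) / 2 for a N(mu, sigma2) variable."""
    return 0.5 * (1.0 + math.erf((x - mu) / math.sqrt(2.0 * sigma2)))
```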
The pdf can be similarly derived from the cdf as long as the cdf is continuously differentiable. At points where the cdf is not continuously differentiable, the pdf is defined to take the value 0.⁴
Theorem 1.45 (Relationship between CDF and pdf). Let f(x) and F(x) represent the pdf and cdf of a continuous random variable X, respectively. The density function for X can be defined as f(x) = ∂F(x)/∂x whenever f(x) is continuous and f(x) = 0 elsewhere.
⁴ Formally a pdf does not have to exist for a random variable, although a cdf always does. In practice, this is a technical point and distributions which have this property are rarely encountered in financial economics.
Figure 1.3: The top panels show the pdf f(x) = 12(x − 1/2)² and its associated cdf. The bottom left panel shows the probability density function for normal distributions with alternative values for μ and σ² (μ = 0, σ² = 1; μ = 1, σ² = 1; μ = 0, σ² = 4). The bottom right panel shows the cdf for the same parameterizations.
Example 1.46. Taking the derivative of the cdf in the running example,

∂F(x)/∂x = 12x² − 12x + 3
= 12 (x² − x + 1/4)
= 12 (x − 1/2)².
A quantile is just the point on the cdf where the total probability that a random variable is smaller is α and the probability that the random variable takes a larger value is 1 − α. The
definition of a quantile does not necessarily require uniqueness, and non-unique quantiles are encountered when pdfs have regions of 0 probability (or, equivalently, where the cdf is flat).
Quantiles are unique for random variables which have continuously differentiable cdfs. One
common modification of the quantile definition is to select the smallest number which satisfies
the two conditions to impose uniqueness of the quantile.
The function which returns the quantile is known as the quantile function.
Definition 1.48 (Quantile Function). Let X be a continuous random variable with cdf F(x). The quantile function for X is defined as G(α) = qα where Pr(x ≤ qα) = α and Pr(x > qα) = 1 − α. When F(x) is one-to-one (and hence X is strictly continuous) then G(α) = F⁻¹(α).
Quantile functions are generally set-valued when quantiles are not unique, although in the
common case where the pdf does not contain any regions of 0 probability, the quantile function
is the inverse of the cdf.
for x ≥ 0 and λ > 0. Since f(x; λ) > 0 for x > 0, the quantile function is

F⁻¹(α; λ) = −ln(1 − α)/λ.
The quantile function plays an important role in simulation of random variables. In particular, if u ∼ U(0, 1)⁵, then x = F⁻¹(u) is distributed F. For example, when u is a standard uniform (U(0, 1)), and F⁻¹(α) is the quantile function of an exponential random variable with shape parameter λ, then x = F⁻¹(u; λ) follows an exponential(λ) distribution.
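This inverse transform simulation can be sketched as follows (assuming the rate parameterization F⁻¹(u; λ) = −ln(1 − u)/λ; the names are illustrative):

```python
import math
import random

def exp_quantile(u, lam):
    """Exponential quantile function F^{-1}(u; lam) = -ln(1 - u)/lam (rate parameterization)."""
    return -math.log(1.0 - u) / lam

lam = 2.0
rng = random.Random(42)
draws = [exp_quantile(rng.random(), lam) for _ in range(200_000)]
sample_mean = sum(draws) / len(draws)  # an exponential(lam) has mean 1/lam = 0.5
```

Feeding standard uniforms through the quantile function produces draws with the target distribution, which is exactly the content of the probability integral transform below.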
Theorem 1.50 (Probability Integral Transform). Let U be a standard uniform random variable, and FX(x) be a continuous, increasing cdf. Then Pr(F⁻¹(U) < x) = FX(x) and so F⁻¹(U) is distributed FX.
⁵ The mathematical notation ∼ is read "distributed as". For example, x ∼ U(0, 1) indicates that x is distributed as a standard uniform random variable.
Since U is a standard uniform,

Pr(U ≤ F(x)) = F(x).

Applying the (increasing) quantile function inside the probability,

Pr(U ≤ F(x)) = Pr(F⁻¹(U) ≤ F⁻¹(F(x)))
= Pr(F⁻¹(U) ≤ x)
= Pr(X ≤ x),

so that the distribution of F⁻¹(U) is F by definition of the cdf. The right panel of figure 1.8 shows the relationship between the cdf of a standard normal and the associated quantile function. Applying F(X) produces a uniform U through the cdf and applying F⁻¹(U) produces X through the quantile function.
Discrete
1.2.3.1 Bernoulli
A Bernoulli random variable is a discrete random variable which takes one of two values, 0 or
1. It is often used to model success or failure, where success is loosely defined. For example, a
success may be the event that a trade was profitable net of costs, or the event that stock market
volatility as measured by VIX was greater than 40%. The Bernoulli distribution depends on a
single parameter p which determines the probability of success.
Parameters

p ∈ [0, 1]

Support

x ∈ {0, 1}

Probability Mass Function

f(x; p) = p^x (1 − p)^(1−x)

Moments

Mean p
Variance p(1 − p)
Figure 1.4: The left panel shows a histogram of the elapsed time in seconds required for .1% of the daily volume being traded to occur for SPY on May 31, 2012. The right panel shows both the fitted scaled χ²₃ distribution and the raw data (mirrored below) for 5-minute realized variance estimates for SPY on May 31, 2012.
1.2.3.2 Poisson
A Poisson random variable is a discrete random variable taking values in {0, 1, . . .}. The Poisson depends on a single parameter λ (known as the intensity). Poisson random variables are often
used to model counts of events during some interval, for example the number of trades executed
over a 5-minute window.
Parameters

λ ∈ [0, ∞)

Support

x ∈ {0, 1, . . .}

Probability Mass Function

f(x; λ) = λ^x exp(−λ)/x!

Moments

Mean λ
Variance λ
Continuous
1.2.3.3 Normal (Gaussian)
The normal is the most important univariate distribution in financial economics. It is the familiar
bell-shaped distribution, and is used heavily in hypothesis testing and in modeling (net) asset
returns (e.g. rt = ln Pt − ln Pt−1 or rt = (Pt − Pt−1)/Pt−1 where Pt is the price of the asset in period t).
Parameters

μ ∈ (−∞, ∞), σ² ≥ 0

Support

x ∈ (−∞, ∞)

Moments

Mean μ
Variance σ²
Median μ
Skewness 0
Kurtosis 3
⁶ The error function does not have a closed form and is defined

erf(x) = 2/√π ∫₀ˣ exp(−s²) ds.
Figure 1.5: Weekly and monthly densities for the FTSE 100 and S&P 500. All panels plot the pdf of a normal and a standardized Student's t using parameters estimated with maximum likelihood estimation (see Chapter 2). The points below 0 on the y-axis show the actual returns observed during this period.
Notes
The normal with mean μ and variance σ² is written N(μ, σ²). A normally distributed random variable with μ = 0 and σ² = 1 is known as a standard normal. Figure 1.5 shows the fit of the normal distribution to the FTSE 100 and S&P 500 using both weekly and monthly returns for the period 1984-2012. Below each figure is a plot of the raw data.
1.2.3.4 Log-Normal
Parameters

μ ∈ (−∞, ∞), σ² ≥ 0

Support

x ∈ (0, ∞)

Since Y = ln(X) ∼ N(μ, σ²), the cdf is the same as the normal, only using ln x in place of x.

Moments

Mean exp(μ + σ²/2)
Median exp(μ)
Variance [exp(σ²) − 1] exp(2μ + σ²)
1.2.3.5 χ² (Chi-square)

χ² random variables depend on a single parameter ν known as the degree-of-freedom. They are commonly encountered when testing hypotheses, although they are also used to model continuous variables which are non-negative such as conditional variances. χ² random variables are closely related to standard normal random variables and are defined as the sum of independent standard normal random variables which have been squared. Suppose Z₁, . . . , Z_ν are standard normally distributed and independent, then x = Σ_{i=1}^ν z_i² follows a χ²_ν.⁷
Parameters

ν ∈ [0, ∞)

Support

x ∈ [0, ∞)

Moments

Mean ν
Variance 2ν
Notes
Figure 1.4 shows a χ² pdf which was used to fit some simple estimators of the 5-minute variance of the S&P 500 from May 31, 2012. These were computed by summing squared 1-minute returns within a 5-minute interval (all using log prices). 5-minute variance estimators are important in high-frequency trading and other (slower) algorithmic trading.
1.2.3.6 Student's t

Student's t random variables are also commonly encountered in hypothesis testing and, like χ² random variables, are closely related to standard normals. Student's t random variables depend on a single parameter, ν, and can be constructed from two other independent random variables. If Z is a standard normal, W a χ²_ν and Z ⊥ W, then x = z/√(w/ν) follows a Student's t distribution. Student's t random variables are similar to normals except that they are heavier tailed, although as ν → ∞ a Student's t converges to a standard normal.
Parameters

ν ∈ (0, ∞)

Support

x ∈ (−∞, ∞)

Moments

Mean 0, ν > 1
Median 0
Variance ν/(ν − 2), ν > 2
Skewness 0, ν > 3
Kurtosis 3(ν − 2)/(ν − 4), ν > 4
Notes
When ν = 1, a Student's t is known as a Cauchy random variable. Cauchy random variables are so heavy-tailed that even the mean does not exist.
The standardized Student's t extends the usual Student's t in two directions. First, it removes the variance's dependence on ν so that the scale of the random variable can be established separately from the degree of freedom parameter. Second, it explicitly adds location and scale parameters so that if Y is a Student's t random variable with degree of freedom ν, then

x = μ + σ √((ν − 2)/ν) y

is a standardized Student's t with mean μ and variance σ².
1.2.3.7 Uniform
The continuous uniform is commonly encountered in certain test statistics, especially those testing whether assumed densities are appropriate for a particular series. Uniform random variables, when combined with quantile functions, are also useful for simulating random variables.
Parameters

a, b the end points of the interval, with a < b

Support

x ∈ [a, b]

Probability Density Function

f(x) = 1/(b − a)

Cumulative Distribution Function

F(x) = (x − a)/(b − a) for a ≤ x ≤ b, F(x) = 0 for x < a and F(x) = 1 for x > b

Moments

Mean (a + b)/2
Median (a + b)/2
Variance (b − a)²/12
Skewness 0
Kurtosis 9/5
Notes
which are arranged into a column vector. The definition of a multivariate random variable is virtually identical to that of a univariate random variable, only mapping Ω to the n-dimensional space Rⁿ.
Multivariate random variables, like univariate random variables, are technically functions X(ω) of events ω in the underlying probability space, although the function argument (the event) is usually suppressed.
Multivariate random variables can be either discrete or continuous. Discrete multivariate
random variables are fairly uncommon in financial economics and so the remainder of the chapter focuses exclusively on the continuous case. The characterization of what makes a multivariate random variable continuous is also virtually identical to that in the univariate case.
Multivariate random variables, at least when continuous, are often described by their probability density function.
Definition 1.54 (Multivariate Probability Density Function). The function f(x₁, . . . , xₙ) is called a multivariate probability density function (pdf).
Example 1.55. Suppose X is a bivariate random variable, then the function

f(x₁, x₂) = (3/2)(x₁² + x₂²)

defined on [0, 1] × [0, 1] is a valid probability density function.
Example 1.56. Suppose X is a bivariate standard normal random variable. Then the probability density function of X is

f(x₁, x₂) = (1/(2π)) exp(−(x₁² + x₂²)/2).
The multivariate cumulative distribution function is virtually identical to that in the univariate case, and measures the total probability between −∞ (for each element of X) and some point.
Definition 1.57 (Multivariate Cumulative Distribution Function). The joint cumulative distribution function of an n-dimensional random variable X is defined by
F(x₁, . . . , xₙ) = Pr(Xᵢ ≤ xᵢ, i = 1, . . . , n).
Example 1.58. Suppose X is a bivariate random variable with probability density function

f(x₁, x₂) = (3/2)(x₁² + x₂²)

defined on [0, 1] × [0, 1]. The associated cdf is

F(x₁, x₂) = (x₁³x₂ + x₁x₂³)/2.
Figure 1.6 shows the joint cdf of the density in the previous example. As was the case for univariate random variables, the probability density function can be determined by differentiating the cumulative distribution function with respect to each component.
Theorem 1.59 (Relationship between CDF and PDF). Let f(x₁, . . . , xₙ) and F(x₁, . . . , xₙ) represent the pdf and cdf of an n-dimensional continuous random variable X, respectively. The density function for X can be defined as f(x₁, . . . , xₙ) = ∂ⁿF(x)/∂x₁∂x₂ . . . ∂xₙ whenever f(x₁, . . . , xₙ) is continuous and f(x₁, . . . , xₙ) = 0 elsewhere.
Example 1.60. Suppose X is a bivariate random variable with cumulative distribution function F(x₁, x₂) = (x₁³x₂ + x₁x₂³)/2. The probability density function can be determined using

f(x₁, x₂) = ∂²F(x₁, x₂)/∂x₁∂x₂
= (1/2) ∂(3x₁²x₂ + x₂³)/∂x₂
= (3/2)(x₁² + x₂²).
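The differentiation in this example can be checked numerically with a central finite-difference approximation of the mixed partial derivative (illustrative code, not from the notes):

```python
def F(x1, x2):
    """Joint cdf from the example."""
    return (x1 ** 3 * x2 + x1 * x2 ** 3) / 2.0

def f(x1, x2):
    """Joint pdf obtained by differentiating F."""
    return 1.5 * (x1 ** 2 + x2 ** 2)

def mixed_partial(G, x1, x2, h=1e-4):
    """Central finite-difference approximation of d^2 G / (dx1 dx2)."""
    return (G(x1 + h, x2 + h) - G(x1 + h, x2 - h)
            - G(x1 - h, x2 + h) + G(x1 - h, x2 - h)) / (4.0 * h * h)

err = abs(mixed_partial(F, 0.5, 0.5) - f(0.5, 0.5))
```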
Definition 1.61 (Bivariate Marginal Probability Density Function). Let X be a bivariate random
variable comprised of X 1 and X 2 . The marginal distribution of X 1 is given by
f₁(x₁) = ∫_{−∞}^{∞} f(x₁, x₂) dx₂. (1.15)
The marginal density of X₁ is a density function where X₂ has been integrated out. This integration is simply a form of averaging (varying x₂ according to the probability associated with each value of x₂) and so the marginal is only a function of x₁. Both probability density functions and cumulative distribution functions have marginal versions.
Example 1.62. Suppose X is a bivariate random variable with probability density function

f(x₁, x₂) = (3/2)(x₁² + x₂²)

and is defined on [0, 1] × [0, 1]. The marginal probability density function for X₁ is

f₁(x₁) = (3/2)(x₁² + 1/3).
Example 1.63. Suppose X is a bivariate random variable with probability density function f(x₁, x₂) = 6x₁x₂² and is defined on [0, 1] × [0, 1]. The marginal probability density functions for X₁ and X₂ are

f₁(x₁) = 2x₁ and f₂(x₂) = 3x₂².
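The integration in this example can be verified numerically by integrating x₂ out of the joint pdf with a midpoint rule (illustrative code, not from the notes):

```python
def joint_pdf(x1, x2):
    """f(x1, x2) = 6*x1*x2**2 on [0, 1] x [0, 1]."""
    return 6.0 * x1 * x2 ** 2

def marginal_x1(x1, n=100_000):
    """Integrate x2 out over [0, 1] with the midpoint rule."""
    h = 1.0 / n
    return sum(joint_pdf(x1, (i + 0.5) * h) for i in range(n)) * h

approx = marginal_x1(0.5)  # the analytic marginal is f1(x1) = 2*x1, so 1.0 here
```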
Figure 1.7 shows the fitted marginal distributions for weekly returns on the FTSE 100 and S&P 500 assuming that returns are normally distributed. Marginal pdfs can be transformed into marginal cdfs through integration.
Definition 1.65 (Bivariate Marginal Cumulative Distribution Function). The cumulative marginal distribution function of X₁ in a bivariate random variable X is defined by
F₁(x₁) = Pr(X₁ ≤ x₁).
The general j-dimensional marginal distribution partitions the n-dimensional random variable X into two blocks, and constructs the marginal distribution for the first j by integrating out (averaging over) the remaining n − j components of X. In the definition, both X₁ and X₂ are vectors.
Definition 1.66 (Marginal Probability Density Function). Let X be an n-dimensional random variable and partition the first j (1 ≤ j < n) elements of X into X₁, and the remainder into X₂ so that X = [X₁′ X₂′]′. The marginal probability density function for X₁ is given by

f₁,...,ⱼ(x₁, . . . , xⱼ) = ∫ . . . ∫ f(x₁, . . . , xₙ) dxⱼ₊₁ . . . dxₙ. (1.16)
The marginal cumulative distribution function is related to the marginal probability density
function in the same manner as the joint probability density function is related to the cumulative
distribution function. It also has the same interpretation.
F1,...,j(x1, . . . , xj) = ∫_{−∞}^{x1} · · · ∫_{−∞}^{xj} f1,...,j(s1, . . . , sj) ds1 . . . dsj.   (1.17)
Marginal distributions provide the tools needed to model the distribution of a subset of the components of a random variable while averaging over the other components. Conditional densities and distributions, on the other hand, consider a subset of the components of a random variable conditional on observing a specific value for the remaining components. In practice, the vast majority of modeling makes use of conditioning information, where the interest is in understanding the distribution of a random variable conditional on the observed values of some other random variables. For example, consider the problem of modeling the expected return of an individual stock. Balance sheet information such as the book value of assets, earnings and return on equity are all available, and can be conditioned on to model the conditional distribution of the stock's return.
First, consider the bivariate case.
Definition 1.68 (Bivariate Conditional Probability Density Function). Let X be a bivariate random variable comprised of X1 and X2. The conditional probability density function for X1 given X2 ∈ B, where B is an event with Pr(X2 ∈ B) > 0, is

f(x1 | X2 ∈ B) = ∫_B f(x1, x2) dx2 / ∫_B f2(x2) dx2.   (1.18)

When B is an elementary event (e.g. a single point), so that Pr(X2 = x2) = 0 but f2(x2) > 0, then

f(x1 | X2 = x2) = f(x1, x2) / f2(x2).   (1.19)
Conditional density functions differ slightly depending on whether the conditioning variable is restricted to a set or a point. When the conditioning variable is specified to be a set where Pr(X2 ∈ B) > 0, the conditional density is the joint probability of X1 and X2 ∈ B divided by the marginal probability of X2 ∈ B. When the conditioning variable is restricted to a point, the conditional density is the ratio of the joint pdf to the marginal pdf of X2.
Example 1.69. Suppose X is a bivariate random variable with probability density function

f(x1, x2) = (3/2)(x1² + x2²)

on [0, 1] × [0, 1]. The conditional probability density of X1 given X2 ∈ [0, 1/2] is

f(x1 | X2 ∈ [0, 1/2]) = (1/5)(12 x1² + 1),

while the conditional density of X1 given X2 = x2 is

f(x1 | X2 = x2) = (x1² + x2²) / (x2² + 1/3).
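Both conditional densities in Example 1.69 can be verified numerically from the ratio-of-integrals definition in eq. (1.18); a sketch (the grid sizes are arbitrary choices):

```python
import numpy as np

# Joint density from Example 1.69 on [0, 1] x [0, 1]
def f(x1, x2):
    return 1.5 * (x1**2 + x2**2)

def trapezoid(y, x):
    # basic trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x1_grid = np.linspace(0.0, 1.0, 201)
x2_grid = np.linspace(0.0, 0.5, 20_001)

# Conditioning on the event X2 in [0, 1/2]: a ratio of two integrals
num = np.array([trapezoid(f(v, x2_grid), x2_grid) for v in x1_grid])
den = trapezoid(1.5 * (x2_grid**2 + 1.0 / 3.0), x2_grid)   # Pr(X2 in B) via f2
cond = num / den

closed_form = (12 * x1_grid**2 + 1) / 5
assert np.max(np.abs(cond - closed_form)) < 1e-6
assert abs(trapezoid(cond, x1_grid) - 1.0) < 1e-4   # a valid density integrates to 1
print("conditional density verified")
```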
Figure 1.6 shows the joint pdf along with both types of conditional densities. The upper right panel highlights the region of the joint pdf which is averaged to produce the conditional density for X2 ∈ [0.25, 0.5]. The lower left panel shows the pdf along with three (non-normalized) conditional densities of the form f(x1 | x2). The lower right panel shows these three densities correctly normalized.
The previous example shows that, in general, the conditional probability density function differs as the region used changes.
Example 1.70. Suppose X is a bivariate normal random variable with mean μ = [μ1 μ2]′ and covariance

Σ = [σ1², σ12; σ12, σ2²];

then the conditional distribution of X1 given X2 = x2 is

N(μ1 + (σ12/σ2²)(x2 − μ2), σ1² − σ12²/σ2²).
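The conditional mean and variance of a bivariate normal, μ1 + (σ12/σ2²)(x2 − μ2) and σ1² − σ12²/σ2², are simple to compute; the sketch below codes the two formulas and checks them against a large simulated sample (the parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

# Conditional distribution of X1 | X2 = x2 for a bivariate normal:
# mean mu1 + (s12 / s2sq) * (x2 - mu2), variance s1sq - s12^2 / s2sq
def conditional_normal(mu1, mu2, s1sq, s2sq, s12, x2):
    mean = mu1 + (s12 / s2sq) * (x2 - mu2)
    var = s1sq - s12**2 / s2sq
    return mean, var

# Hypothetical parameter values, e.g. two correlated return series
mean, var = conditional_normal(mu1=0.08, mu2=0.06, s1sq=0.04, s2sq=0.09,
                               s12=0.03, x2=0.12)

# Check against draws whose second component lies in a thin band around x2
rng = np.random.default_rng(0)
cov = np.array([[0.04, 0.03], [0.03, 0.09]])
draws = rng.multivariate_normal([0.08, 0.06], cov, size=2_000_000)
near = draws[np.abs(draws[:, 1] - 0.12) < 0.005, 0]   # condition on X2 ~ 0.12
assert abs(near.mean() - mean) < 0.01
assert abs(near.var() - var) < 0.01
print(mean, var)
```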
Marginal distributions and conditional distributions are related in a number of ways. One obvious way is that f(x1 | X2 ∈ R(X2)) = f1(x1), that is, the conditional density of X1, given that X2 is in its range, is the marginal pdf of X1. This holds since integrating over all values of x2 is essentially not conditioning on anything (which is why a marginal density could, in principle, be called the unconditional density, since it averages across all values of the other variable).
The general definition allows for an n-dimensional random vector where the conditioning variables comprise between 1 and n − 1 of the components.

Definition 1.71 (Conditional Probability Density Function). Let f(x1, . . . , xn) be the joint density function for an n-dimensional random variable X = [X1 . . . Xn]′ and partition the first j (1 ≤ j < n) elements of X into X1, and the remainder into X2, so that X = [X1′ X2′]′. The conditional probability density function for X1 given X2 = x2 is the ratio of the joint density to the marginal density of X2, denoted f(x1, . . . , xj | X2 = x2). In general, the simplified notation f(x1, . . . , xj | x2) will be used to represent f(x1, . . . , xj | X2 = x2).
1.3.3 Independence
A special relationship exists between the joint probability density function and the marginal density functions when random variables are independent: the joint must be the product of the marginals.
Theorem 1.72 (Independence of Random Variables). The random variables X1, . . . , Xn with joint density function f(x1, . . . , xn) are independent if and only if

f(x1, . . . , xn) = ∏_{i=1}^{n} fi(xi).   (1.22)
[Figure: four panels — the joint cdf F(x1, x2); the pdf f(x1, x2) with the region x2 ∈ [0.25, 0.5] highlighted; the pdf with non-normalized conditional densities f(x1 | x2 = 0.3), f(x1 | x2 = 0.5) and f(x1 | x2 = 0.7); and the same conditional densities normalized.]

Figure 1.6: These four panels show four views of a distribution defined on [0, 1] × [0, 1]. The upper left panel shows the joint cdf. The upper right shows the pdf along with the portion of the pdf used to construct a conditional distribution f(x1 | x2 ∈ [0.25, 0.5]). The line shows the actual correctly scaled conditional distribution, which is only a function of x1, plotted at E[X2 | X2 ∈ [0.25, 0.5]]. The lower left panel also shows the pdf along with three non-normalized conditional densities. The bottom right panel shows the correctly normalized conditional densities.
The intuition behind this result follows from the fact that when the components of a random
variable are independent, any change in one component has no information for the others. In
other words, both marginals and conditionals must be the same.
Example 1.73. Let X be a bivariate random variable with probability density function f(x1, x2) = 4 x1 x2 on [0, 1] × [0, 1]; then X1 and X2 are independent. This can be verified since the marginals are f1(x1) = 2 x1 and f2(x2) = 2 x2, so that f1(x1) f2(x2) = 4 x1 x2 = f(x1, x2).
Independence is a very strong concept, and it carries over from random variables to functions
of random variables as long as each function involves only one random variable.9
Independence is often combined with an assumption that the marginal distribution is the
same to simplify the analysis of collections of random data.
1.3.4 Bayes Rule

The conditional and marginal densities provide two factorizations of the joint density,

f(x1, x2) = f(x1 | x2) f2(x2)
          = f(x2 | x1) f1(x1).
The joint can be factored two ways, and equating the two factorizations produces Bayes rule.
Definition 1.76 (Bivariate Bayes Rule). Let X be a bivariate random variable with components X1 and X2; then

f(x1 | x2) = f(x2 | x1) f1(x1) / f2(x2).   (1.23)
9. This can be generalized to the full multivariate case where X is an n-dimensional random variable and the first j components are independent of the last n − j components, defining y1 = Y1(x1, . . . , xj) and y2 = Y2(xj+1, . . . , xn).
Bayes rule states that the probability of observing X1 given a value of X2 is equal to the joint probability of the two random variables divided by the marginal probability of observing X2. Bayes rule is normally applied where there is a belief about X1 (f1(x1), called a prior), and the conditional distribution of X2 given X1 is a known density (f(x2 | x1), called the likelihood), which combine to form an updated belief about X1 (f(x1 | x2), called the posterior). The marginal density of X2 is not important when using Bayes rule since the numerator is still proportional to the conditional density of X1 given X2 (f2(x2) is just a number), and so it is common to express the non-normalized posterior as

f(x1 | x2) ∝ f(x2 | x1) f1(x1),

where ∝ reads "is proportional to".
Example 1.77. Suppose interest lies in the probability that a firm goes bankrupt, which can be modeled as a Bernoulli distribution. The parameter p is unknown but, given a value of p, the likelihood that a firm goes bankrupt is

f(x | p) = p^x (1 − p)^{1−x}.
While p is unknown, a prior for the bankruptcy rate can be specified. Suppose the prior for p follows a Beta(α, β) distribution, which has pdf

f(p) = p^{α−1}(1 − p)^{β−1} / B(α, β)

where B(a, b) is the beta function that acts as a normalizing constant.¹⁰ The Beta distribution has support on [0, 1] and nests the standard uniform as a special case when α = β = 1. The expected value of a random variable with a Beta(α, β) distribution is α/(α + β) and the variance is αβ/((α + β)²(α + β + 1)), where α > 0 and β > 0.
Using Bayes rule,

f(p | x) ∝ p^x (1 − p)^{1−x} × p^{α−1}(1 − p)^{β−1} / B(α, β)
         = p^{α+x−1}(1 − p)^{β−x} / B(α, β).

Note that this isn't a density since it has the wrong normalizing constant. However, the component of the density which contains p, p^{(α+x)−1}(1 − p)^{(β−x+1)−1} (known as the kernel), is the same as in the Beta distribution, only with different parameters. Thus the posterior, f(p | x), is Beta(α + x, β − x + 1). Since the posterior is in the same family as the prior, it can be combined with another observation (and the Bernoulli likelihood) to produce an updated posterior. When a Bayesian problem has this property, the prior density is said to be conjugate to the likelihood.

10. The beta function is given by the definite integral

B(a, b) = ∫₀¹ s^{a−1}(1 − s)^{b−1} ds.
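The conjugate update in Example 1.77 amounts to incrementing the Beta parameters; a minimal sketch (the Beta(1, 1) starting prior and the three observations are hypothetical):

```python
# Conjugate updating from Example 1.77: a Beta(a, b) prior combined with one
# Bernoulli observation x in {0, 1} yields a Beta(a + x, b - x + 1) posterior.
def beta_bernoulli_update(a, b, x):
    return a + x, b - x + 1

# Flat Beta(1, 1) prior; observe three firms, one of which goes bankrupt
a, b = 1.0, 1.0
for x in (0, 1, 0):
    a, b = beta_bernoulli_update(a, b, x)

posterior_mean = a / (a + b)      # mean of a Beta(a, b) is a / (a + b)
print(a, b, posterior_mean)       # 2.0 3.0 0.4
```

Each observation reuses the previous posterior as the prior, which is exactly the repeated-updating property that conjugacy delivers.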
Example 1.78. Suppose M is a random variable representing the score on the midterm, and interest lies in the final course grade, C. The prior for C is normal with mean μ and variance η², and the distribution of M given C is conditionally normal with mean C and variance σ². Bayes rule can be used to make inference on the final course grade given the midterm grade.
f(c | m) ∝ f(m | c) fC(c)
         ∝ (1/√(2πσ²)) exp(−(m − c)²/(2σ²)) × (1/√(2πη²)) exp(−(c − μ)²/(2η²))
         = K exp( −(1/2)[ (m − c)²/σ² + (c − μ)²/η² ] )
         = K exp( −(1/2)[ c²/σ² − 2cm/σ² + m²/σ² + c²/η² − 2cμ/η² + μ²/η² ] )
         = K exp( −(1/2)[ c²(1/σ² + 1/η²) − 2c(m/σ² + μ/η²) + (m²/σ² + μ²/η²) ] )
This (non-normalized) density can be shown to have the kernel of a normal by completing the square,¹¹

f(c | m) ∝ exp( −( c − (m/σ² + μ/η²)/(1/σ² + 1/η²) )² / ( 2 (1/σ² + 1/η²)^{−1} ) ).

This is the kernel of a normal density with mean

(m/σ² + μ/η²) / (1/σ² + 1/η²)

and variance

(1/σ² + 1/η²)^{−1}.
The mean is a weighted average of the prior mean, μ, and the midterm score, m, where the weights are determined by the inverse variances of the prior and conditional distributions. Since the weights are proportional to the inverse of the variance, a small variance leads to a relatively large weight. If η² = σ², then the posterior mean is the average of the prior mean and the midterm score. The variance of the posterior depends on the uncertainty in the prior (η²) and the uncertainty in the data (σ²). The posterior variance is always less than the smaller of η² and σ². Like
11. Suppose a quadratic in x has the form ax² + bx + c. Then

ax² + bx + c = a(x − d)² + e

where d = −b/(2a) and e = c − b²/(4a).
[Figure: scatter plot of weekly S&P 500 and FTSE 100 returns; fitted marginal normal densities; surface and contour plots of the fitted bivariate normal pdf.]

Figure 1.7: These four figures show different views of the weekly returns of the FTSE 100 and the S&P 500. The top left contains a scatter plot of the raw data. The top right shows the marginal distributions from a fitted bivariate normal distribution (using maximum likelihood). The bottom two panels show two representations of the joint probability density function.
the Bernoulli-Beta combination in the previous problem, the normal distribution is a conjugate
prior when the conditional density is normal.
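The posterior mean and variance derived in Example 1.78 can be sketched directly; the grade numbers below are hypothetical and only illustrate the precision weighting:

```python
# Normal-normal model from Example 1.78: prior C ~ N(mu, eta2) and
# likelihood M | C ~ N(C, sig2).  The posterior is normal with
#   mean = (mu / eta2 + m / sig2) / (1 / eta2 + 1 / sig2)
#   variance = 1 / (1 / eta2 + 1 / sig2)
def normal_posterior(mu, eta2, m, sig2):
    post_var = 1.0 / (1.0 / eta2 + 1.0 / sig2)
    post_mean = post_var * (mu / eta2 + m / sig2)
    return post_mean, post_var

# Equal prior and likelihood variances: the posterior mean is a simple average
mean, var = normal_posterior(mu=80.0, eta2=25.0, m=90.0, sig2=25.0)
print(mean, var)   # ~85.0 and ~12.5

# A tighter prior (smaller eta2) pulls the posterior mean toward mu,
# and the posterior variance is below both eta2 and sig2
mean2, var2 = normal_posterior(mu=80.0, eta2=5.0, m=90.0, sig2=25.0)
assert mean2 < mean
assert var2 < 5.0 and var < 25.0
```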
Like the univariate normal, the multivariate normal depends on two parameters: an n-by-1 vector of means, μ, and an n-by-n positive semi-definite covariance matrix, Σ. The multivariate normal is closed to both marginalization and conditioning; in other words, if X is multivariate normal, then all marginal distributions of X are normal, and so are all conditional distributions of X1 given X2 for any partitioning.
Parameters
μ ∈ Rⁿ, Σ an n-by-n positive semi-definite matrix

Support
x ∈ Rⁿ

Moments
Mean: μ
Median: μ
Variance: Σ
Skewness: 0
Kurtosis: 3

Marginal Distribution
Partition X into X1 (the first j elements) and X2 (the remainder), with¹²

μ = [μ1; μ2],   Σ = [Σ11, Σ12; Σ12′, Σ22].

In other words, the distribution of [X1, . . . , Xj]′ is N(μ1, Σ11). Moreover, the marginal distribution of a single element of X is N(μi, σi²), where μi is the ith element of μ and σi² is the ith diagonal element of Σ.

12. Any two variables can be reordered in a multivariate normal by swapping their means and reordering the corresponding rows and columns of the covariance matrix.

Conditional Distribution

X1 | X2 = x2 ∼ N(μ1 + β′(x2 − μ2), Σ11 − β′Σ22β), where β = Σ22⁻¹Σ12′.

In the bivariate case, where

[X1; X2] ∼ N([μ1; μ2], [σ1², σ12; σ12, σ2²]),

the conditional distribution is

X1 | X2 = x2 ∼ N(μ1 + (σ12/σ2²)(x2 − μ2), σ1² − σ12²/σ2²).

Notes
If the covariance between elements i and j equals zero (so that σij = 0), they are independent. For the normal, a covariance (or correlation) of 0 implies independence. This is not true of most other multivariate random variables.
1.4.1 Expectations
The expectation is the value, on average, of a random variable (or function of a random variable). Unlike common English usage, where one's expectation is not well defined (it could be the mean or the mode, another measure of the tendency of a random variable), the expectation in a probabilistic sense always averages over the possible values, weighting by the probability of observing each value. The form of an expectation in the discrete case is particularly simple.
Definition 1.79 (Expectation of a Discrete Random Variable). The expectation of a discrete random variable, defined E[X] = Σ_{x∈R(X)} x f(x), exists if and only if Σ_{x∈R(X)} |x| f(x) < ∞.
When the range of X is finite then the expectation always exists. When the range is infinite,
such as when a random variable takes on values in the range 0, 1, 2, . . ., the probability mass func-
tion must be sufficiently small for large values of the random variable in order for the expectation
to exist.13 Expectations of continuous random variables are virtually identical, only replacing the sum with an integral.

Definition 1.80 (Expectation of a Continuous Random Variable). The expectation of a continuous random variable, defined E[X] = ∫_{−∞}^{∞} x f(x) dx, exists if and only if ∫_{−∞}^{∞} |x| f(x) dx < ∞.
Theorem 1.81 (Expectation Existence for Bounded Random Variables). If |x| < c for all x ∈ R(X), then E[X] exists.
The expectation operator, E[·], is generally defined for arbitrary functions of a random variable, g(x). In practice, g(x) can take many forms: x, x², x^p for some p, exp(x), or something more complicated. Discrete and continuous expectations are closely related. Figure 1.8 shows a
standard normal along with a discrete approximation where each bin has a width of 0.20 and the
height is based on the pdf value at the mid-point of the bin. Treating the normal as a discrete dis-
tribution based on this approximation would provide reasonable approximations to the correct
(integral) expectations.
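A sketch of that discrete approximation, assuming bins of width 0.20 weighted by the pdf at each mid-point (the truncation to [−5, 5] is a choice, not part of the figure):

```python
import math

def norm_pdf(x):
    # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def approx_expectation(g, width=0.20, lo=-5.0, hi=5.0):
    # discrete approximation: sum g(midpoint) * pdf(midpoint) * width over bins
    n = round((hi - lo) / width)
    return sum(g(lo + (i + 0.5) * width) * norm_pdf(lo + (i + 0.5) * width) * width
               for i in range(n))

# The approximation recovers the first two moments of the standard normal
assert abs(approx_expectation(lambda x: x)) < 1e-10          # E[X] = 0
assert abs(approx_expectation(lambda x: x * x) - 1.0) < 1e-2  # E[X^2] = 1
print("discrete approximation close to the exact moments")
```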
[Figure: left, a standard normal pdf with a bar-chart discrete approximation; right, the cdf of U and the corresponding quantile function of X.]

Figure 1.8: The left panel shows a standard normal and a discrete approximation. Discrete approximations are useful for approximating integrals in expectations. The right panel shows the relationship between the quantile function and the cdf.
The inequalities in Jensen's inequality become strict if the functions are strictly convex (or concave), as long as X is not degenerate.14 Jensen's inequality is common in economic applications. For example, standard utility functions (U(·)) are assumed to be concave, which reflects the idea that marginal utility (U′(·)) is decreasing in consumption (or wealth). Applying Jensen's inequality shows that if consumption is random, then E[U(c)] < U(E[c]); in other words, the economic agent is worse off when facing uncertain consumption. Convex functions are also commonly encountered, for example in option pricing or in (production) cost functions. The expectation operator has a number of simple and useful properties:
14. A degenerate random variable has probability 1 on a single point, and so is not meaningfully random.
These rules are used throughout financial economics when studying random variables and func-
tions of random variables.
The expectation of a function of a multivariate random variable is similarly defined, only
integrating across all dimensions.
It is straightforward to see that the rule that the expectation of the sum is the sum of the expectations carries over to multivariate random variables, and so

E[ Σ_{i=1}^{n} gi(X1, . . . , Xn) ] = Σ_{i=1}^{n} E[gi(X1, . . . , Xn)].

Additionally, taking gi(X1, . . . , Xn) = Xi, we have E[Σ_{i=1}^{n} Xi] = Σ_{i=1}^{n} E[Xi].
1.4.2 Moments
Definition 1.85 (Noncentral Moment). The rth noncentral moment of a continuous random variable X is defined

μr′ ≡ E[X^r] = ∫_{−∞}^{∞} x^r f(x) dx   (1.25)

for r = 1, 2, . . ..
The first non-central moment is the average, or mean, of the random variable.
Definition 1.86 (Mean). The first non-central moment of a random variable X is called the mean of X and is denoted μ.
Central moments are similarly defined, only centered around the mean.
Definition 1.87 (Central Moment). The rth central moment of a random variable X is defined

μr ≡ E[(X − μ)^r] = ∫_{−∞}^{∞} (x − μ)^r f(x) dx   (1.26)

for r = 2, 3, . . ..
Aside from the first moment, references to moments refer to central moments. Moments
may not exist if a distribution is sufficiently heavy tailed. However, if the r th moment exists, then
any moment of lower order must also exist.
Theorem 1.88 (Lesser Moment Existence). If μr′ exists for some r, then μs′ exists for s ≤ r. Moreover, for any r, μr′ exists if and only if μr exists.
Central moments are used to describe a distribution since they are invariant to changes in the mean. The second central moment, μ2 = E[(X − μ)²], is known as the variance and is denoted σ².
If c is a constant, then V [c ] = 0.
If c is a constant, then V [c X ] = c 2 V [X ].
If a is a constant, then V [a + X ] = V [X ].
The variance of the sum is the sum of the variances plus twice all of the covariancesᵃ,

V[ Σ_{i=1}^{n} Xi ] = Σ_{i=1}^{n} V[Xi] + 2 Σ_{j=1}^{n} Σ_{k=j+1}^{n} Cov[Xj, Xk]

a. See Section 1.4.7 for more on covariances.
The variance is a measure of dispersion, although the square root of the variance, known as
the standard deviation, is typically more useful.15
Definition 1.90 (Standard Deviation). The square root of the variance is known as the standard deviation and is denoted σ or, equivalently, std(X).
The standard deviation is a more meaningful measure than the variance since its units are the same as those of the mean (and the random variable). For example, suppose X is the return on the stock market next year, and that the mean of X is 8% and the standard deviation is 20% (the variance is 0.04). The mean and standard deviation are both measured as percentage changes in the investment, and so can be directly compared, such as in the Sharpe ratio (Sharpe 1994). Applying the properties of the expectation operator and variance operator, it is possible to define a studentized (or standardized) random variable.
Definition 1.91 (Studentization). Let X be a random variable with mean μ and variance σ²; then

Z = (X − μ)/σ   (1.27)

is a studentized version of X and has mean 0 and variance 1.
The standard deviation also provides a bound on the probability in the tails of a distribution, as shown in Chebyshev's inequality.

Theorem 1.92 (Chebyshev's Inequality). Pr(|X − μ| ≥ kσ) ≤ 1/k² for k > 0.

Chebyshev's inequality is useful in a number of contexts. One of the most useful is in establishing consistency of an estimator which has a variance that tends to 0 as the sample size diverges.
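Chebyshev's inequality, Pr(|X − μ| ≥ kσ) ≤ 1/k², holds for any distribution with a finite variance; a quick simulated check using an exponential distribution (a hypothetical choice):

```python
import numpy as np

# Chebyshev's inequality: Pr(|X - mu| >= k * sigma) <= 1 / k^2
rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=1_000_000)   # exponential: mu = 2, sigma = 2
mu, sigma = 2.0, 2.0

for k in (1.5, 2.0, 3.0):
    tail_prob = float(np.mean(np.abs(x - mu) >= k * sigma))
    bound = 1.0 / k**2
    assert tail_prob <= bound
    print(f"k = {k}: tail probability {tail_prob:.4f} <= bound {bound:.4f}")
```

The bound is loose here (the exact tail probabilities are far smaller), which is typical: Chebyshev trades sharpness for complete generality.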
15. The standard deviation is occasionally confused with the standard error. While both are square roots of variances, the standard deviation refers to deviation in a random variable while the standard error is reserved for parameter estimators.
The third central moment does not have a specific name, although it is called the skewness
when standardized by the scaled variance.
Definition 1.93 (Skewness). The third central moment, standardized by the second central moment raised to the power 3/2,

skewness = μ3 / (μ2)^{3/2} = E[(X − E[X])³] / ( E[(X − E[X])²] )^{3/2} = E[Z³]   (1.28)

where Z is a studentized version of X.
The skewness is a general measure of asymmetry, and is 0 for a symmetric distribution (assuming the third moment exists). The normalized fourth central moment is known as the kurtosis.
Definition 1.94 (Kurtosis). The fourth central moment, standardized by the squared second central moment,

κ = μ4 / (μ2)² = E[(X − E[X])⁴] / ( E[(X − E[X])²] )² = E[Z⁴]   (1.29)

where Z is a studentized version of X.
Kurtosis measures the chance of observing a large (in absolute terms) value, and is often expressed as excess kurtosis.
Definition 1.95 (Excess Kurtosis). The kurtosis of a random variable minus the kurtosis of a
normal random variable, 3, is known as excess kurtosis.
Random variables with a positive excess kurtosis are often referred to as heavy tailed.
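Skewness and kurtosis, as the standardized third and fourth central moments, are straightforward to compute from simulated data; a sketch comparing a normal with a heavy-tailed, right-skewed χ²₁ (the sample sizes and tolerances are choices):

```python
import numpy as np

def skewness(x):
    # standardized third central moment, E[Z^3]
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3))

def kurtosis(x):
    # standardized fourth central moment, E[Z^4]
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**4))

rng = np.random.default_rng(7)
normal = rng.standard_normal(1_000_000)
chi2_1 = rng.chisquare(1, size=1_000_000)

assert abs(skewness(normal)) < 0.02           # symmetric: skewness ~ 0
assert abs(kurtosis(normal) - 3.0) < 0.05     # normal kurtosis is 3
assert skewness(chi2_1) > 2.0                 # chi^2(1) skewness is sqrt(8) ~ 2.83
assert kurtosis(chi2_1) > 10.0                # chi^2(1) kurtosis is 15 (excess 12)
print("moment checks passed")
```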
The median measures the point where 50% of the distribution lies on either side (it may not be unique), and is just a particular quantile. The median has a few advantages over the mean; in particular, it is less affected by outliers (e.g. the difference between mean and median income) and it always exists (the mean doesn't exist for very heavy-tailed distributions).
The interquartile range uses quartiles16 to provide an alternative measure of dispersion to the standard deviation.

16. Common m-tiles include terciles (3), quartiles (4), quintiles (5), deciles (10) and percentiles (100). In all cases the bin ends are [(i − 1)/m, i/m] where m is the number of bins and i = 1, 2, . . . , m.
Definition 1.97 (Interquartile Range). The value q.75 q.25 is known as the interquartile range.
The mode complements the mean and median as a measure of central tendency. A mode is
a local maximum of a density.
Definition 1.98 (Mode). Let X be a random variable with density function f(x). Any point c where f(x) attains a local maximum is known as a mode.
Definition 1.99 (Unimodal Distribution). Any random variable which has a single, unique mode
is called unimodal.
Note that modes in a multimodal distribution do not necessarily have to have equal proba-
bility.
Definition 1.100 (Multimodal Distribution). Any random variable which has more than one mode is called multimodal.
Figure 1.9 shows a number of distributions. The distributions depicted in the top panels are all unimodal. The distributions in the bottom panels are mixtures of normals, meaning that with probability p random variables come from one normal, and with probability 1 − p they are drawn from the other. Both mixtures of normals are multimodal.
The expectation of an n-dimensional random variable is defined element-by-element,

E[X] = E[ [X1, X2, . . . , Xn]′ ] = [E[X1], E[X2], . . . , E[Xn]]′.   (1.30)
Covariance is a measure which captures the tendency of two variables to move together in a
linear sense.
Definition 1.101 (Covariance). The covariance between two random variables X and Y is defined

Cov[X, Y] = σXY = E[(X − E[X])(Y − E[Y])].   (1.31)

Covariance can be alternatively defined using the joint product moment and the product of the means.
[Figure: upper left, a standard normal density; upper right, χ² densities with ν = 1, 3 and 5; lower panels, two mixture-of-normals densities.]

Figure 1.9: These four figures show two unimodal (upper panels) and two multimodal (lower panels) distributions. The upper left is a standard normal density. The upper right shows three χ²ν densities for ν = 1, 3 and 5. The lower panels contain mixture distributions of two normals; the left is a 50-50 mixture of N(−1, 1) and N(1, 1) and the right is a 30-70 mixture of N(−2, 1) and N(1, 1).
Theorem 1.102 (Alternative Covariance). The covariance between two random variables X and Y can be equivalently defined

σXY = E[XY] − E[X]E[Y].   (1.32)
Rearranging the covariance expression shows that zero covariance is sufficient to ensure that the expectation of a product is the product of the expectations.

Theorem 1.103 (Zero Covariance and Expectation of Product). If X and Y have σXY = 0, then E[XY] = E[X]E[Y].
The previous result follows directly from the definition of covariance, since σXY = E[XY] − E[X]E[Y]. In financial economics, this result is often applied to products of random variables so that the mean of the product can be directly determined by knowledge of the mean of each variable and the covariance between the two. For example, when studying consumption-based asset
pricing, it is common to encounter terms involving the expected value of consumption growth
times the pricing kernel (or stochastic discount factor) in many cases the full joint distribution
of the two is intractable although the mean and covariance of the two random variables can be
determined.
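The identity in Theorem 1.102 is easy to confirm by simulation; a sketch with hypothetical parameters E[X] = 1, E[Y] = −2 and σXY = 0.5:

```python
import numpy as np

# Theorem 1.102: sigma_XY = E[XY] - E[X]E[Y], so
# E[XY] = E[X]E[Y] + sigma_XY without knowing the full joint distribution.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
draws = rng.multivariate_normal([1.0, -2.0], cov, size=2_000_000)
x, y = draws[:, 0], draws[:, 1]

implied = 1.0 * (-2.0) + 0.5          # E[X]E[Y] + sigma_XY = -1.5
sample = float(np.mean(x * y))        # direct sample estimate of E[XY]
assert abs(sample - implied) < 0.02
print(sample, implied)
```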
The Cauchy-Schwarz inequality is a version of the triangle inequality and states that the squared expectation of a product is less than or equal to the product of the expectations of the squares, E[XY]² ≤ E[X²]E[Y²].
Example 1.105. When X is an n-dimensional random variable, it is useful to assemble the vari-
ances and covariances into a covariance matrix.
Definition 1.106 (Covariance Matrix). The covariance matrix of an n-dimensional random variable X is defined

Cov[X] = Σ = E[(X − E[X])(X − E[X])′] = [σ1², σ12, . . . , σ1n; σ12, σ2², . . . , σ2n; . . . ; σ1n, σ2n, . . . , σn²]

where the ith diagonal element contains the variance of Xi (σi²) and the element in position (i, j) contains the covariance between Xi and Xj (σij).
When X is composed of two sub-vectors, a block form of the covariance matrix is often con-
venient.
Σ = [Σ11, Σ12; Σ12′, Σ22]   (1.33)
In many cases, it is useful to avoid specifying a specific value for X2, in which case E[X1 | X2] will be used. Note that E[X1 | X2] will typically be a function of the random variable X2.
Example 1.113. Suppose X is a bivariate normal distribution with components X1 and X2, μ = [μ1 μ2]′ and

Σ = [σ1², σ12; σ12, σ2²];

then E[X1 | X2 = x2] = μ1 + (σ12/σ2²)(x2 − μ2). This follows from the conditional density of a bivariate normal random variable.
The law of iterated expectations uses conditional expectations to show that the condition-
ing does not affect the final result of taking expectations in other words, the order of taking
expectations does not matter.
Theorem 1.114 (Bivariate Law of Iterated Expectations). Let X be a continuous bivariate random variable comprised of X1 and X2. Then E[E[g(X1) | X2]] = E[g(X1)].
The law of iterated expectations follows from basic properties of an integral since the order
of integration does not matter as long as all integrals are taken.
Example 1.115. Suppose X is a bivariate normal distribution with components X1 and X2, μ = [μ1 μ2]′ and

Σ = [σ1², σ12; σ12, σ2²];

then E[X1] = μ1 and

E[E[X1 | X2]] = E[μ1 + (σ12/σ2²)(X2 − μ2)]
             = μ1 + (σ12/σ2²)(E[X2] − μ2)
             = μ1 + (σ12/σ2²)(μ2 − μ2)
             = μ1.
When using conditional expectations, any random variable conditioned on behaves as if it were non-random (in the conditional expectation), and so E[E[X1X2 | X2]] = E[X2 E[X1 | X2]]. This is a very useful tool when combined with the law of iterated expectations when E[X1 | X2] is a known function of X2.
Example 1.116. Suppose X is a bivariate normal distribution with components X1 and X2, μ = 0 and

Σ = [σ1², σ12; σ12, σ2²];

then

E[X1X2] = E[E[X1X2 | X2]]
        = E[X2 E[X1 | X2]]
        = E[X2 (σ12/σ2²) X2]
        = (σ12/σ2²) E[X2²]
        = (σ12/σ2²) σ2²
        = σ12.
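The chain of equalities in Example 1.116 can be confirmed by simulation; a sketch with hypothetical values σ1² = 1, σ2² = 2 and σ12 = 0.6:

```python
import numpy as np

# Example 1.116: for a mean-zero bivariate normal, iterated expectations give
# E[X1 X2] = E[X2 E[X1 | X2]] = (s12 / s2sq) E[X2^2] = s12.
rng = np.random.default_rng(3)
s12, s2sq = 0.6, 2.0
cov = np.array([[1.0, s12], [s12, s2sq]])
draws = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000)
x1, x2 = draws[:, 0], draws[:, 1]

direct = float(np.mean(x1 * x2))                  # E[X1 X2] directly
iterated = (s12 / s2sq) * float(np.mean(x2**2))   # E[X2 E[X1|X2]]
assert abs(direct - s12) < 0.01
assert abs(iterated - s12) < 0.01
print(direct, iterated)
```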
One particularly useful application of conditional expectations occurs when the conditional expectation is known and constant, so that E[X1 | X2] = c.

Example 1.117. Suppose X is a bivariate random variable composed of X1 and X2 and that E[X1 | X2] = c. Then E[X1] = c since

E[X1] = E[E[X1 | X2]] = E[c] = c.
Conditional expectations can be taken for general n-dimensional random variables, and the
law of iterated expectations holds as well.
E[g(X1) | X2 = x2] = ∫ · · · ∫ g(x1, . . . , xj) f(x1, . . . , xj | x2) dxj . . . dx1   (1.37)
The law of iterated expectations also holds for arbitrary partitions.
Theorem 1.119 (Law of Iterated Expectations). Let X be an n-dimensional random variable and partition the first j (1 ≤ j < n) elements of X into X1, and the remainder into X2, so that X = [X1′ X2′]′. Then E[E[g(X1) | X2]] = E[g(X1)]. The law of iterated expectations is also known as the law of total expectations.
Full multivariate conditional expectations are extremely common in time series. For exam-
ple, when using daily data, there are over 30,000 observations of the Dow Jones Industrial Average
available to model. Attempting to model the full joint distribution would be a formidable task.
On the other hand, modeling the conditional expectation (or conditional mean) of the final ob-
servation, conditioning on those observations in the past, is far simpler.
Example 1.120. Suppose {Xt} is a sequence of random variables where Xt comes after Xt−j for j ≥ 1. The conditional expectation of Xt given its past is

E[Xt | Xt−1, Xt−2, . . .].
This leads naturally to the definition of a martingale, which is an important concept in financial economics related to efficient markets.

Definition 1.121 (Martingale). If E[Xt+j | Xt−1, Xt−2, . . .] = Xt−1 for all j ≥ 0 and E[|Xt|] < ∞, both holding for all t, then {Xt} is a martingale.
The two definitions of conditional variance are identical to those of the (unconditional) variance, where the (unconditional) expectations have been replaced by conditional expectations.
Conditioning can be used to compute higher-order moments as well.
Definition 1.124 (Conditional Moment). The rth central moment of a random variable X conditional on another random variable Y is defined

μr ≡ E[ (X − E[X | Y])^r | Y ]   (1.39)

for r = 2, 3, . . ..
Combining the conditional expectation and the conditional variance leads to the law of total
variance.
Theorem 1.125. The variance of a random variable X can be decomposed into the variance of the conditional expectation plus the expectation of the conditional variance,

V[X] = V[E[X | Y]] + E[V[X | Y]].   (1.40)
The law of total variance shows that the total variance of a variable can be decomposed into
the variability of the conditional mean plus the average of the conditional variance. This is a
useful decomposition for time-series.
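The decomposition can be illustrated with a simple two-stage simulation, assuming (hypothetically) Y ∼ N(0, 1) and X | Y ∼ N(2Y, 3), so that V[E[X|Y]] = 4, E[V[X|Y]] = 3 and V[X] = 7:

```python
import numpy as np

# Law of total variance: V[X] = V[E[X|Y]] + E[V[X|Y]]
rng = np.random.default_rng(11)
n = 2_000_000
y = rng.standard_normal(n)                           # Y ~ N(0, 1)
x = 2.0 * y + np.sqrt(3.0) * rng.standard_normal(n)  # X | Y ~ N(2Y, 3)

var_cond_mean = float(np.var(2.0 * y))   # V[E[X|Y]] = V[2Y] = 4
mean_cond_var = 3.0                      # E[V[X|Y]] = 3 (constant here)
total = float(np.var(x))                 # should be close to 4 + 3 = 7

assert abs(total - (var_cond_mean + mean_cond_var)) < 0.05
assert abs(total - 7.0) < 0.05
print(total)
```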
Independence can also be defined conditionally.
Definition 1.126 (Conditional Independence). Two random variables X1 and X2 are conditionally independent, conditional on Y, if

f(x1, x2 | y) = f1(x1 | y) f2(x2 | y).
Note that random variables that are conditionally independent are not necessarily uncondi-
tionally independent.
Example 1.127. Suppose X is a trivariate normal random variable with mean 0 and covariance

Σ = [σ1², 0, 0; 0, σ2², 0; 0, 0, σ3²]

and define Y1 = X1 + X3 and Y2 = X2 + X3. Then Y1 and Y2 are correlated bivariate normal with mean 0 and covariance

ΣY = [σ1² + σ3², σ3²; σ3², σ2² + σ3²],

but the joint distribution of Y1 and Y2 given X3 is bivariate normal with mean 0 and covariance

ΣY|X3 = [σ1², 0; 0, σ2²].
The variance of a weighted sum is the weighted sum of the variances plus twice all of the weighted covariances.
Theorem 1.129. Let Y = Σ_{i=1}^{n} ci Xi where the ci are constants. Then

V[Y] = Σ_{i=1}^{n} ci² V[Xi] + 2 Σ_{j=1}^{n} Σ_{k=j+1}^{n} cj ck Cov[Xj, Xk]   (1.41)

or equivalently

σY² = Σ_{i=1}^{n} ci² σ²Xi + 2 Σ_{j=1}^{n} Σ_{k=j+1}^{n} cj ck σXjXk.
Theorem 1.130. Let c be an n-by-1 vector and let X be an n-dimensional random variable with covariance Σ. Define Y = c′X. The variance of Y is σY² = c′Cov[X]c = c′Σc.

Theorem 1.131. Let C be an n-by-m matrix and let X be an n-dimensional random variable with mean μX and covariance ΣX. Define Y = C′X. The expected value of Y is E[Y] = μY = C′E[X] = C′μX and the covariance of Y is ΣY = C′Cov[X]C = C′ΣX C.
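Theorem 1.130 can be checked directly, with a hypothetical weight vector and covariance matrix:

```python
import numpy as np

# Theorem 1.130: for Y = c'X, sigma_Y^2 = c' Sigma c
c = np.array([0.3, 0.7])
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

analytic = float(c @ Sigma @ c)   # 0.09*1 + 2*0.3*0.7*0.4 + 0.49*2 = 1.238

rng = np.random.default_rng(5)
draws = rng.multivariate_normal([0.0, 0.0], Sigma, size=2_000_000)
simulated = float(np.var(draws @ c))
assert abs(simulated - analytic) < 0.01
print(analytic, simulated)
```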
The final result for vectors relates quadratic forms of normals (inner products) to χ²-distributed random variables.

Theorem 1.133 (Quadratic Forms of Normals). Let X be an n-dimensional normal random variable with mean 0 and identity covariance In. Then X′X = Σ_{i=1}^{n} Xi² ∼ χ²n.

Combining this result with studentization, when X is a general n-dimensional normal random variable with mean μ and covariance Σ,

(Σ^{−1/2}(X − μ))′(Σ^{−1/2}(X − μ)) = (X − μ)′Σ^{−1}(X − μ) ∼ χ²n.
Expectations of functions of continuous random variables are integrals against the underlying
pdf. In some cases these integrals are analytically tractable, although in many situations integrals
cannot be analytically computed and so numerical techniques are needed to compute expected
values and moments.
Monte Carlo is one method to approximate an integral. Monte Carlo utilizes simulated draws
from the underlying distribution and averaging to approximate integrals.
Definition 1.134 (Monte Carlo Integration). Suppose X \sim F(\theta) and that it is possible to simulate a series \{x_i\} from F(\theta). The Monte Carlo expectation of a function g(x) is defined as

\widehat{E}[g(X)] = m^{-1} \sum_{i=1}^{m} g(x_i).

Moreover, as long as E[|g(x)|] < \infty, \lim_{m \to \infty} m^{-1} \sum_{i=1}^{m} g(x_i) = E[g(x)].
The intuition behind this result follows from the properties of \{x_i\}. Since these are i.i.d. draws from F(\theta), they will, on average, fall in any interval B \subseteq R(X) in proportion to the probability Pr(X \in B). In essence, the simulated values coarsely approximate the discrete approximation shown in figure 1.8.
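A minimal sketch of Monte Carlo integration, using a case with a known answer so the approximation can be checked: E[X^2] = 1 for X \sim N(0, 1). The function, sampler and sample size are illustrative choices, not from the text.

```python
import numpy as np

# Monte Carlo sketch of Definition 1.134: approximate E[g(X)] by averaging
# g over simulated draws from the distribution of X.
rng = np.random.default_rng(42)

def mc_expectation(g, sampler, m):
    """Average g over m simulated draws from the target distribution."""
    return np.mean(g(sampler(m)))

# E[X^2] for X ~ N(0,1) is exactly 1
est = mc_expectation(lambda x: x ** 2, lambda m: rng.standard_normal(m), 1_000_000)
print(est)
```

The Monte Carlo error shrinks at rate m^{-1/2}, so with one million draws the estimate is accurate to roughly two decimal places here.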
While Monte Carlo integration is a general technique, there are some important limitations. First, if the function g(x) takes large values in regions where Pr(X \in B) is small, it may require a very large number of draws to accurately approximate E[g(x)] since, by construction, there are unlikely to be many points in B. In practice the behavior of h(x) = g(x)f(x) plays an important role in determining the appropriate sample size.¹⁸ Second, while Monte Carlo integration is technically valid for random variables with any number of dimensions, in practice it is usually only reliable when the dimension is small (typically 3 or fewer), especially when the range is unbounded (R(X) \subseteq \mathbb{R}^n). When the dimension of X is large, many simulated draws are needed to visit the corners of the (joint) pdf, and if 1,000 draws are sufficient for a unidimensional problem, 1,000ⁿ may be needed to achieve the same accuracy when X has n dimensions.
Alternatively the function to be integrated can be approximated using a polygon with an easy-
to-compute area, such as the rectangles approximating the normal pdf shown in figure 1.8. The
quality of the approximation will depend on the resolution of the grid used. Suppose u and l are
the upper and lower bounds of the integral, respectively, and that the region can be split into m
intervals l = b_0 < b_1 < \ldots < b_{m-1} < b_m = u. Then the integral of a function h(\cdot) is

\int_l^u h(x)\,dx = \sum_{i=1}^{m} \int_{b_{i-1}}^{b_i} h(x)\,dx.
¹⁸ Monte Carlo integrals can also be seen as estimators, and in many cases standard inference can be used to determine the accuracy of the integral. See Chapter 2 for more details on inference and constructing confidence intervals.
In practice, l and u may be infinite, in which case a cut-off point is required. In general, the cut-offs should be chosen so that the vast majority of the probability lies between l and u (\int_l^u f(x)\,dx \approx 1).
This decomposition is combined with a rule for approximating the area under h between b_{i-1} and b_i. The simplest is the rectangle method, which uses a rectangle with height equal to the value of the function at the midpoint.
Definition 1.135 (Rectangle Method). The rectangle rule approximates the area under the curve
with a rectangle and is given by
\int_l^u h(x)\,dx \approx h\left(\frac{u+l}{2}\right)(u-l).
The rectangle rule would be exact if the function were piecewise flat. The trapezoid rule improves the approximation by replacing the function value at the midpoint with the average of the function values at the endpoints, and would be exact for any piecewise linear function (including piecewise flat functions).
Definition 1.136 (Trapezoid Method). The trapezoid rule approximates the area under the curve
with a trapezoid and is given by
\int_l^u h(x)\,dx \approx \frac{h(u)+h(l)}{2}(u-l).
The final method is known as Simpson's rule, which is based on a quadratic approximation to the underlying function. It is exact when the underlying function is piecewise linear or quadratic.
Definition 1.137 (Simpson's Rule). Simpson's rule uses an approximation that would be exact if the underlying function were quadratic, and is given by

\int_l^u h(x)\,dx \approx \frac{u-l}{6}\left(h(u) + 4 h\left(\frac{u+l}{2}\right) + h(l)\right).
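The three rules can be applied on every subinterval [b_{i-1}, b_i] of the decomposition above, producing "composite" versions. The sketch below assumes a uniform grid; the function names are illustrative.

```python
import numpy as np

# Composite versions of the rectangle, trapezoid and Simpson rules from
# Definitions 1.135-1.137, applying each rule on m subintervals of [l, u].

def rectangle(h, l, u, m):
    edges = np.linspace(l, u, m + 1)
    mids = (edges[:-1] + edges[1:]) / 2        # midpoint of each subinterval
    return np.sum(h(mids) * np.diff(edges))

def trapezoid(h, l, u, m):
    edges = np.linspace(l, u, m + 1)
    vals = h(edges)
    return np.sum((vals[:-1] + vals[1:]) / 2 * np.diff(edges))

def simpson(h, l, u, m):
    edges = np.linspace(l, u, m + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    w = np.diff(edges)
    return np.sum(w / 6 * (h(edges[:-1]) + 4 * h(mids) + h(edges[1:])))

# Simpson's rule is exact for quadratics: the integral of x^2 on [0, 1] is 1/3
print(simpson(lambda x: x ** 2, 0.0, 1.0, 4))
```

Even with only 4 subintervals, Simpson's rule recovers \int_0^1 x^2\,dx = 1/3 exactly, while the rectangle and trapezoid rules converge at rate m^{-2}.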
Example 1.138. Consider the problem of computing the expected payoff of an option. The pay-
off of a call option is given by
c = \max(s_1 - k, 0)
where k is the strike price, s1 is the stock price at expiration and s0 is the current stock price.
Suppose returns are normally distributed with mean = .08 and standard deviation = .20. In
this problem, g(r) = (s_0 \exp(r) - k)\,I_{[s_0 \exp(r) > k]}, where I_{[\cdot]} is a binary indicator function that takes the value 1 when its argument is true, and

f(r) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(r-\mu)^2}{2\sigma^2}\right).
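The values of s_0 and k are not shown in this excerpt, so the sketch below assumes s_0 = k = 50, a choice consistent with the value of 7.33 reported in Table 1.1. It approximates E[g(r)] = \int g(r) f(r)\,dr with a composite Simpson's rule over a truncated range.

```python
import numpy as np

# Example 1.138 numerically. ASSUMPTION: s0 = k = 50 (not given in this
# excerpt; chosen to match the reported expected payoff of 7.33).
mu, sigma = 0.08, 0.20
s0 = k = 50.0

def h(r):
    payoff = np.maximum(s0 * np.exp(r) - k, 0.0)                    # g(r)
    pdf = np.exp(-(r - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    return payoff * pdf                                             # g(r) f(r)

# Composite Simpson's rule on [mu - 10 sigma, mu + 10 sigma] with 1000 bins
l, u = mu - 10 * sigma, mu + 10 * sigma
edges = np.linspace(l, u, 1001)
mids = (edges[:-1] + edges[1:]) / 2
w = np.diff(edges)
expected_payoff = np.sum(w / 6 * (h(edges[:-1]) + 4 * h(mids) + h(edges[1:])))
print(round(expected_payoff, 2))  # 7.33
```

The truncation at 10 standard deviations captures essentially all of the probability mass, matching the bottom-right entries of Table 1.1.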
Rectangle Method
Bins      3      4      6      10
10      7.19   7.43   7.58   8.50
20      7.13   7.35   7.39   7.50
50      7.12   7.33   7.34   7.36
1000    7.11   7.32   7.33   7.33

Trapezoid Method
Bins      3      4      6      10
10      6.96   7.11   6.86   5.53
20      7.08   7.27   7.22   7.01
50      7.11   7.31   7.31   7.28
1000    7.11   7.32   7.33   7.33

Simpson's Rule
Bins      3      4      6      10
10      7.11   7.32   7.34   7.51
20      7.11   7.32   7.33   7.34
50      7.11   7.32   7.33   7.33
1000    7.11   7.32   7.33   7.33

Monte Carlo
Draws (m)   100    1000
Mean        7.34   7.33
Std. Dev.   0.88   0.28

Table 1.1: Computed values for the expected payout for an option, where the correct value is 7.33. The top three panels use approximations to the function which have simple-to-compute areas. The bottom panel shows the average and standard deviation from a Monte Carlo integration where the number of points varies and 10,000 simulations were used.
Exercises
Shorter Questions
Exercise 1.1. Show that the two forms of the covariance,
V[\bar{X}] = V\left[n^{-1}\sum_{i=1}^{n} X_i\right] = n^{-1}\sigma^2

where \sigma^2 is V[X_1].
Exercise 1.9. Prove that Corr [a + b X , c + d Y ] = Corr [X , Y ].
Exercise 1.10. Suppose \{X_i\} is a sequence of random variables where, for all i, V[X_i] = \sigma^2, \mathrm{Cov}[X_i, X_{i-1}] = \gamma and \mathrm{Cov}[X_i, X_{i-j}] = 0 for j > 1. What is V[\bar{X}]?
Exercise 1.12. Suppose that E[X|Y] = Y^2 where Y is normally distributed with mean \mu and variance \sigma^2. What is E[a + bX]?
Exercise 1.13. Suppose E[X|Y = y] = a + by and V[X|Y = y] = c + dy^2 where Y is normally

\Sigma = \begin{bmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{bmatrix}.
Longer Questions
Exercise 1.15. Sixty percent (60%) of all traders hired by a large financial firm are rated as per-
forming satisfactorily or better in their first year review. Of these, 90% earned a first in financial
econometrics. Of the traders who were rated as unsatisfactory, only 20% earned a first in finan-
cial econometrics.
i. What is the probability that a trader is rated as satisfactory or better given they received a
first in financial econometrics?
ii. What is the probability that a trader is rated as unsatisfactory given they received a first in
financial econometrics?
iii. Is financial econometrics a useful indicator of trader performance? Why or why not?
Exercise 1.16. Large financial firms use automated screening to detect rogue trades, that is, those that exceed risk limits. One of your colleagues has introduced a new statistical test using the trading data that, given that a trader has exceeded her risk limit, detects this with probability 98%. It flags false positives (non-rogue trades that are flagged as rogue) only 1% of the time.
i. Assuming 99% of trades are legitimate, what is the probability that a detected trade is rogue?
Explain the intuition behind this result.
iii. How low would the false positive rate have to be to have a 98% chance that a detected trade
was actually rogue?
Exercise 1.17. Your corporate finance professor uses a few jokes to add levity to his lectures. Each
week he tells 3 different jokes. However, he is also very busy, and so forgets week to week which
jokes were used.
i. Assuming he has 12 jokes, what is the probability of 1 repeat across 2 consecutive weeks?
ii. What is the probability of hearing 2 of the same jokes in consecutive weeks?
iii. What is the probability that all 3 jokes are the same?
iv. Assuming the term is 8 weeks long, and that your professor has 96 jokes, what is the probability that there is no repetition across the term? Note: he remembers the jokes he tells within a particular lecture, and only forgets across lectures.
v. How many jokes would your professor need to know to have a 99% chance of not repeating
any in the term?
1.4 Expectations and Moments 57
Exercise 1.18. A hedge fund company manages three distinct funds. In any given month, the
probability that the return is positive is shown in the following table:
Pr(r_{1,t} > 0) = .55        Pr(r_{1,t} > 0 \cup r_{2,t} > 0) = .82
Pr(r_{2,t} > 0) = .60        Pr(r_{1,t} > 0 \cup r_{3,t} > 0) = .7525
Pr(r_{3,t} > 0) = .45        Pr(r_{2,t} > 0 \cup r_{3,t} > 0) = .78
Pr(r_{2,t} > 0 \cap r_{3,t} > 0 \,|\, r_{1,t} > 0) = .20
iii. What is the probability that funds 1 and 2 have positive returns, given that fund 3 has a
positive return?
iv. What is the probability that at least one fund will have a positive return in any given month?
Exercise 1.19. Suppose the probabilities of three events, A, B and C are as depicted in the fol-
lowing diagram:
[Venn diagram of events A, B and C with region probabilities .10, .05, .05 and .175]
iii. What is Pr(A \cap B)?
iv. What is Pr(A \cup B \,|\, C)?
v. What is Pr(C \,|\, A \cup B)?
vi. What is Pr(C \,|\, A \cap B)?
Exercise 1.20. At a small high-frequency hedge fund, two competing algorithms produce trades. Algorithm \alpha produces 80 trades per second and 5% lose money. Algorithm \beta produces 20 trades per second but only 1% lose money. Given the last trade lost money, what is the probability it was produced by algorithm \alpha?
58 Probability, Random Variables and Expectations
Exercise 1.24. A firm producing widgets has a production function q (L ) = L 0.5 where L is the
amount of labor. Sales prices fluctuate randomly and can be $10 (20%), $20 (50%) or $30 (30%).
Labor prices also vary and can be $1 (40%), $2 (30%) or $3 (30%). The firm always maximizes profits
after seeing both sales prices and labor prices.
ii. What is the probability that the firm makes at least $100?
iii. Given the firm makes a profit of $100, what is the probability that the profit is over $200?
Exercise 1.25. A fund manager tells you that her fund has non-linear returns as a function of the market and that her return is r_{i,t} = 0.02 + 2 r_{m,t} - 0.5 r_{m,t}^2 where r_{i,t} is the return on the fund and r_{m,t} is the return on the market.
i. She tells you her expectation of the market return this year is 10%, and that her fund will
have an expected return of 22%. Can this be?
1.4 Expectations and Moments 59
ii. At what variance would the expected return on the fund be negative?
Exercise 1.26. For the following densities, find the mean (if it exists), variance (if it exists), median
and mode, and indicate whether the density is symmetric.
i. f(x) = 3x^2 for x \in [0, 1]
iv. f(x) = \binom{4}{x} .2^x .8^{4-x} for x \in \{0, 1, 2, 3, 4\}
Exercise 1.27. The daily price of a stock has an average value of 2. Then Pr(X > 10) < .2 where X denotes the price of the stock. True or false?
Exercise 1.28. An investor can invest in stocks or bonds which have expected returns and co-
variances as
\mu = \begin{bmatrix} .10 \\ .03 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} .04 & .003 \\ .003 & .0009 \end{bmatrix}
where stocks are the first component.
i. Suppose the investor has 1,000 to invest, and splits the investment evenly. What is the expected return, standard deviation, variance and Sharpe Ratio (\mu/\sigma) for the investment?

ii. Now suppose the investor seeks to maximize her expected utility, where her utility is defined in terms of her portfolio return, U(r) = E[r] - .01 V[r]. How much should she invest in each asset?
Exercise 1.29. Suppose f(x) = (1 - p)^x p for x \in \{0, 1, \ldots\} and p \in (0, 1]. Show that a random variable from the distribution is memoryless in the sense that Pr(X \geq s + r \,|\, X \geq r) = Pr(X \geq s). In other words, the probability of surviving s or more periods is the same whether starting at 0 or after having survived r periods.
Exercise 1.30. Your Economics professor offers to play a game with you. You pay 1,000 to play and your Economics professor will flip a fair coin and pay you 2^x where x is the number of tries required for the coin to show heads.
Exercise 1.31. Consider the roll of a fair pair of dice where a roll of a 7 or 11 pays 2x and anything else pays -x, where x is the amount bet. Is this game fair?
60 Probability, Random Variables and Expectations
Exercise 1.32. Suppose the joint density function of X and Y is given by f(x, y) = \frac{1}{2} x \exp(-xy) where x \in [3, 5] and y \in (0, \infty).
Exercise 1.33. Suppose a fund manager has $10,000 of yours under management and tells you that the expected value of your portfolio in two years' time is $30,000 and that with probability 75% your investment will be worth at least $40,000 in two years' time.

ii. Next, suppose she tells you that the standard deviation of your portfolio value is 2,000. Assuming this is true (as is the expected value), what is the most you can say about the probability your portfolio value falls between $20,000 and $40,000 in two years' time?
Exercise 1.34. Suppose the joint probability density function of two random variables is given by f(x, y) = \frac{2}{5}(3x + 2y) where x \in [0, 1] and y \in [0, 1].

ii. What is E[X|Y = y]? Are X and Y independent? (Hint: What must the form of E[X|Y

iii. Show (numerically) that the law of total variance holds for X^2.
Exercise 1.37. Suppose Y \sim N(5, 36) and X \sim N(4, 25) where X and Y are independent.