1 Phok Ponna
Probability and Statistics
Chapter 1
A Bernoulli random variable X with parameter p has probability mass function
\[
f(x) = p^x (1-p)^{1-x}, \qquad x = 0, 1.
\]
Theorem 1.1.1. If X is a Bernoulli random variable with parameter p, then the mean, variance, and moment generating function are given, respectively, by
\[
\mu_X = p, \qquad \sigma_X^2 = p(1-p), \qquad M_X(t) = (1-p) + p e^t.
\]
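As a quick numerical check of Theorem 1.1.1, the following sketch computes the mean, variance, and MGF directly from the Bernoulli mass function and compares them with the closed forms (p = 0.3 and t = 0.5 are arbitrary illustrative choices, not values from the text):

```python
import math

# Arbitrary illustrative parameter values (not from the text).
p = 0.3
t = 0.5

# Bernoulli pmf: f(x) = p^x (1 - p)^(1 - x) for x = 0, 1.
pmf = {0: 1 - p, 1: p}

mean = sum(x * f for x, f in pmf.items())
var = sum((x - mean) ** 2 * f for x, f in pmf.items())
mgf = sum(math.exp(t * x) * f for x, f in pmf.items())

# Compare with the closed forms of Theorem 1.1.1.
assert abs(mean - p) < 1e-12
assert abs(var - p * (1 - p)) < 1e-12
assert abs(mgf - ((1 - p) + p * math.exp(t))) < 1e-12
```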
A binomial random variable X with parameters p and n has probability mass function
\[
f(x) = \binom{n}{x} p^x (1-p)^{n-x}, \qquad x = 0, 1, \ldots, n.
\]
We will denote a binomial random variable with parameters p and n as X ∼ Bin(n, p).
Theorem 1.2.1. If X is a binomial random variable with parameters p and n, then the mean, variance, and moment generating function are given, respectively, by
\[
\mu_X = np, \qquad \sigma_X^2 = np(1-p), \qquad M_X(t) = \left[(1-p) + p e^t\right]^n.
\]
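Theorem 1.2.1 can be checked numerically by summing over the binomial mass function (n = 10, p = 0.4, and t = 0.3 are arbitrary illustrative choices):

```python
import math

# Arbitrary illustrative parameter values (not from the text).
n, p = 10, 0.4
t = 0.3

# Binomial pmf: f(x) = C(n, x) p^x (1 - p)^(n - x), x = 0, ..., n.
pmf = [math.comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)]

mean = sum(x * f for x, f in enumerate(pmf))
var = sum((x - mean) ** 2 * f for x, f in enumerate(pmf))
mgf = sum(math.exp(t * x) * f for x, f in enumerate(pmf))

# Compare with the closed forms of Theorem 1.2.1.
assert abs(mean - n * p) < 1e-9
assert abs(var - n * p * (1 - p)) < 1e-9
assert abs(mgf - ((1 - p) + p * math.exp(t)) ** n) < 1e-9
```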
A geometric random variable X with parameter p has probability mass function
\[
f(x) = (1-p)^{x-1} p, \qquad x = 1, 2, 3, \ldots
\]
Theorem 1.3.1. If X is a geometric random variable with parameter p, then the mean, variance, and moment generating function are given, respectively, by
\[
\mu_X = \frac{1}{p}, \qquad \sigma_X^2 = \frac{1-p}{p^2}, \qquad M_X(t) = \frac{p e^t}{1 - (1-p)e^t} \quad \text{if } t < -\ln(1-p).
\]
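Because the geometric series converges quickly, Theorem 1.3.1 can be checked by truncating the sum over the mass function (p = 0.5 and t = 0.2 are arbitrary choices; note t < −ln(1 − p) ≈ 0.693):

```python
import math

# Arbitrary illustrative values; the series is truncated where its
# terms become negligible.
p = 0.5
t = 0.2                      # must satisfy t < -ln(1 - p) ≈ 0.693

xs = range(1, 200)
pmf = [(1 - p) ** (x - 1) * p for x in xs]

mean = sum(x * f for x, f in zip(xs, pmf))
var = sum((x - mean) ** 2 * f for x, f in zip(xs, pmf))
mgf = sum(math.exp(t * x) * f for x, f in zip(xs, pmf))

# Compare with the closed forms of Theorem 1.3.1.
assert abs(mean - 1 / p) < 1e-9
assert abs(var - (1 - p) / p ** 2) < 1e-9
assert abs(mgf - p * math.exp(t) / (1 - (1 - p) * math.exp(t))) < 1e-9
```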
If X is a negative binomial random variable with parameters p and r (counting failures before the r-th success), then
\[
\mu_X = \frac{r(1-p)}{p}, \qquad \sigma_X^2 = \frac{r(1-p)}{p^2}, \qquad M_X(t) = \frac{p^r}{\left[1 - (1-p)e^t\right]^r}.
\]
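The mean and variance above can be checked by truncating the negative binomial series; the failure-count parametrization (support x = 0, 1, 2, ...) is an assumption here, chosen because it is the one consistent with the mean r(1 − p)/p, and r = 3, p = 0.4 are arbitrary values:

```python
import math

# Failure-count parametrization (support x = 0, 1, 2, ...), the reading
# consistent with the mean r(1 - p)/p; r and p are arbitrary choices.
r, p = 3, 0.4

xs = range(0, 400)
pmf = [math.comb(r + x - 1, x) * p ** r * (1 - p) ** x for x in xs]

mean = sum(x * f for x, f in zip(xs, pmf))
var = sum((x - mean) ** 2 * f for x, f in zip(xs, pmf))

# Compare with the stated closed forms.
assert abs(mean - r * (1 - p) / p) < 1e-9
assert abs(var - r * (1 - p) / p ** 2) < 1e-9
```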
Chapter 2
A random variable X has the uniform distribution on the interval [a, b] if its probability density function is
\[
f(x) = \begin{cases} \dfrac{1}{b-a} & \text{if } a \le x \le b \\[4pt] 0 & \text{otherwise,} \end{cases}
\]
where a and b are constants. We denote a random variable X with the uniform distribution on the interval [a, b] as X ∼ U(a, b).
Theorem 2.1.1. If X ∼ U (a, b), then
\[
E(X) = \frac{a+b}{2}, \qquad V(X) = \frac{(b-a)^2}{12}, \qquad M_X(t) = \begin{cases} 1 & \text{if } t = 0 \\[4pt] \dfrac{e^{tb} - e^{ta}}{t(b-a)} & \text{if } t \neq 0. \end{cases}
\]
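A midpoint-rule integration against the uniform density reproduces the moments and the MGF of Theorem 2.1.1; the interval [2, 5] and t = 0.7 are arbitrary illustrative choices:

```python
import math

# Midpoint-rule check of Theorem 2.1.1; a, b, t are arbitrary choices.
a, b, t = 2.0, 5.0, 0.7
N = 100_000
w = (b - a) / N
xs = [a + (b - a) * (k + 0.5) / N for k in range(N)]

# f(x) = 1/(b - a) on [a, b].
mean = sum(x / (b - a) for x in xs) * w
var = sum((x - mean) ** 2 / (b - a) for x in xs) * w
mgf = sum(math.exp(t * x) / (b - a) for x in xs) * w

# Compare with the closed forms of Theorem 2.1.1.
assert abs(mean - (a + b) / 2) < 1e-6
assert abs(var - (b - a) ** 2 / 12) < 1e-6
assert abs(mgf - (math.exp(t * b) - math.exp(t * a)) / (t * (b - a))) < 1e-4
```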
A random variable X has a gamma distribution if its probability density function is
\[
f(x) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, \qquad x > 0,
\]
where α > 0 and β > 0. We denote a random variable with a gamma distribution as X ∼ Gam(α, β).
Definition 2.5.2. A random variable X is said to have the beta density function if its probability
density function is of the form
\[
f(x) = \begin{cases} \dfrac{1}{B(\alpha, \beta)}\, x^{\alpha-1} (1-x)^{\beta-1} & \text{if } 0 < x < 1 \\[4pt] 0 & \text{otherwise,} \end{cases}
\]
for every positive α and β. If X has a beta distribution, then we symbolically denote this by
writing X ∼ Bet(α, β).
If X ∼ Bet(α, β), then
\[
E(X) = \frac{\alpha}{\alpha+\beta}, \qquad V(X) = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}.
\]
A random variable X has a normal distribution if its probability density function is
\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad -\infty < x < +\infty,
\]
where −∞ < µ < +∞ and 0 < σ² < +∞ are arbitrary parameters. If X has a normal distribution with parameters µ and σ², then we write X ∼ N(µ, σ²).
Definition 2.6.2. A normal random variable is said to be standard normal if its mean is zero and its variance is one. We denote a standard normal random variable X by X ∼ N(0, 1). The probability density function of the standard normal distribution is
\[
f(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}, \qquad -\infty < z < +\infty.
\]
A random variable X has a lognormal distribution if its probability density function is
\[
f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}, \qquad x > 0,
\]
where −∞ < µ < +∞ and 0 < σ² < +∞ are arbitrary parameters. If X has a lognormal distribution with parameters µ and σ², then we write X ∼ Log(µ, σ²).
Chapter 3
SEQUENCES OF RANDOM VARIABLES
Theorem 3.1.3. Let the distributions of the random variables X1, X2, ..., Xn be χ²(r₁), χ²(r₂), ..., χ²(rₙ), respectively. If X1, X2, ..., Xn are mutually independent, then
\[
Y = X_1 + X_2 + \cdots + X_n \sim \chi^2\!\left(\sum_{i=1}^{n} r_i\right).
\]
Theorem 3.1.4. If Z1, Z2, ..., Zn are mutually independent and each one is standard normal, then
\[
Z_1^2 + Z_2^2 + \cdots + Z_n^2 \sim \chi^2(n),
\]
that is, the sum is chi-square with n degrees of freedom.
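Theorem 3.1.4 can be illustrated by Monte Carlo: the sum of n squared standard normals should have the χ²(n) mean n and variance 2n (n = 4 and the replication count are arbitrary choices):

```python
import random

random.seed(0)

# Monte Carlo illustration; n and the replication count are arbitrary.
n, reps = 4, 50_000
samples = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(reps)]

m = sum(samples) / reps
v = sum((s - m) ** 2 for s in samples) / reps

# chi-square(n) has mean n and variance 2n.
assert abs(m - n) < 0.1
assert abs(v - 2 * n) < 0.5
```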
Theorem 3.1.5. If X1, X2, ..., Xn is a random sample of size n from the normal distribution N(µ, σ²), then the sample mean
\[
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i
\]
and the sample variance
\[
S_n^2 = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X}_n)^2
\]
have the following properties:
(a) \(\dfrac{(n-1)S_n^2}{\sigma^2} \sim \chi^2(n-1)\),
(b) \(\bar{X}_n\) and \(S_n^2\) are independent.
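Property (a) can be illustrated by simulation: scaled sample variances from normal samples should match the χ²(n − 1) mean and variance (the sample size, µ, σ, and replication count are arbitrary choices):

```python
import random
import statistics

random.seed(1)

# n, mu, sigma and the replication count are arbitrary choices.
n, mu, sigma = 6, 10.0, 2.0
reps = 30_000

ws = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    s2 = statistics.variance(xs)        # the (n - 1)-denominator estimator
    ws.append((n - 1) * s2 / sigma ** 2)

m = sum(ws) / reps
v = sum((w - m) ** 2 for w in ws) / reps

# chi-square(n - 1) has mean n - 1 and variance 2(n - 1).
assert abs(m - (n - 1)) < 0.1
assert abs(v - 2 * (n - 1)) < 0.7
```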
Theorem 3.2.3 (The weak law of large numbers). Let X1, X2, ... be a sequence of independent and identically distributed random variables with µ = E(Xi) and σ² = V(Xi) < +∞ for i = 1, 2, 3, .... Then
\[
\lim_{n \to +\infty} P\left(\left|\bar{X}_n - \mu\right| \ge \varepsilon\right) = 0
\]
for every ε > 0.
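The weak law can be illustrated by estimating the deviation probability for a small and a large sample size; a U(0, 1) population (µ = 0.5), ε = 0.1, and the sample sizes are arbitrary choices:

```python
import random

random.seed(2)

# U(0,1) population (mu = 0.5); sample sizes and eps are arbitrary.
eps = 0.1

def deviation_prob(n, reps=2_000):
    count = 0
    for _ in range(reps):
        xbar = sum(random.random() for _ in range(n)) / n
        if abs(xbar - 0.5) >= eps:
            count += 1
    return count / reps

p_small, p_large = deviation_prob(10), deviation_prob(1_000)

assert p_large < p_small     # the deviation probability shrinks as n grows
assert p_large < 0.01
```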
Theorem 3.2.4 (Central Limit Theorem). Let X1, X2, ..., Xn be a random sample of size n from a distribution with mean µ and variance σ² < +∞. Then the limiting distribution of
\[
Z_n = \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}}
\]
is the standard normal distribution; that is, Zn converges in distribution to N(0, 1) as n → +∞.
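A Monte Carlo sketch of the CLT: standardized means of a U(0, 1) sample should be approximately standard normal, so the fraction of Zₙ values at or below 1 should be close to Φ(1) ≈ 0.8413 (the population, n, and reps are arbitrary choices):

```python
import math
import random

random.seed(3)

# U(0,1) population; n and reps are arbitrary choices.
n, reps = 200, 20_000
mu, sigma = 0.5, math.sqrt(1 / 12)   # mean and sd of U(0,1)

zs = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    zs.append((xbar - mu) / (sigma / math.sqrt(n)))

frac = sum(z <= 1.0 for z in zs) / reps

# For N(0,1), P(Z <= 1) = Phi(1) ≈ 0.8413.
assert abs(frac - 0.8413) < 0.02
```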
Definition 3.2.1. Suppose X is a random variable with cumulative distribution function F(x), and X1, X2, ... is a sequence of random variables with cumulative distribution functions F1(x), F2(x), ..., respectively. The sequence Xn converges in distribution to X if
\[
\lim_{n \to +\infty} F_n(x) = F(x)
\]
for all values x at which F(x) is continuous. The distribution of X is called the limiting distribution of Xn.
Theorem 3.2.5. Let X1, X2, ... be a sequence of random variables with distribution functions F1(x), F2(x), ... and moment generating functions MX1(t), MX2(t), ..., respectively. Let X be a random variable with distribution function F(x) and moment generating function MX(t). If for all t in the open interval (−h, h) for some h > 0
\[
\lim_{n \to +\infty} M_{X_n}(t) = M_X(t),
\]
then Xn converges in distribution to X.
Chapter 4
SAMPLING DISTRIBUTIONS ASSOCIATED WITH THE NORMAL POPULATIONS
Theorem 4.1.2. If X ∼ N (µ, σ 2 ) and X1 , X2 , ..., Xn is a random sample from the population X,
then
\[
\sum_{i=1}^{n} \left(\frac{X_i - \mu}{\sigma}\right)^2 \sim \chi^2(n).
\]
Theorem 4.1.3. If X ∼ N (µ, σ 2 ) and X1 , X2 , ..., Xn is a random sample from the population X,
then
\[
\frac{(n-1)S^2}{\sigma^2} \sim \chi^2(n-1).
\]
Theorem 4.1.4. If X ∼ Gam(θ, α), then
\[
\frac{2}{\theta}\, X \sim \chi^2(2\alpha).
\]
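Theorem 4.1.4 can be illustrated by simulation. The reading of Gam(θ, α) as shape α and scale θ is an assumption (it is the one under which 2X/θ has mean 2α), and the parameter values are arbitrary:

```python
import random

random.seed(4)

# Assumes Gam(theta, alpha) means shape alpha and scale theta, the
# reading under which 2X/theta has mean 2*alpha; values are arbitrary.
theta, alpha = 2.0, 3.0
reps = 50_000

ys = [2 * random.gammavariate(alpha, theta) / theta for _ in range(reps)]

m = sum(ys) / reps
v = sum((y - m) ** 2 for y in ys) / reps

# chi-square(2*alpha) has mean 2*alpha and variance 4*alpha.
assert abs(m - 2 * alpha) < 0.1
assert abs(v - 4 * alpha) < 0.7
```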
A random variable X has a t-distribution if its probability density function is
\[
f(x) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)} \left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}}, \qquad -\infty < x < +\infty,
\]
where ν > 0. If X has a t-distribution with ν degrees of freedom, then we denote it by writing X ∼ t(ν).
The t-distribution was discovered by W.S. Gosset (1876-1936) of England, who published his work under the pseudonym "Student". Therefore, this distribution is known as Student's t-distribution. It generalizes both the Cauchy distribution (the case ν = 1) and, in the limit ν → +∞, the normal distribution.
Theorem 4.2.1. If the random variable X has a t-distribution with ν degrees of freedom, then
\[
E(X) = \begin{cases} 0 & \text{if } \nu \ge 2 \\ \text{DNE} & \text{if } \nu = 1 \end{cases}
\]
and
\[
V(X) = \begin{cases} \dfrac{\nu}{\nu-2} & \text{if } \nu \ge 3 \\[4pt] \text{DNE} & \text{if } \nu = 1, 2, \end{cases}
\]
where DNE means does not exist.
Theorem 4.2.2. If Z ∼ N(0, 1) and U ∼ χ²(ν), and in addition Z and U are independent, then the random variable W defined by
\[
W = \frac{Z}{\sqrt{U/\nu}}
\]
has a t-distribution with ν degrees of freedom, W ∼ t(ν).
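The construction in Theorem 4.2.2 can be simulated directly, building U as a sum of ν squared standard normals (Theorem 3.1.4) and comparing the sample moments with the t(ν) moments from Theorem 4.2.1; ν = 10 and the replication count are arbitrary choices:

```python
import math
import random

random.seed(5)

# nu and reps are arbitrary; U is built as a sum of nu squared
# standard normals (Theorem 3.1.4).
nu, reps = 10, 40_000

ts = []
for _ in range(reps):
    z = random.gauss(0, 1)
    u = sum(random.gauss(0, 1) ** 2 for _ in range(nu))
    ts.append(z / math.sqrt(u / nu))

m = sum(ts) / reps
v = sum((t - m) ** 2 for t in ts) / reps

# t(10) has mean 0 and variance nu/(nu - 2) = 1.25 (Theorem 4.2.1).
assert abs(m) < 0.05
assert abs(v - nu / (nu - 2)) < 0.15
```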
A random variable X has an F-distribution if its probability density function is
\[
f(x) = \frac{\Gamma\!\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\!\left(\frac{\nu_1}{2}\right)\Gamma\!\left(\frac{\nu_2}{2}\right)} \left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}} x^{\frac{\nu_1}{2}-1} \left(1 + \frac{\nu_1}{\nu_2}\,x\right)^{-\frac{\nu_1+\nu_2}{2}}, \qquad x > 0,
\]
where ν1, ν2 > 0. If X has an F-distribution with ν1 and ν2 degrees of freedom, then we denote it by writing X ∼ F(ν1, ν2).
Theorem 4.3.1. If the random variable X ∼ F(ν1, ν2), then
\[
E(X) = \begin{cases} \dfrac{\nu_2}{\nu_2 - 2} & \text{if } \nu_2 \ge 3 \\[4pt] \text{DNE} & \text{if } \nu_2 = 1, 2 \end{cases}
\]
and
\[
V(X) = \begin{cases} \dfrac{2\nu_2^2(\nu_1 + \nu_2 - 2)}{\nu_1(\nu_2 - 2)^2(\nu_2 - 4)} & \text{if } \nu_2 \ge 5 \\[4pt] \text{DNE} & \text{if } \nu_2 = 1, 2, 3, 4. \end{cases}
\]
Here DNE means does not exist.
Theorem 4.3.2. If X ∼ F(ν1, ν2), then the random variable 1/X ∼ F(ν2, ν1).
Theorem 4.3.3. If U ∼ χ²(ν1) and V ∼ χ²(ν2), and the random variables U and V are independent, then the random variable
\[
\frac{U/\nu_1}{V/\nu_2} \sim F(\nu_1, \nu_2).
\]
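The ratio in Theorem 4.3.3 can be simulated by building each chi-square as a sum of squared standard normals and comparing the sample mean with the F(ν1, ν2) mean from Theorem 4.3.1; the degrees of freedom and replication count are arbitrary choices:

```python
import random

random.seed(6)

# Degrees of freedom are arbitrary; nu2 > 2 so the mean exists.
nu1, nu2 = 5, 10
reps = 40_000

def chi2(k):
    # Sum of k squared standard normals (Theorem 3.1.4).
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

fs = [(chi2(nu1) / nu1) / (chi2(nu2) / nu2) for _ in range(reps)]
m = sum(fs) / reps

# F(nu1, nu2) has mean nu2/(nu2 - 2) (Theorem 4.3.1).
assert abs(m - nu2 / (nu2 - 2)) < 0.1
```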
Theorem 4.3.4. Let X ∼ N (µ1 , σ12 ) and X1 , X2 , ..., Xn be a random sample of size n from the
population X. Let Y ∼ N (µ2 , σ22 ) and Y1 , Y2 , ..., Ym be a random sample of size m from the
population Y . Then the statistic
\[
\frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \sim F(n-1, m-1),
\]
where S12 and S22 denote the sample variances of the first and the second sample, respectively.
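Theorem 4.3.4 can be illustrated by simulating the variance-ratio statistic from two independent normal samples and checking its sample mean against the F(n − 1, m − 1) mean from Theorem 4.3.1; the sample sizes, means, and variances are arbitrary choices:

```python
import random
import statistics

random.seed(7)

# Sample sizes, means, and variances are arbitrary choices.
n, m = 8, 12
sigma1, sigma2 = 2.0, 3.0
reps = 30_000

ratios = []
for _ in range(reps):
    s1 = statistics.variance([random.gauss(0.0, sigma1) for _ in range(n)])
    s2 = statistics.variance([random.gauss(1.0, sigma2) for _ in range(m)])
    ratios.append((s1 / sigma1 ** 2) / (s2 / sigma2 ** 2))

mean_ratio = sum(ratios) / reps

# F(n - 1, m - 1) = F(7, 11) has mean 11/9 (Theorem 4.3.1).
assert abs(mean_ratio - 11 / 9) < 0.1
```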