
Probability and Statistics

Contents

1 SOME SPECIAL DISCRETE DISTRIBUTIONS 2


1.1 Bernoulli Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Binomial Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Geometric Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Negative Binomial Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Hypergeometric Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Poisson Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 SOME SPECIAL CONTINUOUS DISTRIBUTIONS 4


2.1 Uniform Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Gamma Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.3 Exponential Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.4 Chi-square Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.5 Beta Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.6 Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.7 Lognormal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

3 SEQUENCES OF RANDOM VARIABLES 7


3.1 Distribution of sample mean and variance . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Laws of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

4 SAMPLING DISTRIBUTIONS ASSOCIATED WITH THE NORMAL POPULATIONS 9
4.1 Chi-square distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4.2 Student’s t-distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4.3 Snedecor’s F -distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1 Phok Ponna
Probability and Statistics

Chapter 1

SOME SPECIAL DISCRETE DISTRIBUTIONS

1.1 Bernoulli Distribution


Definition 1.1.1. The random variable X is called the Bernoulli random variable if its prob-
ability mass function is of the form

f(x) = p^x (1 − p)^{1−x},   x = 0, 1

where p is the probability of success.

We denote the Bernoulli random variable by writing X ∼ Ber(p).

Theorem 1.1.1. If X is a Bernoulli random variable with parameter p, then the mean, variance
and moment generating functions are respectively given by

µ_X = p,   σ_X² = p(1 − p),   M_X(t) = (1 − p) + pe^t.

1.2 Binomial Distribution


Definition 1.2.1. The random variable X is called the binomial random variable with param-
eters p and n if its probability mass function is of the form
f(x) = \binom{n}{x} p^x (1 − p)^{n−x},   x = 0, 1, 2, . . . , n

where 0 < p < 1 is the probability of success.

We will denote a binomial random variable with parameters p and n as X ∼ Bin(n, p).

Theorem 1.2.1. If X is a binomial random variable with parameters p and n, then the mean,
variance and moment generating functions are respectively given by

µ_X = np,   σ_X² = np(1 − p),   M_X(t) = [(1 − p) + pe^t]^n.
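These formulas can be checked numerically by summing over the finite support of the PMF. The sketch below uses only Python's standard library; the values n = 10 and p = 0.3 are illustrative choices, not part of the theorem.

```python
import math

# Binomial PMF f(x) = C(n, x) p^x (1-p)^(n-x) on x = 0, 1, ..., n;
# n and p are illustrative choices.
n, p = 10, 0.3
support = range(n + 1)
pmf = [math.comb(n, x) * p**x * (1 - p)**(n - x) for x in support]

total = sum(pmf)                                  # probabilities sum to 1
mean = sum(x * f for x, f in zip(support, pmf))   # should equal np
var = sum((x - mean)**2 * f for x, f in zip(support, pmf))

assert abs(total - 1) < 1e-12
assert abs(mean - n * p) < 1e-12
assert abs(var - n * p * (1 - p)) < 1e-12
```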

1.3 Geometric Distribution


Definition 1.3.1. Let X denote the trial number on which the first success occurs. A random
variable X has a geometric distribution if its probability mass function is given by

f(x) = (1 − p)^{x−1} p,   x = 1, 2, 3, . . .

where p denotes the probability of success in a single Bernoulli trial.


If X has a geometric distribution we denote it as X ∼ Geo(p).

Theorem 1.3.1. If X is a geometric random variable with parameter p, then the mean, variance
and moment generating functions are respectively given by

µ_X = 1/p,   σ_X² = (1 − p)/p²,   M_X(t) = pe^t / (1 − (1 − p)e^t)   if t < −ln(1 − p).
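Because the geometric PMF has infinite support, the mean and variance can be approximated by truncating the series; with p = 0.4 (an illustrative choice) the tail beyond x = 200 is numerically negligible. A minimal sketch:

```python
# Geometric PMF f(x) = (1-p)^(x-1) p on x = 1, 2, ...; the series is
# truncated at 200 terms, beyond which the tail is negligible for p = 0.4.
p = 0.4
xs = range(1, 200)
pmf = [(1 - p)**(x - 1) * p for x in xs]

mean = sum(x * f for x, f in zip(xs, pmf))             # ~ 1/p
var = sum((x - mean)**2 * f for x, f in zip(xs, pmf))  # ~ (1-p)/p^2

assert abs(mean - 1 / p) < 1e-9
assert abs(var - (1 - p) / p**2) < 1e-9
```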

1.4 Negative Binomial Distribution


Definition 1.4.1. Let X denote the number of failures that precede the rth success. A random
variable X has the negative binomial (or Pascal) distribution if its probability mass function is
of the form

f(x) = \binom{x + r − 1}{r − 1} p^r (1 − p)^x,   x = 0, 1, 2, . . .

where p is the probability of success in a single Bernoulli trial. We denote a random variable
X with the negative binomial distribution by writing X ∼ Nb(r, p).

Theorem 1.4.1. If X ∼ Nb(r, p), then

µ_X = r(1 − p)/p,   σ_X² = r(1 − p)/p²,   M_X(t) = p^r / [1 − (1 − p)e^t]^r.

1.5 Hypergeometric Distribution


Theorem 1.5.1. If X is the number of S’s in a completely random sample of size n drawn from
a population consisting of M S’s and (N − M ) F ’s, then the probability distribution of X, called
the hypergeometric distribution, is given by
f(x) = \binom{M}{x} \binom{N − M}{n − x} / \binom{N}{n}

for x an integer satisfying max(0, n − N + M) ≤ x ≤ min(n, M).

We shall denote such a random variable by writing X ∼ Hyp(n, M, N ).

Theorem 1.5.2. If X ∼ Hyp(n, M, N), then

E(X) = n(M/N),   V(X) = n(M/N)(1 − M/N)(N − n)/(N − 1).
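Since the support is finite, both the normalization and the moment formulas can be verified exactly; the population sizes below are illustrative choices.

```python
import math

# Hypergeometric PMF over its full support; N, M, n are illustrative.
N, M, n = 20, 8, 5
lo, hi = max(0, n - N + M), min(n, M)
pmf = {x: math.comb(M, x) * math.comb(N - M, n - x) / math.comb(N, n)
       for x in range(lo, hi + 1)}

total = sum(pmf.values())
mean = sum(x * f for x, f in pmf.items())
var = sum((x - mean)**2 * f for x, f in pmf.items())

assert abs(total - 1) < 1e-12
assert abs(mean - n * M / N) < 1e-12
assert abs(var - n * (M / N) * (1 - M / N) * (N - n) / (N - 1)) < 1e-12
```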

1.6 Poisson Distribution


Definition 1.6.1. A random variable X is said to have a Poisson distribution if its probability
mass function is given by
f(x) = e^{−λ} λ^x / x!,   x = 0, 1, 2, . . .

where 0 < λ < +∞ is a parameter. We denote such a random variable by X ∼ Poi(λ).

Theorem 1.6.1. If X ∼ Poi(λ), then

E(X) = λ,   V(X) = λ,   M_X(t) = e^{λ(e^t − 1)}.
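The equality of mean and variance can be checked by truncating the Poisson series; λ = 3 and the cutoff are illustrative choices.

```python
import math

# Poisson PMF truncated at x = 100, beyond which the tail is
# negligible for lam = 3.0 (an illustrative choice).
lam = 3.0
xs = range(100)
pmf = [math.exp(-lam) * lam**x / math.factorial(x) for x in xs]

mean = sum(x * f for x, f in zip(xs, pmf))
var = sum((x - mean)**2 * f for x, f in zip(xs, pmf))

assert abs(mean - lam) < 1e-9   # E(X) = lambda
assert abs(var - lam) < 1e-9    # V(X) = lambda
```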


Chapter 2

SOME SPECIAL CONTINUOUS DISTRIBUTIONS

2.1 Uniform Distribution


Definition 2.1.1. A random variable X is said to be uniform on the interval [a, b] if its probability
density function is of the form

f(x) = 1/(b − a)   if a ≤ x ≤ b,   and f(x) = 0 otherwise,

where a and b are constants. We denote a random variable X with the uniform distribution on
the interval [a, b] as X ∼ U(a, b).
Theorem 2.1.1. If X ∼ U(a, b), then

E(X) = (a + b)/2,   V(X) = (b − a)²/12,   M_X(t) = (e^{tb} − e^{ta}) / (t(b − a)) if t ≠ 0, and M_X(t) = 1 if t = 0.

2.2 Gamma Distribution


Definition 2.2.1. The gamma function is defined as

Γ(a) = ∫_0^{+∞} x^{a−1} e^{−x} dx,

where a is a positive real number (that is, a > 0).


Lemma 2.2.1.

1. Γ(1) = 1.

2. Γ(a) = (a − 1)Γ(a − 1) for every real number a > 1.

3. Γ(1/2) = √π.

4. Γ(n + 1) = n! if n is a natural number.
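The standard library exposes the gamma function as `math.gamma`, so the lemma's identities can be checked numerically; the sample values of a below are illustrative.

```python
import math

# The lemma's identities, checked with the standard library's math.gamma.
assert abs(math.gamma(1) - 1) < 1e-12                     # Gamma(1) = 1
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12  # Gamma(1/2) = sqrt(pi)

for a in (1.5, 2.7, 5.0):                     # Gamma(a) = (a-1) Gamma(a-1)
    assert abs(math.gamma(a) - (a - 1) * math.gamma(a - 1)) < 1e-9

for k in range(1, 8):                         # Gamma(k+1) = k!
    assert abs(math.gamma(k + 1) - math.factorial(k)) < 1e-6
```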


Definition 2.2.2. A continuous random variable X is said to have a gamma distribution if its
probability density function is given by

f(x) = 1/(Γ(α)β^α) · x^{α−1} e^{−x/β}   if 0 < x < +∞, and f(x) = 0 otherwise,

where α > 0 and β > 0. We denote a random variable with gamma distribution as X ∼ Gam(α, β).


Theorem 2.2.2. If X ∼ Gam(α, β), then

E(X) = αβ,   V(X) = αβ²,   M_X(t) = (1/(1 − βt))^α   if t < 1/β.

2.3 Exponential Distribution


Definition 2.3.1. A continuous random variable X is said to be an exponential random
variable with parameter θ if its probability density function is of the form

f(x) = (1/θ) e^{−x/θ}   if 0 < x < +∞, and f(x) = 0 otherwise,

where θ > 0. If a random variable X has an exponential density function with parameter θ, then
we denote it by writing X ∼ Exp(θ).
Remark. An exponential distribution is a special case of the gamma distribution: if α = 1 and
β = θ, the gamma distribution reduces to the exponential distribution with parameter θ.
Theorem 2.3.1. If X ∼ Exp(θ), then

E(X) = θ,   V(X) = θ²,   M_X(t) = 1/(1 − θt)   if t < 1/θ.
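The mean and variance can also be recovered by numerical integration of the density. The sketch below uses a midpoint rule; θ = 2, the step size, and the tail cutoff are all illustrative choices.

```python
import math

# Midpoint-rule integration of the Exp(theta) density; theta, the step
# size h, and the upper cutoff (where the tail is negligible) are
# illustrative choices.
theta = 2.0

def f(x):
    return math.exp(-x / theta) / theta

h, upper = 0.001, 60.0
grid = [h * (i + 0.5) for i in range(int(upper / h))]

mean = sum(x * f(x) * h for x in grid)        # ~ theta
second = sum(x * x * f(x) * h for x in grid)  # ~ E(X^2) = 2 theta^2
var = second - mean**2                        # ~ theta^2

assert abs(mean - theta) < 1e-3
assert abs(var - theta**2) < 1e-2
```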

2.4 Chi-square Distribution


Definition 2.4.1. A continuous random variable X is said to have a chi-square distribution
with r degrees of freedom if its probability density function is of the form

f(x) = 1/(Γ(r/2) 2^{r/2}) · x^{r/2−1} e^{−x/2}   if 0 < x < +∞, and f(x) = 0 otherwise,

where r > 0. If X has a chi-square distribution, then we denote it by writing X ∼ χ²(r).


Remark. The gamma distribution reduces to the chi-square distribution when α = r/2 and β = 2; thus
the chi-square distribution is a special case of the gamma distribution. Further, as r → +∞,
the standardized chi-square distribution tends to the standard normal distribution.
Theorem 2.4.1. If X ∼ χ²(r), then

E(X) = r,   V(X) = 2r,   M_X(t) = (1/(1 − 2t))^{r/2}   if t < 1/2.

2.5 Beta Distribution


Definition 2.5.1. Let α and β be any two positive real numbers. The beta function B(α, β) is
defined as

B(α, β) = ∫_0^1 x^{α−1} (1 − x)^{β−1} dx.
Theorem 2.5.1. Let α and β be any two positive real numbers. Then

B(α, β) = Γ(α)Γ(β) / Γ(α + β),

where Γ(a) = ∫_0^{+∞} x^{a−1} e^{−x} dx is the gamma function.
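This identity can be tested by comparing a numerical evaluation of the defining integral with the gamma-function expression; the parameter pairs and the integration grid below are illustrative choices.

```python
import math

# Compare the integral definition of B(alpha, beta) with the
# gamma-function identity; parameter pairs and grid are illustrative.
def beta_integral(a, b, steps=200_000):
    h = 1.0 / steps
    return sum((h * (i + 0.5))**(a - 1) * (1 - h * (i + 0.5))**(b - 1) * h
               for i in range(steps))

for a, b in [(2.0, 3.0), (1.5, 2.5), (4.0, 4.0)]:
    ident = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    assert abs(beta_integral(a, b) - ident) < 1e-4
```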


Definition 2.5.2. A random variable X is said to have the beta density function if its probability
density function is of the form

f(x) = (1/B(α, β)) x^{α−1} (1 − x)^{β−1}   if 0 < x < 1, and f(x) = 0 otherwise,

for every positive α and β. If X has a beta distribution, then we symbolically denote this by
writing X ∼ Bet(α, β).

Theorem 2.5.2. If X ∼ Bet(α, β), then

E(X) = α/(α + β),   V(X) = αβ / [(α + β)²(α + β + 1)].

2.6 Normal Distribution


Definition 2.6.1. A random variable X is said to have a normal distribution if its probability
density function is given by

f(x) = 1/(σ√(2π)) · e^{−(1/2)((x − µ)/σ)²},   −∞ < x < +∞,

where −∞ < µ < +∞ and 0 < σ² < +∞ are arbitrary parameters. If X has a normal distribution
with parameters µ and σ², then we write X ∼ N(µ, σ²).

Theorem 2.6.1. If X ∼ N(µ, σ²), then

E(X) = µ,   V(X) = σ²,   M_X(t) = e^{µt + σ²t²/2}.

Definition 2.6.2. A normal random variable is said to be standard normal, if its mean is zero
and variance is one. We denote a standard normal random variable X by X ∼ N (0, 1). The
probability density function of standard normal distribution is the following:
f(z) = 1/√(2π) · e^{−z²/2},   −∞ < z < +∞.

Theorem 2.6.2. If X ∼ N(µ, σ²), then the random variable Z = (X − µ)/σ ∼ N(0, 1).

Theorem 2.6.3. If X ∼ N(µ, σ²), then the random variable ((X − µ)/σ)² ∼ χ²(1).
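Theorem 2.6.2 has an equivalent statement in terms of CDFs, P(X ≤ x) = Φ((x − µ)/σ), which can be checked with the standard library's `statistics.NormalDist`; µ, σ, and the test points below are illustrative choices.

```python
from statistics import NormalDist

# Theorem 2.6.2 in numerical form: P(X <= x) = Phi((x - mu)/sigma),
# where Phi is the standard normal CDF; mu, sigma, and the test
# points are illustrative choices.
mu, sigma = 5.0, 2.0
X = NormalDist(mu, sigma)
Z = NormalDist(0.0, 1.0)

for x in (1.0, 4.0, 5.0, 8.5):
    assert abs(X.cdf(x) - Z.cdf((x - mu) / sigma)) < 1e-12
```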

2.7 Lognormal Distribution


Definition 2.7.1. A random variable X is said to have a lognormal distribution if its probability
density function is given by

f(x) = 1/(xσ√(2π)) · e^{−(1/2)((ln x − µ)/σ)²}   if 0 < x < +∞, and f(x) = 0 otherwise,

where −∞ < µ < +∞ and 0 < σ² < +∞ are arbitrary parameters. If X has a lognormal
distribution with parameters µ and σ², then we write X ∼ Log(µ, σ²).

Theorem 2.7.1. If X ∼ Log(µ, σ²), then

E(X) = e^{µ + σ²/2},   V(X) = (e^{σ²} − 1) e^{2µ + σ²}.


Chapter 3

SEQUENCES OF RANDOM
VARIABLES

3.1 Distribution of sample mean and variance


Theorem 3.1.1. If X1, X2, ..., Xn are independent random variables with respective moment
generating functions M_{X_i}(t), i = 1, 2, ..., n, then the moment generating function of
Y = Σ_{i=1}^{n} a_i X_i is given by

M_Y(t) = ∏_{i=1}^{n} M_{X_i}(a_i t).
Theorem 3.1.2. If X1, X2, ..., Xn are mutually independent random variables such that
X_i ∼ N(µ_i, σ_i²), i = 1, 2, ..., n, then the random variable

Y = Σ_{i=1}^{n} a_i X_i ∼ N(Σ_{i=1}^{n} a_i µ_i, Σ_{i=1}^{n} a_i² σ_i²).

Theorem 3.1.3. Let the random variables X1, X2, ..., Xn have the distributions χ²(r_1), χ²(r_2), ..., χ²(r_n),
respectively. If X1, X2, ..., Xn are mutually independent, then Y = X1 + X2 + · · · + Xn ∼ χ²(Σ_{i=1}^{n} r_i).
Theorem 3.1.4. If Z1, Z2, ..., Zn are mutually independent and each is standard normal, then
Z1² + Z2² + · · · + Zn² ∼ χ²(n); that is, the sum is chi-square with n degrees of freedom.
Theorem 3.1.5. If X1, X2, ..., Xn is a random sample of size n from the normal distribution
N(µ, σ²), then the sample mean X̄n = (1/n) Σ_{i=1}^{n} X_i and the sample variance
S_n² = 1/(n − 1) Σ_{i=1}^{n} (X_i − X̄n)² have the following properties:

(a) (n − 1)S_n²/σ² ∼ χ²(n − 1),

(b) X̄n and S_n² are independent.
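Property (a) can be illustrated by simulation: since a χ²(n − 1) variable has mean n − 1, the scaled sample variances should average about n − 1. The seed, sample size, and replication count below are illustrative choices.

```python
import random
import statistics

# Monte Carlo check of property (a): for samples of size n from
# N(mu, sigma^2), (n-1)S^2/sigma^2 should average about n-1, the
# mean of a chi-square(n-1) variable. Seed and sizes are illustrative.
random.seed(0)
mu, sigma, n, reps = 10.0, 3.0, 8, 20_000

vals = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    s2 = statistics.variance(sample)    # sample variance, 1/(n-1) divisor
    vals.append((n - 1) * s2 / sigma**2)

avg = sum(vals) / reps
assert abs(avg - (n - 1)) < 0.15        # E[chi-square(n-1)] = n-1
```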

3.2 Laws of Large Numbers


Theorem 3.2.1 (Markov’s Inequality). Suppose X is a nonnegative random variable with mean
E(X). Then
P(X ≥ a) ≤ E(X)/a
for all a > 0.
Theorem 3.2.2 (Chebyshev’s Inequality). Let X be a random variable with mean µ and stan-
dard deviation σ. Then Chebyshev’s inequality says that
P(|X − µ| ≥ kσ) ≤ 1/k²


for any positive constant k.
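The bound can be checked empirically for a concrete distribution; an Exp(1) variable, which has µ = σ = 1, is used below as an illustrative choice, along with an arbitrary seed and sample size.

```python
import random

# Empirical check that P(|X - mu| >= k*sigma) <= 1/k^2 for an Exp(1)
# variable, which has mu = sigma = 1. Seed and sample size are
# illustrative choices.
random.seed(1)
draws = [random.expovariate(1.0) for _ in range(100_000)]
mu = sigma = 1.0

for k in (1.5, 2.0, 3.0):
    frac = sum(abs(x - mu) >= k * sigma for x in draws) / len(draws)
    assert frac <= 1 / k**2             # the Chebyshev bound holds
```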

Theorem 3.2.3 (The weak law of large numbers). Let X1 , X2 , ... be a sequence of independent
and identically distributed random variables with µ = E(Xi ) and σ 2 = V (Xi ) < +∞ for i =
1, 2, 3, .... Then
lim_{n→+∞} P(|X̄n − µ| ≥ ε) = 0

for every ε > 0.

Theorem 3.2.4 (Central Limit Theorem). Let X1, X2, ..., Xn be a random sample of size n
from a distribution with mean µ and variance σ² < +∞. Then the limiting distribution of

Z_n = (X̄ − µ) / (σ/√n)

is standard normal; that is, Z_n converges in distribution to Z, where Z denotes a standard
normal random variable.
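A quick simulation illustrates the theorem: standardized means of Uniform(0, 1) samples (µ = 1/2, σ² = 1/12) should behave approximately like N(0, 1), so about 95% of the Z_n values should fall in (−1.96, 1.96). The seed, sample size, and replication count are illustrative choices.

```python
import math
import random

# Standardized means of Uniform(0, 1) samples should be approximately
# N(0, 1); about 95% of the Z_n values should land in (-1.96, 1.96).
# Seed and sizes are illustrative choices.
random.seed(2)
mu, sigma, n, reps = 0.5, math.sqrt(1 / 12), 30, 10_000

zs = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    zs.append((xbar - mu) / (sigma / math.sqrt(n)))

frac = sum(-1.96 < z < 1.96 for z in zs) / reps
assert abs(frac - 0.95) < 0.02
```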

Definition 3.2.1. Suppose X is a random variable with cumulative distribution function F(x),
and X1, X2, ... is a sequence of random variables with cumulative distribution functions
F1(x), F2(x), ..., respectively. The sequence Xn converges in distribution to X if

lim_{n→+∞} F_n(x) = F(x)

for all values x at which F(x) is continuous. The distribution of X is called the limiting
distribution of Xn.

Whenever a sequence of random variables X1, X2, ... converges in distribution to the random
variable X, we denote it by Xn →d X.

Theorem 3.2.5. Let X1, X2, ... be a sequence of random variables with distribution functions
F1(x), F2(x), ... and moment generating functions M_{X_1}(t), M_{X_2}(t), ..., respectively. Let X be a
random variable with distribution function F(x) and moment generating function M_X(t). If, for all
t in the open interval (−h, h) for some h > 0,

lim_{n→+∞} M_{X_n}(t) = M_X(t),

then at the points of continuity of F(x),

lim_{n→+∞} F_n(x) = F(x).


Chapter 4

SAMPLING DISTRIBUTIONS
ASSOCIATED WITH THE
NORMAL POPULATIONS

4.1 Chi-square distribution


Theorem 4.1.1. If X ∼ N(µ, σ²), then ((X − µ)/σ)² ∼ χ²(1).

Theorem 4.1.2. If X ∼ N (µ, σ 2 ) and X1 , X2 , ..., Xn is a random sample from the population X,
then

Σ_{i=1}^{n} ((X_i − µ)/σ)² ∼ χ²(n).

Theorem 4.1.3. If X ∼ N (µ, σ 2 ) and X1 , X2 , ..., Xn is a random sample from the population X,
then
(n − 1)S²/σ² ∼ χ²(n − 1).
Theorem 4.1.4. If X ∼ Gam(α, θ) (gamma with shape α and scale θ, in the notation of
Definition 2.2.2), then

(2/θ)X ∼ χ²(2α).

4.2 Student’s t-distribution


Definition 4.2.1. A continuous random variable X is said to have a t-distribution with ν degrees
of freedom if its probability density function is of the form
f(x; ν) = Γ((ν + 1)/2) / [√(πν) Γ(ν/2) (1 + x²/ν)^{(ν+1)/2}],   −∞ < x < +∞

where ν > 0. If X has a t-distribution with ν degrees of freedom, then we denote it by writing
X ∼ t(ν).
The t-distribution was discovered by W.S. Gosset (1876-1936) of England, who published his
work under the pseudonym "Student"; hence the name Student's t-distribution. With ν = 1 it
reduces to the Cauchy distribution, and as ν → +∞ it tends to the standard normal distribution.
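As a sanity check on the density in Definition 4.2.1, it should be symmetric about 0 and integrate to 1; ν and the integration grid below are illustrative choices, with a wide cutoff to account for the heavy tails.

```python
import math

# Numerical check on the t density of Definition 4.2.1: symmetry and
# unit total mass. nu and the integration grid are illustrative; the
# wide cutoff accounts for the heavy tails.
def t_pdf(x, nu):
    c = math.gamma((nu + 1) / 2) / (math.sqrt(math.pi * nu) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

nu, h, cutoff = 5, 0.002, 200.0
total = sum(t_pdf(-cutoff + h * (i + 0.5), nu) * h
            for i in range(int(2 * cutoff / h)))

assert abs(total - 1) < 1e-3
assert abs(t_pdf(1.3, nu) - t_pdf(-1.3, nu)) < 1e-15   # symmetry about 0
```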
Theorem 4.2.1. If the random variable X has a t-distribution with ν degrees of freedom, then

E(X) = 0 if ν ≥ 2,   and E(X) DNE if ν = 1,

and

V(X) = ν/(ν − 2) if ν ≥ 3,   and V(X) DNE if ν = 1, 2,

where DNE means does not exist.
Theorem 4.2.2. If Z ∼ N(0, 1) and U ∼ χ²(ν), and Z and U are independent, then the random
variable W defined by

W = Z / √(U/ν)

has a t-distribution with ν degrees of freedom.


Theorem 4.2.3. If X ∼ N(µ, σ²) and X1, X2, ..., Xn is a random sample from the population X,
then

(X̄ − µ) / (S/√n) ∼ t(n − 1).

4.3 Snedecor’s F -distribution


Definition 4.3.1. A continuous random variable X is said to have an F-distribution with ν1 and
ν2 degrees of freedom if its probability density function is of the form

f(x; ν1, ν2) = [Γ((ν1 + ν2)/2) (ν1/ν2)^{ν1/2} x^{ν1/2 − 1}] / [Γ(ν1/2) Γ(ν2/2) (1 + (ν1/ν2)x)^{(ν1+ν2)/2}]   if 0 ≤ x < +∞, and f(x; ν1, ν2) = 0 otherwise,

where ν1, ν2 > 0. If X has an F-distribution with ν1 and ν2 degrees of freedom, then we denote it
by writing X ∼ F(ν1, ν2).
Theorem 4.3.1. If the random variable X ∼ F(ν1, ν2), then

E(X) = ν2/(ν2 − 2) if ν2 ≥ 3,   and E(X) DNE if ν2 = 1, 2,

and

V(X) = [2ν2²(ν1 + ν2 − 2)] / [ν1(ν2 − 2)²(ν2 − 4)] if ν2 ≥ 5,   and V(X) DNE if ν2 = 1, 2, 3, 4,

where DNE means does not exist.
Theorem 4.3.2. If X ∼ F(ν1, ν2), then the random variable 1/X ∼ F(ν2, ν1).
Theorem 4.3.3. If U ∼ χ²(ν1) and V ∼ χ²(ν2), and the random variables U and V are
independent, then the random variable

(U/ν1) / (V/ν2) ∼ F(ν1, ν2).

Theorem 4.3.4. Let X ∼ N (µ1 , σ12 ) and X1 , X2 , ..., Xn be a random sample of size n from the
population X. Let Y ∼ N (µ2 , σ22 ) and Y1 , Y2 , ..., Ym be a random sample of size m from the
population Y. Then the statistic

(S1²/σ1²) / (S2²/σ2²) ∼ F(n − 1, m − 1),

where S12 and S22 denote the sample variances of the first and the second sample, respectively.

