①Proof of P(∅) = 0:
Consider a sequence of events E_1, E_2, …, E_i, … where E_1 = S and E_i = ∅ for i > 1. Then, because the events are mutually exclusive and because S = ⋃_{i=1}^{∞} E_i, we have, from axiom 3:
P(⋃_{i=1}^{∞} E_i) = ∑_{i=1}^{∞} P(E_i), i.e. P(S) = P(S) + ∑_{i=2}^{∞} P(∅)
Which means:
P(∅) = 0
④regarding E′:
1 = P(S) = P(E ∪ E′) = P(E) + P(E′), so P(E′) = 1 − P(E)
Since F = (F ∩ E) ∪ (F ∩ E′) and the two pieces are mutually exclusive, P(F) = P(E ∩ F) + P(E′ ∩ F), meaning P(E′ ∩ F) = P(F) − P(E ∩ F)
⑥Proof of P(E) = N(E)/N(S):
For S = {1, 2, 3, …, N}, P({1}) = P({2}) = ⋯ = P({N}) = 1/N, and since {1}, {2}, …, {N} are all mutually exclusive, then, for example, for E = {1, 2, 4}:
P({1} ∪ {2} ∪ {4}) = P({1}) + P({2}) + P({4}) = 1/N + 1/N + 1/N = 3/N = N(E)/N(S)
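The equally-likely counting rule above can be checked directly by counting; the die-sized sample space below is an illustrative assumption:

```python
from fractions import Fraction

# Sample space of N = 6 equally likely outcomes (a fair die, chosen here
# only for illustration) and the event E = {1, 2, 4} from the text.
S = {1, 2, 3, 4, 5, 6}
E = {1, 2, 4}

# Each singleton has probability 1/N, so P(E) = N(E)/N(S).
p_E = Fraction(len(E), len(S))
print(p_E)  # 1/2
```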
Writing C(n, k) for the binomial coefficient "n choose k":
C(p, q)·C(n − p, k − q) / C(n, k)
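Assuming this expression is the draw-without-replacement probability (exactly q of the p marked items among k drawn from n), a quick numerical sanity check: the probabilities over all feasible q sum to 1 by Vandermonde's identity. The sample numbers are arbitrary.

```python
from math import comb

# Probability that exactly q of the k drawn items are among the p marked
# ones, out of n items total (an assumed interpretation of the formula).
def draw_prob(n, p, k, q):
    return comb(p, q) * comb(n - p, k - q) / comb(n, k)

n, p, k = 10, 4, 3
total = sum(draw_prob(n, p, k, q) for q in range(0, min(p, k) + 1))
print(round(total, 12))  # 1.0
```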
②Variance definition:
Var(X) = E[(X − μ)²] = ∑_x (x − μ)² p(x)
𝑉𝑎𝑟(𝑎𝑋 + 𝑏) = 𝑎2 𝑉𝑎𝑟(𝑋)
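Both the definition and the scaling rule Var(aX + b) = a²Var(X) can be verified numerically; the pmf below is an arbitrary example:

```python
# Variance of a discrete pmf straight from the definition, plus a check of
# Var(aX + b) = a^2 Var(X). The pmf here is an arbitrary example.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def mean(pmf):
    return sum(x * p for x, p in pmf.items())

def var(pmf):
    mu = mean(pmf)
    return sum((x - mu) ** 2 * p for x, p in pmf.items())

a, b = 3, 7
pmf_ab = {a * x + b: p for x, p in pmf.items()}  # distribution of aX + b
print(abs(var(pmf_ab) - a ** 2 * var(pmf)) < 1e-12)  # True
```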
③Standard Deviation:
𝑠𝑑(𝑋) = √𝑉𝑎𝑟(𝑋)
For a Bernoulli random variable with success probability p: E(X) = p
The expected value is equal to:
E[B(X; n, p)] = ∑_{i=1}^{n} i·C(n, i)·p^i (1 − p)^{n−i}
Note that: i·C(n, i) = n·C(n − 1, i − 1), so:
∑_{i=1}^{n} n·C(n − 1, i − 1)·p·p^{i−1} (1 − p)^{(n−1)−(i−1)} = np·∑_{i=1}^{n} C(n − 1, i − 1)·p^{i−1} (1 − p)^{(n−1)−(i−1)} = np
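The closed form E[X] = np can be checked against the defining sum for sample parameters (the values of n and p below are arbitrary):

```python
from math import comb

# Numerical check that sum_i i*C(n,i)*p^i*(1-p)^(n-i) equals n*p.
n, p = 12, 0.3
mean = sum(i * comb(n, i) * p**i * (1 - p)**(n - i) for i in range(1, n + 1))
print(abs(mean - n * p) < 1e-12)  # True
```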
p(i) = e^{−λ} λ^i / i!,  i = 0, 1, 2, …
It also represents a probability mass function:
∑_{i=0}^{∞} p(i) = e^{−λ} ∑_{i=0}^{∞} λ^i / i! = e^{−λ}·e^{λ} = 1
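A truncated version of this normalization sum already reaches 1 to machine precision for moderate λ (the value of λ and the truncation point are arbitrary choices):

```python
from math import exp, factorial

# Truncated check that the Poisson pmf sums to 1; the tail beyond i = 100
# is negligible for lambda = 4.
lam = 4.0
total = sum(exp(-lam) * lam**i / factorial(i) for i in range(101))
print(abs(total - 1.0) < 1e-12)  # True
```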
p(i) = [n!/((n − i)! i!)]·p^i (1 − p)^{n−i} = [n!/((n − i)! i!)]·(λ/n)^i (1 − λ/n)^{n−i}
= [n(n − 1)⋯(n − i + 1)/n^i] · (λ^i/i!) · (1 − λ/n)^n / (1 − λ/n)^i
For large n:
(1 − λ/n)^n ≈ e^{−λ},  n(n − 1)⋯(n − i + 1)/n^i ≈ 1,  (1 − λ/n)^i ≈ 1
Hence,
p(i) ≈ e^{−λ} λ^i / i!
Its variance follows from E[X²] = λ(λ + 1):
Var(X) = λ(λ + 1) − λ² = λ
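The quality of the Poisson approximation can be seen by comparing the exact binomial pmf with e^{−λ}λ^i/i! for large n and small p; the parameter choices below are illustrative:

```python
from math import comb, exp, factorial

# Exact binomial pmf vs. its Poisson approximation with lambda = n*p.
n, p = 1000, 0.004
lam = n * p
for i in range(5):
    binom = comb(n, i) * p**i * (1 - p)**(n - i)
    poisson = exp(-lam) * lam**i / factorial(i)
    print(i, round(binom, 6), round(poisson, 6))
```

For these parameters the two columns agree to about three decimal places, as the derivation above predicts.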
In other words, the probability of achieving the r-th success within r + m − 1 trials has to be calculated, which is as follows:
∑_{n=r}^{r+m−1} C(n − 1, r − 1)·p^r (1 − p)^{n−r}
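This sum is easy to evaluate directly; as a cross-check, it must equal the probability of at least r successes in r + m − 1 binomial trials. The sample values of r, m, and p are assumptions for illustration:

```python
from math import comb

# Probability that the r-th success occurs within r+m-1 trials, each with
# success probability p (negative binomial tail, summed term by term).
def prob_r_successes_within(r, m, p):
    return sum(comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)
               for n in range(r, r + m))

print(prob_r_successes_within(3, 5, 0.5))
```

For r = 3, m = 5, p = 0.5 this equals P(Binomial(7, 0.5) ≥ 3) = 99/128.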
⑧the expected value and variance of the NB random variable:
First we calculate E[X]:
E[X] = ∑_{n=r}^{∞} n·C(n − 1, r − 1)·p^r (1 − p)^{n−r} = (r/p)·∑_{n=r}^{∞} C(n, r)·p^{r+1} (1 − p)^{n−r}
(using n·C(n − 1, r − 1) = r·C(n, r))
= (r/p)·∑_{n=r}^{∞} C((n + 1) − 1, (r + 1) − 1)·p^{r+1} (1 − p)^{(n+1)−(r+1)} = r/p
The term after the sigma sign represents a negative binomial probability mass function with parameters (r + 1, p) evaluated at n + 1, so the sum equals 1. Hence:
E[X] = r/p
For the variance, let Y be a negative binomial random variable with parameters (r + 1, p); the same manipulation gives:
E[X²] = (r/p)·E[Y − 1] = (r/p)·((r + 1)/p − 1)
Var(X) = (r/p)·((r + 1)/p − 1) − (r/p)² = r(1 − p)/p²
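Both closed forms can be checked against a truncated version of the defining sums; r, p, and the truncation point below are example choices:

```python
from math import comb

# Truncated-sum check of E[X] = r/p and Var(X) = r(1-p)/p^2 for the number
# of trials X needed to reach r successes.
r, p = 3, 0.4
N = 400  # truncation point; the geometric tail beyond it is negligible
pmf = {n: comb(n - 1, r - 1) * p**r * (1 - p)**(n - r) for n in range(r, N)}
mean = sum(n * q for n, q in pmf.items())
second = sum(n * n * q for n, q in pmf.items())
print(round(mean, 6), round(second - mean**2, 6))  # ~ r/p and r(1-p)/p^2
```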
E[X] = ∑_{i=1}^{n} i·C(m, i)·C(N − m, n − i) / C(N, n)
using i·C(m, i) = m·C(m − 1, i − 1) and C(N, n) = (N/n)·C(N − 1, n − 1):
E[X] = (nm/N)·∑_{i=1}^{n} C(m − 1, i − 1)·C((N − 1) − (m − 1), (n − 1) − (i − 1)) / C(N − 1, n − 1) = (nm/N)·1 = nm/N
The sum equals 1 because the term inside it is a hypergeometric probability mass function with parameters (n − 1, N − 1, m − 1).
For E[X²]:
E[X²] = ∑_{i=1}^{n} i²·C(m, i)·C(N − m, n − i) / C(N, n) = (nm/N)·∑_{i=1}^{n} [(i − 1) + 1]·C(m − 1, i − 1)·C((N − 1) − (m − 1), (n − 1) − (i − 1)) / C(N − 1, n − 1)
= (nm/N)·[E(Y) + 1] = (nm/N)·[(n − 1)(m − 1)/(N − 1) + 1]
where Y is hypergeometric with parameters (n − 1, N − 1, m − 1), so E(Y) = (n − 1)(m − 1)/(N − 1).
∴ Var(X) = (nm/N)·[(n − 1)(m − 1)/(N − 1) + 1 − nm/N] = (nm/N)·(1 − m/N)·(1 − (n − 1)/(N − 1))
Therefore:
P(X < a) = P(X ≤ a) = F(a) = ∫_{−∞}^{a} f(x) dx
E(X) = ∫_{−∞}^{∞} x f(x) dx
E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx
For example:
f(x) = 1 if 0 ≤ x ≤ 1, 0 otherwise.  E(e^X) = ?
E(e^X) = ∫_{−∞}^{∞} e^x f(x) dx = ∫_{0}^{1} e^x dx = e^x |_0^1 = e^1 − e^0 = e − 1
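The same expectation can be approximated numerically with a simple midpoint-rule integral (a basic quadrature sketch, not the only way to do this):

```python
from math import e, exp

# Midpoint-rule approximation of E(e^X) = integral of e^x over [0, 1],
# since f(x) = 1 there; compared against the exact value e - 1.
steps = 100_000
approx = sum(exp((k + 0.5) / steps) for k in range(steps)) / steps
print(abs(approx - (e - 1)) < 1e-6)  # True
```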
∴ P(a < X < b) = P(Z_a < Z < Z_b) = Φ(Z_b) − Φ(Z_a), where Z_a and Z_b are the standardized endpoints (for a standard normal X this is simply Φ(b) − Φ(a))
note that: Φ(∞) = 1
⑥The normal approximation to the Binomial Distribution:
The probability that in n trials, with a success probability of p, X of them will be a success, can be approximated as:
P(a ≤ (X − np)/√(np(1 − p)) ≤ b) ≅ Φ(b) − Φ(a)
When using the normal approximation for a binomial probability P(X = y), use the normal distribution for P(y − .5 < X < y + .5) (standardize first), which is called the continuity correction. For a binomial probability P(X > y), use the normal distribution for P(X ≥ y + .5).
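The continuity correction can be seen at work numerically; the parameter choices below (n = 50, p = 0.4, y = 20) are assumptions for illustration:

```python
from math import comb, erf, sqrt

# P(X = 20) for X ~ Binomial(50, 0.4), approximated by P(19.5 < X < 20.5)
# under a normal with mean np and sd sqrt(np(1-p)).
n, p = 50, 0.4
mu, sd = n * p, sqrt(n * p * (1 - p))

def Phi(z):  # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

exact = comb(n, 20) * p**20 * (1 - p)**30
approx = Phi((20.5 - mu) / sd) - Phi((19.5 - mu) / sd)
print(round(exact, 4), round(approx, 4))
```

The two values agree to within about a percent, while the uncorrected P(X ≤ 20) − P(X ≤ 19.99…) style of reading would give nothing sensible for a point probability.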
⑧ Integration by Parts:
For calculating ∫ f(x)g(x) dx:
(d/dx)[f(x)g(x)] = f′(x)g(x) + f(x)g′(x)
Integrating both sides gives:
∫ (d/dx)[f(x)g(x)] dx = ∫ f′(x)g(x) dx + ∫ f(x)g′(x) dx
Rearranging gives the integration by parts formula:
∫ 𝑢𝑑𝑣 = 𝑢𝑣 − ∫ 𝑣𝑑𝑢
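As a standard worked example of the formula, take u = x and dv = e^x dx, so du = dx and v = e^x:

```latex
\int x e^{x}\,dx = \underbrace{x e^{x}}_{uv} - \int \underbrace{e^{x}\,dx}_{v\,du}
                 = x e^{x} - e^{x} + C = (x - 1)\,e^{x} + C
```

Differentiating (x − 1)e^x recovers x e^x, confirming the result.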