
Mathematical Statistics

Lecture 2

Inequalities

1. Markov's Inequality (Definition)

Let $g(X)$ be a nonnegative function of the random variable $X$. If $E[g(X)]$ exists, then for every positive constant $c$,

$$P\big(g(X) \ge c\big) \le \frac{E[g(X)]}{c}.$$

Proof: Let $A = \{x : g(x) \ge c\}$. Then

$$E[g(X)] = \int g(x) f_X(x)\,dx \;\ge\; \int_A g(x) f_X(x)\,dx \;\ge\; c \int_A f_X(x)\,dx \;=\; c\,P\big(g(X) \ge c\big),$$

and dividing both sides by $c$ gives the result.

Note: Clearly the survival function $P(X > t) \to 0$ as $t \to \infty$, but how fast?

Example: Suppose $X$ is a nonnegative rv, suppose $y > 0$, and suppose $E[X] < \infty$. Then let $g(X) = X$ and $c = y$. So, by Markov's inequality,

$$P(X > y) \le \frac{E[X]}{y},$$

i.e. the survival function goes to zero at least as fast as $1/y$.
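As a numerical illustration, the following minimal Python sketch compares the exact tail $P(X > y)$ with the Markov bound $E[X]/y$; the choice $X \sim \mathrm{Exp}(1)$, the thresholds, and the Monte Carlo setup are illustrative assumptions, not part of the lecture.

```python
import numpy as np

# Illustrative check of Markov's inequality for X ~ Exp(1), where E[X] = 1
# and the exact tail is P(X > y) = exp(-y).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # Monte Carlo sample of X

for y in [0.5, 1.0, 2.0, 5.0]:
    tail_mc = (x > y).mean()        # simulated P(X > y)
    tail_exact = np.exp(-y)         # exact P(X > y)
    markov_bound = x.mean() / y     # Markov bound E[X]/y
    print(f"y={y:4.1f}  P(X>y)~{tail_mc:.4f}  exact={tail_exact:.4f}  bound={markov_bound:.4f}")
```

The bound is loose for small $y$ but always valid, which is exactly what Markov's inequality promises.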

2. Chebyshev's Inequality

Let the rv $X$ have a distribution of probability about which we assume only that it has finite mean $\mu$ and finite variance $\sigma^2$. Then for every $k > 0$,

$$P\big(|X - \mu| \ge k\sigma\big) \le \frac{1}{k^2},$$

or equivalently,

$$P\big(|X - \mu| < k\sigma\big) \ge 1 - \frac{1}{k^2}.$$

Proof: In Markov's inequality, take $g(X) = (X - \mu)^2$ and $c = k^2\sigma^2$. Then we have

$$P\big((X - \mu)^2 \ge k^2\sigma^2\big) \le \frac{E[(X - \mu)^2]}{k^2\sigma^2} = \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2}.$$

Since the events $(X - \mu)^2 \ge k^2\sigma^2$ and $|X - \mu| \ge k\sigma$ are the same (take the square root on the left-hand side), we have

$$P\big(|X - \mu| \ge k\sigma\big) \le \frac{1}{k^2}.$$

Hence $1/k^2$ is an upper bound for the probability $P(|X - \mu| \ge k\sigma)$.


We can divide by $\sigma$ inside the left-hand event (equivalently, set $\varepsilon = k\sigma$, so $k = \varepsilon/\sigma$) to have

$$P\big(|X - \mu| \ge \varepsilon\big) \le \frac{\sigma^2}{\varepsilon^2}.$$

We can rewrite the inequality as $P(|X - \mu| < \varepsilon) \ge 1 - \sigma^2/\varepsilon^2$. Also, we can state that the bound holds whatever the distribution of $X$, as long as the variance is finite.
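As a quick sanity check, the Python sketch below compares the simulated probability $P(|X - \mu| \ge \varepsilon)$ with the Chebyshev bound $\sigma^2/\varepsilon^2$; the standard normal distribution and the chosen $\varepsilon$ values are illustrative assumptions.

```python
import numpy as np

# Illustrative check of Chebyshev's inequality for X ~ N(0, 1),
# where mu = 0 and sigma^2 = 1.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
mu, var = x.mean(), x.var()

for eps in [1.0, 2.0, 3.0]:
    prob_mc = (np.abs(x - mu) >= eps).mean()  # simulated P(|X - mu| >= eps)
    bound = var / eps**2                      # Chebyshev bound sigma^2 / eps^2
    print(f"eps={eps:.1f}  P(|X-mu|>=eps)~{prob_mc:.4f}  bound={bound:.4f}")
```

For the normal distribution the true tail probabilities are much smaller than the bound; Chebyshev trades sharpness for the fact that it needs only a finite variance.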

Weak Law of Large Numbers (WLLN): Let $X_1, X_2, \dots$ be a sequence of independent and identically distributed random variables, with finite expected value $\mu$ and finite variance $\sigma^2$. Let $\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$. Then for every $\varepsilon > 0$, the WLLN states

$$\lim_{n \to \infty} P\big(|\bar{X}_n - \mu| \ge \varepsilon\big) = 0,$$

or equivalently,

$$\lim_{n \to \infty} P\big(|\bar{X}_n - \mu| < \varepsilon\big) = 1.$$

Proof: Since $X_1, \dots, X_n$ are independent and have the same distribution,

$$E[\bar{X}_n] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \mu, \qquad \mathrm{Var}(\bar{X}_n) = \frac{1}{n^2}\sum_{i=1}^n \mathrm{Var}(X_i) = \frac{\sigma^2}{n}.$$

From Chebyshev, for every $\varepsilon > 0$,

$$P\big(|\bar{X}_n - \mu| \ge \varepsilon\big) \le \frac{\mathrm{Var}(\bar{X}_n)}{\varepsilon^2} = \frac{\sigma^2}{n\varepsilon^2} \to 0 \quad \text{as } n \to \infty.$$
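A minimal simulation sketch of the WLLN; the Uniform$(0,1)$ population, $\varepsilon = 0.01$, and the sample sizes are illustrative assumptions. As $n$ grows, the fraction of sample means falling farther than $\varepsilon$ from $\mu$ shrinks toward zero.

```python
import numpy as np

# Illustrative WLLN demo: X_i ~ Uniform(0, 1), so mu = 0.5.
rng = np.random.default_rng(2)
mu, eps, reps = 0.5, 0.01, 1_000

for n in [10, 100, 1_000, 10_000]:
    # reps independent sample means, each based on n observations
    xbar = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
    prob = (np.abs(xbar - mu) >= eps).mean()  # estimate of P(|Xbar_n - mu| >= eps)
    print(f"n={n:6d}  P(|Xbar-mu|>=eps)~{prob:.4f}")
```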

Convergence in Probability (Definition): Let $\{X_n\}$ be a sequence of rv's and let $X$ be a rv defined on the same sample space. We say that $X_n$ converges in probability to $X$ if for all $\varepsilon > 0$,

$$\lim_{n \to \infty} P\big(|X_n - X| \ge \varepsilon\big) = 0,$$

or equivalently,

$$\lim_{n \to \infty} P\big(|X_n - X| < \varepsilon\big) = 1.$$

If so, we write $X_n \xrightarrow{P} X$. In this notation, the WLLN says $\bar{X}_n \xrightarrow{P} \mu$.


A parametric family, or parameterized family, is a family of objects whose definitions depend on a set of parameters. In statistics this is typically a family of probability distributions indexed by a parameter $\theta$.
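In code, a parametric family can be thought of as a map from parameter values to distribution objects. The Python sketch below is one minimal way to express this, using the normal family as an arbitrary illustrative choice.

```python
from scipy import stats

# A minimal sketch of a parametric family: the normal family
# { N(mu, sigma^2) : mu in R, sigma > 0 }, indexed by (mu, sigma).
def normal_family(mu, sigma):
    """Return the member of the normal family with the given parameter values."""
    return stats.norm(loc=mu, scale=sigma)

# Two different members of the same family:
standard = normal_family(0.0, 1.0)
shifted = normal_family(2.0, 0.5)
print(standard.mean(), standard.std())  # 0.0 1.0
print(shifted.mean(), shifted.std())    # 2.0 0.5
```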

Example: { … }

Five Families:
1. { … }
2. { … }
3. { … }
4. { … } (n is known)
5. { … }

Example: Take an iid sample $X_1, \dots, X_n$ from one of the families above, with population mean $\mu$.

Note (Golden Rule): The GR of estimation suggests a natural way to choose an estimator: to estimate the population mean $\mu$, try the sample mean $\bar{X}_n$. Whatever you do to the population, do it to your sample. How well does $\bar{X}_n$ estimate $\mu$ here? Note that, by the WLLN, $\bar{X}_n$ is usually close to $\mu$ when $n$ is large.
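The rule can be written as a table of plug-in correspondences; the variance and empirical-cdf rows are further examples of the same idea, added for illustration.

```latex
\begin{align*}
\text{population mean: } \mu = E[X]
  &\;\longrightarrow\; \text{sample mean: } \bar{X}_n = \tfrac{1}{n}\textstyle\sum_{i=1}^n X_i \\
\text{population variance: } \sigma^2 = E[(X-\mu)^2]
  &\;\longrightarrow\; \text{sample variance: } \tfrac{1}{n}\textstyle\sum_{i=1}^n (X_i - \bar{X}_n)^2 \\
\text{population cdf: } F(t) = P(X \le t)
  &\;\longrightarrow\; \text{empirical cdf: } \hat{F}_n(t) = \tfrac{1}{n}\textstyle\sum_{i=1}^n \mathbf{1}\{X_i \le t\}
\end{align*}
```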

Consistent (Definition): An estimator $\hat{\theta}_n$ is said to be consistent if it converges in probability to the true value of the parameter as $n \to \infty$; that is, according to the definition of convergence in probability, for every $\varepsilon > 0$,

$$\lim_{n \to \infty} P\big(|\hat{\theta}_n - \theta| \ge \varepsilon\big) = 0,$$

or equivalently,

$$\lim_{n \to \infty} P\big(|\hat{\theta}_n - \theta| < \varepsilon\big) = 1.$$

Unbiased (Definition): An estimator $\hat{\theta}$ is an unbiased estimator for a function $\tau(\theta)$ of the parameter if

$$E[\hat{\theta}] = \tau(\theta).$$

Example: $E[\bar{X}_n] = E\!\left[\frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{1}{n}\sum_{i=1}^n E[X_i] = \mu$, so $\bar{X}_n$ is an unbiased estimator of $\mu$.

Example: $\left\{ f(x; \theta) = \frac{1}{\theta},\ 0 < x < \theta \right\}$, i.e. $X_1, \dots, X_n$ iid Uniform$(0, \theta)$, with estimator $\hat{\theta} = \max\{X_1, \dots, X_n\}$.

1. Is $\hat{\theta}$ consistent for $\theta$? Let $\varepsilon > 0$ (with $\varepsilon < \theta$), and consider

$$P\big(|\hat{\theta} - \theta| \ge \varepsilon\big) = P\big(\hat{\theta} \le \theta - \varepsilon\big) = \prod_{i=1}^n P(X_i \le \theta - \varepsilon) = \left(\frac{\theta - \varepsilon}{\theta}\right)^n \to 0 \quad \text{as } n \to \infty,$$

since $\hat{\theta} \le \theta$ always, so $|\hat{\theta} - \theta| \ge \varepsilon$ can happen only when $\hat{\theta} \le \theta - \varepsilon$. Hence $\hat{\theta}$ is consistent for $\theta$.

2. Is $\hat{\theta}$ unbiased? Consider the cdf of $\hat{\theta}$: for $0 \le t \le \theta$, $P(\hat{\theta} \le t) = (t/\theta)^n$, so its density is $n t^{n-1}/\theta^n$ and

$$E[\hat{\theta}] = \int_0^\theta t \cdot \frac{n t^{n-1}}{\theta^n}\,dt = \frac{n}{n+1}\,\theta \ne \theta.$$

So $\hat{\theta}$ is biased: it systematically underestimates $\theta$.
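A short simulation sketch showing both facts at once: the sample maximum concentrates near $\theta$ as $n$ grows, yet its average stays near $n\theta/(n+1)$, below $\theta$. The value $\theta = 2$, the sample sizes, and $\varepsilon = 0.1$ are illustrative assumptions.

```python
import numpy as np

# Illustrative check for the Uniform(0, theta) example with theta = 2:
# theta_hat = max(X_1, ..., X_n) is consistent but biased low.
rng = np.random.default_rng(3)
theta, reps = 2.0, 10_000

for n in [5, 20, 100]:
    samples = rng.uniform(0.0, theta, size=(reps, n))
    theta_hat = samples.max(axis=1)                  # reps copies of the max estimator
    print(f"n={n:4d}  E[theta_hat]~{theta_hat.mean():.4f}"
          f"  theory={n * theta / (n + 1):.4f}"
          f"  P(|theta_hat-theta|>=0.1)~{(np.abs(theta_hat - theta) >= 0.1).mean():.4f}")
```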


Since $E[\hat{\theta}] = \frac{n}{n+1}\theta$, a corrected estimator: use

$$\tilde{\theta} = \frac{n+1}{n}\max\{X_1, \dots, X_n\} = \frac{n+1}{n}\,\hat{\theta}.$$

1. Is $\tilde{\theta}$ consistent for $\theta$? Let $\varepsilon > 0$ and consider

$$P\big(|\tilde{\theta} - \theta| \ge \varepsilon\big) \le P\big(|\tilde{\theta} - \hat{\theta}| + |\hat{\theta} - \theta| \ge \varepsilon\big) \le P\big(|\tilde{\theta} - \hat{\theta}| \ge \tfrac{\varepsilon}{2}\big) + P\big(|\hat{\theta} - \theta| \ge \tfrac{\varepsilon}{2}\big).$$

Here $|\tilde{\theta} - \hat{\theta}| = \hat{\theta}/n \le \theta/n$, so the first term is $0$ once $n > 2\theta/\varepsilon$; the second term goes to $0$ because $\hat{\theta}$ is consistent. Hence $\tilde{\theta}$ is consistent for $\theta$.

2. Is $\tilde{\theta}$ unbiased?

$$E[\tilde{\theta}] = \frac{n+1}{n}E[\hat{\theta}] = \frac{n+1}{n}\cdot\frac{n}{n+1}\,\theta = \theta,$$

so $\tilde{\theta}$ is an unbiased estimator of $\theta$.
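Continuing the illustrative simulation above (same assumed $\theta = 2$), a minimal sketch checking that the corrected estimator averages to $\theta$ while still concentrating around it:

```python
import numpy as np

# Illustrative check: tilde_theta = (n+1)/n * max(X_i) is unbiased for theta.
rng = np.random.default_rng(4)
theta, reps = 2.0, 10_000

for n in [5, 20, 100]:
    samples = rng.uniform(0.0, theta, size=(reps, n))
    theta_tilde = (n + 1) / n * samples.max(axis=1)   # corrected estimator
    print(f"n={n:4d}  E[theta_tilde]~{theta_tilde.mean():.4f}"
          f"  P(|theta_tilde-theta|>=0.1)~{(np.abs(theta_tilde - theta) >= 0.1).mean():.4f}")
```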
