
PROBABILITY THEORY

Study material
STATISTICS
COMPLEMENTARY COURSE
For
I SEMESTER B.Sc. MATHEMATICS

(2011 Admission)

UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
CALICUT UNIVERSITY P.O. MALAPPURAM, KERALA, INDIA - 673 635


UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
Study Material
STATISTICS
COMPLEMENTARY COURSE
I SEMESTER B.Sc. MATHEMATICS

Probability Theory
Prepared by :
Sri.Z.A.Ashraf
Department of Mathematics,
Govt. Arts & Science College,
Calicut - 18
Edited & scrutinized by :
Dr.C.P.Mohammed (Retd.),
Poolakkandy House,
Nanmanda P.O.
Calicut District.

Reserved

1 PROBABILITY
1.1 INTRODUCTION
1.2 EXPERIMENT
1.3 RANDOM EXPERIMENT
1.3.1 Sample Space
1.3.2 Sample Points
1.3.3 Event
1.4 CLASSICAL DEFINITION
1.4.1 Definition
1.4.2 Limitations of Classical Definition
1.5 FREQUENCY DEFINITION
1.6 AXIOMATIC DEFINITION
1.6.1 Borel Field or σ-algebra of Events
1.6.2 Axioms of Probability
1.7 THEOREMS ON PROBABILITY
1.8 CONDITIONAL PROBABILITY
1.9 MULTIPLICATION LAW OF PROBABILITY
1.10 INDEPENDENT EVENTS
1.10.1 Pairwise and Mutual Independence
1.11 BAYES THEOREM
1.12 MULTIPLE CHOICE QUESTIONS
1.13 EXAMPLES
1.14 PROBLEMS

2 RANDOM VARIABLES
2.1 DEFINITION
2.1.1 Types of Random Variables
2.2 PROBABILITY MASS FUNCTION
2.3 PROBABILITY DENSITY FUNCTION
2.4 DISTRIBUTION FUNCTION OF A R.V
2.4.1 Properties of Distribution Function
2.5 MULTIPLE CHOICE QUESTIONS
2.6 EXAMPLES
2.7 PROBLEMS

3 MATHEMATICAL EXPECTATION
3.1 INTRODUCTION
3.2 DEFINITION
3.2.1 Expectation of Function of R.V
3.2.2 Properties of Expectation
3.3 MOMENTS
3.3.1 Moments about A
3.3.2 Moments about Origin
3.3.3 Central Moments
3.3.4 Measures of Central Tendency
3.3.5 Measures of Variation
3.3.6 Measures of Skewness
3.3.7 Measures of Kurtosis
3.4 MOMENT GENERATING FUNCTION
3.4.1 Definition
3.4.2 Properties of mgf
3.4.3 Characteristic Function
3.5 MULTIPLE CHOICE QUESTIONS
3.6 EXAMPLES
3.7 PROBLEMS

4 CHANGE OF VARIABLES
4.1 THEOREM
4.2 MULTIPLE CHOICE QUESTIONS
4.3 EXAMPLES
4.4 PROBLEMS


SYLLABUS
Semester I - PROBABILITY THEORY
Module 1. Probability concepts: Random experiment, sample space, event, classical definition,
axiomatic definition and relative frequency definition of probability, concept of probability measure.
Addition and multiplication theorem (limited to three events). Conditional probability and Bayes
Theorem-numerical problems. (25 hours)
Module 2. Random variables: Definition- probability distribution of a random variable, probability
mass function, probability density function and (cumulative) distribution function and their properties.
(15 hours)
Module 3. Mathematical Expectations: Expectation of a random variable, moments, relation between
raw and central moments, moment generating function (mgf) and its properties. Measures of
skewness and kurtosis in terms of moments. Definition of characteristic function and its simple
properties. ( 20 hours )
Module 4 Change of variables: Discrete and continuous cases (univariate only), simple problems.
(12 hours)
Books for Reference
1. V.K. Rohatgi: An Introduction to Probability Theory and Mathematical Statistics, Wiley Eastern.
2. S.C. Gupta and V.K. Kapoor: Fundamentals of Mathematical Statistics, Sultan Chand and Sons.
3. Mood A.M., Graybill F.A. and Boes D.C.: Introduction to the Theory of Statistics, McGraw Hill.
4. John E. Freund: Mathematical Statistics (Sixth Edition), Pearson Education (India), New Delhi.


Chapter 1
PROBABILITY
1.1 INTRODUCTION
In our day to day life we face many situations in which uncertainty plays a vital role. We use statements like "there is a chance of rain today" or "probably I will get an A grade in the university examination"; in all such contexts the words chance and probably indicate uncertainty. The word probability is associated with the occurrence of uncertainty, and probability theory is the discipline which tries to quantify the concept of chance or likelihood.

1.2 EXPERIMENT
An experiment is an activity which can be repeated under more or less the same conditions and which has some specific outcome or outcomes. Combining hydrogen with oxygen under certain conditions to form H2O, or tossing a coin and checking whether the face shows Head or Tail, are examples of experiments. Generally, experiments are of two types:

1. Deterministic experiment: In a deterministic experiment, if we repeat the experiment the outcome or outcomes will not change, provided the initial conditions of the experiment remain the same. Experiments like combining hydrogen and oxygen are examples of deterministic experiments.

2. Random experiment or stochastic experiment: In a random experiment the result may not be the same if we repeat the experiment under similar conditions, but the result will be one among a set of possible outcomes. Tossing a coin or throwing a die are typical examples of this type of experiment.

1.3 RANDOM EXPERIMENT


A Random experiment may be defined as an experiment with more than one possible outcome,
which may be repeated any number of times under more or less similar conditions and the
outcomes of which vary irregularly from repetition to repetition. For example, throwing a die and
looking on the face value is a random experiment as the face value will be any value from the
set {1, 2, 3, 4, 5, 6}, which cannot be predicted before conducting the experiment. Similarly
taking a ball from a box containing different coloured balls and looking at the colour of the selected ball, or inspecting the life length of a bulb, are random experiments. Random experiments are common in all kinds of research activities. Administering a new medicine to a group of people and measuring its effectiveness, taking a product from a factory and verifying whether it is defective, and providing training for better academic achievement and measuring its effectiveness are some examples of random experiments.

1.3.1 Sample Space


The set of all simple outcomes or sample points of a random experiment is called its sample space, which is denoted by the letter S or Ω. E.g.: the sample space of the random experiment of throwing a die is S = {1, 2, 3, 4, 5, 6}, and the sample space of the random experiment of tossing a coin is S = {H, T}.

1.3.2 Sample points


Every element in the set of all outcomes of a random experiment is called a sample point.


1.3.3 Event
Those subsets of the sample space to which probabilities are assigned by the probability set function under consideration are called events. E.g.: consider the random experiment of throwing a die; the sample space is S = {1, 2, 3, 4, 5, 6}. One can define an event E as getting an odd number; then E = {1, 3, 5}. Note that the event E is a subset of the sample space S.

Types of Events

Simple and Compound Event: If an event has only one outcome, that is, only one sample point, it is called a simple event. E.g.: the event of getting head in the experiment of tossing a coin.
If an event has more than one sample point it is called a compound event. E.g.: the event of getting an odd number when rolling a die.

Mutually Exclusive Events: Two events A and B are said to be mutually exclusive if the occurrence of any one of the events excludes the occurrence of the other.

Exhaustive Events: Let E1, E2, ..., En be n events of a sample space S. They are said to be exhaustive if E1 ∪ E2 ∪ ... ∪ En = S.

1.4 CLASSICAL DEFINITION


There are different approaches in defining probability. The oldest and simplest one is
classical definition or mathematical definition.

1.4.1 Definition
If a random experiment results in n exhaustive, mutually exclusive and equally likely cases, m of which are favourable (m ≤ n) to the occurrence of an event A, then the probability of A, denoted P(A), is defined as
P(A) = m/n.

Note:
1. We can also state the above as: the odds in favour of A are m : (n - m), or the odds against A are (n - m) : m.

2. For any event A, 0 ≤ P(A) ≤ 1.

3. If A is a sure event, then P(A) = 1.

4. If A is an impossible event, then P(A) = 0.
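As an added illustration (not part of the original material), the classical definition can be checked by direct enumeration on a small sample space; in this Python sketch the event "sum of two dice equals 9" is only an assumed example.

    from itertools import product

    # All 36 equally likely outcomes of throwing two dice
    sample_space = list(product(range(1, 7), repeat=2))

    # Favourable cases for the assumed event A: "sum of the face values is 9"
    favourable = [w for w in sample_space if sum(w) == 9]

    # Classical definition: P(A) = m / n
    print(len(favourable) / len(sample_space))   # 4/36 = 0.111...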

1.4.2 Limitations Of Classical Definition


The classical definition of probability is very easy to understand, but it may not be applicable in all situations. The following are some of its limitations.

1. If the outcomes cannot be considered equally likely, the classical definition fails.

2. When the total number of possible outcomes n becomes infinite, this definition cannot be applied.

1.5 FREQUENCY DEFINITION


Let the trials be repeated a large number of times under essentially homogeneous conditions. The limit of the ratio of the number of times an event A happens (say m) to the total number of trials (say n), as the number of trials tends to infinity, is called the probability of the event A:
P(A) = lim(n→∞) m/n.

1.6 AXIOMATIC DEFINITION


The axiomatic definition of probability was introduced by the Russian mathematician A.N. Kolmogorov, and it approaches probability as a measure. To state the axioms of probability we first have to define a Borel field.

1.6.1 Borel field or σ-algebra of events
Let S be a non-empty set and F be a collection of subsets of S. Then F is called a Borel field (σ-field) if
1. F is non-empty
2. The elements of F are subsets of S
3. If A ∈ F then Ac ∈ F
4. The union of any countable collection of elements of F is an element of F, i.e. if Ai ∈ F for i = 1, 2, ..., then A1 ∪ A2 ∪ ... ∈ F.

1.6.2 Axioms of probability


A probability function P[·] is a set function with domain F (a σ-field of events) and counterdomain the interval [0, 1] which satisfies the following axioms:

1. P(A) ≥ 0 for every A ∈ F
2. P(S) = 1
3. If A1, A2, ... is a sequence of mutually exclusive events in F then P[∪i Ai] = Σi P(Ai).
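The sketch below (an added illustration; the sample space and weights are assumptions) shows how a probability set function on a finite sample space satisfies the three axioms.

    # Assumed finite sample space with a non-negative weight for each sample point
    weights = {"a": 0.5, "b": 0.2, "c": 0.3}

    def P(event):
        """Probability set function: sum of the weights of the sample points in the event."""
        return sum(weights[w] for w in event)

    assert P({"a"}) >= 0                                        # Axiom 1: P(A) >= 0
    assert abs(P(set(weights)) - 1) < 1e-12                     # Axiom 2: P(S) = 1
    assert abs(P({"a", "b"}) - (P({"a"}) + P({"b"}))) < 1e-12   # Axiom 3 for two disjoint events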

1.7 THEOREMS ON PROBABILITY


Theorem 1: If φ denotes the impossible event, then P(φ) = 0.
Proof: Let S be the sample space and φ the null (impossible) event. We have S ∪ φ = S. Therefore P(S ∪ φ) = P(S), i.e. P(S) + P(φ) = P(S), i.e. 1 + P(φ) = 1. Hence P(φ) = 0. Proved.

Theorem 2: For any event A, P(Ac) = 1 - P(A).
Proof: If Ac is the complement of A, we have A ∪ Ac = S. Then P(A ∪ Ac) = P(S), i.e. P(A) + P(Ac) = 1, i.e. P(Ac) = 1 - P(A). Proved.

Theorem 3 (Addition Theorem): If A and B are two events, then
P(A ∪ B) = P(A) + P(B) - P(A ∩ B).
Proof: We have A ∪ B = A ∪ (Ac ∩ B), therefore
P(A ∪ B) = P(A) + P(Ac ∩ B) ................. (1)
Now B = (A ∩ B) ∪ (Ac ∩ B), hence P(B) = P(A ∩ B) + P(Ac ∩ B), i.e.
P(Ac ∩ B) = P(B) - P(A ∩ B) ................. (2)
Substituting (2) in (1),
P(A ∪ B) = P(A) + P(B) - P(A ∩ B). Proved.

Note 1: If the two events A and B are disjoint, P(A ∪ B) = P(A) + P(B).

Note 2: For three events A, B, C,
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(A ∩ B) - P(B ∩ C) - P(A ∩ C) + P(A ∩ B ∩ C).

Note 3: For n events A1, A2, ..., An,
P(A1 ∪ A2 ∪ ... ∪ An) = Σi P(Ai) - Σi<j P(Ai ∩ Aj) + Σi<j<k P(Ai ∩ Aj ∩ Ak) - ... + (-1)^(n+1) P(A1 ∩ A2 ∩ ... ∩ An).
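For instance, the three-event form in Note 2 can be verified by enumeration; in the Python sketch below the three events on two dice are assumptions chosen only for illustration.

    from itertools import product

    S = set(product(range(1, 7), repeat=2))   # two dice
    A = {w for w in S if w[0] == 1}           # assumed event: first die shows 1
    B = {w for w in S if w[1] == 6}           # assumed event: second die shows 6
    C = {w for w in S if sum(w) == 7}         # assumed event: sum is 7
    P = lambda E: len(E) / len(S)

    lhs = P(A | B | C)
    rhs = P(A) + P(B) + P(C) - P(A & B) - P(B & C) - P(A & C) + P(A & B & C)
    print(abs(lhs - rhs) < 1e-12)             # True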

Theorem 4: For every sequence of events A1, A2, ...,
P[∪i Ai] ≤ P(A1) + P(A2) + ...
Proof: Decompose the union into a union of disjoint events:
∪i Ai = A1 ∪ (A1c ∩ A2) ∪ (A1c ∩ A2c ∩ A3) ∪ ...
Hence P(∪i Ai) = P(A1) + P(A1c ∩ A2) + P(A1c ∩ A2c ∩ A3) + ...
≤ P(A1) + P(A2) + P(A3) + ..., since P(A1c ∩ A2) ≤ P(A2), etc.

1.8 CONDITIONAL PROBABILITY


Let A and B be any two events. The probability of the event A given that the event B has already occurred is known as the conditional probability of A given B, denoted P(A|B), and is defined as P(A|B) = P(A ∩ B)/P(B), provided P(B) ≠ 0.
Similarly the conditional probability of B given A is defined as P(B|A) = P(A ∩ B)/P(A), provided P(A) ≠ 0.
Remarks:
1. For P(B) > 0, P(A|B) ≥ P(A ∩ B), since P(B) ≤ 1.
2. P(A|B) is not defined if P(B) = 0.
3. P(B|B) = 1.
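A small numerical sketch of this definition (added here; the two events are assumed for illustration): conditioning on B simply restricts attention to the outcomes inside B.

    from itertools import product

    S = set(product(range(1, 7), repeat=2))
    B = {w for w in S if sum(w) == 9}           # sum of the two dice is 9
    A = {w for w in S if w[0] in (2, 5, 6)}     # first die shows 2, 5 or 6
    P = lambda E: len(E) / len(S)

    # P(A|B) = P(A ∩ B) / P(B), provided P(B) != 0
    print(P(A & B) / P(B))                      # 0.5: two of the four outcomes in B qualify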

1.9 MULTIPLICATION LAW OF PROBABILITY


For any two events A and B, P(A ∩ B) = P(A)P(B|A) provided P(A) > 0, or P(A ∩ B) = P(B)P(A|B) provided P(B) > 0, where P(B|A) and P(A|B) are the conditional probabilities of B and A respectively.
Proof
Suppose the sample space consists of N sample points of which NA are favourable for the occurrence of A, NB are favourable for the occurrence of B, and NAB are favourable for the compound event A ∩ B. Then
P(A) = NA/N, P(B) = NB/N and P(A ∩ B) = NAB/N.
The conditional probability P(A|B) refers to the sample space of the NB occurrences, out of which NAB pertain to the occurrence of A. Therefore P(A|B) = NAB/NB and, similarly, P(B|A) = NAB/NA.
Now P(A ∩ B) = NAB/N = (NAB/NA)(NA/N) = P(B|A)P(A) ................. (1)
and P(A ∩ B) = NAB/N = (NAB/NB)(NB/N) = P(A|B)P(B) ................. (2)
Thus we get
P(A ∩ B) = P(A)P(B|A), P(A) > 0
= P(B)P(A|B), P(B) > 0.
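As a quick illustration of the multiplication law (added here; the urn composition is an assumption), consider drawing two balls without replacement and multiplying P(first red) by P(second red | first red).

    from fractions import Fraction

    red, white = 5, 3                                               # assumed urn: 5 red, 3 white
    p_first_red = Fraction(red, red + white)                        # P(A)
    p_second_red_given_first = Fraction(red - 1, red + white - 1)   # P(B|A)

    # Multiplication law: P(A ∩ B) = P(A) P(B|A)
    print(p_first_red * p_second_red_given_first)                   # 5/14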

1.10 INDEPENDENT EVENTS


Two or more events are said to be independent if the probability of any one of them is not affected by supplementary knowledge concerning the occurrence of any number of the remaining events.

1.10.1 Pairwise and Mutual Independence

A set of events A1, A2, ..., An is said to be pairwise independent if every pair of distinct events is independent, i.e. P(Ai ∩ Aj) = P(Ai)P(Aj) for all i and j, i ≠ j.
A set of events A1, A2, ..., An is said to be mutually independent if P(Ai ∩ Aj ∩ ... ∩ Ar) = P(Ai)P(Aj)...P(Ar) for every subset (Ai, Aj, ..., Ar) of A1, A2, ..., An.

Note: If A and B are two independent events, then P(A ∩ B) = P(A)P(B).
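The distinction between pairwise and mutual independence can be seen in a standard two-coin illustration (added here as a sketch, not part of the original text).

    from itertools import product
    from fractions import Fraction

    S = list(product("HT", repeat=2))              # two fair coins
    P = lambda E: Fraction(len(E), len(S))

    A = {w for w in S if w[0] == "H"}              # head on the first coin
    B = {w for w in S if w[1] == "H"}              # head on the second coin
    C = {w for w in S if w.count("H") == 1}        # exactly one head

    # Pairwise independence holds for every pair ...
    print(P(A & B) == P(A) * P(B), P(A & C) == P(A) * P(C), P(B & C) == P(B) * P(C))
    # ... but mutual independence fails: P(A ∩ B ∩ C) = 0 while P(A)P(B)P(C) = 1/8
    print(P(A & B & C) == P(A) * P(B) * P(C))      # False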

1.11 BAYES THEOREM


Let S be a sample space partitioned into n mutually exclusive events B1, B2, ..., Bn such that P(Bi) > 0, i = 1, 2, ..., n. Let A be any event of S for which P(A) > 0. Then the probability of the event Bi (i = 1, 2, ..., n) given the event A is
P(Bi|A) = P(Bi)P(A|Bi) / Σj P(Bj)P(A|Bj).

Proof: From the definition of conditional probability,
P(Bi|A) = P(A ∩ Bi)/P(A) = P(Bi)P(A|Bi)/P(A) ................. (1)
But A = A ∩ S = A ∩ (∪j Bj) = ∪j (A ∩ Bj).
Therefore P(A) = P[∪j (A ∩ Bj)] = Σj P(A ∩ Bj) = Σj P(Bj)P(A|Bj) ................. (2)
Substituting (2) in (1) we get
P(Bi|A) = P(Bi)P(A|Bi) / Σj P(Bj)P(A|Bj). Proved.
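A minimal numerical sketch of the theorem (added illustration; the priors and conditional probabilities below are assumed values, not from the text):

    from fractions import Fraction

    # Assumed partition B1, B2, B3 with priors P(Bi) and likelihoods P(A|Bi)
    prior      = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
    likelihood = [Fraction(1, 10), Fraction(1, 5), Fraction(1, 2)]

    # Total probability: P(A) = sum over j of P(Bj) P(A|Bj)
    p_A = sum(p * l for p, l in zip(prior, likelihood))

    # Bayes theorem: P(Bi|A) = P(Bi) P(A|Bi) / P(A)
    posterior = [p * l / p_A for p, l in zip(prior, likelihood)]
    print(posterior, sum(posterior))   # the posterior probabilities add to 1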

1.12 MULTIPLE CHOICE QUESTIONS


1. Two coins are tossed simultaneously. If one of them turns up head, what is the probability that the other one also turns up head?
(a) 0.01
(b) 0.05
(c) 0.25
(d) 0.50
2. Which of the following functions constitutes S = {a, b, c} as a probability space?
(a) p(a) = 0.5, p(b) = 0.3, p(c) = 0.3
(b) p(a) = 0.5, p(b) = 0.2, p(c) = 0.3
(c) p(a) = 0.5, p(b) = -0.3, p(c) = 0.3
(d) p(a) = 0.5, p(b) = 0.7, p(c) = -0.2
3. If an unbiased coin is tossed once, then the two events head and tail are
(a) Mutually exclusive
(b) Equally likely
(c) Exhaustive
(d) All the above
4. If the letters of the word REGULATIONS are arranged at random, what is the chance that there will be exactly 4 letters between R and E?
(a) 10/55
(b) 6/55
(c) 10/25
(d) None of these
5. An event that can be split into further events is known as
(a) Simple event
(b) Complex event
(c) Composite event
(d) None of these
6. If A and B are two events such that P(A) = 1/4, P(B) = 1/3 and P(A ∪ B) = 1/2, what will be P(A/B)?
(a) 1/4
(b) 1/2
(c) 1/3
(d) None of these
7. Sangeetha speaks the truth in 20% of cases and Jaseena speaks the truth in 30% of cases. What is the probability that they will contradict each other on a particular issue?
(a) 1/4
(b) 1/2
(c) 1/3
(d) None of these
8. If A and B are independent events, what is P(A/B)
(a) P(B/A)
(b) P(A)
(c) P(B)
(d) None of these
9. Which one of the following statements is true?
(a) If A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B)
(b) If A and B are mutually independent, P(A ∩ B) = P(A)P(B)
(c) Both of these are true
(d) None of these
10. If A and B are independent events, then which of the following statements are true?
(a) A and Bc are independent
(b) Ac and B are independent
(c) Ac and Bc are independent
(d) All the above

1.13 EXAMPLES

Example 1
Three coins are tossed. What is the probability of getting
1. All heads
2. Exactly one head
3. Exactly two heads
4. At least one head
Solution
Here the sample space is S = {HHH, HHT, HTH, THH, TTH, THT, HTT, TTT}, so the total number of cases is 8.

1. There is only one case with all heads, (HHH), so the number of favourable cases is 1. Hence P(all heads) = 1/8.

2. There are three cases with exactly one head, (HTT, THT, TTH). Hence P(exactly one head) = 3/8.

3. There are three cases with exactly two heads, (HHT, HTH, THH). Hence P(exactly two heads) = 3/8.

4. In all 7 cases except (TTT) there is at least one head. Hence P(at least one head) = 7/8.

Example 2
A box contains 5 red, 3 white and 6 blue balls. If 3 balls are drawn at random, determine the probability that
1. All 3 are blue
2. 2 are red and 1 is white
3. One of each colour is drawn
Solution
Given that the box contains 5 red, 3 white and 6 blue balls, if three balls are drawn at random there are 14C3 = 364 possible outcomes.

1. Since there are 6 blue balls, the number of favourable cases is 6C3 = 20. Therefore P(getting 3 blue balls) = 6C3/14C3 = 20/364 = 5/91.

2. Similarly, P(getting 2 red and 1 white ball) = (5C2 × 3C1)/14C3 = 30/364 = 15/182.

3. And P(one red, one white and one blue ball) = (5C1 × 3C1 × 6C1)/14C3 = 90/364 = 45/182.

Example 3
A card is drawn from a well shuffled pack of playing cards. Find the probability that it is
1. A club
2. A king
3. The ace of spades
Solution
There are 52 different cards in a pack of playing cards, so the total number of cases is 52C1 = 52.

1. There are 13 clubs in a pack, so the number of favourable cases is 13C1 = 13. Hence the probability of getting a club is 13/52 = 1/4.

2. There are 4 kings in a pack, so the number of favourable cases is 4C1 = 4 and the probability of getting a king is 4/52 = 1/13.

3. In a pack of cards there is only one ace of spades, so the probability of getting the ace of spades is 1/52.

Example 4
Given P(A) = 0.30, P(B) = 0.78 and P(A ∩ B) = 0.16, evaluate
1. P(Ac ∩ Bc)
2. P(Ac ∪ Bc)
3. P(A ∩ Bc)
Solution
Given P(A) = 0.30, P(B) = 0.78, P(A ∩ B) = 0.16.
1. P(Ac ∩ Bc) = P[(A ∪ B)c] = 1 - P(A ∪ B) = 1 - P(A) - P(B) + P(A ∩ B)
= 1 - 0.30 - 0.78 + 0.16 = 0.08.
2. P(Ac ∪ Bc) = P[(A ∩ B)c] = 1 - P(A ∩ B) = 1 - 0.16 = 0.84.
3. P(A ∩ Bc) = P[A - (A ∩ B)] = P(A) - P(A ∩ B) = 0.30 - 0.16 = 0.14.

Example 5
The probability that a student passes the Statistics course and the probability that he passes both the Statistics and Mathematics courses are given, and the probability that he passes at least one of the two courses is 4/5. What is the probability that he passes the Mathematics course?
Solution
Let A be the event that the student passes the Statistics course and B the event that the same student passes the Mathematics course. We are given P(A), P(A ∩ B) and P(A ∪ B) = 4/5, and we have to find P(B).
By the addition theorem, P(A ∪ B) = P(A) + P(B) - P(A ∩ B). Therefore
P(B) = P(A ∪ B) - P(A) + P(A ∩ B).

Example 6
A problem in statistics is given to 3 students A, B and C, whose chances of solving it are given. What is the probability that the problem will be solved?
Solution
Define the events
A - the problem is solved by student A
B - the problem is solved by student B
C - the problem is solved by student C
with the given probabilities P(A), P(B) and P(C).

The problem will be solved if at least one of them solves it, so we have to find P(A ∪ B ∪ C). Since the three students work independently,
P(A ∪ B ∪ C) = 1 - P[(A ∪ B ∪ C)c] = 1 - P(Ac ∩ Bc ∩ Cc) = 1 - (1 - P(A))(1 - P(B))(1 - P(C)).

Example 7
If two dice are thrown, what is the probability that the sum is equal to 9?
Solution
When two dice are thrown there are 36 mutually exclusive and equally likely cases: (1, 1), (1, 2), (1, 3), ..., (6, 6).
Among these, exactly 4 cases, viz. (3, 6), (6, 3), (4, 5), (5, 4), are favourable to the event "sum equal to 9".
Therefore P(sum equal to 9) = 4/36 = 1/9.

Example 8
What is the probability of getting 53 Sundays in a leap year?
Solution
A leap year has 366 days, i.e. 52 complete weeks and two days remaining. These two days may be any one of the following 7 pairs:
(Sunday, Monday), (Monday, Tuesday), (Tuesday, Wednesday), (Wednesday, Thursday), (Thursday, Friday), (Friday, Saturday) and (Saturday, Sunday). Two of these pairs contain a Sunday. Hence the probability of a leap year containing 53 Sundays is 2/7.

Example 9
Let A and B be two events associated with an experiment and suppose P(A) = 0.5 while P(A or B) = 0.8. Let P(B) = p. For what value of p are A and B independent?
Solution
If A and B are independent, we know that P(A ∪ B) = P(A) + P(B) - P(A)P(B), so
0.8 = 0.5 + p - 0.5p
0.5p = 0.3, i.e. p = 0.6.

Example 10
Suppose there are two bags: the first bag contains 3 white and 2 black balls, and the second bag contains 2 white and 4 black balls. One ball is transferred from bag I to bag II and then a ball is drawn from the latter. It happens to be white. What is the probability that the transferred ball was white?
Solution
Let A be the event of drawing a white ball from bag II, B1 the event of transferring a white ball from bag I to bag II, and B2 the event of transferring a black ball from bag I to bag II.
Then P(B1) = 3/5 and P(B2) = 2/5. After the transfer bag II contains 7 balls, so P(A/B1) = 3/7 and P(A/B2) = 2/7.
P[transferred ball was white given that the ball drawn is white] = P(B1/A). From Bayes theorem,
P(B1/A) = P(B1)P(A/B1) / [P(B1)P(A/B1) + P(B2)P(A/B2)]
= (3/5)(3/7) / [(3/5)(3/7) + (2/5)(2/7)] = (9/35)/(13/35) = 9/13.

1.14 PROBLEMS
1. For two events A and B, if P(A) = 0.3, P(B) = 0.2 and P(A ∩ B) = 0.1, find the probability that exactly one of the events will happen.

2. The chances of two students winning a competition are given. If they participate in the same competition, what is the probability that at least one of them will win?

3. A box contains 2 silver and 4 gold coins and a second box contains 4 silver and 3 gold coins. If a coin is selected at random from one of the boxes, what is the probability that it is a gold coin?

4. An integer is chosen at random from the first hundred integers. What is the probability that the integer chosen is divisible by 3 or 5?

5. A, B and C toss a coin in order. The first one to toss head wins. What are their respective probabilities of winning?

6. A box contains 5 red, 6 white and 3 blue balls. If three balls are drawn at random, what is the probability that all are blue?

7. Sunitha and Saleena stand in a circle with 10 other girls. If the arrangement of the 12 girls is at random, find the chance that there are exactly three girls between Sunitha and Saleena.

8. Two dice are thrown at random and the face values are noted. If A is the event that the first die shows 2, 5 or 6 and B is the event that the sum of the face values is 9, check whether A and B are independent.

9. Three firms A, B and C supply 25%, 35% and 40% of the chairs needed by a college. Past experience shows that 5%, 4% and 2% of the chairs produced by these firms respectively are defective. If a chair is found to be defective, what is the probability that it was supplied by firm A?

10. Three urns are given, each containing red and white balls. Urn I contains 6 red and 4 white balls, Urn II contains 2 red and 6 white balls and Urn III contains 1 red and 2 white balls. An urn is selected at random and a ball is drawn. If the ball is red, what is the chance that it is from the first urn?


Chapter 2
RANDOM VARIABLES
By the term random variable we mean a variable which takes real values in accordance with the outcome of a random experiment. In other words, it is a real-valued function defined over the sample space Ω; its domain is Ω and its range is the real line R. A random variable is also known as a stochastic variable and is denoted by X(ω) or simply X.

2.1 Definition
Let (Ω, S) be a sample space. A finite single-valued function X that maps Ω into the real line R is called a random variable if the inverse images under X of all Borel sets in R are events, i.e. X⁻¹(B) = {ω : X(ω) ∈ B} ∈ S for every Borel set B.

For example, consider a coin tossing experiment. Here the sample space is Ω = {H, T}. Define a variable X(ω) such that X(ω) = 0 if the outcome is Head and X(ω) = 1 if the outcome is Tail. X takes the real values 0 or 1 in accordance with the outcome Head or Tail; thus X is a random variable.

2.1.1 Types of Random Variables
A random variable (r.v) can be either discrete or continuous.
1. Discrete: A r.v is said to be discrete if its range includes a finite number of values or a countably infinite number of values. For example, the number of road accidents occurring in a day in a city.
2. Continuous: A r.v which can assume any value from a specified interval of the form [a, b] is known as a continuous r.v. For example, the lifetime of a mobile phone or the height of a randomly selected student from your college can be treated as a continuous r.v.
Note: If X is a r.v on (Ω, S) then aX + b will also be a r.v on (Ω, S), where a and b are constants. Note also that if X is a r.v, then X² and |X| will also be r.v.s.

2.2 Probability mass function


If X is a discrete r.v, then the probability mass function (pmf) or probability distribution of X is defined as
f(x) = P(X = x).
Properties:
1. As f(x) is a probability, f(x) ≥ 0.
2. The total probability Σx f(x) = 1.
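As an added sketch (the pmf below, for the number of heads in two tosses of a fair coin, is an assumed example), both properties can be checked directly:

    from fractions import Fraction

    pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}   # assumed pmf

    assert all(p >= 0 for p in pmf.values())   # property 1: f(x) >= 0
    assert sum(pmf.values()) == 1              # property 2: total probability is 1
    print(pmf[1])                              # P(X = 1) = 1/2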

2.3 Probability Density Function


If X is a continuous random variable, then the probability density function (pdf) or probability distribution of X is defined through
f(x) dx = P(x ≤ X ≤ x + dx).
Properties:
1. As f(x) is a probability density, f(x) ≥ 0.
2. The total probability ∫ f(x) dx = 1, the integral being taken over the whole range of X.
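For a continuous r.v the two properties can be checked by numerical integration; the density below is an assumed example and the sketch assumes SciPy is available.

    from scipy.integrate import quad

    f = lambda x: 2 * x               # assumed density: f(x) = 2x on (0, 1), 0 elsewhere

    total, _ = quad(f, 0, 1)
    print(round(total, 6))            # 1.0: the total probability is 1

    p, _ = quad(f, 0.25, 0.5)         # probabilities are areas under the density
    print(round(p, 6))                # P(0.25 <= X <= 0.5) = 0.5**2 - 0.25**2 = 0.1875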

2.4 Distribution Function of a r.v


Let X be a r.v, either discrete or continuous. Its cumulative distribution function (or simply distribution function) is defined as
F(x) = P(X ≤ x)
= Σ(t ≤ x) f(t), if X is discrete, and
= ∫ from -∞ to x of f(t) dt, if X is continuous.

2.4.1 Properties of Distribution function


If F(x) is the distribution function of a r.v X, then
1. F(-∞) = 0
2. F(∞) = 1
3. If a < b, then F(a) ≤ F(b) (i.e. F(x) is non-decreasing)
4. F(x) is continuous from the right, i.e. lim(h→0+) F(x + h) = F(x)
5. For a discrete r.v taking integer values, P(X = x) = F(x) - F(x - 1).
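A short sketch of these properties for a discrete r.v (added illustration; the pmf of the number of heads in 3 tosses is assumed here):

    from fractions import Fraction

    pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

    def F(x):
        """Distribution function F(x) = P(X <= x) for the assumed pmf."""
        return sum(p for k, p in pmf.items() if k <= x)

    print(F(-1), F(0), F(1.5), F(3))           # 0, 1/8, 1/2, 1 (non-decreasing, from 0 to 1)
    assert F(2) - F(0) == pmf[1] + pmf[2]      # P(0 < X <= 2) = F(2) - F(0)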

2.5 MULTIPLE CHOICE QUESTIONS


1. Values taken by a r.v will always be
(a) Positive integers
(b) Positive real numbers
(c) Real numbers
(d) None of these
2. If the pdf of a r.v is given as
f(x) = kx if 0 ≤ x ≤ 2
= 0 otherwise,
find the value of k.
(a)
(b) 1/2
(c) 1/4
(d) None of these
3. If X is a r.v, P(X ≤ x) is known as
(a) Probability mass function
(b) Probability density function
(c) Probability distribution function
(d) None of these
4. If f(x) is the pdf of X, then ∫ f(x) dx over the whole range of X is equal to
(a) 0
(b) -1
(c)
(d) 1
5. The distribution function of a r.v will be
(a) Increasing function
(b) Decreasing function
(c) Constant function
(d) Non-decreasing function

6. If F(x) is the distribution function of the r.v X, P(a ≤ X ≤ b) will be
(a) F(a) - F(b)
(b) F(b) - F(a)
(c) a - b
(d) f(b) - f(a)
7. The value of k in the following pmf
x : ...
f(x) : k 2k ...
(a)
(b)
(c)
(d) None of these
8. If the pdf of a continuous r.v is given as
f(x) = 2x if 0 ≤ x ≤ 1
= 0 otherwise,
what is the value of P(X = x) at any single point x?
(a) 0.5
(b) 1
(c) 0
(d) None of these
9. If X1 and X2 are two r.v.s, which one of the following statements is true?
(a) X1 + X2 will be a r.v.
(b) cX1 and cX2 will be r.v.s, where c is a constant
(c) X1 - X2 will be a r.v.
(d) All the above

10. Which one of the following statements is true?


(a) The maximum value of f(x) is 1
(b) The maximum value of F(x) is 1
(c) Both (a) and (b)
(d) None of these


2.6 EXAMPLES
EXAMPLE 1
Check whether the following function is a pdf or not:
f(x) defined for 0 ≤ x ≤ 2 and 0 otherwise.
Solution
If f(x) is a pdf, we must have f(x) ≥ 0 and ∫ from 0 to 2 of f(x) dx = 1.
Here ∫ from 0 to 2 of f(x) dx, evaluated between the limits 0 and 2, equals 1, and f(x) ≥ 0; so the given function is a pdf.
EXAMPLE 2
Check whether the following is a pdf:
f(x) defined for x = 1, 2, 3, ... and 0 elsewhere.
Solution
If f(x) is a pmf we must have f(x) ≥ 0 for all x and Σx f(x) = 1.
Here Σx f(x) ≠ 1, so the given f(x) is not a pdf.


EXAMPLE 3
If the pmf of a r.v X is given as f(x) for x = 1, 2, 3, 4, 5 and 0 elsewhere, find the distribution function of X.
Solution
The distribution function is defined as F(x) = P(X ≤ x). For the given pmf,
F(x) = 0 if x < 1
= f(1) if 1 ≤ x < 2
= f(1) + f(2) if 2 ≤ x < 3
= f(1) + f(2) + f(3) if 3 ≤ x < 4
= f(1) + f(2) + f(3) + f(4) if 4 ≤ x < 5
= 1 if x ≥ 5.

EXAMPLE 4
Obtain the distribution function of the total number of heads occurring in 3 tosses of an unbiased coin.
Solution
Let X denote the number of heads turning up in 3 tosses. Then its pdf f(x) is
f(x) = 1/8 if x = 0
= 3/8 if x = 1
= 3/8 if x = 2
= 1/8 if x = 3.
Hence the distribution function F(x) = P(X ≤ x) is
F(x) = 0 if x < 0
= 1/8 if 0 ≤ x < 1
= 1/2 if 1 ≤ x < 2
= 7/8 if 2 ≤ x < 3
= 1 if x ≥ 3.
EXAMPLE 5
The probability mass function of a r.v X is given in terms of a constant k, the probabilities being k², 2k², k² together with further terms in k. Find the value of k.
Solution
Since f(x) is a pmf we know that Σx f(x) = 1, which here gives
4k² + 3k = 1, i.e. 4k² + 3k - 1 = 0
k = (-3 ± √(9 + 16))/8 = (-3 ± 5)/8 = 1/4 or -1.
Since a probability must be greater than or equal to zero, we have k = 1/4.

EXAMPLE 6
A continuous r.v X has the pdf f(x) = 3x², 0 ≤ x ≤ 1. Find a and b such that
1. P(X ≤ a) = P(X > a)
2. P(X > b) = 0.05
Solution
1. We have P(X ≤ a) = ∫ from 0 to a of 3x² dx = a³ and P(X > a) = 1 - a³, so
a³ = 1 - a³
2a³ = 1
a³ = 1/2, i.e. a = (1/2)^(1/3) ≈ 0.794.
2. P(X > b) = ∫ from b to 1 of 3x² dx = 1 - b³ = 0.05, so b³ = 0.95 and b ≈ 0.98.

EXAMPLE 7
Let the distribution function of X be
F(x) = 0 if x < 0
= x if 0 ≤ x ≤ 1
= 1 if x > 1.
Find P(2X + 3 ≤ 3.6).
Solution
P(2X + 3 ≤ 3.6) = P(X ≤ 0.3) = F(0.3) = 0.3.

EXAMPLE 8
If a continuous r.v X has the pdf
f(x) = 2x if 0 < x ≤ 1
= 0 elsewhere,
find the distribution function F(x).
Solution
For 0 ≤ x < 1, F(x) = ∫ from 0 to x of f(t) dt = ∫ from 0 to x of 2t dt = [t²] from 0 to x = x².
Therefore the distribution function is
F(x) = 0 if x < 0
= x² if 0 ≤ x < 1
= 1 if x ≥ 1.
EXAMPLE 9
Suppose the life in weeks of a certain kind of computer has the pdf f(x) for x ≥ 100 and 0 for x < 100. What is the probability that none of three such computers will have to be replaced during the first 150 weeks of operation?
Solution
The probability that a given computer will have to be replaced during the first 150 weeks is
P(X ≤ 150) = P(0 < X < 100) + P(100 ≤ X ≤ 150) = 0 + ∫ from 100 to 150 of f(x) dx.
Hence, since the three computers fail independently, the probability that none of them will have to be replaced during the first 150 weeks is [1 - P(X ≤ 150)]³.

EXAMPLE 10
The amount of sugar (in kg) a certain shop is able to sell has the probability density
f(x) = x/25 for 0 ≤ x < 5
= (10 - x)/25 for 5 ≤ x ≤ 10
= 0 otherwise.
Find the probability that the sales are between 2.5 kg and 7.5 kg.
Solution
If X is the random variable "amount of sugar sold", we have to find P(2.5 ≤ X ≤ 7.5):
P(2.5 ≤ X ≤ 7.5) = ∫ from 2.5 to 5 of x/25 dx + ∫ from 5 to 7.5 of (10 - x)/25 dx
= [x²/50] from 2.5 to 5 + [(10x - x²/2)/25] from 5 to 7.5
= 0.375 + 0.375 = 0.75.

2.7 PROBLEMS
1. A r.v X has the pdf f(x) = k if x = 0, f(x) = 2k if x = 1, f(x) = 3k if x = 2 and f(x) = 0 otherwise. Find the value of k.
2. If f(x) = ... for x = 1, 2, 3, 4, 5 and 0 otherwise, find P(X = 1 or 2).
3. Find the value of k so that f(x) = kx(1 - x) for 0 ≤ x ≤ 1 is the pdf of a continuous random variable.
4. A r.v X has pdf f(x) = A for 0 ≤ x ≤ 10; find the value of A. Also find P(2 ≤ X ≤ 5).
5. For a r.v X with pdf f(x), 0 ≤ x ≤ 2, find the value of a such that P(X < a | X > ...) = ...
6. If f(x) = e^(-x), x ≥ 0, is the density function of a r.v X, find its distribution function.
7. If a continuous r.v X has a pdf defined piecewise on -2 ≤ x < 0, 0 ≤ x < 1 and x ≥ 1, find the distribution function F(x).
8. Find the mean, variance and the coefficients β1 and β2 of the distribution f(x) = kx² e^(-x), 0 ≤ x ≤ 1.
9. If a continuous r.v X has the pdf f(x) for -∞ < x < ∞ and 0 otherwise, find the mean.
10. If f(x) = 6x(1 - x) for 0 < x < 1, find the mean deviation about the mean of X.

Chapter 3
MATHEMATICAL EXPECTATION
3.1 INTRODUCTION
Mathematical expectation is a statistical concept that originated from games of chance but is widely used in many different contexts.

3.2 DEFINITION
If X is a r.v with pdf f(x), then the mathematical expectation or expected value of X is defined as
E(X) = Σx x f(x) (provided the sum is absolutely convergent), if X is discrete; and
E(X) = ∫ x f(x) dx (provided the integral is absolutely convergent), if X is continuous.

For example, consider a game of chance in which an unbiased coin is tossed: if head falls, player A has to pay Rs. 1000 to player B; if tail falls, player B has to pay Rs. 2000 to player A. We can compute how much A can expect in the long run if the game is continued.
Let X be the r.v defined as the amount received by player A. Then X takes the values -1000 and +2000 with probability 1/2 each. Applying the definition of expectation,
E(X) = (-1000)(1/2) + (2000)(1/2) = 500.
This means A can expect a gain of Rs. 500 per game. Note that no single game actually pays Rs. 500; rather, over a long run A gains Rs. 500 per game on the average, so if the game is repeated 10 times A can expect Rs. 5000.
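The same computation can be written as a short Python sketch (an added illustration of the definition, using the game described above):

    from fractions import Fraction

    # Amount received by player A: -1000 (head) or +2000 (tail), each with probability 1/2
    pmf = {-1000: Fraction(1, 2), 2000: Fraction(1, 2)}

    expected = sum(x * p for x, p in pmf.items())   # E(X) = sum of x f(x)
    print(expected)                                 # 500, the long-run average gain per game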

3.2.1 EXPECTATION OF FUNCTION OF R.V


Let g(X) be a function of a r.v X with pdf f(x). Then the expected value of g(X) is defined as
E[g(X)] = Σx g(x) f(x), if X is discrete, and
E[g(X)] = ∫ g(x) f(x) dx, if X is continuous,
provided the sum or integral is absolutely convergent.

3.2.2 PROPERTIES OF EXPECTATION

1. If X is a r.v then E(k) = k, where k is a constant.
Proof: Let X be a continuous r.v with pdf f(x). Then E(k) = ∫ k f(x) dx = k ∫ f(x) dx = k·1 = k. Proved.

2. E[k g(X)] = k E[g(X)], where k is a constant and g(X) is a function of X.
Proof: Let X be a r.v with pdf f(x) and let g(X) be any function of X. Then
E[k g(X)] = ∫ k g(x) f(x) dx = k ∫ g(x) f(x) dx = k E[g(X)]. Proved.

3. E(aX + b) = a E(X) + b, where a and b are constants.
Proof: E(aX + b) = ∫ (ax + b) f(x) dx = a ∫ x f(x) dx + b ∫ f(x) dx = a E(X) + b·1 = a E(X) + b. Proved.

4. E[g(X) + h(X)] = E[g(X)] + E[h(X)], where g(X) and h(X) are functions of X.
Proof: E[g(X) + h(X)] = ∫ [g(x) + h(x)] f(x) dx = ∫ g(x) f(x) dx + ∫ h(x) f(x) dx = E[g(X)] + E[h(X)]. Proved.

5. If X and Y are two r.v.s then [E(XY)]² ≤ E(X²)E(Y²). [Cauchy-Schwarz inequality]
Proof: Consider X + tY for real t. Then (X + tY)² ≥ 0, so E(X + tY)² ≥ 0, i.e.
E(X² + 2tXY + t²Y²) ≥ 0
t²E(Y²) + 2tE(XY) + E(X²) ≥ 0.
The left-hand side is a quadratic expression in t, and it is non-negative for all t only if its discriminant is ≤ 0, i.e.
[2E(XY)]² - 4E(X²)E(Y²) ≤ 0
4[E(XY)]² ≤ 4E(X²)E(Y²)
[E(XY)]² ≤ E(X²)E(Y²). Proved.

3.3 MOMENTS
Moments play a vital role in the characterisation of probability distributions. The nature of a distribution can be identified by looking at its various moments. Moments can be defined in different forms:

3.3.1 Moments about A


Let X be a r.v. The rth raw moment about an arbitrary value A is defined as
μr'(A) = E(X - A)^r, r = 0, 1, 2, ...
= Σx (x - A)^r f(x), if X is discrete, and
= ∫ (x - A)^r f(x) dx, if X is continuous.

3.3.2 Moments about origin


In the above definition, if we take A = 0, the origin, we get the rth raw moment about the origin:
μr' = E(X^r), r = 0, 1, 2, ...
= Σx x^r f(x), if X is discrete, and
= ∫ x^r f(x) dx, if X is continuous.
Note: Put r = 1: μ1' = E(X), which is known as the mean of X and is simply denoted μ.
Put r = 2: μ2' = E(X²).
Put r = 3: μ3' = E(X³), and so on.

3.3.3 Central Moments


In the definition of moments, if we take A = E(X) we get the rth central moment, denoted μr:
μr = E[X - E(X)]^r = E(X - μ)^r, r = 0, 1, 2, ...
= Σx (x - μ)^r f(x), if X is discrete, and
= ∫ (x - μ)^r f(x) dx, if X is continuous.
Note: Put r = 1: μ1 = E(X - μ) = 0.
Put r = 2: μ2 = E(X - μ)², which is known as the variance of X.
Put r = 3: μ3 = E(X - μ)³, and so on.

Relation between raw moments and central moments

The rth central moment μr can be expressed in terms of the raw moments as follows:
μr = μr' - rC1 μ(r-1)' μ1' + rC2 μ(r-2)' (μ1')² - ... + (-1)^r (μ1')^r.

3.3.4 MEASURES OF CENTRAL TENDENCY


1. Arithmetic Mean: the arithmetic mean can be defined in terms of expectation as
Mean = E(X) = Σx x f(x), if X is discrete, or
Mean = E(X) = ∫ x f(x) dx, if X is continuous. It is denoted μ.

2. Median: the median is the middle-most observation. For a random variable X we can define the median as the value Md such that
∫ from -∞ to Md of f(x) dx = 1/2, or equivalently ∫ from Md to ∞ of f(x) dx = 1/2.

3. Mode: the mode is the value of the r.v for which the probability is maximum. In the case of a discrete r.v, the mode is the value x which satisfies f(x) > f(x - 1) and f(x) > f(x + 1). If X is a continuous r.v, the mode is the value of x such that f'(x) = 0 and f''(x) < 0.

3.3.5 MEASURES OF VARIATION


1. Mean Deviation: the mean deviation about the mean of a r.v X can be defined in terms of expectation as
MD = E|X - E(X)| = Σx |x - μ| f(x), if X is discrete, or = ∫ |x - μ| f(x) dx, if X is continuous.

2. Quartile Deviation: if Q1 and Q3 are the first and third quartiles of a distribution, the quartile deviation is QD = (Q3 - Q1)/2.

3. Variance or Standard Deviation: the variance of a r.v X is defined as V(X) = E(X²) - [E(X)]². The square root of the variance is known as the standard deviation, denoted σ; that is, σ = √V(X).

3.3.6 MEASURES OF SKEWNESS

If the left and right tails of a probability curve are not equally distributed, the curve is said to be asymmetric or skewed. This lack of symmetry is known as skewness. If the left tail of the curve is longer than the right tail the curve is said to be negatively skewed, and if the right tail is more elongated than the left the curve is said to be positively skewed. There are different methods for measuring skewness. A very popular measure of skewness in terms of moments is
β1 = μ3²/μ2³, or γ1 = √β1 (taken with the sign of μ3).
A distribution is said to be positively skewed, negatively skewed or symmetric according as γ1 > 0, γ1 < 0 or γ1 = 0.

3.3.7 MEASURES OF KURTOSIS


Kurtosis is a term indicating the peakedness of a probability curve. If the curve is highly peaked we call it leptokurtic. If it is moderately peaked it is called mesokurtic or normal. If the curve is less peaked than the normal curve it is called platykurtic. There are different methods for measuring kurtosis. A commonly used measure of kurtosis in terms of moments is
β2 = μ4/μ2², or γ2 = β2 - 3.
A distribution is said to be leptokurtic, mesokurtic or platykurtic according as β2 > 3, β2 = 3 or β2 < 3.
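The two coefficients can be computed directly from the central moments; the pmf below (number of heads in 3 tosses) is an assumed example added for illustration.

    from fractions import Fraction

    pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

    mean = sum(x * p for x, p in pmf.items())
    mu = lambda r: sum(((x - mean) ** r) * p for x, p in pmf.items())

    beta1 = mu(3) ** 2 / mu(2) ** 3     # measure of skewness
    beta2 = mu(4) / mu(2) ** 2          # measure of kurtosis
    print(beta1, beta2)                 # 0 (symmetric) and 7/3 (< 3, platykurtic)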

3.4 MOMENT GENERATING FUNCTION


Moments are very useful in representing the characteristic behaviour of a random variable: to know the nature of a distribution we have to find its moments. The moment generating function (mgf) is a function which generates the moments.

3.4.1 Definition
The moment generating function (mgf) of a random variable X is defined as
MX(t) = E(e^(tX)), -∞ < t < ∞, provided the expectation exists,
= Σx e^(tx) f(x), if X is discrete,
= ∫ e^(tx) f(x) dx, if X is continuous.
The mgf generates the raw moments about the origin. For any fixed t we can write
MX(t) = E(e^(tX)) = E[1 + tX + (tX)²/2! + (tX)³/3! + ...]
= 1 + t E(X) + (t²/2!) E(X²) + (t³/3!) E(X³) + ...
= 1 + t μ1' + (t²/2!) μ2' + (t³/3!) μ3' + ...
Here μ1', μ2', μ3', ... are the raw moments of X, so μr' is the coefficient of t^r/r! in MX(t). Differentiating with respect to t,
MX'(t) = μ1' + t μ2' + (t²/2!) μ3' + ...
and MX'(t) at t = 0 gives μ1'.
Again MX''(t) = μ2' + t μ3' + ..., and MX''(t) at t = 0 gives μ2'.
Proceeding like this we get μr' = (d^r/dt^r) MX(t) at t = 0.
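As an added sketch (assuming the SymPy library is available; the two-point distribution used is only an example), the raw moments can be generated by differentiating the mgf:

    import sympy as sp

    t = sp.symbols('t')
    # mgf of an assumed r.v taking the values 0 and 1 with probability 1/2 each
    M = sp.Rational(1, 2) + sp.Rational(1, 2) * sp.exp(t)

    mu1 = sp.diff(M, t, 1).subs(t, 0)   # first raw moment  E(X)   = 1/2
    mu2 = sp.diff(M, t, 2).subs(t, 0)   # second raw moment E(X^2) = 1/2
    print(mu1, mu2, mu2 - mu1**2)       # mean 1/2, E(X^2) 1/2, variance 1/4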

3.4.2 Properties of mgf


1. McX(t) = MX(ct), where c is a constant
bt

2. MaX+b(t) = e MX(at) where a,b are constants.


3. MX+Y (t) = MX(t)MY (t) if X & Y are independent
4. MX(t) = 1 at t = 0.
5. The mgf uniquely determines the probability distribution if it exists.
6.

(t) at t=0 , r= 1,2,3,.

3.4.3 Characteristic Function


The mgf may not exist in all situations. The characteristic function, which characterises the distribution, can be used in such contexts; it uniquely determines the distribution function.

Definition

For a random variable X, the characteristic function φX(t) is defined as φX(t) = E(e^(itX)), where t is a real number and i = √(-1). If f(x) is the pdf of a r.v X then φX(t) = Σx e^(itx) f(x) if X is discrete, and φX(t) = ∫ e^(itx) f(x) dx if X is continuous.

Properties of the characteristic function

1. |φX(t)| ≤ 1 for all values of t.
2. φX(0) = 1.
3. |φX(-t)| = |φX(t)|.

3.5 MULTIPLE CHOICE QUESTIONS


1. If X and Y are independent, then which one of the following is not true?
(a) E(X+Y)=E(X)+E(Y)
(b) E(XY) = E(X)E(Y)
(c) E(XY) = E(X) + E(Y)
(d) E(X - Y) = E(X) - E(Y)

2. Variance of a r.v X is given by
(a) E(X - μ)²
(b) E(X - E(X))²
(c) E(X²) - (E(X))²
(d) All the above


3. If r.vs X and Y have expected values 32 and 28 respectively, then E(X - Y) will be
(a) 60
(b) 4
(c) 16
(d) None of these
4. If a and b are constants, MaX+b(t) =
(a) bt MX(at)
(b) e^(bt) MX(at)
(c) (a + b) MX(t)
(d) None of these
5. If φX(t) is the characteristic function of X, what is the value of φX(0)?
(a) 0
(b) 1
(c)
(d) None of these
6. If μ and σ are the mean and s.d. of a r.v X, M(X-μ)/σ(t) =
(a) μ/σ
(b) e^(-μt/σ) MX(t/σ)
(c) e^(-t) MX(t)
(d) None of these
7. If μr' is the rth raw moment, μ3' is obtained as
(a) (d³/dt³) MX(t)
(b) (d³/dt³) MX(t) at t = 0
(c) MX(3)
(d)
8. If k is a constant E(k) will be:


(a) 0
(b) k
(c)
(d) Cannot be determined
9. If a rv X assume values 0, 1 and 2 with respective probabilities 0.2, 0.3 and 0.5
respectively, what is E(X)
(a) 1
(b) 0
(c) 1.3
(d) 3
10. If X and Y are two independent variables then:
(a) E(XY) = E(X).E(Y)
(b) Cov(X, Y) = 0
(c) ρXY = 0
(d) All the above

3.6 EXAMPLES

1. If a die is thrown at random, what is the expected value of the face value?

Solution
Let X be the face value of the die. Then X takes the values 1, 2, 3, 4, 5 and 6, each with probability 1/6. By definition,
E(X) = Σx x f(x) = (1/6)(1 + 2 + 3 + 4 + 5 + 6) = 21/6 = 3.5.

2. If two dice are thrown, what is the expected value of the sum of the face values?
Solution
Let X be the sum of the face values of the two dice. Then X takes the values 2, 3, ..., 12 with the following probabilities:
X :    2    3    4    5    6    7    8    9    10   11   12
f(x) : 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36
By definition, E(X) = 2·(1/36) + 3·(2/36) + ... + 12·(1/36) = 252/36 = 7.

3. If X denotes the number of failures preceding the first success, with probability of success p in each trial, find E(X).
Solution
X = x means the first x trials are failures and the (x + 1)th trial is a success. If p is the probability of success and 1 - p the probability of failure,
f(x) = (1 - p)^x p, x = 0, 1, 2, ...
Therefore E(X) = Σx x (1 - p)^x p = p(1 - p) Σx x (1 - p)^(x-1) = p(1 - p)/p² = (1 - p)/p.

4. If f(x) = 30x⁴(1 - x), 0 ≤ x < 1, is the p.d.f of a r.v X, find E(X).
Solution
By definition, E(X) = ∫ from 0 to 1 of x · 30x⁴(1 - x) dx = 30 ∫ from 0 to 1 of (x⁵ - x⁶) dx = 30 [1/6 - 1/7] = 30/42 = 5/7.

5. If f(x), 1 ≤ x < ∞, is the p.d.f of a r.v X, find E(X).
Solution
By definition, E(X) = ∫ from 1 to ∞ of x f(x) dx; here the integral reduces to lim(x→∞) log x, which is infinite, i.e. E(X) does not exist.

6. If X is a r.v with pdf f(x) for x = 1, 2, 3 and f(x) = 0 otherwise, find E(X + 2)².
Solution
We have E[g(X)] = Σx g(x) f(x). Therefore
E(X + 2)² = 9 f(1) + 16 f(2) + 25 f(3).

7. If X is a r.v having the pdf f(x) = q^(x-1) p, x = 1, 2, 3, ...; p + q = 1, find the mgf of X.
Solution
By definition, MX(t) = E(e^(tX)) = Σx e^(tx) q^(x-1) p = (p/q) Σx (qe^t)^x
= (p/q) [qe^t + (qe^t)² + (qe^t)³ + ...]
= p e^t [1 + qe^t + (qe^t)² + ...]
= p e^t [1 - qe^t]^(-1) = pe^t / (1 - qe^t), for qe^t < 1.
8. If f(x) = (1/2) e^(-|x|), -∞ < x < ∞, is the p.d.f of a r.v X, find the m.g.f of X.
Solution
By definition, MX(t) = E(e^(tX)) = ∫ e^(tx) (1/2) e^(-|x|) dx
= (1/2) [ ∫ from -∞ to 0 of e^((1+t)x) dx + ∫ from 0 to ∞ of e^(-(1-t)x) dx ]
= (1/2) [ 1/(1 + t) + 1/(1 - t) ] = 1/(1 - t²), for |t| < 1.

9. Find the variance of the r.v whose m.g.f is MX(t) = (1/12)(2 + e^t + 6e^(3t) + 3e^(6t)).
Solution
We have variance = E(X²) - [E(X)]², i.e. V(X) = MX''(0) - [MX'(0)]².
Now MX'(t) = (1/12)(e^t + 18e^(3t) + 18e^(6t)), so E(X) = MX'(0) = 37/12,
and MX''(t) = (1/12)(e^t + 54e^(3t) + 108e^(6t)), so E(X²) = MX''(0) = 163/12.
Therefore V(X) = 163/12 - (37/12)² = 587/144.


10. The random variable X has pdf f(x) = (3/4) x (2 - x) when 0 ≤ x ≤ 2 and 0 otherwise. Find
(a) the coefficient of skewness β1
(b) the coefficient of kurtosis β2.
Solution
For a r.v X the rth raw moment is μr' = E(X^r) = ∫ from 0 to 2 of x^r f(x) dx. Hence we get
μ1' = 1, μ2' = 6/5, μ3' = 8/5, μ4' = 16/7.
Now from the relations between raw and central moments,
μ1 = 0
μ2 = μ2' - (μ1')² = 6/5 - 1 = 1/5
μ3 = μ3' - 3μ2'μ1' + 2(μ1')³ = 8/5 - 18/5 + 2 = 0
μ4 = μ4' - 4μ3'μ1' + 6μ2'(μ1')² - 3(μ1')⁴ = 16/7 - 32/5 + 36/5 - 3 = 3/35.
(a) The coefficient of skewness β1 = μ3²/μ2³ = 0.
(b) The coefficient of kurtosis β2 = μ4/μ2² = (3/35)/(1/25) = 15/7.

3.7 PROBLEMS
1. Find E(X) for the following pdf:
X :    1   2   3   4   5
f(x) : 0.1 0.1 0.3 0.3 0.2
2. If X has the pdf
X :    -3   -2   -1   0    1   2    3
f(x) : 0.05 0.10 0.30 0.30 ... 0.15 0.10
find E(4X + 5).
3. Find the mean and variance of the distribution with pdf f(x) = ...
4. If the mgf of X is (q + pe^t)⁵, find P(X = 2).
5. Find the mgf of the r.v X having pdf f(x), 0 < x < ∞, and 0 otherwise. Also find its mean and variance.
6. If f(x) = k e^(-x), 0 < x < ∞, is the pdf of a r.v X, find the first four central moments.
7. If f(x) = (1/2) e^(-|x|), -∞ < x < ∞, is the pdf of a r.v X, show that its mean is 0 and its variance is 2.
8. If X is a r.v for which E(X) = 10 and V(X) = 25, find positive values of a and b such that Y = aX - b has expectation 0 and variance 1.
9. If f(x) = e^(-m) m^x / x!, x = 0, 1, 2, ..., find the characteristic function of X.
10. If f(x) = a e^(-ax), x > 0, a > 0, find the characteristic function of X.

Chapter 4
CHANGE OF VARIABLES
There are situations where we know the distribution of a random variable and have to find the distribution of some function of it. In this chapter we discuss the method of finding the probability distribution of a function of a variable. Note that if X is a random variable defined over a sample space Ω and g is a function, then Y = g(X) is also a random variable defined on Ω.

4.1 THEOREM
Let X be a continuous random variable with p.d.f. f(x). Also let Y = g(X) be a function of X which is differentiable and one to one. Then the pdf of the random variable Y is given by
hY(y) = fX(x) |dx/dy|, where x is expressed in terms of y.

The following is the working rule for change of variable in the case of a continuous random variable when the transformation is one to one.

Note:

(a) Find dx/dy.
(b) Find the inverse function x = g⁻¹(y).
(c) Find fX(x) |dx/dy| in terms of y.
(d) Find the range of variation of the transformed variable.
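A symbolic sketch of this working rule (added illustration, assuming SymPy is available; the density f(x) = e^(-x) and the transformation Y = X² are assumed choices, not taken from the text):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)

    f = sp.exp(-x)          # assumed density of X: f(x) = e^(-x), x > 0
    g_inv = sp.sqrt(y)      # for Y = X**2 the inverse function is x = sqrt(y)

    # h(y) = f(g^(-1)(y)) * |d g^(-1)(y) / dy|
    h = sp.simplify(f.subs(x, g_inv) * sp.Abs(sp.diff(g_inv, y)))
    print(h)                                   # exp(-sqrt(y))/(2*sqrt(y)), y > 0
    print(sp.integrate(h, (y, 0, sp.oo)))      # 1, so h(y) is a proper density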

4.2 MULTIPLE CHOICE QUESTIONS


(a) If the following is the pdf of a r.v X
X :    -1  0   1
p(x) : 1/4 1/2 1/4
what is the pdf of Y = 3X + 1?
i.
Y :    -1  0   1
p(y) : 1/4 1/2 1/4
ii.
Y :    -2  1   4
p(y) : 1/4 1/2 1/4
iii.
iv.
(b) If X has the pdf f(x) = e^(-x), x > 0,
i. e^(-x)
ii. 1
iii.
iv. None of the above
(c) If X is a continuous r.v with p.d.f. f(x) and Y is a function of X, then the pdf of Y is given by
i. g(y) = f(x) |dx/dy|, where x is expressed in terms of y
ii.
iii.
iv.

4.3 EXAMPLES
EXAMPLE 1
Suppose that the random variable X takes the three values -1, 0 and 1 with probabilities 11/32, 16/32 and 5/32 respectively. Find the pdf of Y = 2X + 1.
Solution
If X takes the values -1, 0 and 1 and Y = 2X + 1, the random variable Y takes the values -1, 1 and 3 respectively. Now
P(Y = -1) = P(2X + 1 = -1) = P(X = -1) = 11/32.
Similarly P(Y = 1) = P(2X + 1 = 1) = P(X = 0) = 16/32, and P(Y = 3) = P(2X + 1 = 3) = P(X = 1) = 5/32.
Thus the probability distribution of Y is
Y :    -1    1     3     Total
p(y) : 11/32 16/32 5/32  1

EXAMPLE 2
Let X be a discrete r.v with pdf f(x), x = 0, 1, 2, ..., n. Find the distribution of Y = 3X.
Solution
Let g(y) be the pdf of Y. If X takes the values 0, 1, 2, ..., n, then Y takes the values 0, 3, 6, ..., 3n. Now
g(y) = P(Y = y) = P(3X = y) = P(X = y/3) = f(y/3), for y = 0, 3, 6, ..., 3n.

EXAMPLE 3
Let X have the pdf f(x), x = 0, 1, 2, .... Find the probability distribution of Y = X².
Solution
Let g(y) be the pdf of Y. If X takes the values 0, 1, 2, ..., then Y takes the values 0, 1, 4, 9, .... Now
g(y) = P(Y = y) = P(X² = y) = P(X = √y) = f(√y), for y = 0, 1, 4, 9, ...
EXAMPLE 4
If X is a continuous r.v with pdf
f(x) = 2x for 0 < x < 1
= 0 otherwise,
find the density of Y = 3X + 1.
Solution
Let the distribution function of Y be G(y) = P(Y ≤ y) = P(3X + 1 ≤ y) = P(X ≤ (y - 1)/3), i.e.
G(y) = ∫ from 0 to (y-1)/3 of 2x dx = ((y - 1)/3)², if 1 < y < 4.
The density function of Y is therefore
g(y) = G'(y) = (2/9)(y - 1), 1 ≤ y ≤ 4
= 0, otherwise.

EXAMPLE 5
If f(x) = e^(-x), x ≥ 0, find the pdf of Y = g(X) for the transformation given.
Solution
Let h(y) be the pdf of Y. Then hY(y) = fX(x) |dx/dy|, where x is expressed in terms of y through the inverse transformation. Substituting the given f(x) and the inverse function, and noting the range of y corresponding to x ≥ 0, gives the required density h(y).

4.4 PROBLEMS
1. If X has pdf f(x) with given probabilities when x = -1, x = 0 and x = 1, and f(x) = 0 otherwise, find the density function of Y = X².

2. If X has pdf f(x) with given probabilities when x = -2 and x = -1, find the density function of Y = 2X² - 1.

3. If X follows f(x) with given probabilities when x = 0, 1, 2 and when x = 3, and f(x) = 0 otherwise, find the density function of Y = 2X + 3.

4. If X follows f(x) = 2x, 0 < x < 1, and 0 otherwise, find the density function of Y = 8X³.

5. If X follows f(x) = 2x, 0 < x < 1, and 0 otherwise, find the density function of Y = e^(-X).
