
Chapter 4.

Probability

Interpretation of Probabilities:

1. Relative frequency: The chance of an event can be estimated from repeated observation of
outcomes. e.g.: The probability of rain tomorrow can be estimated from the number of rainy
days among historical days with similar climate conditions.

2. Personal probability: The chance can only be estimated based on one's personal belief.
e.g.: The chance of giving up after failing three times in a puzzle game.

Equally Likely Outcomes: In some experiments, all outcomes have the same relative frequency in the
long run. For example, when tossing a fair coin, heads and tails are equally likely to appear. In this
case, we can obtain the probability of each outcome directly: if there are N equally likely outcomes
in an experiment, then each outcome has probability 1/N.
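The relative-frequency interpretation can be illustrated with a short simulation. The sketch below (the function name estimate_heads_probability is illustrative, not from the notes) tosses a fair coin many times; the observed fraction of heads settles near the equally-likely value 1/2.

```python
import random

# A minimal sketch (assumed example, not from the notes): estimate P(heads)
# for a fair coin by the relative-frequency interpretation, i.e. repeat the
# experiment many times and count how often the event occurs.
def estimate_heads_probability(n_tosses: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    heads = sum(rng.choice("HT") == "H" for _ in range(n_tosses))
    return heads / n_tosses

if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        print(n, estimate_heads_probability(n))
    # The estimates settle near 1/2, matching the equally-likely value 1/N with N = 2.
```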

Simple Probability Rules

Events: In an experiment, we do not merely care about the probability of each single outcome. For
example, when rolling a die, we are interested not only in whether the outcome is 6, but also in the
probabilities of other events, such as whether the outcome is an even number. To capture this idea,
we define probabilities for “events”, which can be thought of as combinations of outcomes.

Notation: P(an event)

• Complement rule: P(not having A) = 1 - P(A)


Sometimes “not having A” is equivalent to “having B”. For rolling a die,

P(an even number) = 1 - P(an odd number)

• Addition rule: P(A or B) = P(A) + P(B) when A and B are mutually exclusive


“Mutually exclusive” means the two events share no common outcome. For rolling a die,
“2 or 3 or 5” and “an even number” are not mutually exclusive events, as 2 is
an even number.
• Multiplication rule: P(A and B) = P(A)*P(B) when A and B are independent
Independence:
Definition: P(A and B) = P(A)*P(B) (*)
Independence describes events that are not related. e.g.: Event A: “Mark
will finish his homework”, and Event B: “the temperature on Jupiter will
experience a sudden decrease.”
In many cases, it is dangerous to assume independence if you have no good
reason to do so. (The sketch after this list checks these rules by enumerating
the outcomes of a die.)
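
The rules above can be checked directly when all outcomes are equally likely. The sketch below (an assumed worked example, not from the notes) enumerates the six outcomes of a fair die and verifies the complement, addition, and multiplication rules, including a case where the addition rule fails because the events are not mutually exclusive.

```python
from fractions import Fraction

# A minimal sketch (assumed example, not from the notes): check the simple
# probability rules by enumerating the six equally likely outcomes of a fair die.
outcomes = range(1, 7)
p = Fraction(1, 6)  # each outcome has probability 1/N with N = 6

def prob(event):
    """Probability of an event, given as the set of outcomes it contains."""
    return sum(p for o in outcomes if o in event)

even = {2, 4, 6}
odd = {1, 3, 5}
two_three_five = {2, 3, 5}

# Complement rule: P(even) = 1 - P(odd), since "not even" is "odd".
assert prob(even) == 1 - prob(odd)

# Addition rule holds for mutually exclusive events such as {1} and {2}...
assert prob({1} | {2}) == prob({1}) + prob({2})
# ...but fails for "2 or 3 or 5" and "even", which share the outcome 2.
assert prob(two_three_five | even) != prob(two_three_five) + prob(even)

# Multiplication rule (definition of independence): P(A and B) = P(A)*P(B).
# "Even" and "at most 4" are independent here; "even" and "2 or 3 or 5" are not.
at_most_four = {1, 2, 3, 4}
assert prob(even & at_most_four) == prob(even) * prob(at_most_four)
assert prob(even & two_three_five) != prob(even) * prob(two_three_five)
```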
Expectations/Average Values

Expectation = V(A)P(A) + V(B)P(B) + …


V(X): the value assigned to event X.
It represents the average value of the outcomes of the experiment in the long run. For rolling a fair
die, the mean value is 3.5 = 1*(1/6) + 2*(1/6) + 3*(1/6) + 4*(1/6) + 5*(1/6) + 6*(1/6).
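
As a small worked sketch (assumed, not from the notes), the expectation formula above can be evaluated directly for the fair die:

```python
from fractions import Fraction

# A minimal sketch (assumed example, not from the notes): the expectation is
# the probability-weighted sum of the values, E = V(A)P(A) + V(B)P(B) + ...
def expectation(values_and_probs):
    return sum(v * p for v, p in values_and_probs)

# Fair die: each face value 1..6 occurs with probability 1/6.
fair_die = [(v, Fraction(1, 6)) for v in range(1, 7)]
print(expectation(fair_die))  # 7/2, i.e. the long-run average 3.5
```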

Conditional Probabilities

The first thing is to distinguish P(A&B) and P(A|B). By definition


P(A|B) = P(A&B) / P(B)   (**)
For rolling a fair die, let A be the event of getting a 2, and let B be the event of getting an even
number. Then P(A&B) = P(A) = 1/6 and P(B) = 1/2, so P(A|B) = (1/6)/(1/2) = 1/3.
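
The same die example can be computed from the definition (**). The sketch below (assumed, not from the notes; the helper prob is illustrative) obtains P(A|B) = 1/3 by dividing P(A&B) by P(B).

```python
from fractions import Fraction

# A minimal sketch (assumed example, not from the notes): compute P(A|B)
# from the definition (**) for the die example above.
outcomes = range(1, 7)
p = Fraction(1, 6)  # each face of the fair die

def prob(event):
    """Probability of an event, given as the set of outcomes it contains."""
    return sum(p for o in outcomes if o in event)

A = {2}          # getting a 2
B = {2, 4, 6}    # getting an even number

p_a_and_b = prob(A & B)              # 1/6
p_a_given_b = p_a_and_b / prob(B)    # (1/6) / (1/2) = 1/3
print(p_a_and_b, p_a_given_b)
```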

Independence revisited: If A and B are independent events, then P(A&B) = P(A)P(B). By (**), we have
P(A|B) = P(A), which is a more intuitive description of independence: the occurrence of B does not
affect the chance of A.

Application of conditional probability: Testing rare events.


If we are looking at a medical test for a certain disease, the following terminology is often used.
Suppose the test can show either a positive or a negative result.

Base Rate: P(disease)

Sensitivity of the test: P(positive|disease)

Specificity of the test: P(negative|no disease)

Constructing a contingency table is often helpful, as in the sketch below.
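
As a worked illustration, the sketch below uses assumed numbers (base rate 1%, sensitivity 99%, specificity 95%; not from the notes) to fill in a contingency table for 100,000 people and then read off P(disease|positive), which turns out to be low because the disease is rare.

```python
# A minimal sketch with assumed numbers (base rate, sensitivity, specificity
# are hypothetical, not from the notes): build a contingency table for a
# population and compute P(disease | positive).
population = 100_000
base_rate = 0.01        # P(disease)
sensitivity = 0.99      # P(positive | disease)
specificity = 0.95      # P(negative | no disease)

diseased = population * base_rate
healthy = population - diseased

true_positive = diseased * sensitivity
false_negative = diseased - true_positive
false_positive = healthy * (1 - specificity)
true_negative = healthy - false_positive

# Contingency table (counts)
print(f"{'':>12}{'positive':>12}{'negative':>12}")
print(f"{'disease':>12}{true_positive:>12.0f}{false_negative:>12.0f}")
print(f"{'no disease':>12}{false_positive:>12.0f}{true_negative:>12.0f}")

# P(disease | positive) = P(disease & positive) / P(positive)
p_disease_given_positive = true_positive / (true_positive + false_positive)
print(p_disease_given_positive)  # about 0.167 despite a 99% sensitive test
```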

Hypothesis test

Choice of Hypothesis: One should choose the hypothesis under which all probabilities can be
calculated.

P-value: The probability of obtaining an outcome that is as extreme as or more extreme than the
observed one, calculated under the hypothesis. It is usually computed with the addition rule, as the
two cases are clearly mutually exclusive.

P(as extreme as or more extreme than the observed outcome)
= P(more extreme) + P(as extreme as the observed one)

Critical Value: A small P-value means the observed outcome is strong evidence against the
hypothesis. (You might want to spend a few minutes here thinking through the logic; in philosophical
or mathematical logic terms, this is called “proof by contradiction”.) However, how small is small
enough? To make a judgement, we choose a critical value to compare with the P-value. When the
P-value is less than the critical value, we say that the hypothesis is rejected. Common critical values
are 0.10, 0.05, and 0.01.
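
As a hedged illustration (the scenario and numbers are assumed, not from the notes), the sketch below tests the hypothesis that a coin is fair after observing 16 heads in 20 tosses: it adds up the probabilities of all outcomes at least as extreme as the observed one, then compares the P-value with the common critical values.

```python
from math import comb

# A minimal sketch with assumed numbers (16 heads in 20 tosses of a coin
# hypothesised to be fair; not from the notes). Under this hypothesis,
# every probability can be calculated, which is why we choose it.
n, observed_heads, p_heads = 20, 16, 0.5

def binom_prob(k: int) -> float:
    """P(exactly k heads in n fair tosses)."""
    return comb(n, k) * p_heads**k * (1 - p_heads)**(n - k)

# "At least as extreme" here means at least as far from the expected 10 heads,
# on either side. The addition rule applies because distinct head counts are
# mutually exclusive outcomes.
distance = abs(observed_heads - n * p_heads)
p_value = sum(binom_prob(k) for k in range(n + 1)
              if abs(k - n * p_heads) >= distance)

print(p_value)  # about 0.012
for critical in (0.10, 0.05, 0.01):
    print(critical, "reject" if p_value < critical else "do not reject")
```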
