Conditional Probability and Independence
• If A and B are events in sample space S and P(B) > 0, then
the conditional probability of A given B is denoted P(A|B) and

P(A|B) = P(AB)/P(B).
• Example. A coin is flipped twice.
Let S = {(H,H),(H,T),(T,H),(T,T)} and assume all four
outcomes are equally likely. Let A be the event that both
flips land on heads and let B be the event that at least one
flip lands on heads. Since A = {(H,H)} and
B = {(H,H), (H,T), (T,H)}, we have
P(A|B) = P(AB)/P(B) = (1/4)/(3/4) = 1/3.
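As a quick numerical check of this example (an illustrative Python sketch, not part of the original slides), we can simulate two fair coin flips and condition on B:

# Monte Carlo check of P(A|B) = 1/3 for two fair coin flips.
# Sketch added for illustration only.
import random

trials = 100_000
count_B = 0   # at least one head
count_AB = 0  # both heads, counted only when B occurs

for _ in range(trials):
    flips = (random.choice("HT"), random.choice("HT"))
    if "H" in flips:             # event B: at least one head
        count_B += 1
        if flips == ("H", "H"):  # event A: both heads
            count_AB += 1

print(count_AB / count_B)  # should be close to 1/3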
Reduced Sample Space
• When working with conditional probability P(A|B), it is
often easier to treat B as the new sample space.
• Example. A coin is flipped twice (continued).
Let S = {(H,H),(H,T),(T,H),(T,T)},
A = {(H,H)},
B = {(H,H), (H,T), (T,H)}.
Now, think of B as the sample space, where all outcomes
are equally likely. Clearly, P(A|B) = 1/3, which agrees
with the calculation on the previous slide.
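The reduced-sample-space view can also be checked by direct enumeration; the short Python sketch below is illustrative only:

# Treat B as the reduced sample space and count the outcomes of A inside it.
A = {("H", "H")}
B = {("H", "H"), ("H", "T"), ("T", "H")}

p_A_given_B = len(A & B) / len(B)
print(p_A_given_B)  # 0.333..., agreeing with P(A|B) = 1/3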
The Law of Multiplication
• If we multiply the defining equation for the conditional probability
of A given B through by P(B), we obtain the law of multiplication
P(AB) = P(B)P(A|B).
This rule can be generalized (see the textbook).
• Problem. Let an urn contain 8 red balls and 4 white balls. We
draw 2 balls from the urn without replacement. If we assume
that at each draw each ball in the urn is equally likely to be
chosen, what is the probability that both balls are red?
Solution. Let R1 and R2 denote, resp., the events that the first
and second balls are red. Using the multiplication rule, we
have
P(R2R1) = P(R1)P(R2|R1) = (8/12)(7/11) = 14/33.
Of course, we could solve this problem by a direct count
without the use of conditional probability.
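Both routes can be carried out in a few lines; the following Python sketch (illustrative, not from the slides) compares the multiplication-rule answer with the direct count:

# Urn with 8 red and 4 white balls; draw 2 without replacement.
from fractions import Fraction
from math import comb

# Multiplication rule: P(R1 R2) = P(R1) * P(R2 | R1)
p_mult = Fraction(8, 12) * Fraction(7, 11)

# Direct count: choose 2 red out of 8, over choose any 2 out of 12
p_count = Fraction(comb(8, 2), comb(12, 2))

print(p_mult, p_count)  # both equal 14/33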
Probability of no king on four draws without replacement
• Draw from an ordinary deck of 52 cards
• Let Ai be the event that no king is drawn on the ith draw.
P(A1A2A3A4) = P(A1)P(A2|A1)P(A3|A1A2)P(A4|A1A2A3)
= (48/52)(47/51)(46/50)(45/49).
• This is the same result we previously obtained by counting.
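A small Python sketch (added for illustration) confirms that the chain of conditional probabilities equals the counting answer C(48,4)/C(52,4):

# Probability of no king in four draws without replacement, two ways.
from fractions import Fraction
from math import comb

# Chain of conditional probabilities
p_chain = Fraction(48, 52) * Fraction(47, 51) * Fraction(46, 50) * Fraction(45, 49)

# Direct count: all four cards drawn from the 48 non-kings
p_count = Fraction(comb(48, 4), comb(52, 4))

print(p_chain == p_count, float(p_chain))  # True, about 0.719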
Law of Total Probability
• Let B be an event with P(B) > 0 and P(Bc) > 0. Then for any
event A,
P(A) = P(A|B)P(B) + P(A|Bc)P(Bc).
This law may also be generalized--see textbook.
• Example. An insurance company rents 35% of the cars for its
customers from agency I and 65% from agency II. If 8% of
the cars of agency I and 5% of the cars of agency II break
down during the rental periods, what is the probability that a
car rented by this insurance company breaks down?
P(A) = P(A|I)P(I) + P(A|II)P(II)
= (0.08)(0.35) + (0.05)(0.65) = 0.0605.
• A tree diagram is often useful for the law of total probability.
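The arithmetic of this example can also be written out directly; the following is a minimal Python sketch using the numbers above:

# Law of total probability for the car-rental breakdown example.
p_I, p_II = 0.35, 0.65               # probability of renting from agency I or II
p_break_I, p_break_II = 0.08, 0.05   # breakdown probability for each agency

p_break = p_break_I * p_I + p_break_II * p_II
print(p_break)  # 0.0605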
Bayes’ Formula--see text for a generalization
• Suppose F1, F2, and F3 are pairwise disjoint and S = F1 ∪ F2 ∪ F3.
P(E) = P(EF1) + P(EF2) + P(EF3)
= P(E|F1)P(F1) + P(E|F2)P(F2) + P(E|F3)P(F3).
• Now,
P(Fj|E) = P(EFj)/P(E)
= P(E|Fj)P(Fj) / [P(E|F1)P(F1) + P(E|F2)P(F2) + P(E|F3)P(F3)].
• If event E is known to have occurred, then we can update the
probabilities that the events Fj (the hypotheses) will occur by using
Bayes’ formula. P(Fj) is called the prior probability of Fj and the
conditional probability P(Fj | E) is the posterior probability of Fj
after the occurrence of E.
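As an illustrative sketch, Bayes' formula for a finite partition can be coded in a few lines; the helper name bayes_posteriors is hypothetical (not from the textbook), and the sample call reuses the numbers from the rental-car example:

# Posterior probabilities P(Fj|E) from priors P(Fj) and likelihoods P(E|Fj).
# Hypothetical helper, written for illustration only.
def bayes_posteriors(priors, likelihoods):
    p_E = sum(p * l for p, l in zip(priors, likelihoods))  # law of total probability
    return [p * l / p_E for p, l in zip(priors, likelihoods)]

# Rental-car example: given a breakdown, how likely was each agency?
print(bayes_posteriors([0.35, 0.65], [0.08, 0.05]))
# roughly [0.463, 0.537]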
Example for Bayes’ formula
• Suppose we have 3 cards which are identical in form. The
first card has both sides red, the second card has both sides
black, and the third card has one red side and one black side.
Suppose that one of the cards is randomly selected and put
down on the ground. If the upturned side of the chosen card
is red, what is the probability that the other side is black?
• Let R2, B2, and M denote the events that the chosen card is,
resp., all red, all black, and mixed (red-black). Letting R be
the event that the upturned side of the chosen card is red, we
have
P(M|R) = P(R|M)P(M) / [P(R|R2)P(R2) + P(R|M)P(M) + P(R|B2)P(B2)]
= (1/2)(1/3) / [(1)(1/3) + (1/2)(1/3) + (0)(1/3)] = 1/3.
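A simulation sketch for the three-card problem (illustrative only): among trials in which the upturned side is red, estimate the fraction in which the hidden side is black.

# Three cards: red/red, black/black, red/black. Pick a card and a side at random.
import random

cards = [("R", "R"), ("B", "B"), ("R", "B")]
red_up = 0
other_black = 0

for _ in range(100_000):
    card = random.choice(cards)
    up, down = random.sample(card, 2)  # random orientation of the chosen card
    if up == "R":
        red_up += 1
        if down == "B":
            other_black += 1

print(other_black / red_up)  # should be close to 1/3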
Independent events
• In general, P(E|F) and P(E) are different. That is, knowing
that F has occurred generally changes the probability of E’s
occurrence. This leads to the following definition.
• Events E and F are said to be independent if
P(EF) = P(E)P(F).
If E and F are not independent, we say they are dependent.
• Example. Two coins are flipped and all 4 outcomes are
assumed to be equally likely. If E is the event that the first
coin lands heads and F is the event that the second coin lands
tails, then E and F are independent since
P(EF) = P({(H,T)}) = 1/4, P(E) = P({(H,H), (H,T)}) = 1/2, and
P(F) = P({(H,T), (T,T)}) = 1/2.
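The independence claim in this example can be verified by enumerating the four outcomes; the Python sketch below is added for illustration:

# Enumerate the four equally likely outcomes of two coin flips and
# verify P(EF) = P(E)P(F) for E = "first coin heads", F = "second coin tails".
from itertools import product
from fractions import Fraction

S = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
E = {s for s in S if s[0] == "H"}   # first coin lands heads
F = {s for s in S if s[1] == "T"}   # second coin lands tails

P = lambda event: Fraction(len(event), len(S))
print(P(E & F) == P(E) * P(F))      # True: the events are independent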
More on independent events
• If E and F are independent, so are E and Fc. See proof in textbook.
What can you say about the independence of Ec and Fc?
• For three events E, F, G, the condition P(EFG) = P(E)P(F)P(G) alone does
not imply pairwise independence. See Example 3.29 in the textbook.
• We say E1, E2, …, En are independent if for every subset
E1', E2', …, Er' with 1' < 2' < … < r' ≤ n, we have
P(E1'E2'…Er') = P(E1')P(E2')…P(Er').  (*)
• Example. Suppose we conceive of an experiment involving an
infinite number of coin flips. Suppose Ei is the event that the ith
flip turns up heads. We believe that these events are independent,
and this means that all equations of type (*) will hold (without the
restriction that subscripts are ≤ n). This shows how to extend the
concept of independence to a sequence of events.
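As a sketch of condition (*), the following checks every subset of two or more events over a finite, equally likely sample space; the function name mutually_independent is hypothetical and written only for illustration:

# Check mutual independence: P(intersection of any subset) equals the product
# of the individual probabilities. Events are sets over an equally likely S.
from itertools import combinations, product
from fractions import Fraction
from functools import reduce

def mutually_independent(S, events):
    P = lambda A: Fraction(len(A), len(S))  # equally likely outcomes
    for r in range(2, len(events) + 1):
        for subset in combinations(events, r):
            intersection = reduce(set.intersection, subset)
            prob_product = reduce(lambda x, y: x * y, (P(A) for A in subset))
            if P(intersection) != prob_product:
                return False
    return True

# Example: the events "ith flip is heads" for three fair coin flips
S = set(product("HT", repeat=3))
events = [{s for s in S if s[i] == "H"} for i in range(3)]
print(mutually_independent(S, events))  # True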
Example—An experiment with independent subexperiments
• Independent trials, each consisting of rolling a pair of dice, are performed.
What is the probability of the event E that we get a sum of 5 before
we get a sum of 7?
• Let En denote the event that no 5 or 7 appears on the first n–1 trials
and a 5 appears on the nth trial. The desired probability is
P(E1 ∪ E2 ∪ E3 ∪ …) = P(E1) + P(E2) + P(E3) + …. Do you see why?
• Since P({5 on any trial}) = 4/36 and P({7 on any trial}) = 6/36, by
the independence of trials, P(En) = (1 – 10/36)^(n–1)(4/36), and thus the
desired probability is 2/5 using the result of Appendix 2.
• Let F be the event that a 5 occurs on the 1st trial, G the event that a 7
occurs on the 1st trial, and H the event that neither a 5 nor a 7 occurs on
the 1st trial. Then P(E) = P(E|F)P(F) + P(E|G)P(G) + P(E|H)P(H). Now
P(E|F) = 1, P(E|G) = 0, and P(E|H) = P(E), since after H the experiment
effectively starts over. We have P(E) = 1/9 + P(E)(13/18), so P(E) = 2/5.
This is the same result as before.
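A simulation sketch (illustrative, not from the slides) gives the same 2/5 for the 5-before-7 problem:

# Estimate the probability of rolling a sum of 5 before a sum of 7.
import random

def five_before_seven():
    while True:
        s = random.randint(1, 6) + random.randint(1, 6)
        if s == 5:
            return True
        if s == 7:
            return False

trials = 100_000
wins = sum(five_before_seven() for _ in range(trials))
print(wins / trials)  # should be close to 2/5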