PROBABILITY WORKSHOP
Spring 1999 (Feb 9, 10, 9:40-10:30 a.m.)
(MAT 114, 117, & 119)
by Paul Vaz
1. The Sample Space (S) associated with any experiment is the set of all possible outcomes that
can occur as a result of the experiment. So naturally, we will call each element of the sample
space an outcome.
Consider the experiment of rolling a pair of fair dice. The figure below gives a
representation of all the 36 equally likely outcomes of the sample space associated with this
experiment.
EXAMPLE 1:
(1,1)  (1,2)  (1,3)  (1,4)  (1,5)  (1,6)
(2,1)  (2,2)  (2,3)  (2,4)  (2,5)  (2,6)
(3,1)  (3,2)  (3,3)  (3,4)  (3,5)  (3,6)
(4,1)  (4,2)  (4,3)  (4,4)  (4,5)  (4,6)
(5,1)  (5,2)  (5,3)  (5,4)  (5,5)  (5,6)
(6,1)  (6,2)  (6,3)  (6,4)  (6,5)  (6,6)
2. An event (E) is any subset of the sample space.
3. The probability of an event E (written as P(E)) in a sample space (S) with equally likely
outcomes is given by

   P(E) = (number of outcomes in E) / (number of outcomes in S)
EXAMPLE 2:
For the sample space in example 1, consider the event
E = {the sum of the faces is 7 or 3}.
Then
E = { (1,6), (2,5), (3,4), (4,3), (5,2), (6,1), (1,2), (2,1) }
Thus
P(E) = (number of outcomes in E) / (number of outcomes in S) = 8/36 = 2/9
Alternatively:
If we let
A = {sum of faces is 7} and B = {sum of faces is 3},
then E = A ∪ B, and since A and B are mutually exclusive, P(A ∩ B) = 0, so
P(E) = P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
     = 6/36 + 2/36 - 0
     = 8/36 = 2/9
[ Additive Rule ]
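The counts in Example 2 can be checked by enumerating the sample space directly. A minimal Python sketch (the variable names are mine, not the workshop's):

```python
from fractions import Fraction
from itertools import product

# Sample space for rolling a pair of fair dice: 36 equally likely outcomes.
S = list(product(range(1, 7), repeat=2))

# Event E: the sum of the faces is 7 or 3.
E = [(a, b) for (a, b) in S if a + b in (7, 3)]

# P(E) = (number of outcomes in E) / (number of outcomes in S)
p_E = Fraction(len(E), len(S))
print(len(E), p_E)  # 8 2/9
```

Counting with `Fraction` keeps the answer exact, matching the 2/9 worked out above.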
Observe:
Let F = {the sum of faces is neither 7 nor 3}, i.e., F is the complement of E.
Then,
P(F) = 1 - P(E)
[ Complement Rule]
     = 1 - 2/9 = 7/9
Properties:
1. 0 ≤ P(E) ≤ 1;
2. P(∅) = 0, and P(S) = 1
3. Odds for an event E = (number of elements in E) / (number of elements in E′) = P(E) / P(E′)
4. Odds against E = (number of elements in E′) / (number of elements in E) = P(E′) / P(E)
(Here E′ denotes the complement of E.)
Conditional Probability
A regular deck of playing cards consists of 52 cards:
13 clubs (black), 13 diamonds (red), 13 spades (black), and 13 hearts (red).
The 13 cards in each suit are labeled: Ace (A), 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack (J), Queen (Q), King (K).
EXAMPLE 3:
Consider the experiment of drawing a single card from the deck. The sample space associated
with the experiment has 52 equally likely outcomes. Consider the event
E = {a black ace is drawn}.
Then we have
P(E) = 2/52,
i.e.,
the probability of drawing a black ace is 1/26.
However, suppose a card is drawn and we are informed that it is a club. Then the question becomes,
'what is the probability of drawing a black ace, given the information that the card drawn
is a club?' If F = {a club is drawn}, the question can be rephrased as 'what is the
probability of E given F?' This is symbolically written as: Find
P(E | F)
i.e., P(E | F) represents the probability of the event E given the condition F.
Clearly, the given condition reduces the event E to 1 outcome, since there is only one
black ace that is a club; the given condition also reduces the sample space to 13
outcomes, since there are 13 clubs.
Thus,
P(E | F) = 1/13
Using the Formula:
P(E | F) = P(E ∩ F) / P(F),    P(F) ≠ 0
Note:
P(E | F) = (1/52) / (13/52) = 1/13
P(E ∩ F) = P(E | F) · P(F),    P(F) ≠ 0    [ Product Rule ]
Independent Events
Definition: Let E and F be two events of a sample space S with P(F) > 0.
The event E is independent of the event F iff
P(E | F) = P(E).
Theorem: Let E, F be events for which P(E) > 0 and P(F) > 0. If E is independent of F, then F
is independent of E.
Test for Independence:
Two events E and F of a sample space S are independent iff
P(E ∩ F) = P(E) · P(F)
EXAMPLE 4:
A fair coin is tossed twice. Define the events E and F to be
E: A head turns up on the first throw of a fair coin;
F: A tail turns up on the second throw of a fair coin.
Show that E and F are independent.
Solution:
E = {HH, HT}, and F = {HT, TT}.
E ∩ F = {HT}, therefore P(E ∩ F) = 1/4.
Also, P(E) · P(F) = (2/4)(2/4) = 1/4.
Thus, events E and F are independent.
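The independence test in Example 4 can be verified mechanically; a minimal sketch over the four equally likely outcomes of two tosses:

```python
from fractions import Fraction

# Two tosses of a fair coin: four equally likely outcomes.
S = ['HH', 'HT', 'TH', 'TT']

E = {o for o in S if o[0] == 'H'}   # head on the first throw
F = {o for o in S if o[1] == 'T'}   # tail on the second throw

def p(A):
    return Fraction(len(A), len(S))

# Test for independence: P(E ∩ F) == P(E) * P(F)
print(p(E & F), p(E) * p(F))  # 1/4 1/4
```

Both sides of the test come out to 1/4, so E and F are independent.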
Warning!: Mutually exclusive events are generally not independent.
Bayes' Formula
Consider the partition of the sample space S into three subsets A, B, and C.
Let E be any event in S so that P(E) > 0 (see figure below).
[Figure: the sample space S partitioned into regions A, B, and C, with the event E overlapping all three.]
P(A), P(B), and P(C) are referred to as a priori probabilities, and
P(A | E), P(B | E), and P(C | E) are called a posteriori probabilities, and are given by Bayes'
Formula, for example
P(A | E) = P(A) · P(E | A) / [ P(A) · P(E | A) + P(B) · P(E | B) + P(C) · P(E | C) ]
EXAMPLE 5
A computer manufacturer has three assembly plants. Records show that 2% of the sets shipped
from plant A turn out to be defective, as compared to 3% of those that come from plant B and
4% of those that come from plant C. In all, 30% of the manufacturer's total production comes
from plant A, 50% from plant B, and 20% from plant C. If a customer finds that her computer
is defective, what is the probability it came from plant B?
Solution:
You first recognize that the problem is solvable using Bayes' formula based on a partitioning
of the sample space (in this example, plants A, B, and C). Note that every problem solvable by
Bayes' formula has an associated probability tree diagram. If we let D denote 'defective' and D′
denote 'non-defective', then the tree diagram associated with our example is:
    Plant A (.3):  D (.02),  D′
    Plant B (.5):  D (.03),  D′
    Plant C (.2):  D (.04),  D′
Using Bayes' Formula
P(B | D) = P(B) · P(D | B) / [ P(A) · P(D | A) + P(B) · P(D | B) + P(C) · P(D | C) ]
         = (.5)(.03) / [ (.3)(.02) + (.5)(.03) + (.2)(.04) ]
         = .015 / .029
         ≈ .51724
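The arithmetic in Example 5 can be checked numerically. A sketch, with my own variable names (the `prior`/`p_def` dictionaries are not from the handout):

```python
# Prior probabilities: each plant's share of total production.
prior = {'A': 0.30, 'B': 0.50, 'C': 0.20}

# Likelihoods: P(defective | plant).
p_def = {'A': 0.02, 'B': 0.03, 'C': 0.04}

# Bayes' formula: P(B | D) = P(B)·P(D|B) / sum over plants of P(plant)·P(D|plant)
total = sum(prior[k] * p_def[k] for k in prior)
posterior_B = prior['B'] * p_def['B'] / total
print(round(posterior_B, 5))  # 0.51724
```

The denominator `total` is just the total probability that a randomly chosen computer is defective (.029), i.e., the sum over all branches of the tree that end in D.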
Binomial Probability
Bernoulli Trial: Random experiments are called Bernoulli trials if
a. the same experiment is repeated several times
b. there are only two possible outcomes (success and failure) on each trial
c. the repeated trials are independent
d. the probability of each outcome remains the same for each trial
Bernoulli trials can always be represented by a tree diagram. Let the outcome success be denoted
by S and the outcome failure, by F. If P(S) = p, and P(F) = q, then p + q = 1.
The tree diagram for the experiment repeated twice is:
    S (p):  S (p),  F (q)
    F (q):  S (p),  F (q)
EXAMPLE 6:
A marksman hits a target with a probability 4/5. Assuming independence for
successive firings, find the probability of getting two misses and one hit.
Let S represent 'hit' and F represent 'miss'. Then P(S) = 4/5 = p, and P(F) = 1/5 = q.
Then by the binomial probability formula, the probability of getting two misses, and one hit
(k =1, and n = 3) is given by:
b(n, k; p) = [ n! / (k!(n - k)!) ] p^k q^(n-k)

b(3, 1; 4/5) = [ 3! / (1!(3 - 1)!) ] (4/5)^1 (1/5)^(3-1)
             = .096
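The binomial probability formula translates directly into code; a sketch (the function name `b` mirrors the handout's notation, the rest is my own):

```python
from fractions import Fraction
from math import comb

def b(n, k, p):
    """Binomial probability of exactly k successes in n Bernoulli trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example 6: two misses and one hit in three firings, with P(hit) = 4/5.
prob = b(3, 1, Fraction(4, 5))
print(prob, float(prob))  # 12/125 0.096
```

Using `Fraction(4, 5)` for p keeps the result exact (12/125); passing a float would give the same answer up to rounding.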
Expected Value
Let us examine what we mean when we assign probabilities to events. In the experiment of
tossing a coin once, we assign a probability of 1/2 for obtaining a Head (or a Tail). This is so
because if we perform a long sequence of tosses, we notice that the number of times heads
occurs is approximately equal to the number of times tails appears, i.e., if you toss the coin
many times, you will get a Head (or a Tail) once every two tosses on average.
Now, assume that you are in a casino playing roulette, and you are concentrating on the $1
single-number bet. This bet involves your selecting a single number from a set of 38 numbers. If
the casino selects that number (by spinning a ball in a wheel), you win $35; if another number is
selected you lose your $1.
Question:
If you were to play this game many times, how much should you 'expect' to win (or lose) on
'average'?
Solution:
P(winning) = 1/38, and P(losing) = 37/38, i.e., if you were to play the game many times, you
would win once for every 38 times you placed the bet, and lose the other 37 times. So the
'long-term' average winnings (or losses) would be:

    [ 1($35) - 37($1) ] / 38 ≈ -$0.053 ≈ -1 nickel.

Thus, you should expect to lose about a nickel (on average) on every game, if you played the
game many times.
We say that the expected value is -1 nickel.
Note: If you play the game only a few times, anything could happen; you could, for instance, win
every bet. The casino, however, makes so many bets that on average it profits a nickel on every game.
Standard way to find the expected value:
From the information given, construct a probability distribution. Consider the roulette game. The
experiment has two partitions (win/lose) with probabilities p1 and p2, and payoffs m1 and m2.
The probability distribution:

    Outcome    Probability    Payoff
    WIN        p1 = 1/38      m1 = $35
    LOSE       p2 = 37/38     m2 = -$1

Expected value (EV) = p1·m1 + p2·m2
                    = (1/38)(35) + (37/38)(-1)
                    ≈ -$0.053
Formula:
If an experiment has n partitions that are assigned the payoffs m1, m2, ..., mn occurring with
probabilities p1, p2, ..., pn, then the expected value is

    EV = p1·m1 + p2·m2 + ... + pn·mn