Introduction to Probability

•Experiments
•Counting Rules
•Combinations
•Permutations
•Assigning Probabilities

Experiments

Experiments are processes that generate well-defined outcomes.
Sample Space

Experiment                        Experimental Outcomes
Toss a coin                       Head, tail
Select a part for inspection      Defective, nondefective
Conduct a sales call              Purchase, no purchase
Roll a die                        1, 2, 3, 4, 5, 6
Play a football game              Win, lose, tie
Event

The sample space for an experiment is the set of all experimental outcomes.
Any subset of the sample space is called an event.

For a coin toss:                  S = {Head, Tail}
Selecting a part for inspection:  S = {Defective, Nondefective}
Rolling a die:                    S = {1, 2, 3, 4, 5, 6}

Events for rolling a die:
{1}        // the outcome is 1 (elementary event)
{1, 3, 5}  // the outcome is an odd number
{4, 5, 6}  // the outcome is at least 4
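The set language above maps directly onto Python sets. A minimal sketch (the variable names are our own, not from the slides):

```python
# Representing the die-rolling sample space and events as Python sets.
sample_space = {1, 2, 3, 4, 5, 6}   # rolling a die

elementary = {1}                     # the outcome is 1 (elementary event)
odd = {1, 3, 5}                      # the outcome is an odd number
at_least_4 = {4, 5, 6}               # the outcome is at least 4

# Every event is a subset of the sample space.
assert elementary <= sample_space
assert odd <= sample_space
assert at_least_4 <= sample_space
```

The `<=` operator on sets is Python's subset test, matching the definition of an event as a subset of S.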
Basic Requirements for Assigning Probabilities

Probability is a numerical measure of the likelihood of an event occurring.

• Let Ei denote the ith experimental outcome (elementary event) and P(Ei) its
  probability of occurring. Then:

  0 ≤ P(Ei) ≤ 1 for all i

  [Probability scale from 0 to 1; at 0.5 the occurrence of the event is just
  as likely as it is unlikely.]

• The sum of the probabilities for all experimental outcomes must equal 1.
  For n experimental outcomes:

  P(E1) + P(E2) + . . . + P(En) = 1
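The two requirements are easy to check mechanically. A minimal sketch, assuming equal probabilities for the six die outcomes:

```python
# Check the two basic requirements for an assignment of probabilities
# to the six die outcomes (equal probabilities assumed).
probs = {i: 1 / 6 for i in range(1, 7)}

assert all(0 <= p <= 1 for p in probs.values())   # 0 <= P(Ei) <= 1 for all i
assert abs(sum(probs.values()) - 1.0) < 1e-9      # probabilities sum to 1
```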
Classical Method (Principle of Indifference)

We assign equal probability to elementary events if we have no reason to
expect one over the other:

P(Ei) = 1/n

This method of assigning probabilities is indicated if each experimental
outcome is equally likely.

For a coin toss:   S = {Head, Tail}
                   P(Head) = P(Tail) = 1/2
Rolling a die:     S = {1, 2, 3, 4, 5, 6}
                   P(1) = P(2) = … = P(6) = 1/6

Relative Frequency Method

What if experimental outcomes are NOT equally likely? Then the Principle of
Indifference is out, and we must assign probabilities on the basis of
experimentation or historical data. This method is indicated when data are
available to estimate the proportion of the time the experimental outcome
will occur if the experiment is repeated a large number of times.

Selecting a part for inspection:   S = {Defective, Nondefective}
                                   P(Defective) = ?
Counting Experimental Outcomes

To assign probabilities, we must first count experimental outcomes. We have
three useful counting rules for multiple-step experiments. For example, what
is the number of possible outcomes if we roll a die 4 times?

1. Counting rule for multi-step experiments
2. Counting rule for combinations
3. Counting rule for permutations

Example: Lucas Tool Rental
Relative Frequency Method

Each probability assignment is given by dividing the frequency (number of
days) by the total frequency (total number of days).

Number of          Number
Cleaners Rented    of Days    Probability
0                  4          .10  (= 4/40)
1                  6          .15
2                  18         .45
3                  10         .25
4                  2          .05
Total              40         1.00
S  Defective, Nondefective
N parts: n1 defective and n2 nondefective
P(Defective) = n1/N, P(Nondefective) = n2/N
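The relative frequency computation for the rental data amounts to one division per row. A minimal sketch using the frequencies from the slides:

```python
# Relative frequency method applied to the rental data from the slides
# (cleaners rented 0-4 over the last 40 days).
days = {0: 4, 1: 6, 2: 18, 3: 10, 4: 2}
total = sum(days.values())                       # 40 days in all

probs = {rented: n / total for rented, n in days.items()}
assert abs(probs[0] - 0.10) < 1e-12              # 4/40
assert abs(probs[2] - 0.45) < 1e-12              # 18/40
assert abs(sum(probs.values()) - 1.0) < 1e-12    # probabilities sum to 1
```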
Relative Frequency Method

Ace Rental would like to assign probabilities to the number of carpet
cleaners it rents each day. Office records show the following frequencies of
daily rentals for the last 40 days.

Number of          Number
Cleaners Rented    of Days
0                  4
1                  6
2                  18
3                  10
4                  2
Subjective Method

• When economic conditions and a company's circumstances change rapidly, it
  might be inappropriate to assign probabilities based solely on historical
  data.
• We can use any data available as well as our experience and intuition, but
  ultimately a probability value should express our degree of belief that the
  experimental outcome will occur.
• The best probability estimates often are obtained by combining the
  estimates from the classical or relative frequency approach with the
  subjective estimate.
Counting Rule for Multi-Step Experiments

If an experiment can be described as a sequence of k steps, with n1 possible
outcomes on the first step, n2 possible outcomes on the second step, and so
on, then the total number of experimental outcomes is given by:

(n1)(n2) . . . (nk)

Example: Bradley Investments

Bradley has invested in two stocks, Markley Oil and Collins Mining. Bradley
has determined that the possible outcomes of these investments three months
from now are as follows.

Investment Gain or Loss in 3 Months (in $000)
Markley Oil:     10, 5, 0, −20
Collins Mining:  8, −2

A Counting Rule for Multiple-Step Experiments

Bradley Investments can be viewed as a two-step experiment. It involves two
stocks, each with a set of experimental outcomes.

Markley Oil:                               n1 = 4
Collins Mining:                            n2 = 2
Total Number of Experimental Outcomes:     n1n2 = (4)(2) = 8
Tree Diagram

[Tree diagram: Stage 1 (Markley Oil) branches Gain 10, Gain 5, Even, and
Lose 20; each is followed by Stage 2 (Collins Mining) branches Gain 8 and
Lose 2, giving n1n2 = (4)(2) = 8 experimental outcomes.]
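The multi-step counting rule can be checked by enumerating outcomes with `itertools.product`. A minimal sketch covering both the four-die-roll question and the Bradley two-step experiment:

```python
# Counting rule for multi-step experiments, illustrated by enumeration.
from itertools import product

# Rolling a die 4 times: 6 outcomes per step, 4 steps.
die_rolls = list(product(range(1, 7), repeat=4))
assert len(die_rolls) == 6 ** 4                 # 1296 outcomes

# Bradley Investments: 4 Markley Oil outcomes, then 2 Collins Mining outcomes.
markley = [10, 5, 0, -20]                        # stage 1 outcomes ($000)
collins = [8, -2]                                # stage 2 outcomes ($000)
outcomes = list(product(markley, collins))
assert len(outcomes) == 4 * 2                    # (n1)(n2) = 8
```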
Counting Rule for Combinations

This rule allows us to count the number of experimental outcomes when we
select n objects from a (usually larger) set of N objects. The number of
combinations of N objects taken n at a time is

C(N, n) = N! / (n!(N − n)!)

where

N! = N(N − 1)(N − 2) . . . (2)(1)
n! = n(n − 1)(n − 2) . . . (2)(1)

and by definition 0! = 1.
Experimental Outcomes

(10, 8)      Gain $18,000
(10, −2)     Gain $8,000
(5, 8)       Gain $13,000
(5, −2)      Gain $3,000
(0, 8)       Gain $8,000
(0, −2)      Lose $2,000
(−20, 8)     Lose $12,000
(−20, −2)    Lose $22,000
Example: Quality Control

An inspector randomly selects 2 of 5 parts for inspection. In a group of 5
parts, how many combinations of 2 parts can be selected?

C(5, 2) = 5! / (2!(5 − 2)!) = (5)(4)(3)(2)(1) / ((2)(1)(3)(2)(1)) = 120/12 = 10

Let the parts be designated A, B, C, D, E. Thus we could select:
AB AC AD AE BC BD BE CD CE and DE
Counting Rule for Permutations

Sometimes the order of selection matters. This rule allows us to count the
number of experimental outcomes when n objects are to be selected from a set
of N objects and the order of selection matters. The number of permutations
of N objects taken n at a time is

P(N, n) = n! C(N, n) = N! / (N − n)!

Iowa Lottery

Iowa randomly selects 6 integers from a group of 47 to determine the weekly
winner. What are your odds of winning if you purchased one ticket?

C(47, 6) = 47! / (6!(47 − 6)!)
         = (47)(46)(45)(44)(43)(42) / ((6)(5)(4)(3)(2)(1))
         = 10,737,573
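Both counting rules are built into Python's `math` module, which makes the lottery figure a one-liner to verify. A minimal sketch:

```python
# Verifying the lottery count and the permutation formula.
from math import comb, perm, factorial

assert comb(47, 6) == 10_737_573                 # one ticket in ~10.7 million
assert perm(5, 2) == factorial(5) // factorial(3) == 20   # N!/(N-n)!
```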
Some Basic Relationships of Probability

There are some basic probability relationships that can be used to compute
the probability of an event without knowledge of all the sample point
probabilities:

• Complement of an Event
• Union of Two Events
• Intersection of Two Events
• Mutually Exclusive Events

Example: Quality Control Again

An inspector randomly selects 2 of 5 parts for inspection. In a group of 5
parts, how many permutations of 2 parts can be selected?

P(5, 2) = 5! / (5 − 2)! = 5! / 3! = (5)(4)(3)(2)(1) / ((3)(2)(1)) = 120/6 = 20

Again let the parts be designated A, B, C, D, E. Thus we could select:
AB BA AC CA AD DA AE EA BC CB BD DB BE EB CD DC CE EC DE and ED
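The 20 ordered selections can likewise be listed with `itertools.permutations`. A minimal sketch:

```python
# Listing the 20 ordered selections of 2 parts out of 5.
from itertools import permutations

parts = "ABCDE"
ordered = list(permutations(parts, 2))
assert len(ordered) == 20                        # twice the 10 combinations
assert ("A", "B") in ordered and ("B", "A") in ordered   # order matters
```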
Complement of an Event

The complement of event A is defined to be the event consisting of all
sample points that are not in A. The complement of A is denoted by Ac.

[Venn diagram: event A and its complement Ac within sample space S]

Union of Two Events

The union of events A and B is the event containing all sample points that
are in A or B or both. The union of events A and B is denoted by A ∪ B.

[Venn diagram: overlapping events A and B within sample space S]
Union of Two Events

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

M ∪ C = Markley Oil Profitable or Collins Mining Profitable
M ∪ C = {(10, 8), (10, −2), (5, 8), (5, −2), (0, 8), (−20, 8)}
P(M ∪ C) = P(10, 8) + P(10, −2) + P(5, 8) + P(5, −2) + P(0, 8) + P(−20, 8)
         = .20 + .08 + .16 + .26 + .10 + .02
         = .82

Intersection of Two Events

The intersection of events A and B is the set of all sample points that are
in both A and B. The intersection of events A and B is denoted by A ∩ B.

[Venn diagram: the overlap of events A and B within sample space S is the
intersection A ∩ B]

Intersection of Two Events

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable

M ∩ C = Markley Oil Profitable and Collins Mining Profitable
M ∩ C = {(10, 8), (5, 8)}
P(M ∩ C) = P(10, 8) + P(5, 8)
         = .20 + .16
         = .36

Addition Law

The addition law provides a way to compute the probability of event A, or B,
or both A and B occurring. The law is written as:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Addition Law

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable
M ∪ C = Markley Oil Profitable or Collins Mining Profitable

We know: P(M) = .70, P(C) = .48, P(M ∩ C) = .36
Thus: P(M ∪ C) = P(M) + P(C) − P(M ∩ C)
               = .70 + .48 − .36
               = .82

(This result is the same as that obtained earlier using the definition of
the probability of an event.)

Mutually Exclusive Events

Two events are said to be mutually exclusive if the events have no sample
points in common. Two events are mutually exclusive if, when one event
occurs, the other cannot occur.

[Venn diagram: disjoint events A and B within sample space S]
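The addition law is a one-line arithmetic check in Python, using the Bradley probabilities from the slides:

```python
# Verifying the addition law with the Bradley probabilities.
p_m, p_c, p_m_and_c = 0.70, 0.48, 0.36

p_m_or_c = p_m + p_c - p_m_and_c                 # P(M u C) = P(M)+P(C)-P(M n C)
assert abs(p_m_or_c - 0.82) < 1e-9               # matches the direct sum
```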
Mutually Exclusive Events

If events A and B are mutually exclusive, P(A ∩ B) = 0.
The addition law for mutually exclusive events is:

P(A ∪ B) = P(A) + P(B)

There's no need to include "− P(A ∩ B)".

Conditional Probability

The probability of an event given that another event has occurred is called
a conditional probability. The conditional probability of A given B is
denoted by P(A|B). A conditional probability is computed as follows:

P(A|B) = P(A ∩ B) / P(B)
Conditional Probability

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable
P(C|M) = Collins Mining Profitable given Markley Oil Profitable

We know: P(M ∩ C) = .36, P(M) = .70
Thus: P(C|M) = P(C ∩ M) / P(M) = .36 / .70 = .5143

Multiplication Law

The multiplication law provides a way to compute the probability of the
intersection of two events. The law is written as:

P(A ∩ B) = P(B)P(A|B)
Multiplication Law

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable
M ∩ C = Markley Oil Profitable and Collins Mining Profitable

We know: P(M) = .70, P(C|M) = .5143
Thus: P(M ∩ C) = P(M)P(C|M)
               = (.70)(.5143)
               = .36
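The conditional probability and the multiplication law are inverses of each other, which a quick numeric check makes explicit (Bradley numbers from the slides):

```python
# Conditional probability and the multiplication law.
p_m, p_m_and_c = 0.70, 0.36

p_c_given_m = p_m_and_c / p_m                    # P(C|M) = P(C n M)/P(M) ~ .5143
assert abs(p_m * p_c_given_m - p_m_and_c) < 1e-12   # P(M)P(C|M) recovers P(M n C)
```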
Independent Events

If the probability of event A is not changed by the occurrence of event B,
we say that events A and B are independent. Two events A and B are
independent if:

P(A|B) = P(A)  or  P(B|A) = P(B)
(This result is the same as that obtained earlier
using the definition of the probability of an event.)
Multiplication Law for Independent Events

The multiplication law also can be used as a test to see if two events are
independent. For independent events the law is written as:

P(A ∩ B) = P(A)P(B)

Event M = Markley Oil Profitable
Event C = Collins Mining Profitable
Are events M and C independent? Does P(M ∩ C) = P(M)P(C)?

We know: P(M ∩ C) = .36, P(M) = .70, P(C) = .48
But: P(M)P(C) = (.70)(.48) = .336, not .36
Hence: M and C are not independent.
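The independence test above amounts to comparing a product against a known joint probability. A minimal sketch with the slides' numbers:

```python
# Testing independence by comparing P(M)P(C) to P(M n C).
p_m, p_c, p_m_and_c = 0.70, 0.48, 0.36

product = p_m * p_c                              # .336
independent = abs(product - p_m_and_c) < 1e-9
assert not independent                           # .336 != .36, so not independent
```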
Terminology

Events may or may not be mutually exclusive.
If E and F are mutually exclusive events, then
P(E ∪ F) = P(E) + P(F)
If E and F are not mutually exclusive, then
P(E ∪ F) = P(E) + P(F) − P(E ∩ F).
All elementary events are mutually exclusive.
Simple probabilities

If A and B are mutually exclusive events, then the probability that either A
or B occurs is given by the union:

P(A ∪ B) = P(A) + P(B)

The birth of a son or a daughter are mutually exclusive events. The event
that the outcome of rolling a die is even and the event that the outcome of
rolling a die is at least four are not mutually exclusive.

Example: The probability of a hat being red is ¼, the probability of the hat
being green is ¼, and the probability of the hat being black is ½. Then, the
probability of a hat being red OR black is ¾.

Simple probabilities

If A and B are independent events, then the probability that both A and B
occur is given by the intersection:

P(A ∩ B) = P(A)P(B)
Simple probabilities

Example: The probability that a US president is bearded is ~14%, and the
probability that a US president died in office is ~19%. If the two events
are independent, the probability that a president both had a beard and died
in office is ~3%. In reality, 2 bearded presidents died in office. (A close
enough result.)

Conditional probabilities

What is the probability that event A occurs given that event B did occur?
The conditional probability of A given B is

P(A|B) = P(A ∩ B) / P(B)

Example: The probability that a US president dies in office given that he is
bearded is 0.03/0.14 ≈ 21%. Thus, out of 6 bearded presidents, about 21% are
expected to die in office. In reality, 2 died. (Again, a close enough
result.)

Presidents who died in office: Harrison, Taylor, Lincoln*, Garfield*,
McKinley*, Harding, Roosevelt, Kennedy* (*assassinated)
Probability Distribution

The probability distribution refers to the frequency with which all possible
outcomes occur. There are numerous types of probability distribution.

The Uniform Distribution

A variable is said to be uniformly distributed if the probabilities of all
possible outcomes are equal to one another. Thus, the probability P(i),
where i is one of n possible outcomes, is

P(i) = 1/n

The Binomial Distribution

A process that has only two possible outcomes is called a binomial process.
In statistics, the two outcomes are frequently denoted as success and
failure. The probabilities of a success or a failure are denoted by p and q,
respectively. Note that p + q = 1. The binomial distribution gives the
probability of exactly k successes in n trials:
P(k) = C(n, k) p^k (1 − p)^(n − k)

The mean and variance of a binomially distributed variable are given by
μ = np and V = npq.
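The binomial pmf and its mean/variance formulas can be checked directly against a full enumeration. A minimal sketch, with n = 10 and p = 0.3 chosen purely for illustration:

```python
# Binomial pmf built from math.comb, checked against mu = np and V = npq.
from math import comb

def binom_pmf(k, n, p):
    """P(k) = C(n, k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3                                   # illustrative values
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))

assert abs(sum(pmf) - 1) < 1e-12                 # pmf sums to 1
assert abs(mean - n * p) < 1e-9                  # mu = np
assert abs(var - n * p * (1 - p)) < 1e-9         # V = npq, with q = 1 - p
```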
The Poisson Distribution

When the probability of "success" is very small, e.g., the probability of a
mutation, then p^k and (1 − p)^(n − k) become too small to calculate exactly
by the binomial distribution. In such cases, the Poisson distribution
becomes useful. Let λ be the expected number of successes in a process
consisting of n trials, i.e., λ = np. The probability of observing k
successes is

P(k) = λ^k e^(−λ) / k!

[Portrait: Siméon Denis Poisson, 1781–1840]

The mean and variance of a Poisson distributed variable are given by μ = λ
and V = λ, respectively.
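The Poisson approximation to the binomial can be checked numerically: with a small p and large n (λ = np held fixed), the two pmfs nearly agree. A minimal sketch; n, p, and k are illustrative values of our own choosing:

```python
# Poisson pmf and its agreement with the binomial for rare events.
from math import comb, exp, factorial

def poisson_pmf(k, lam):
    """P(k) = lam^k e^(-lam) / k!."""
    return lam**k * exp(-lam) / factorial(k)

n, p = 10_000, 0.0002                            # rare "success", e.g. a mutation
lam = n * p                                      # lambda = np = 2

binom = comb(n, 3) * p**3 * (1 - p) ** (n - 3)   # exact binomial P(3)
assert abs(poisson_pmf(3, lam) - binom) < 1e-4   # Poisson approximates it well
```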
Normal Distribution

Gamma Distribution