1.5 Bernoulli trials and binomial probabilities
1.5.1 Independent and identically distributed random variables
A stochastic process X1, ..., Xn is said to be a sequence of independent and identically distributed (iid for short) random variables, or a sequence of repeated independent trials, if

(1) X1, ..., Xn are independent of each other, i.e. Pr{X1 = x1, ..., Xn = xn} = Pr{X1 = x1} ... Pr{Xn = xn} for any x1, ..., xn.

(2) X1, ..., Xn all have the same probability distribution, i.e. for each xj, the value of Pr{Xi = xj} is the same for each i.
It follows from (2) that the Xi all have the same probability mass function. Let’s denote this simply by f(a)
so
f(a) = Pr{X1 = a} = Pr{X2 = a} = ... = Pr{Xn = a}.
It follows from the independence of the Xi that
(3) f(x1, ..., xn) = Pr{X1 = x1, ..., Xn = xn} = f(x1) ... f(xn),

where f(x1, ..., xn) is the joint pmf of X1, ..., Xn.
Example 1. Suppose, as in Example 9 of section 1.2.5, we roll a die n times. Let Xj be the result of the jth roll. The outcomes are (x1, ..., xn) where each of x1, ..., xn can be 1, 2, 3, 4, 5 or 6. Suppose the n rolls of the die are a sequence of independent trials with Pr{Xj = k} = 1/6 for each j and k. Then
Pr{(x1, ..., xn)} = (1/6)^n for each outcome (x1, ..., xn).
Problem 1. Under the assumptions of Example 1,
a. What is the probability that we don't get a 6 in n rolls of a die?
Ans: (5/6)^n.
b. What is the probability that we get at least one 6 in n rolls of a die?
Ans: 1 - (5/6)^n.
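The answer to part b can be checked by simulation. This is an illustrative sketch in Python; the function name, trial count, and seed are my own choices, not part of the text.

```python
import random

def at_least_one_six(n, trials=100_000, seed=0):
    """Estimate the probability of at least one 6 in n die rolls."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    hits = sum(
        any(rng.randint(1, 6) == 6 for _ in range(n))
        for _ in range(trials)
    )
    return hits / trials

n = 4
exact = 1 - (5/6)**n          # the answer from Problem 1b
estimate = at_least_one_six(n)
print(f"exact = {exact:.4f}, simulated = {estimate:.4f}")
```

With 100,000 trials the simulated frequency should land within about 0.005 of the exact value.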
1.5.2 Bernoulli trials.
A sequence of Bernoulli trials is a sequence of repeated independent trials X1, ..., Xn where each observation can have only two possible outcomes. The two outcomes are often called success and failure. It is common to let S or 1 represent success and F or 0 represent failure, and to denote the probability of success on any one trial by p. Thus, the probability of failure is 1 - p. For Bernoulli trials, formula (3) becomes

(4) Pr{(x1, ..., xn)} = Pr{X1 = x1, ..., Xn = xn} = p^k (1-p)^(n-k),

where k is the number of successes in x1, ..., xn. Note that each xi is either success or failure.
Example 2. During January the probability that any child at Washington Elementary School is sick is 0.4. Furthermore, whether one child is sick is independent of whether any other child is sick. Suppose five children are selected at random. What is the probability they are all sick?
Let S or 1 indicate that a child is sick, let W or 0 indicate that a child is well, and let Xj be the condition of the jth child. So Xj = 1 indicates the jth child is sick and Xj = 0 indicates the jth child is well. The sample space consists of the 32 possible outcomes
SSSSS, SSSSW, SSSWS, …, WWWWW
Each outcome has an equivalent representation in terms of random variables. For example, the outcome SSWSW can be represented by X1 = 1, X2 = 1, X3 = 0, X4 = 1, X5 = 0. The probability that all five children are sick is Pr{SSSSS} = Pr{X1 = 1, ..., X5 = 1}. Since X1, ..., X5 are independent, this is equal to Pr{X1 = 1} ... Pr{X5 = 1} = (0.4)^5 = 0.01024.
Example 3. A company produces transistors. They estimate that the probability that any one transistor is defective is 0.1. Suppose a box contains 10 transistors. What is the probability that there are no defective transistors? Assume that whether each transistor is defective is independent of whether the others are or not.
Let Xi be the random variable which is 1 if the ith transistor is defective and 0 if it is not. The probability that there are no defectives is Pr{X1 = 0, ..., X10 = 0}. Since X1, ..., X10 are independent, this is equal to Pr{X1 = 0} ... Pr{X10 = 0} = (0.9)^10 ≈ 0.3487.
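Both computations are just products of identical per-trial probabilities, which independence justifies. A two-line sketch (the variable names are my own):

```python
# Independence lets us multiply the identical per-trial probabilities.
p_all_sick = 0.4 ** 5      # Example 2: all five children sick
p_no_defects = 0.9 ** 10   # Example 3: no defective transistors
print(round(p_all_sick, 5))    # 0.01024
print(round(p_no_defects, 4))  # 0.3487
```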
Let N be the number of successes in a sequence of n Bernoulli trials X1, ..., Xn. If success is denoted by 1 and failure by 0, then

(5) N = X1 + ... + Xn.

We are interested in the distribution of N, i.e. the probability that N has a given value k.
Example 4. In the context of Example 2, what is the probability that exactly 2 of the 5 children are sick?
Solution. One possibility is that the first two children are sick and the last three are well. This corresponds to SSWWW or X1 = 1, X2 = 1, X3 = 0, X4 = 0, X5 = 0. The probability of this outcome is
Pr{SSWWW} = Pr{X1 = 1, X2 = 1, X3 = 0, X4 = 0, X5 = 0} = (0.4)^2 (0.6)^3 = 0.03456.
However, there are other outcomes where two of the five children are sick. In this case it is not hard to list them all. They are SSWWW, SWSWW, SWWSW, SWWWS, WSSWW, WSWSW, WSWWS, WWSSW, WWSWS, WWWSS. So there are 10 different outcomes where two of the five children are sick. Each of these outcomes has the same probability, namely (0.4)^2 (0.6)^3 = 0.03456. So the probability of exactly two of the five children being sick is (10)(0.03456) = 0.3456.
Here is a way to think about the outcomes where two of the five children are sick. For each such outcome we list the positions where the two S's occur. Thus SSWWW corresponds to 12 and SWSWW corresponds to 13. The 10 outcomes where there are 2 S's can then be listed as 12, 13, 14, 15, 23, 24, 25, 34, 35, 45.
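The enumeration above can be done mechanically with Python's standard itertools module. A minimal sketch (the variable names are my own):

```python
from itertools import product, combinations

# Enumerate all 2^5 sick/well outcomes and keep those with exactly two 1's.
outcomes = [seq for seq in product([0, 1], repeat=5) if sum(seq) == 2]
print(len(outcomes))  # 10

# Each such outcome has the same probability (0.4)^2 (0.6)^3, so summing
# reproduces the answer (10)(0.03456) = 0.3456.
prob = sum(0.4**sum(seq) * 0.6**(5 - sum(seq)) for seq in outcomes)
print(round(prob, 4))  # 0.3456

# The position pairs of the two S's, matching the list 12, 13, ..., 45:
print(sorted(combinations(range(1, 6), 2))[:3])  # [(1, 2), (1, 3), (1, 4)]
```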
Let's return to the general case where we are interested in the probability that exactly k outcomes are successes in a sequence of n Bernoulli trials. We need to count the number of sequences (x1, ..., xn) which have k successes. Each such sequence gives a set {i1, ..., ik} of k numbers from the set {1, ..., n}, namely the numbers i where xi is success. In the previous section, 1.4.2, we showed that the number of ways we can select k different objects from a set of n objects is

(6) C(n,k) = n! / (k! (n-k)!)

This quantity is called the number of combinations of n things taken k at a time and is denoted here by the symbol C(n,k). It is also called the binomial coefficient.
(7) Pr{N = k} = Pr{k successes in a sequence of n Bernoulli trials} = C(n,k) p^k (1-p)^(n-k).

The set of probabilities pk = C(n,k) p^k (1-p)^(n-k), where k runs over 0, 1, ..., n, is called a binomial probability distribution, and a random variable N with a probability mass function of this form is called a binomial random variable.
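Formula (7) translates directly into code. A minimal sketch in Python (the function name binomial_pmf is illustrative; math.comb requires Python 3.8 or later):

```python
from math import comb

def binomial_pmf(n, k, p):
    """Probability of exactly k successes in n Bernoulli trials, formula (7)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example 4 revisited: exactly 2 of 5 children sick with p = 0.4.
print(round(binomial_pmf(5, 2, 0.4), 4))  # 0.3456

# The probabilities over k = 0, ..., n sum to 1, as a pmf must.
total = sum(binomial_pmf(5, k, 0.4) for k in range(6))
print(round(total, 10))  # 1.0
```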
Example 5. In the context of Example 2, what is the probability pk that exactly k of the 5 children are sick, for each of the values k = 0, 1, 2, 3, 4, and 5?
Solution. pk = C(5,k) (0.4)^k (0.6)^(5-k). Here is a table of values of this distribution and its graph.

k    pk
0    0.07776
1    0.2592
2    0.3456
3    0.2304
4    0.0768
5    0.01024

[Graph: bar plot of pk for k = 0, ..., 5, peaking at k = 2.]
This graph illustrates one of the general features of a binomial probability distribution. For fixed n and p, the values of pk = C(n,k) p^k (1-p)^(n-k) increase with k until a value m, after which they decrease with k. The value of m is the largest integer less than or equal to (n+1)p. An extreme case is when (n+1)p < 1, in which case the values of pk decrease with k for k = 0, 1, ..., n. The other extreme is when (n+1)p > n, in which case the values of pk increase with k for k = 0, 1, ..., n.
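The claim about the peak can be checked numerically. A small sketch, assuming Python's standard math module (the choice n = 10, p = 0.3 is arbitrary):

```python
from math import comb, floor

n, p = 10, 0.3
# The full pmf for k = 0, ..., n, from formula (7).
values = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
# Index of the largest value, compared with floor((n+1)p).
mode = max(range(n + 1), key=lambda k: values[k])
print(mode, floor((n + 1) * p))  # both 3
```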
Example 6. What is the probability of getting exactly 5 heads in 10 flips of a fair coin?
This is a series of Bernoulli trials where the probability of success is p = 1/2. We want the probability that the number N of successes is k = 5. By (7) this is
C(10,5) (1/2)^10 = 252/2^10 ≈ 0.2461.
Problem 2. a) Show that the probability of getting exactly r heads in n tosses of a fair coin is C(n,r) 2^(-n).
b) Suppose n is even. Show that the probability, C(n, n/2) 2^(-n), of getting exactly n/2 heads in n tosses is approximately sqrt(2/(πn)). Use Stirling's approximation: n! ≈ sqrt(2πn) (n/e)^n.
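The approximation in part b) can be checked numerically for a moderate n. A sketch (n = 100 is an arbitrary choice):

```python
from math import comb, factorial, pi, sqrt, e

n = 100
# Exact probability of exactly n/2 heads, versus the approximation.
exact = comb(n, n // 2) * 2**-n
approx = sqrt(2 / (pi * n))
print(f"{exact:.6f} vs {approx:.6f}")

# Stirling's approximation itself: relative error is roughly 1/(12n).
stirling = sqrt(2 * pi * n) * (n / e)**n
rel_err = abs(stirling - factorial(n)) / factorial(n)
print(f"relative error of Stirling for n = {n}: {rel_err:.2e}")
```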
Example 7. If we expand the binomial (x + y)^n then we get the sum of the 2^n different terms of the form z1...zn where each zi is either x or y. The number of different terms which have exactly r x's and (n-r) y's is C(n,r). These terms are all equal to x^r y^(n-r). So

(x + y)^n = Σ_{r=0}^{n} C(n,r) x^r y^(n-r).

This is called the binomial theorem.
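The identity can be verified numerically for specific values. A minimal sketch (the values x = 3, y = 5, n = 7 are arbitrary; the arithmetic is exact since everything is an integer):

```python
from math import comb

# Check (x + y)^n against the sum from the binomial theorem.
x, y, n = 3, 5, 7
lhs = (x + y)**n
rhs = sum(comb(n, r) * x**r * y**(n - r) for r in range(n + 1))
print(lhs, rhs)  # both 8^7 = 2097152
```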
Example 8. What is the probability pk of getting exactly k defective transistors in a box of 10 in Example 3?
Solution. pk = C(10,k) (0.1)^k (0.9)^(10-k). Here is a graph of this distribution.

[Graph: bar plot of pk for k = 0, ..., 10, peaking at k = 1 and decreasing thereafter.]
1.5.3 Random walks.
A random walk is closely related to a sequence of Bernoulli trials. The most common way of thinking about a random walk is as follows. We start at z = 0 on the z axis. We flip a (perhaps unfair) coin with probability of getting a head equal to p. If the coin comes up heads we take a step to the right to z = 1, while if the coin comes up tails we take a step to the left to z = -1. We continue in this fashion, flipping the coin and taking steps to the right or left depending on whether the coin comes up heads or tails. We are interested in the probability of being at position z after n flips. Let Zn be the location after n flips. We are interested in the distribution of Zn. Let pz = pn,z be the probability of being at location z after n flips.
After n = 1 flip we are either at location z = 1 or z = -1 with probabilities p1 = Pr{Z1 = 1} = p and p-1 = Pr{Z1 = -1} = 1 - p.
After n = 2 flips we are either at location -2, 0 or 2 with probabilities p2 = Pr{Z2 = 2} = Pr{HH} = p^2, p0 = Pr{Z2 = 0} = Pr{HT, TH} = 2p(1-p), and p-2 = Pr{Z2 = -2} = Pr{TT} = (1-p)^2.
After n flips we could be at any of the locations -n, -n+2, -n+4, …, n-2 or n. If we get k heads and n-k tails in the n flips we will be at location z = k - (n-k) = 2k - n. By (7), the probability of this occurring is C(n,k) p^k (1-p)^(n-k). Since k = (z+n)/2, one has

(8) pz = Pr{Zn = z} = C(n, (z+n)/2) p^((z+n)/2) (1-p)^((n-z)/2).
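Formula (8) can be written as a small function. A sketch in Python (the name walk_pmf is my own; positions with z + n odd or |z| > n are unreachable, so they get probability zero):

```python
from math import comb

def walk_pmf(n, z, p):
    """Probability of being at position z after n steps, formula (8)."""
    if (z + n) % 2 != 0 or abs(z) > n:
        return 0.0              # unreachable position
    k = (z + n) // 2            # number of right steps (heads)
    return comb(n, k) * p**k * (1 - p)**(n - k)

# After 2 flips with p = 0.4: positions -2, 0, 2, as derived above.
print([round(walk_pmf(2, z, 0.4), 2) for z in (-2, 0, 2)])  # [0.36, 0.48, 0.16]
```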
Example 9. We go to the casino and play a game where we can either win a dollar or lose a dollar on each play. Suppose the probability of winning is p = 0.4 and the probability of losing is 0.6. Suppose we play the game n = 5 times. What are the probabilities that the net winnings over these five plays will be z = -5, -3, -1, 1, 3, 5? According to the above formula this will be p5,z = C(5, (z+5)/2) (0.4)^((z+5)/2) (0.6)^((5-z)/2). Here is a table of values and graph of this distribution.

z     pz
-5    0.07776
-3    0.2592
-1    0.3456
1     0.2304
3     0.0768
5     0.01024

[Graph: bar plot of pz for z = -5, -3, -1, 1, 3, 5, peaking at z = -1.]
A slightly different way of expressing the above derivation of (8) is as follows. Note that Zn = Y1 + … + Yn where Y1, …, Yn are independent with Pr{Yj = 1} = p and Pr{Yj = -1} = 1 - p for j = 1, …, n. If we let Xj = (Yj + 1)/2 then X1, …, Xn are independent with Pr{Xj = 1} = p and Pr{Xj = 0} = 1 - p. So X1, …, Xn are a series of Bernoulli trials. Since Yj = 2Xj - 1, one has Zn = (2X1 - 1) + … + (2Xn - 1) = 2(X1 + … + Xn) - n = 2N - n, where N = X1 + … + Xn is given by (5). Then Pr{Zn = z} = Pr{2N - n = z} = Pr{N = (z+n)/2}. Using (7), this is equal to (8).
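The relation Zn = 2N - n also gives a quick simulation check of (8). A sketch (the seed and trial count are arbitrary choices of mine):

```python
import random
from math import comb

def simulate_walk(n, p, trials=100_000, seed=1):
    """Estimate the distribution of Z_n by simulating coin flips."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    counts = {}
    for _ in range(trials):
        n_heads = sum(rng.random() < p for _ in range(n))
        z = 2 * n_heads - n    # the relation Z_n = 2N - n
        counts[z] = counts.get(z, 0) + 1
    return {z: c / trials for z, c in sorted(counts.items())}

# Example 9 setting: n = 5 plays with p = 0.4; check Pr{Z5 = 1}.
n, p = 5, 0.4
k = (1 + n) // 2               # k = (z+n)/2 with z = 1
exact = comb(n, k) * p**k * (1 - p)**(n - k)
est = simulate_walk(n, p).get(1, 0.0)
print(f"Pr{{Z5 = 1}}: exact = {exact:.4f}, simulated = {est:.4f}")
```

The exact value 0.2304 matches the table in Example 9, and the simulated frequency should agree to about two decimal places.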