Chapter 5: Some Discrete Probability Distributions
[Diagram: Population → Take Sample → Sample → Inference]
Again, PDFs are population quantities which give us information about the distribution of items in the population. There are many PDFs that are used to understand probabilities associated with random variables. There are a few PDFs which are used for multiple real-life situations. These PDFs are described next. From this chapter, it is important to learn the following:
• What are the PDFs which can be used for multiple situations
• When can these PDFs be used
• The means and variances for random variables with these PDFs
All PDFs in this chapter will be for discrete random variables.
 2005 Christopher R. Bilder
5.2
5.2: Discrete Uniform Distribution
The simplest PDF for discrete random variables is when
the probability of observing a particular value of X is
equal for all possible values of X! Since the probabilities
are the same, this PDF is called the uniform PDF.
Discrete uniform PDF – If the random variable X assumes
values of x1, x2, …, xk with equal probabilities, then the
discrete uniform distribution is given by
f(x;k) = 1/k, for x = x1, x2, …, xk
Notes:
• x1, x2, …, xk are just a convenient way to enumerate all possible values that X can take on.
• Since f(x) depends on whatever we put in for k, the function is typically written as f(x;k). This notational convention will be used for the PDFs described in Chapters 5 and 6.
Theorem 5.1 – The mean and variance of the discrete uniform distribution f(x;k) are
μ = E(X) = (Σ_{i=1}^{k} x_i)/k  and  σ² = Var(X) = (Σ_{i=1}^{k} (x_i − μ)²)/k
Why are these the values for E(X) and Var(X)?
Remember that μ = E(X) = Σ_x x f(x) and σ² = E[(X−μ)²] = Var(X) = Σ_x (x−μ)² f(x) are from Chapter 4.
Then
E(X) = Σ_{i=1}^{k} x_i f(x_i) = Σ_{i=1}^{k} x_i (1/k)
and
Var(X) = Σ_{i=1}^{k} (x_i − μ)² f(x_i) = Σ_{i=1}^{k} (x_i − μ)² (1/k)
Example: Roll one die (die.xls)
Let X = die #1 result. The uniform PDF is:
1
for x=1,2,3,4,5,6

f(x)   6
 0
otherwise
6
Notice that x1=1, x2=2, …, x6=6 and  f(x)  1.
x 1
Finding the mean and variance produces:
 2005 Christopher R. Bilder
5.4
k
 xi
1  2  3  4  5  6 21

 3.5 and
k
6
6
k
2
x


2
2
 i

(1

3.5)
(6

3.5)
2  Var(X)  i1

 
 2.9167
k
6
6
  E(X) 
i 1

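These values can be checked numerically in Maple. Below is a minimal sketch that writes the sums out directly with add (the die values are typed in by hand; nothing here depends on a particular package):
> x := [1, 2, 3, 4, 5, 6]:               # possible die results
> mu := add(x[i], i=1..6)/6;             # E(X) for the discrete uniform PDF
7/2
> sigma2 := add((x[i]-mu)^2, i=1..6)/6;  # Var(X)
35/12
> evalf(sigma2);
2.916666667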
 2005 Christopher R. Bilder
5.5
5.3: Binomial and Multinomial Distributions
To help explain the binomial PDF and its characteristics,
below is an example.
Example: Field goal kicking (fg_ch5.xls)
Suppose a field goal kicker attempts 4 field goals during
a game and each field goal has the same probability of
being successful (the kick is made). Also, assume each
field goal is attempted under similar conditions; i.e.,
distance, weather, surface, … .
Below are the characteristics that must be satisfied in
order for the binomial PDF to be used.
1) There are n trials for each experiment.
n=4 field goals attempted
2) Two possible outcomes of a trial. These are typically
referred to as a success or failure.
Each field goal can be made (success) or missed
(failure)
3) The trials are independent of each other.
 2005 Christopher R. Bilder
5.6
The result of one field goal does not affect the result
of another field goal.
4) The probability of success, denoted by p, remains
constant for each trial. The probability of a failure is
1-p.
Suppose the probability a field goal is good is 0.6; i.e.,
P(success) = p = 0.6.
5) The random variable, X, represents the number of
successes.
Let X=number of field goals that are successful.
Thus, X can be 0,1,2,3, or 4.
Since these 5 items are satisfied, the binomial PDF can
be used!
What is P(0 successful) = P(X=0)?
Let G=Field goal is good (success) and M=Field
goal is missed (failure)
f(0) = P(X=0)
= P(1st M ∩ 2nd M ∩ 3rd M ∩ 4th M)
= P(1st M)P(2nd M)P(3rd M)P(4th M)   because of independence
= P(M)P(M)P(M)P(M)   each trial has the same probability
= (1−p)^4
= 0.4^4
= 0.0256
What is P(1 good) = P(X=1)?
f(1) = P(X=1)
= P(1st G ∩ 2nd M ∩ 3rd M ∩ 4th M) + P(1st M ∩ 2nd G ∩ 3rd M ∩ 4th M) + P(1st M ∩ 2nd M ∩ 3rd G ∩ 4th M) + P(1st M ∩ 2nd M ∩ 3rd M ∩ 4th G)
= P(G)P(M)P(M)P(M) + P(M)P(G)P(M)P(M) + P(M)P(M)P(G)P(M) + P(M)P(M)P(M)P(G)
= (0.6)(0.4)(0.4)(0.4) + … + (0.4)(0.4)(0.4)(0.6)
= 4(0.6)^1(0.4)^3
= 0.1536
Note: f(1) = 4(0.6)^1(0.4)^3
• 1 success, with probability of 0.6
• 3 failures, with probability of 0.4
• 4 different ways for 1 success and 3 failures to happen.
Continuing this same process, the PDF can be found to
be:
 2005 Christopher R. Bilder
5.8
x
0
1
2
3
4
f(x)
0.0256
0.1536
6(0.6)2(0.4)2=0.3456
4(0.6)3(0.4)1=0.3456
1(0.6)4(0.4)0=0.1296
In general, the equation for the binomial PDF is defined
below.
n x
f(x;n,p)  b(x;n,p)    p (1  p)n x for x=0,1,2,…,n
x
Notes:
• The book uses q = 1−p in the above PDF.
• Remember that n! = n(n−1)(n−2)⋯2·1.
• (n choose x) = n!/(x!(n−x)!): This gives the number of unique combinations of ways to choose x items from n items where the order of choosing the items is not important. For this PDF, we are choosing x successes out of n trials which result in a success or failure. Often, it is read as "n choose x". See p. 5.24 for other examples.
• The book also uses b(x;n,p) as a "fancy" way just to say f(x;n,p).
• In the most general case for a PDF of a discrete random variable, you need to know the probability for each value of x. In order to find probabilities with the binomial PDF, you only need to know n and p!
• p is a population parameter. In Section 9.10, we will learn how to estimate it using a sample from the population. We will also learn how to estimate it with a specific level of confidence!
• The binomial CDF is
F(x;n,p) = P(X ≤ x) = B(x;n,p) = Σ_{t=0}^{x} (n choose t) p^t (1−p)^(n−t)
Thus, F(1) = Σ_{t=0}^{1} (n choose t) p^t (1−p)^(n−t) = (n choose 0) p^0 (1−p)^(n−0) + (n choose 1) p^1 (1−p)^(n−1) = P(X=0) + P(X=1)
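For instance, with n=4 and p=0.6 (the field goal example above), F(1) = 0.0256 + 0.1536 = 0.1792. A sketch of the same calculation with Maple's dcdf option, whose syntax is explained later in this section:
> stats[statevalf,dcdf,binomiald[4,0.6]](1);
.1792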
Terminology: Suppose X is a random variable with a
binomial PDF. One can shorten how this is said by saying
“X is a binomial random variable”.
When the number of trials is 1 (i.e., n=1), the binomial PDF simplifies to
f(x;1,p) = b(x;1,p) = p^x (1−p)^(1−x) for x = 0, 1
since (1 choose x) is 1 for x = 0 or 1. This special case of the binomial PDF is called a Bernoulli PDF. Also, suppose X1, X2, …, Xn are independent random variables with a Bernoulli PDF. Then Y = Σ_{i=1}^{n} Xi has a binomial PDF of
f(y;n,p) = (n choose y) p^y (1−p)^(n−y).
Because of this relationship, the book calls each "trial" for a binomial random variable a "Bernoulli trial".
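To see this relationship numerically, one can draw Bernoulli trials and add them up. Below is a sketch using the same stats package calls that appear later in this chapter; the seed 101 is arbitrary:
> with(stats):
> randomize(101):                                   # arbitrary seed
> trials := [stats[random, binomiald[1, 0.6]](4)]:  # four Bernoulli trials with p=0.6
> Y := add(trials[i], i=1..4);                      # behaves like one binomial(4, 0.6) draw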
One can find the E(X) and Var(X) using the methods
discussed in Chapter 4. Fortunately, there is a nice
simplification for these values!
Theorem 5.2 – The mean and variance for a binomial random variable are:
μ = E(X) = np and σ² = Var(X) = np(1−p)
pf: E(X) = Σ_{x=0}^{n} x f(x)
= Σ_{x=0}^{n} x · [n!/(x!(n−x)!)] p^x (1−p)^(n−x)
= Σ_{x=1}^{n} x · [n!/(x!(n−x)!)] p^x (1−p)^(n−x)   since x=0 does not contribute to the sum
= np Σ_{x=1}^{n} [(n−1)!/((x−1)!(n−x)!)] p^(x−1) (1−p)^(n−x)   using x/x! = 1/(x−1)! and n! = n(n−1)!
= np Σ_{y=0}^{n−1} [(n−1)!/(y!(n−y−1)!)] p^y (1−p)^(n−y−1)   where y = x−1
= np · 1   since a binomial PDF with n−1 trials is inside the sum!
= np
Take special note of how I got another binomial PDF inside the sum and used the property of it summing to 1. This is a common "trick" that often appears in doing proofs like these!
Do the Var(X) proof on your own. I recommend finding E(X²) using similar methods as done for E(X) above.
The book does these proofs a different way on p. 121.
In Maple,
> assume(p>0,p<1);
> assume(x>=0, x::integer);
> assume(n>=0, n::integer);
 2005 Christopher R. Bilder
5.12
> about(p,x,n);
Originally p, renamed p~:
is assumed to be: RealRange(Open(0),Open(1))
Originally x, renamed x~:
is assumed to be: AndProp(integer,RealRange(0,infinity))
Originally n, renamed n~:
is assumed to be: AndProp(integer,RealRange(0,infinity))
> f(x):=n!/(x!*(n-x)!)*p^x*(1-p)^(n-x);
( n~ x~ )
n~ ! p~ x~ ( 1p~ )
f( x~ ) :=
x~! ( n~ x~ )!
> E(X):=sum(x*f(x),x=0..n);
(Maple returns a lengthy unsimplified expression)
> simplify(E(X));
n~ p~
> E(X^2):=sum(x^2*f(x),x=0..n);
(Maple again returns a lengthy unsimplified expression)
> Var(X):=simplify(E(X^2)-(E(X))^2);
Var(X) := n~ p~ (1−p~)
Notice how E(X) and Var(X) can be further simplified to their stated values in Theorem 5.2. Also, notice that
Σ_{y=0}^{n−1} [(n−1)!/(y!(n−y−1)!)] p^y (1−p)^(n−y−1) = 1
from
> assume(y>0, y::integer);
> f(y):=(n-1)!/(y!*(n-y-1)!)*p^y*(1-p)^(n-y-1);
f(y~) := (n~−1)! p~^y~ (1−p~)^(n~−y~−1) / (y~! (n~−y~−1)!)
> simplify(sum(f(y),y=0..n-1));
1
which was needed in the proof above.
Example: FG kicking (FG_ch5.xls)
Remember that n = 4 and p = 0.6
Find the mean and variance.
 = 40.6 = 2.4 and 2=40.6(1-0.6) = 0.96
What would  be if we just used the formula of
4
 =  xf(x) ?
x 0
x    f(x)      x·f(x)
0    0.0256    0
1    0.1536    0.1536
2    0.3456    0.6912
3    0.3456    1.0368
4    0.1296    0.5184

μ = Σ_{x=0}^{4} x f(x) = 0 + 0.1536 + 0.6912 + 1.0368 + 0.5184 = 2.4
Suppose a field goal kicker makes 0 out of 4 in a game.
What can we conclude about the kicker?
One would expect a field goal kicker with p=0.6 to make between
μ ± 2σ = 2.4 ± 2(0.9798) = (0.4404, 4.3596) field goals.
The field goal kicker had a very unusual game, or maybe his probability of success is lower than 0.6.
Probabilities for selected values of X and n have been tabled in Table A.1 on p. 661-666 in the book. We will not use these in class since the probabilities can be easily calculated with a calculator or Excel. To find these probabilities in Excel, the BINOMDIST(x,n,p,FALSE) function can be used. The last entry denotes whether you want f(x) or F(x). If FALSE is given, then Excel assumes you want f(x). If TRUE is given, Excel assumes you want F(x). Information about these formulas is also available at Chris Malone's Excel help website at http://www.statsclass.com/excel/tables/prob_values.html#prob_b.
Example: FG kicking (FG_ch5.xls)
Below is the PDF calculated in Excel and the
corresponding formulas.
 2005 Christopher R. Bilder
5.16
In Maple, the PDF of a binomial can be evaluated with the
following commands:
stats[statevalf,pf,binomiald[n,p]](x);
Yes, this is strange syntax! Here’s an explanation:
• The first call to "stats" tells Maple to use the "statistics" package inside of it, which is not automatically ready to be used.
• The call to "statevalf" is a subpackage in stats that tells Maple to evaluate some PDFs.
• The "pf" part tells Maple that you want to find f(x) for a discrete random variable x. Other possible useful values instead of "pf" include "dcdf", which tells Maple to use the CDF.
• The "binomiald[n,p]" part tells Maple to use the binomial PDF with n and p. Note that you need to put in there what n and p are!
• The (x) part just tells Maple what the value of x is.
• An equivalent call to this function is:
> with(stats);
> statevalf[pf,binomiald[n,p]](x);
Example: FG kicking (chapter5.mws)
> stats[statevalf,pf,binomiald[4,0.6]](0);
.0256
> with(stats);
[ anova , describe , fit, importdata , random , statevalf , statplots , transform ]
> statevalf[pf,binomiald[4,0.6]](0);
.0256
> stats[statevalf,pf,binomiald[4,0.6]](3);
.3456
> evalf(stats[statevalf,dcdf,binomiald[4,0.6]](3),4);
.8704
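The whole PDF table from p. 5.8 can be reproduced at once with seq, a small sketch using the same statevalf call:
> seq(stats[statevalf,pf,binomiald[4,0.6]](x), x=0..4);
.0256, .1536, .3456, .3456, .1296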
Shape of the binomial PDF
 2005 Christopher R. Bilder
5.18
The file, binomial_dist.xls, contains a template that you
can modify to see the shape of the binomial distribution
for n=20 and various values of p. Below are a few
examples.
 2005 Christopher R. Bilder
5.19
 2005 Christopher R. Bilder
5.20
Examine the following:
• When is the PDF "symmetric" and when is it "skewed"?
• Where is the largest probability?
• Notice the μ ± 2σ and μ ± 3σ lines.
• In Section 9.10, we will learn how to estimate p using a sample from a population. Given the results of these plots, why do you think it is important to estimate it with a sample instead of just setting it to a particular value of choice?
Simulating a sample from a population characterized by a
binomial PDF
Observed values of a binomial random variable can also
be generated in the same way as what was done in the
Section 3.1-3.2 fifty numbers example for a general
PDF. Excel also has a specific binomial PDF option in
the Random Number Generation window. The file, bin_rand.xls, gives an example of using the window below. More directions are available at Chris Malone's Excel help website at http://www.ksu.edu/stats/tch/malone/computers/excel/misc/bin_dist.html.
 2005 Christopher R. Bilder
5.21
In this case, 1 variable with 100 observed values is generated. The parameters are p=0.25 and n=20. The PDF corresponds to the first plot on p. 5.18. The seed number gives Excel a random place to start when generating these observed values. I can use this seed number again and generate the exact same data!
Below are part of the results. Notice how close μ, σ², and the PDF are to the sample statistics and the relative frequency distribution.
Observed X (first values of the 100): 5, 5, 5, 3, 4, 3, 2, 6, 7, 10, 6, 8, 7, 6, 7, 8, 3, 5, 8, 8, 4, 5, 4, 8, 5, …

            Sample   Population
mean        5.05     5
variance    4.25     3.75

x     Rel. Freq.   f(x)
0     0.00         0.0032
1     0.04         0.0211
2     0.07         0.0669
3     0.17         0.1339
4     0.10         0.1897
5     0.19         0.2023
6     0.18         0.1686
7     0.10         0.1124
8     0.13         0.0609
9     0.01         0.0271
10    0.01         0.0099
11    0.00         0.0030
12    0.00         0.0008
13    0.00         0.0002
14    0.00         0.0000
15    0.00         0.0000
16    0.00         0.0000
17    0.00         0.0000
18    0.00         0.0000
19    0.00         0.0000
20    0.00         0.0000
Sum   1            1
Notes:
• The sample mean is calculated as the average of the 100 observed values for a random variable with a binomial PDF.
• The sample variance is calculated as Σ_{i=1}^{n} (x_i − x̄)²/(n−1), where x̄ is the sample mean and x_i for i=1,…,n is the ith observed value. Explanation for why this formula was used will be given in Chapter 8. A Maple sketch of this formula appears after these notes.
 2005 Christopher R. Bilder
5.23
• Here is an example of how to simulate a sample from a population characterized by a binomial PDF using Maple:
> randomize(4516);
4516
> data:=stats[random, binomiald[20, 0.25]](100);
data := 4 , 3 , 7 , 6 , 12 , 4 , 7 , 8 , 2 , 2 , 4 , 4 , 2 , 7 , 4 , 5 , 3 , 3 , 5 , 5 , 8 , 5 , 6 , 3 , 7 , 2 , 5 , 6 , 3 , 8 , 5 ,
8 , 5 , 6 , 7 , 5 , 7 , 5 , 6 , 3 , 2 , 4 , 10 , 7 , 5 , 8 , 7 , 4 , 7 , 2 , 0 , 6 , 7 , 5 , 4 , 4 , 5 , 6 , 9 , 5 , 6 , 10 , 2 ,
4 , 5 , 3 , 7 , 5 , 3 , 4 , 7 , 4 , 3 , 6 , 6 , 7 , 4 , 3 , 11 , 3 , 2 , 3 , 6 , 3 , 4 , 7 , 4 , 4 , 6 , 5 , 5 , 5 , 5 , 7 , 4 , 7 ,
6 , 4 , 10 , 8
> evalf(stats[describe,mean]([data]),2);
5.2
Note that randomize() sets a seed number so that the exact same sample can be reproduced. "Describe" is a subpackage of stats.
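Here is the sample variance formula from the notes above written out in Maple, using a small hypothetical data set in place of the 100 simulated values:
> data := [5, 5, 5, 3, 4]:                     # hypothetical observed values
> n := 5:
> xbar := add(data[i], i=1..n)/n;              # sample mean
22/5
> s2 := add((data[i]-xbar)^2, i=1..n)/(n-1);   # sample variance with the n-1 divisor
4/5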
 2005 Christopher R. Bilder
5.24
More about “combinations”
This is from my Section 2.3 (full version) notes available
at the bottom of the schedule web page on the course
website.
Theorem 2.8: The number of combinations of n distinct objects taken r at a time is
nCr = (n choose r) = n!/(r!(n−r)!)
Note that the order in selecting the objects (items) is not important. Often, (n choose r) is read as "n choose r".
Example: How many ways can TWO of the letters a, b, and c
be chosen from the three?
First, it is instructive to answer the question, “How many
ways can two of the letters a, b, and c be arranged?”
     Letter 1   Letter 2
1    a          b
2    a          c
3    b          a
4    b          c
5    c          a
6    c          b
To answer the original question of “How many ways can
two of the letters a, b, and c be chosen from the three?”
there is no longer a distinction between cases like (a,b)
and (b,a). Thus, order is no longer important. Then,
     Letter 1   Letter 2
1    a          b
2    a          c
3    b          a
4    b          c
5    c          a
6    c          b

only (a,b), (a,c), and (b,c) remain. From Theorem 2.8, we obtain (3 choose 2) = 3!/(2!(3−2)!) = 3.
Example: How many different number combinations are
there in the Pick 5 game of the Nebraska lottery (5 numbers
1 through 38 are picked)?
 2005 Christopher R. Bilder
5.26
#1 #2 #3 #4 #5
1 2 3 4 5
1 2 3 4 6
1
2

501,942 34 35 36 37 38
 38 
38!
38!
C



38 5
 5  5!(38  5)! 5!33! = 501,942
 
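Counts like this can be verified with Maple's built-in binomial function:
> binomial(38,5);
501942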
 2005 Christopher R. Bilder
5.27
5.6: Poisson Distribution and the Poisson Process
Another special case of a PDF is called the Poisson
PDF. It is often used for counting the number of
occurrences of an event over a period of time.
Poisson PDF – The PDF of the Poisson random variable X, representing the number of outcomes in a given time interval or specified region denoted by t, where λ is the average number of outcomes per unit of time or region, is
f(x; λt) = p(x; λt) = e^(−λt) (λt)^x / x!  for x = 0, 1, 2, …
Notice x does not have an upper bound! Notice that λt denotes the "average" of X for a specific time period of interest.
Example: Telephone calls (tele.xls)
Consider an inbound telemarketing operator who, on the
average, handles five phone calls every three minutes.
What is the probability that there will be no phone calls in
the next three minutes (one unit of time)?
 2005 Christopher R. Bilder
5.28
Let X = the number of phone calls in a time interval
where a unit of time is three minutes. The  is 5 for the
ONE unit of time (3 minutes). Thus, t = 51 = 5. Then
0
5
5
5 e
P(X  0)  f(0) 
 e  0.0067
0!
What is the probability that there will be no phone calls in
the next minute?
Since there is only one minute, we have only 1/3 of
a unit of time. Then t = 5/3  1.67. Let X = the
number of phone calls. Then
0
(5 / 3) e
f(0) 
0!
 ( 5 / 3)
e
 ( 5 / 3)
 0.1889
What is the probability that there will be 2 or more phone calls in the next minute?
P(X ≥ 2) = P(X=2) + P(X=3) + P(X=4) + P(X=5) + P(X=6) + …
= 1 − P(X=0) − P(X=1)
= 1 − (5/3)^0 e^(−5/3)/0! − (5/3)^1 e^(−5/3)/1!
= 1 − (0.1889 + 0.3148)
= 1 − 0.5037
= 0.4963
 2005 Christopher R. Bilder
5.29
These calculations can be done in Excel using the POISSON(x, λt, FALSE) function. The last entry denotes whether you want f(x) or F(x). If FALSE is given, then Excel assumes you want f(x). If TRUE is given, Excel assumes you want F(x).
Below is the PDF calculated in Excel and the
corresponding formulas. Notice that you do not
necessarily need the POISSON() function to do the
calculations. One could also just type in the f(x) formula.
 2005 Christopher R. Bilder
5.30
Notes:
• Of course, X can be greater than 13. I cut it off here since the probability will be small. Notice the "Sum" row in the spreadsheet.
• Chris Malone's website does not have help for the POISSON() function.
• Probabilities for selected values of λt and X have been tabled in Table A.2 on p. 667-669 in the book. We will not use these in class since the probabilities can be easily calculated with a calculator or Excel.
Example: Telephone calls (chapter5.mws)
> stats[statevalf,pf,poisson[5/3]](0);
 2005 Christopher R. Bilder
5.31
.1888756028
> stats[statevalf,pf,poisson[5/3]](1);
.3147926714
> stats[statevalf,dcdf,poisson[5/3]](1);
.5036682742
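The P(X ≥ 2) calculation then follows by taking the complement of the dcdf value:
> 1 - stats[statevalf,dcdf,poisson[5/3]](1);
.4963317258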
Theorem 5.5 – The mean and variance for the Poisson PDF are:
μ = E(X) = λt and σ² = Var(X) = λt
pf:
E(X) = Σ_{x=0}^{∞} x f(x)
= Σ_{x=0}^{∞} x e^(−λt) (λt)^x / x!
= e^(−λt) Σ_{x=1}^{∞} x (λt)^x / x!   since x=0 does not contribute to the sum and e^(−λt) does not contain an x value
= e^(−λt) Σ_{x=1}^{∞} (λt)^x / (x−1)!
= e^(−λt) (λt) Σ_{x=1}^{∞} (λt)^(x−1) / (x−1)!
= e^(−λt) (λt) Σ_{y=0}^{∞} (λt)^y / y!   where y = x−1
= e^(−λt) (λt) e^(λt)   using the result e^b = Σ_{a=0}^{∞} b^a / a!
= λt
See Appendix A.26 for the Var(X) proof. The proof finds E[X(X−1)] = E[X²] − E[X] to help find the variance.
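The E(X) result can also be verified symbolically in Maple, in the same spirit as the binomial verification earlier. A sketch, where mu stands in for λt:
> assume(mu>0);
> simplify(sum(x*exp(-mu)*mu^x/x!, x=0..infinity));
mu~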
Example: Telephone calls (tele.xls)
Consider an inbound telemarketing operator who, on the
average, handles five phone calls every three minutes.
For the one unit of time (3 minutes),
μ = E(X) = 5 and σ² = Var(X) = 5
For the 1/3 unit of time (1 minute),
μ = E(X) = 5/3 and σ² = Var(X) = 5/3
Shape of the Poisson distribution
The file, pois.xls, contains a template that you can modify to see the shape of the PDF for various values of μ = λt. Below are a few examples.
 2005 Christopher R. Bilder
5.33
 2005 Christopher R. Bilder
5.34
 2005 Christopher R. Bilder
5.35
Examine the following:
• When is the PDF "symmetric" and when is it "skewed"?
• Where is the largest probability?
• Notice the μ ± 2σ and μ ± 3σ lines.
• In Section 9.15 (p. 277) one would learn how to estimate μ using a sample from a population. Given the results of these plots, why do you think it is important to estimate it with a sample instead of just setting it to a particular value of choice?
Simulating a sample from a population characterized by a
Poisson PDF
Observed values of a Poisson random variable can also
be generated in the same way as what was done with
the binomial. Excel has a distribution option for the
Poisson in the Random Number Generation window.
 2005 Christopher R. Bilder
5.36
In this case, 1 variable with 100 observed values is generated. Notice that "lambda" represents our λt here. Be careful! Since this is very similar to the binomial example, I do not have a file which shows the result of implementing this random number generation example.
In Maple, you can use
> randomize(2513);
2513
 2005 Christopher R. Bilder
5.37
> stats[random,poisson[3]](5);
1, 6, 3, 4, 2
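A larger sample can be drawn and summarized just as in the binomial example. A sketch (the realized mean varies with the seed, but should be close to λt = 3):
> data := [stats[random, poisson[3]](100)]:
> evalf(stats[describe,mean](data));   # should be near lambda*t = 3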
Read on your own about the relationship between the
Poisson PDF and the binomial PDF.
Why examine PDFs?
They can be used to help model real-life events. Many examples were given in this chapter, and more will be given in future chapters demonstrating this. Remember we are making ASSUMPTIONS about the population. Rarely (if ever) will these assumptions be totally satisfied! Often, these assumptions will be satisfied "close enough" to justify their use.
 2005 Christopher R. Bilder