Statistics for Business and Economics
Module 1: Probability Theory and Statistical Inference
Spring 2010
Lecture 2: Probability and discrete random variables
Priyantha Wijayatunga, Department of Statistics, Umeå University
[email protected]
These materials are adapted from copyrighted lecture slides (© 2009 W.H. Freeman and Company) from the homepage of the book The Practice of Business Statistics Using Data for Decisions, Second Edition, by Moore, McCabe, Duckworth and Alwan.
Randomness and probability
A phenomenon is random if individual
outcomes are uncertain, but there is
nonetheless a regular distribution of
outcomes in a large number of
repetitions.
The probability of any outcome of a random phenomenon can be
defined as the proportion of times the outcome would occur in a very
long series of repetitions.
Randomness and Probability Models
• Randomness and probability
• Random variables and probability distributions
• Independence and rules of probability
• Assigning probabilities: finite number of outcomes (discrete variables)
• Conditional probability and Bayes' theorem
• Mean and variance of a discrete random variable
• Rules for means and variances
• Discrete probability models: binomial model, Poisson model, etc.
Coin toss
The result of any single coin toss is
random. But the result over many tosses
is predictable, as long as the trials are
independent (i.e., the outcome of a new
coin flip is not influenced by the result of
the previous flip).
The probability of heads is 0.5 = the proportion of times you get heads in many repeated trials.
(Figure: the proportion of heads in a first and a second series of tosses, converging to 0.5 as the number of tosses grows.)
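A minimal simulation sketch (not from the slides; the number of tosses and the seed are arbitrary) illustrating this long-run regularity, e.g. in Python:

import random

random.seed(1)                      # arbitrary seed, for reproducibility
n_tosses = 10_000                   # arbitrary number of repetitions
heads = sum(random.random() < 0.5 for _ in range(n_tosses))
print(heads / n_tosses)             # proportion of heads; close to 0.5 when n_tosses is large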
Two events are independent if the probability that one event occurs
on any given trial of an experiment is not affected or changed by the
occurrence of the other event.
When are trials not independent?
Imagine that these coins were spread out so that half were heads up and half
were tails up. Close your eyes and pick one. The probability of it being heads is
0.5. However, if you don’t put it back in the pile, the probability of picking up
another coin and having it be heads is now less than 0.5.
The trials are independent only when you put the coin back each time. This is called sampling with replacement.
Probability models
Probability models describe mathematically the outcome of random
processes and consist of two parts:
1) S = Sample Space: This is a set, or list, of all possible outcomes
of a random process. An event is a subset of the sample space.
2) A probability for each possible event in the sample space S.
Example: Probability Model for a Coin Toss:
S = {Head, Tail}
Probability of heads = 0.5
Probability of tails = 0.5
Sample spaces
It’s the question that determines the sample space.
A. A basketball player shoots
three free throws. What are
the possible sequences of
hits (H) and misses (M)?
(Tree diagram: each of the three throws branches into H or M, giving the eight sequences.)
S = { HHH, HHM, HMH, HMM, MHH, MHM, MMH, MMM }
Note: 8 elements, 2³ = 8
B. A basketball player shoots three free throws. What is the number of baskets made?
S = { 0, 1, 2, 3 }
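As a sketch (not part of the slides), both sample spaces can be enumerated directly:

from itertools import product

# Sample space A: all sequences of hits (H) and misses (M) in three throws
sequences = [''.join(seq) for seq in product('HM', repeat=3)]
print(sequences)            # ['HHH', 'HHM', 'HMH', 'HMM', 'MHH', 'MHM', 'MMH', 'MMM']
print(len(sequences))       # 8 = 2**3

# Sample space B: the number of baskets made
baskets = sorted({seq.count('H') for seq in sequences})
print(baskets)              # [0, 1, 2, 3]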
Probability rules
1) Probabilities range from 0
(no chance of the event) to
1 (the event has to happen).
For any event A, 0 ≤ P(A) ≤ 1
Coin Toss Example:
S = {Head, Tail}
Probability of heads = 0.5
Probability of tails = 0.5
Probability of getting a Head = 0.5
We write this as: P(Head) = 0.5
P(neither Head nor Tail) = 0
P(getting either a Head or a Tail) = 1
2) Because some outcome must occur on every trial, the sum of the probabilities for all possible outcomes (the sample space) must be exactly 1.
• P(sample space) = 1
Coin toss: S = {Head, Tail}
P(head) + P(tail) = 0.5 + 0.5 = 1
Probability rules (cont'd)
Coin Toss Example:
S = {Head, Tail}
Probability of heads = 0.5
Probability of tails = 0.5
3) The complement of any event A is the event that A does not occur, written as Aᶜ.
The complement rule states that the probability of an event not occurring is 1 minus the probability that it does occur.
P(not A) = P(Aᶜ) = 1 − P(A)
Tailᶜ = not Tail = Head
P(Tailᶜ) = 1 − P(Head) = 0.5
Venn diagram: the sample space is made up of an event A and its complement Aᶜ, i.e., everything that is not A.
Probability rules (cont'd)
4) Two events A and B are disjoint if they have no outcomes in common and can never happen together. The probability that A or B occurs is then the sum of their individual probabilities.
P(A or B) = P(A ∪ B) = P(A) + P(B)
This is the addition rule for disjoint events.
(Venn diagrams: A and B disjoint; A and B not disjoint.)
Example: If you flip two coins, and the first flip does not affect the second flip:
S = {HH, HT, TH, TT}. The probability of each of these events is 1/4, or 0.25.
The probability that you obtain “only heads or only tails” is:
P(HH or TT) = P(HH) + P(TT) = 0.25 + 0.25 = 0.50
Probability rules (cont'd)
The joint probability of events A and B happening together is written as P(A and B).
For disjoint events A and B, P(A and B) = 0.
(Venn diagrams: A and B disjoint; A and B not disjoint.)
General addition rule
General addition rule for any two events A and B:
The probability that A occurs,
or B occurs, or both events occur is:
P(A or B) = P(A) + P(B) – P(A and B)
What is the probability of randomly drawing either an ace or a heart from a deck of
52 playing cards? There are 4 aces in the pack and 13 hearts. However, 1 card is
both an ace and a heart. Thus:
P(ace or heart) = P(ace) + P(heart) – P(ace and heart)
= 4/52 + 13/52 − 1/52 = 16/52 ≈ 0.31
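A small sketch (the rank and suit labels are mine, not from the slides) that checks the general addition rule by enumerating a 52-card deck:

from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = list(product(ranks, suits))            # 52 equally likely cards

aces   = {c for c in deck if c[0] == 'A'}
hearts = {c for c in deck if c[1] == 'hearts'}

p = len(aces | hearts) / len(deck)            # P(ace or heart) by direct counting
print(p, 4/52 + 13/52 - 1/52)                 # both equal 16/52 ≈ 0.308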
Multiplication Rule for Independent Events
Two events A and B are independent when knowing that one occurs
does not change the probability that the other occurs.
If A and B are independent, P(A and B) = P(A)P(B)
This is the multiplication rule for independent events.
Two consecutive coin tosses:
P(first Tail and second Tail) = P(first Tail) * P(second Tail) = 0.5 * 0.5 = 0.25
Venn diagram:
Event A and event B. The intersection
represents the event {A and B} and
outcomes common to both A and B.
Applying the Multiplication Rule
• If two events A and B are independent, the event that A does not occur is also independent of B.
• The Multiplication Rule extends to collections of more than two events, provided that all are independent.
• Example: A transatlantic data cable contains repeaters to amplify the signal. Each repeater has probability 0.999 of functioning without failure for 25 years. Repeaters fail independently of each other. Let A1 denote the event that the first repeater operates without failure for 25 years, A2 denote the event that the second repeater operates without failure for 25 years, and so on. The last transatlantic cable had 662 repeaters. The probability that all 662 will work for 25 years is:
P(A1 and A2 and … and A662) = 0.999⁶⁶² ≈ 0.516
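A one-line check of this calculation (a sketch, not part of the slides):

p_repeater = 0.999                 # probability that one repeater works for 25 years
n_repeaters = 662
print(p_repeater ** n_repeaters)   # ≈ 0.516, by the multiplication rule for independent events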
Independent vs. disjoint events
• Disjoint events are not independent.
• If A and B are disjoint, then the fact that A occurs tells us that B cannot occur. So A and B are not independent.
• Independence cannot be pictured in a Venn diagram.
Assigning probabilities: finite number of outcomes
Finite sample spaces deal with discrete data — data that can only
take on a limited number of values. These values are often integers or
whole numbers.
Throwing a die:
S = {1, 2, 3, 4, 5, 6}
The individual outcomes of a random phenomenon are always disjoint.
• The probability of any event is the sum of the probabilities of the outcomes making up the event (addition rule).
M&M candies
If you draw an M&M candy at random from a bag, the candy will have one
of six colors. The probability of drawing each color depends on the proportions
manufactured, as described here:
Color:        Brown   Red   Yellow   Green   Orange   Blue
Probability:  0.3     0.2   0.2      0.1     0.1      ?
What is the probability that an M&M chosen at random is blue?
S = {brown, red, yellow, green, orange, blue}
P(S) = P(brown) + P(red) + P(yellow) + P(green) + P(orange) + P(blue) = 1
P(blue) = 1 – [P(brown) + P(red) + P(yellow) + P(green) + P(orange)]
= 1 – [0.3 + 0.2 + 0.2 + 0.1 + 0.1] = 0.1
What is the probability that a random M&M is any of red, yellow, or orange?
P(red or yellow or orange) = P(red) + P(yellow) + P(orange)
= 0.2 + 0.2 + 0.1 = 0.5
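A minimal sketch of these two calculations (the color probabilities are copied from the table above):

p = {'brown': 0.3, 'red': 0.2, 'yellow': 0.2, 'green': 0.1, 'orange': 0.1}

p_blue = 1 - sum(p.values())                                  # complement rule: P(blue)
print(p_blue)                                                 # 0.1 (up to floating-point rounding)

p_red_yellow_orange = p['red'] + p['yellow'] + p['orange']    # addition rule for disjoint outcomes
print(p_red_yellow_orange)                                    # 0.5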
Probabilities: equally likely outcomes
We can assign probabilities either:
• empirically, from our knowledge of numerous similar past events.
Mendel discovered the probabilities of inheritance of a given trait from experiments on peas without knowing about genes or DNA.
• or theoretically, from our understanding of the phenomenon and the symmetries in the problem.
A 6-sided fair die: each side has the same chance of turning up.
Genetic laws of inheritance based on the meiosis process.
If a random phenomenon has k equally likely possible outcomes, then each individual outcome has probability 1/k.
And, for any event A:
P(A) = (count of outcomes in A) / (count of outcomes in S)
Dice
You toss two dice. What is the probability of the outcomes summing to 5?
This is S:
{(1,1), (1,2), (1,3),
……etc.}
There are 36 possible outcomes in S, all equally likely (given fair dice).
Thus, the probability of any one of them is 1/36.
P(the roll of two dice sums to 5) =
P(1,4) + P(2,3) + P(3,2) + P(4,1) = 4 / 36 = 0.111
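A sketch (not from the slides) that enumerates the 36 equally likely outcomes and counts those summing to 5:

from itertools import product

outcomes = list(product(range(1, 7), repeat=2))     # 36 equally likely (die1, die2) pairs
favorable = [o for o in outcomes if sum(o) == 5]    # (1,4), (2,3), (3,2), (4,1)
print(len(favorable) / len(outcomes))               # 4/36 ≈ 0.111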
Example: A couple wants three children. What are the arrangements of boys (B)
and girls (G)?
Genetics tells us that the probability that a baby is a boy or a girl is the same, 0.5.
Sample space: {BBB, BBG, BGB, GBB, GGB, GBG, BGG, GGG}
• All eight outcomes in the sample space are equally likely. The probability of each is thus 1/8.
• Each birth is independent of the next, so we can use the multiplication rule.
Example: P(BBB) = P(B)* P(B)* P(B) = (1/2)*(1/2)*(1/2) = 1/8
Conditional probability
Conditional probabilities reflect how the probability of an event can
change if we know that some other event has occurred/is occurring.

Example: The probability that a cloudy day will result in rain is different if
you live in Los Angeles than if you live in Seattle.

Our brains effortlessly calculate conditional probabilities, updating our
“degree of belief” with each new piece of evidence.
The conditional probability of event B given event A is (provided that P(A) ≠ 0):
P(B | A) = P(A and B) / P(A)
General Multiplication Rule

Conditional probability gives the probability of one event under the
condition that we know another event.

General multiplication rule: The probability that any two events, A
and B, happen together is:
P(A and B) = P(A)P(B|A)
Here P(B|A) is the conditional probability that B occurs, given the
information that A occurs.
Independent Events
Recall: A and B are independent when they have no influence on each
other’s occurrence.
Two events A and B that both have positive probability are
independent if P(B|A) = P(B)


The general multiplication rule then becomes: P(A and B) = P(A)P(B)
What is the probability of randomly drawing an ace of hearts from a deck of 52
playing cards? There are 4 aces in the pack and 13 hearts.
P(heart|ace) = 1/4
P(ace) = 4/52
P(ace and heart) = P(ace)* P(heart|ace) = (4/52)*(1/4) = 1/52
Notice that heart and ace are independent events.
Tree diagrams
Conditional probabilities can get complex, and it is often a good strategy
to build a probability tree that represents all possible outcomes
graphically and assigns conditional probabilities to subsets of events.
Tree diagram for chat room habits for three adult age groups A1, A2 & A3.
(Tree diagram: the "Internet user" node branches into the three age groups, with 0.47 on the A1 branch; each age group then branches into chatting (C) or not.)
P(A1 and C) = P(C|A1)P(A1) = 0.136
P(chatting) = P(C and A1) + P(C and A2) + P(C and A3)
= 0.136 + 0.099 + 0.017 = 0.252
About 25% of all adult Internet users visit chat rooms.
Breast cancer screening
If a woman in her 20s gets screened for breast cancer and receives a positive
test result, what is the probability that she does have breast cancer?
(Tree diagram. Disease incidence: P(cancer) = 0.0004 and P(no cancer) = 0.9996, the incidence of breast cancer among women ages 20–30. Mammography performance given cancer: positive with probability 0.8 (diagnosis sensitivity), negative with probability 0.2 (false negative). Given no cancer: negative with probability 0.9 (diagnosis specificity), positive with probability 0.1 (false positive).)
She could either have a positive test and have breast cancer or have a positive
test but not have cancer (false positive).
(The same tree diagram as above.)
Possible outcomes given the positive diagnosis: positive test and breast cancer
or positive test but no cancer (false positive).
P(cancer | pos) = P(cancer and pos) / [P(cancer and pos) + P(no cancer and pos)]
= (0.0004 × 0.8) / (0.0004 × 0.8 + 0.9996 × 0.1) ≈ 0.3%
This value is called the positive predictive value, or PV+. It is an important piece
of information but, unfortunately, is rarely communicated to patients.
Bayes’s rule
An important application of conditional probabilities is Bayes’s rule. It is
the foundation of many modern statistical applications beyond the
scope of this textbook.
* If a sample space is decomposed into k disjoint events A1, A2, … , Ak (none with zero probability) such that P(A1) + P(A2) + … + P(Ak) = 1,
* and if C is any other event such that P(C) is not 0 or 1, then:
P(Ai | C) = P(C | Ai) P(Ai) / [ P(C | A1) P(A1) + P(C | A2) P(A2) + … + P(C | Ak) P(Ak) ]
However, it is often intuitively much easier to work out answers with a
probability tree than with these lengthy formulas.
If a woman in her 20s gets screened for breast cancer and receives a positive test result, what is the probability that she does have breast cancer?
(The same tree diagram as above: disease incidence 0.0004 / 0.9996; sensitivity 0.8, false negative 0.2; specificity 0.9, false positive 0.1.)
This time, we use Bayes’s rule:
A1 is cancer, A2 is no cancer, C is a positive test result.
P(cancer | pos) = P(pos | cancer) P(cancer) / [P(pos | cancer) P(cancer) + P(pos | no cancer) P(no cancer)]
= (0.8 × 0.0004) / (0.8 × 0.0004 + 0.1 × 0.9996) ≈ 0.3%
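A small sketch (the variable names are mine) that reproduces this positive predictive value with Bayes' rule, using the numbers from the tree:

p_cancer = 0.0004            # disease incidence among women ages 20-30
p_pos_given_cancer = 0.8     # sensitivity
p_pos_given_no_cancer = 0.1  # false-positive rate (1 - specificity)

p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_no_cancer * (1 - p_cancer))   # total probability of a positive test

ppv = p_pos_given_cancer * p_cancer / p_pos          # Bayes' rule: P(cancer | positive)
print(ppv)                                           # ≈ 0.0032, i.e. about 0.3%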
Random variable
A random variable is a variable whose value is a numerical outcome
of a random phenomenon.
A couple wants three children. We define the random variable X as the number of girls they may get (or, equivalently, the number of boys they may get).
A discrete random variable X has a finite number of possible values.
A continuous random variable X takes all values in an interval.
Probability distributions
• The probability distribution of a random variable X tells us what values X can take and how to assign probabilities to those values.
• Because of the differences in the nature of sample spaces for discrete and continuous random variables, we describe probability distributions for the two types of random variables separately.
Define a discrete random variable
A couple wants three children. What are the numbers of girls they could have?
Define the random variable: X=number of girls
Sample space of X (possible values of X): {0, 1, 2, 3}
Then we get the probability model:
P(X = 0) = p(0) =P(BBB) = 1/8
P(X = 1) = p(1) =P(BBG or BGB or GBB) = P(BBG) + P(BGB) + P(GBB) = 3/8
x    P(X=x) or p(x) or f(x)    F(x)
0    1/8                       1/8
1    3/8                       4/8
2    3/8                       7/8
3    1/8                       8/8 = 1
p(x) : probability distribution
F(x) : distribution function or cumulative
distribution function
The probability distribution of a discrete random variable X lists the values and their probabilities:
Value of X:   x1   x2   …   xk
Probability:  p1   p2   …   pk
The probabilities pi must add up to 1.
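A sketch (not from the slides) that derives p(x) and F(x) for X = number of girls by enumerating the eight equally likely birth sequences:

from itertools import product

sequences = list(product('BG', repeat=3))      # 8 equally likely sequences of boys/girls
counts = {x: 0 for x in range(4)}
for seq in sequences:
    counts[seq.count('G')] += 1                # X = number of girls in the sequence

cumulative = 0.0
for x in range(4):
    p = counts[x] / len(sequences)             # p(x)
    cumulative += p                            # F(x)
    print(x, p, cumulative)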
A basketball player shoots three free throws. The random variable X is the
number of baskets successfully made.
(Tree diagram: each throw branches into H or M, giving the eight equally likely sequences.)

Value of X:    0     1     2     3
Probability:  1/8   3/8   3/8   1/8

(Outcomes grouped by value of X: MMM | HMM, MHM, MMH | HHM, HMH, MHH | HHH.)
The probability of any event is the sum of the probabilities pi of the
values of X that make up the event.
A basketball player shoots three free throws. The random variable X is the
number of baskets successfully made.
What is the probability that the player successfully makes at least two baskets ("at least two" means "two or more")?

Value of X:    0     1     2     3
Probability:  1/8   3/8   3/8   1/8

(Outcomes grouped by value of X: MMM | HMM, MHM, MMH | HHM, HMH, MHH | HHH.)
P(X≥2) = P(X=2) + P(X=3) = 3/8 + 1/8 = 1/2
What is the probability that the player successfully makes fewer than three
baskets?
P(X<3) = P(X=0) + P(X=1) + P(X=2) = 1/8 + 3/8 + 3/8 = 7/8 or
P(X<3) = 1 – P(X=3) = 1 – 1/8 = 7/8
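The same sums written as a short sketch (the distribution values are copied from the table above):

p = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}       # distribution of X = number of baskets made

p_at_least_two = p[2] + p[3]               # P(X >= 2)
p_fewer_than_three = 1 - p[3]              # P(X < 3), via the complement rule
print(p_at_least_two, p_fewer_than_three)  # 0.5 and 0.875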
Frequency tables for discrete variables
Frequency: Number of observations in/for each category/value of the variable
Relative Frequency: proportion/percentage of observations in/for each
category/value of the variable
Frequency Distribution: Listing of frequencies for all values of the variable
Relative Frequency Distribution: Listing of relative frequencies for all values
of the variable
Example
To study the age structure of the people in Umeå, a random sample of 90 people was taken and their ages were recorded: 23 people were aged 0–20, 30 were aged 21–35, 15 were aged 36–50, 12 were aged 51–70, and 10 people were 71 years or older.
Age     Frequency   Relative frequency   Cumulative relative frequency
0–20    23          0.26                 0.26
21–35   30          0.33                 0.59
36–50   15          0.17                 0.76
51–70   12          0.13                 0.89
71–     10          0.11                 1.00
Total   90          1.00
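A sketch (the counts are copied from the example) that reproduces the relative and cumulative relative frequencies:

counts = {'0-20': 23, '21-35': 30, '36-50': 15, '51-70': 12, '71-': 10}
n = sum(counts.values())                      # 90 people in the sample

cumulative = 0.0
for age_group, freq in counts.items():
    rel = freq / n                            # relative frequency
    cumulative += rel                         # cumulative relative frequency
    print(age_group, freq, round(rel, 2), round(cumulative, 2))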
Obtaining a probability model
If our random sample is a good representation of the total population, we can use the relative frequencies as (approximate) probabilities.
Define a random variable
X=
1, if the person’s age is in 0–20
2, if the person’s age is in 21–35
3, if the person’s age is in 36–50
4, if the person’s age is in 51–70
5, if the person’s age is 71 or older
Value of X   Probability f(x)   Cumulative probability F(x)
1            0.26               0.26
2            0.33               0.59
3            0.17               0.76
4            0.13               0.89
5            0.11               1.00
Total        1.00
Mean of a discrete random variable
The mean of a set of observations is their arithmetic average.
The mean µ of a random variable X is a weighted average of the
possible values of X, reflecting the fact that all outcomes might not be
equally likely.
A basketball player shoots three free throws. The random variable X is the
number of baskets successfully made (“H”).
(Outcomes grouped by value of X: MMM | HMM, MHM, MMH | HHM, HMH, MHH | HHH.)

Value of X:    0     1     2     3
Probability:  1/8   3/8   3/8   1/8
The mean of a random variable X is also called the expected value of X.
Mean of a discrete random variable
For a discrete random variable X with probability distribution p(x), the mean µ of X is found by multiplying each possible value of X by its probability, and then adding the products:
µ = Σ x · p(x)
A basketball player shoots three free throws. The random variable X is the
number of baskets successfully made.
Value of X:    0     1     2     3
Probability:  1/8   3/8   3/8   1/8
The mean µ of X is
µ = (0*1/8) + (1*3/8) + (2*3/8) + (3*1/8)
= 12/8 = 3/2 = 1.5
Variance of a discrete random variable
The variance and the standard deviation are the measures of spread
that accompany the choice of the mean to measure center.
The variance σ2X of a random variable is a weighted average of the
squared deviations (X − µX)2 of the variable X from its mean µX. Each
outcome is weighted by its probability in order to take into account
outcomes that are not equally likely.
The larger the variance of X, the more scattered the values of X on
average. The positive square root of the variance gives the standard
deviation σ of X.
Variance of a discrete random variable
For a discrete random variable X with probability distribution p(x) and mean µX, the variance σ² of X is found by multiplying each squared deviation of X by its probability and then adding all the products:
σ² = Σ (x − µX)² · p(x)
A basketball player shoots three free throws. The random variable X is the
number of baskets successfully made.
µX = 1.5.
Value of X:    0     1     2     3
Probability:  1/8   3/8   3/8   1/8

The variance σ² of X is
σ² = 1/8·(0 − 1.5)² + 3/8·(1 − 1.5)² + 3/8·(2 − 1.5)² + 1/8·(3 − 1.5)²
   = 2·(1/8 · 9/4) + 2·(3/8 · 1/4) = 24/32 = 3/4 = 0.75
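A sketch (not from the slides) computing both the mean and the variance of the free-throw variable directly from its distribution:

dist = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}                  # p(x) for X = baskets made

mu = sum(x * p for x, p in dist.items())                 # mean: weighted average of the values
var = sum((x - mu) ** 2 * p for x, p in dist.items())    # variance: weighted squared deviations
print(mu, var, var ** 0.5)                               # 1.5, 0.75, and sigma ≈ 0.866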
Rules for means and variances
If X is a random variable and a and b are fixed numbers, then
µ(a + bX) = a + b·µ(X)
σ²(a + bX) = b²·σ²(X)
If X and Y are two independent random variables, then
µ(X + Y) = µ(X) + µ(Y)
σ²(X + Y) = σ²(X) + σ²(Y)
σ²(X − Y) = σ²(X) + σ²(Y)
If X and Y are NOT independent but have correlation ρ, then
µ(X + Y) = µ(X) + µ(Y)
σ²(X + Y) = σ²(X) + σ²(Y) + 2ρ·σ(X)·σ(Y)
σ²(X − Y) = σ²(X) + σ²(Y) − 2ρ·σ(X)·σ(Y)
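A small simulation sketch (sample size and seed are arbitrary, and the result is an approximation, not a proof) checking the addition rule for variances with two independent fair dice:

import random
import statistics

random.seed(1)
n = 100_000
x = [random.randint(1, 6) for _ in range(n)]     # X: one fair die
y = [random.randint(1, 6) for _ in range(n)]     # Y: another die, independent of X

sum_xy = [a + b for a, b in zip(x, y)]
# Empirically, var(X + Y) should be close to var(X) + var(Y) for independent X and Y
print(statistics.pvariance(sum_xy),
      statistics.pvariance(x) + statistics.pvariance(y))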
Binomial setting
Binomial distributions are models for some categorical variables,
typically representing the number of successes in a series of n trials.
The observations must meet these requirements:

The total number of observations n is fixed in advance.

The outcomes of all n observations are statistically independent.

Each observation falls into just one of 2 categories: success and failure.

All n observations have the same probability of “success,” p.
We record the next 50 births at a local hospital. Each newborn is either a
boy or a girl; each baby is either born on a Sunday or not.
Binomial distribution
The distribution of the count X of successes in the binomial setting is the
binomial distribution with parameters n and p: B(n,p).
• The parameter n is the total number of observations.
• The parameter p is the probability of success on each observation.
• The count of successes X can be any whole number between 0 and n.
A coin is flipped 10 times. Each outcome is either a head or a tail.
The variable X is the number of heads among those 10 flips, our count
of “successes.”
On each flip, the probability of success, “head,” is 0.5. The number X of
heads among 10 flips has the binomial distribution B(n = 10, p = 0.5).
Applications for binomial distributions
Binomial distributions describe the possible number of times that a
particular event will occur in a sequence of observations.
They are used when we want to know about the occurrence of an
event, not its magnitude.

In a clinical trial, a patient’s condition may improve or not. We study the
number of patients who improved, not how much better they feel.

Is a person ambitious or not? The binomial distribution describes the
number of ambitious persons, not how ambitious they are.

In quality control we assess the number of defective items in a lot of
goods, irrespective of the type of defect.
Binomial probabilities
The number of ways of arranging k successes in a series of n
observations (with constant probability p of success) is the number of
possible combinations (unordered sequences).
This can be calculated with the binomial coefficient:
(n choose k) = nCk = n! / ( k! (n − k)! )
where k = 0, 1, 2, ..., or n.
Binomial formulas


The binomial coefficient “n_choose_k” uses the factorial notation
“!”.
The factorial n! for any strictly positive whole number n is:
n! = n × (n − 1) × (n − 2) × · · · × 3 × 2 × 1

For example: 5! = 5 × 4 × 3 × 2 × 1 = 120

Note that 0! = 1.
Calculations for binomial probabilities
The binomial coefficient counts the number of ways in which k
successes can be arranged among n observations.
The binomial probability P(X = k) is this count multiplied by the
probability of any specific arrangement of the k successes:
P( X  k )   n  p k (1  p) nk
k
X
0
0 n
nC0 p q =
1
1 n-1
nC1 p q
2
2 n-2
nC2 p q
The probability that a binomial random variable takes any
…
range of values is the sum of each probability for getting
k
exactly that many successes in n observations.
…
P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2)
P(X)
n
Total
qn
…
k n-k
nCx p q
…
n 0
nCn p q =
1
pn
Finding binomial probabilities: tables

You can also look up the probabilities for some values of n and p in
Table C in the back of the book.

The entries in the table are the probabilities P(X = k) of individual
outcomes.

The values of p that appear in Table C are all 0.5 or smaller. When
the probability of a success is greater than 0.5, restate the problem
in terms of the number of failures.
Color blindness
The frequency of color blindness (dyschromatopsia) in the
Caucasian American male population is estimated to be
about 8%. We take a random sample of size 25 from this population.
What is the probability that exactly five individuals in the sample are color blind?
• Use Excel's "=BINOMDIST(number_s, trials, probability_s, cumulative)":
P(X = 5) = BINOMDIST(5, 25, 0.08, 0) = 0.03285
• Or compute directly with the binomial formula:
P(X = 5) = (n! / (k!(n − k)!)) pᵏ (1 − p)ⁿ⁻ᵏ = (25! / (5! 20!)) × 0.08⁵ × 0.92²⁰
= (21·22·23·24·25 / 1·2·3·4·5) × 0.08⁵ × 0.92²⁰
= 53,130 × 0.0000033 × 0.1887 = 0.03285
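The same calculation as a sketch in Python (math.comb gives the binomial coefficient):

from math import comb

n, k, p = 25, 5, 0.08
prob = comb(n, k) * p**k * (1 - p)**(n - k)    # binomial probability P(X = 5)
print(prob)                                    # ≈ 0.03285, matching the values above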
Binomial mean and standard deviation
The center and spread of the binomial distribution for a count X are defined by the mean µ and standard deviation σ:
µ = np
σ = √(npq) = √(np(1 − p))
(Figure: the effect of changing p when n is fixed. Three histograms of P(X = x) against the number of successes, 0 to 10: a) n = 10, p = 0.25; b) n = 10, p = 0.5; c) n = 10, p = 0.75.)
For small samples, binomial distributions are skewed when p is different from 0.5.
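A sketch (not from the slides) computing µ and σ for the three plotted cases:

from math import sqrt

n = 10
for p in (0.25, 0.5, 0.75):
    mu = n * p                       # binomial mean
    sigma = sqrt(n * p * (1 - p))    # binomial standard deviation
    print(p, mu, round(sigma, 3))    # e.g. p = 0.5 gives mu = 5.0, sigma ≈ 1.581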
The Poisson setting
A count X of successes has a Poisson distribution in the Poisson setting:
• The number of successes that occur in any unit of measure is independent of the number of successes that occur in any non-overlapping unit of measure.
• The probability that a success will occur in a unit of measure is the same for all units of equal size and is proportional to the size of the unit.
• The probability that 2 or more successes will occur in a unit approaches 0 as the size of the unit becomes smaller.
Poisson distribution
• The distribution of the count X of successes in the Poisson setting is the Poisson distribution with mean μ. The parameter μ is the mean number of successes per unit of measure.
• The possible values of X are the whole numbers 0, 1, 2, 3, …. If k is any whole number 0 or greater, then
P(X = k) = (e^−μ · μ^k) / k!
where e ≈ 2.72.
• The standard deviation of the distribution is the square root of μ.
Poisson distribution: example
• The number of accidents X occurring in a certain town in a 6-month period is known to follow a Poisson distribution with mean μ = 2.
• What is the probability of no accident in the past 6 months in the town?
P(X = 0) = (e^−2 · 2^0) / 0! = e^−2 ≈ 0.135
• What is the probability of exactly 1 accident in the past 6 months in the town?
P(X = 1) = (e^−2 · 2^1) / 1! ≈ 0.271
• What is the probability of more than 1 accident in the past 6 months in the town?
P(X > 1) = 1 − P(X = 0) − P(X = 1) = 1 − 0.135 − 0.271 = 0.594
• Poisson probabilities can also be found in statistical tables.
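A sketch (not from the slides) reproducing these three Poisson probabilities:

from math import exp, factorial

def poisson_pmf(k, mu):
    """P(X = k) for a Poisson random variable with mean mu."""
    return exp(-mu) * mu**k / factorial(k)

mu = 2                                  # mean number of accidents per 6-month period
p0 = poisson_pmf(0, mu)                 # ≈ 0.135
p1 = poisson_pmf(1, mu)                 # ≈ 0.271
print(p0, p1, 1 - p0 - p1)              # the last value is P(X > 1) ≈ 0.594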