Contents

14 Uniform Distribution
  14.1 Probability Density Function
  14.2 Expectation & Variance
  14.3 Uniform in R
  14.4 Examples

15 Normal Distribution
  15.1 Probability Density Function
  15.2 Expectation & Variance
  15.3 Normal in R
  15.4 Examples
14 Uniform Distribution

14.1 Probability Density Function
Definition: uniform distribution
For θ1 < θ2, the continuous random variable Y ∼ Uniform(θ1, θ2) is said to have a uniform distribution on the interval (θ1, θ2) if and only if its pdf is

f(y) =  1/(θ2 − θ1),   θ1 ≤ y ≤ θ2
        0,             otherwise
For example, the following is a plot of the pdf for Uniform(3.2, 4.7).

[Figure: pdf of Uniform(3.2, 4.7); x-axis runs from 0 to 8 with θ1 and θ2 marked; f(x) ranges from 0.0 to 1.0.]

Notes.
• The uniform distribution has two parameters, θ1 and θ2, which define the lower and upper limits of the range of Y.
• All allowable values of the random variable, i.e. all points in (θ1, θ2), are equally likely: the density is constant, so subintervals of equal length are equally probable.
• The uniform distribution is important for two reasons:
  – Random number generation. If a computer program can generate U ∼ Uniform(0, 1) (e.g. runif(1)), then Y = F⁻¹(U) ∼ F. In other words, once a computer can generate a uniform random variable, it can also generate a random variable Y from any distribution F(y) for which the inverse function F⁻¹(·) is available (see the sketch after these notes).
  – Many physical phenomena have approximately uniform distributions. If events occur as a Poisson process, then given that an event has occurred in the interval (a, b), the exact time or location of that event has a Uniform(a, b) distribution.
• F(y) = ∫_a^y 1/(b − a) dt = [t/(b − a)]_a^y = (y − a)/(b − a) for a ≤ y ≤ b, where Y ∼ Uniform(a, b).
• The standard uniform random variable is Y ∼ Uniform(0, 1), with f(y) = 1.
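As a concrete sketch of the inverse-transform idea above (the Exponential(rate = 2) target and the sample size are arbitrary choices for illustration):

  u <- runif(10000)             # U ~ Uniform(0, 1)
  y <- -log(1 - u) / 2          # F^{-1}(u) for F(y) = 1 - exp(-2y), the Exponential(2) cdf
  mean(y)                       # near 1/2, the Exponential(2) mean
  mean(rexp(10000, rate = 2))   # compare with R's built-in generator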
Relation to Poisson
To be more concrete about the relationship to the Poisson, we will prove the following theorem about the time/location of an event after it is known that an event has occurred. (Recall that the Poisson random variable arises in physical situations where events happen randomly within some interval in time or space.)
Theorem 38. Suppose Y ∼ Poisson(λ) counts events on the interval (a, b) and it is known that Y = 1. Let T be the random location of the single event in (a, b). Then T has conditional density

f(t | Y = 1) = 1/(b − a),   a < t < b

In other words, T | Y = 1 ∼ Uniform(a, b).
Proof. Suppose t ∈ (a, b) is a possible time of the event.
P(T ≤ t | Y = 1) = P(T ≤ t ∩ Y = 1) / P(Y = 1)                          (definition of conditional probability)
                 = P(1 event in (a, t] ∩ 0 events in (t, b)) / P(Y = 1)  (in other words)
Now, consider events occurring in interval (a, t]. These events follow a Poisson process with mean λ1 = λ(t − a)/(b − a), by transformation of the rate from the original interval (a, b) to the smaller interval (a, t]. Let the number of events in this interval be Z1 ∼ Poisson(λ1). Also, consider events occurring in interval (t, b). These events follow a Poisson process with mean λ2 = λ(b − t)/(b − a). Let the number of events in this interval be Z2 ∼ Poisson(λ2). Furthermore, Z1 and Z2 are independent because the intervals do not overlap and events in a Poisson process occur independently. Continuing our derivation,
P(T ≤ t | Y = 1) = P(1 event in (a, t] ∩ 0 events in (t, b)) / P(Y = 1)   (from above)
                 = P(Z1 = 1 ∩ Z2 = 0) / P(Y = 1)                           (new definitions)
                 = P(Z1 = 1) P(Z2 = 0) / P(Y = 1)                          (independence)
                 = (e^{−λ1} λ1) e^{−λ2} / (e^{−λ} λ)                       (Poisson pmf)
                 = e^{−(λ1+λ2)} λ1 / (e^{−λ} λ)
                 = λ1 / λ                                                  (since λ1 + λ2 = λ)
                 = (t − a)/(b − a)

which is the cdf of Uniform(a, b).
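As a quick empirical check of Theorem 38, the following R sketch simulates the process from exponential interarrival times (λ = 4 and (a, b) = (0, 10) are arbitrary choices) and keeps only the realizations with exactly one event:

  set.seed(1)
  one_event_time <- function(lam = 4, a = 0, b = 10) {
    gaps <- rexp(100, rate = lam / (b - a))   # interarrival times of the process
    t <- a + cumsum(gaps)                     # event times
    t <- t[t < b]                             # events that fall inside (a, b)
    if (length(t) == 1) t else NA             # keep trials with exactly one event
  }
  times <- na.omit(replicate(1e4, one_event_time()))
  hist(times)                                 # approximately flat over (0, 10)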
14.2 Expectation & Variance
Theorem 39. If Y ∼ Uniform(θ1, θ2), then

E[Y] = (θ1 + θ2)/2    and    V(Y) = (θ2 − θ1)²/12
Proof. You should be able to derive this yourself.
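As a hint, the expectation is a direct integral:

E[Y] = ∫_{θ1}^{θ2} y/(θ2 − θ1) dy = (θ2² − θ1²)/(2(θ2 − θ1)) = (θ1 + θ2)/2

The variance follows similarly from V(Y) = E[Y²] − (E[Y])².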
14.3 Uniform in R
Function                    Arguments            What it Computes
dunif(x, min=0, max=1)      min= θ1, max= θ2     f(x)
punif(q, min=0, max=1)      min= θ1, max= θ2     F(q)
qunif(p, min=0, max=1)      min= θ1, max= θ2     θp (the p-th quantile)
runif(n, min=0, max=1)      min= θ1, max= θ2     n draws of Y ∼ Uniform(θ1, θ2)
If only the first argument is provided, then the standard uniform random variable is used.
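For example (the parameters echo the Uniform(3.2, 4.7) plot above):

  dunif(4, min = 3.2, max = 4.7)     # f(4) = 1/1.5 ≈ 0.667
  punif(4, min = 3.2, max = 4.7)     # F(4) = (4 - 3.2)/1.5 ≈ 0.533
  qunif(0.5, min = 3.2, max = 4.7)   # median = 3.95
  runif(3, min = 3.2, max = 4.7)     # three random draws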
14.4 Examples

Insert your own examples learned in class here. I covered one review example.
Example:
Trucks haul concrete to a construction site with Uniform(50, 70) cycle times, measured in minutes. What is the probability that the cycle time exceeds 65 minutes, given that it is known to exceed 55 minutes? (For example, say you have been waiting 55 minutes and want to know the probability you will have to wait at least 10 more minutes.) What are the mean and variance of the cycle time?
Let Y be the cycle time, then Y ∼ Uniform(50, 70). Note F(y) = (y − 50)/20. We seek

P(Y > 65 | Y > 55) = P(Y > 65 ∩ Y > 55) / P(Y > 55)
                   = P(Y > 65) / P(Y > 55)
                   = (1 − F(65)) / (1 − F(55))
                   = (1 − (65 − 50)/20) / (1 − (55 − 50)/20)
                   = (5/20) / (15/20)
                   = 1/3

Also,

E[Y] = (50 + 70)/2 = 60

and

V(Y) = (70 − 50)²/12 = 100/3
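We can confirm the conditional probability in R with punif from Section 14.3:

  (1 - punif(65, min = 50, max = 70)) /
    (1 - punif(55, min = 50, max = 70))   # 0.25/0.75 = 1/3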
15 Normal Distribution

15.1 Probability Density Function
Definition: normal random variable
We say continuous random variable Y ∼ Normal(µ, σ²) or Y ∼ N(µ, σ²), σ > 0, −∞ < µ < ∞, has a normal distribution if and only if its probability density function is

f(y) = (1/(σ√(2π))) e^{−(y−µ)²/(2σ²)},   −∞ < y < ∞
The following plot shows the normal pdf for µ = 3 and σ² = 4.

[Figure: normal pdf for µ = 3, σ² = 4; x-axis runs from −4 to 10 with µ − σ, µ, and µ + σ marked; f(x) ranges from 0.00 to 0.20.]
Properties:
• Unimodal: there is a single peak at y = µ.
• Symmetric around µ: f(µ + y) = f(µ − y) for all y.
• No closed-form expression for F(y) exists; numerical methods must be used.
• σ > 0 is required; otherwise f(y) < 0, which cannot be true (a density must be non-negative because the cdf is non-decreasing).
15.2 Expectation & Variance
Theorem 40. Given Y ∼ N(µ, σ²),

E[Y] = µ    and    V(Y) = σ²
Proof. Omitted.
Definition: standard normal
Z ∼ N(0, 1) is said to have the standard normal distribution.
Any Y ∼ N(µ, σ²) can be transformed to a standard normal random variable using

Z = (Y − µ)/σ

The proof is left for Stat342. This transformation is less important in today's computer age; its traditional use is that printed statistical tables for the normal distribution provide probabilities only for Z. There are other reasons to remember it, though: for example, it is useful for finding µ and σ² such that the random variable satisfies certain properties.
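For example, in R (the N(3, 4) parameters and the point 5 are arbitrary choices):

  pnorm(5, mean = 3, sd = 2)   # P(Y <= 5) for Y ~ N(3, 4)
  pnorm((5 - 3) / 2)           # P(Z <= 1): the same value, ≈ 0.841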
15.3 Normal in R
Function                    Arguments          What it Computes
dnorm(x, mean=0, sd=1)      mean= µ, sd= σ     f(x)
pnorm(q, mean=0, sd=1)      mean= µ, sd= σ     F(q)
qnorm(p, mean=0, sd=1)      mean= µ, sd= σ     φp (the p-th quantile)
rnorm(n, mean=0, sd=1)      mean= µ, sd= σ     n draws of Y ∼ N(µ, σ²)
If only the first argument is provided, then the standard normal random variable is used.
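For example (using the µ = 3, σ² = 4 values from the plot above, so sd = 2):

  dnorm(3, mean = 3, sd = 2)       # f(3) = 1/(2*sqrt(2*pi)) ≈ 0.199
  pnorm(0, mean = 3, sd = 2)       # F(0) = P(Y <= 0) ≈ 0.067
  qnorm(0.975, mean = 3, sd = 2)   # phi_0.975 = 3 + 2*qnorm(0.975) ≈ 6.92
  rnorm(2, mean = 3, sd = 2)       # two random draws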
15.4 Examples
You should know how to, for Y ∼ N(µ, σ²):
• compute P(Y > a), P(a ≤ Y ≤ b), or the area of a shaded region specified in a graph
• compute the probability that n independent realizations of a normal random variable satisfy a particular condition, e.g. fall in [a, b] (a reminder of the Binomial distribution)
• find quantiles (the median is interesting because of symmetry) and solve word problems, e.g. how high a cutoff must be to exceed X% of occurrences
• compute P(Z² < a) or P(Z² > b), which requires a little thinking
• find µ, given σ, such that a certain probability statement is met, and vice versa (another use of Z)
Example:
Suppose a soft-drink machine discharges an average of µ oz. per cup. The amount dispensed is normally distributed with standard deviation 0.3 oz., and the cups overflow at 8 oz. What µ will result in overflow only 1% of the time?
Let X be the amount dispensed by the machine. We are given X ∼ N(µ, 0.3²).
When asked to compute µ or σ such that certain properties are satisfied, it is useful to work with
the standardized value Z. We seek µ such that
P(X > 8) = 0.01
P(X − µ > 8 − µ) = 0.01
P((X − µ)/0.3 > (8 − µ)/0.3) = 0.01
P(Z > (8 − µ)/0.3) = 0.01
P(Z ≤ (8 − µ)/0.3) = 0.99
We know that qnorm(0.99) is the quantile φ0.99 such that P(Z ≤ φ0.99) = 0.99. Thus

φ0.99 = (8 − µ)/0.3   ⇒   µ = 8 − 0.3 φ0.99 ≈ 7.302
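In R (mu below is the object used in the follow-up items):

  mu <- 8 - 0.3 * qnorm(0.99)   # qnorm(0.99) = phi_0.99 ≈ 2.326
  mu                            # 7.302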
Some other quantities we might need to compute, where we set mu= 7.302.
1. P(X > 7.5) = 1 − P(X ≤ 7.5) is computed as 1 - pnorm(7.5, mean=mu, sd=0.3) = 0.255.
2. P(7.5 ≤ X ≤ 8) = P(X ≤ 8) − P(X ≤ 7.5) = pnorm(8, mean=mu, sd=0.3) - pnorm(7.5, mean=mu, sd=0.3) = 0.245.
3. What is the probability that 100 cups do not overflow? This is just a binomial probability:
   (100 choose 0) 0.01^0 0.99^100 = dbinom(0, size=100, prob=0.01) ≈ 0.366.
4. If you are given µ and asked to compute σ, make the same calculations but obtain a formula involving
σ, not µ.
5. P(X² < a) = P(−√a < X < √a) = P(X ≤ √a) − P(X ≤ −√a), and compute as in item 2. Similarly, P(X² > a) = P(X > √a) + P(X < −√a).
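As a quick illustration of item 5 with the standard normal Z (a = 1 is an arbitrary choice):

  a <- 1
  pnorm(sqrt(a)) - pnorm(-sqrt(a))       # P(Z^2 < 1) ≈ 0.683
  pnorm(-sqrt(a)) + 1 - pnorm(sqrt(a))   # P(Z^2 > 1) ≈ 0.317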