_________________________________________________________________________________
Chapter 7 Special Continuous Distributions
§ 7.1 Uniform Random Variable
• Definition: A random variable is said to be uniformly
distributed over an interval (a, b) if its probability density
function is given by
f(x) = 1/(b-a)   if a < x < b
     = 0         otherwise.
* We can easily show that for the above random variable,
E[X] = (a+b)/2 and Var(X) = (b-a)²/12.
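These two formulas can be checked quickly by simulation; a minimal sketch, with the interval endpoints a = 2, b = 5 chosen arbitrarily:

```python
import random

# Simulation check of the uniform moments: for X ~ Uniform(a, b),
# the sample mean should be near (a+b)/2 = 3.5 and the sample
# variance near (b-a)^2/12 = 0.75. Parameters are arbitrary.
random.seed(1)
a, b, n = 2.0, 5.0, 200_000
xs = [random.uniform(a, b) for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)   # close to 3.5 and 0.75
```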
* Example 7.3: What is the probability that a random
chord of a circle is longer than a side of an equilateral
triangle inscribed in that circle?
Solution: There are three natural ways of randomly
choosing a chord in a circle. Each is uniform, but over a
different sample space, so the three schemes give
different answers.
Fig. 7.2, Fig. 7.3, Fig. 7.4
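The three schemes (this is the classical "random chord" paradox) can be simulated; a sketch, taking radius R = 1, where a chord is longer than the triangle's side exactly when its length exceeds R√3:

```python
import math
import random

random.seed(2)
R, N = 1.0, 200_000
side = R * math.sqrt(3)

# Scheme 1: both chord endpoints uniform on the circle;
# for angular separation delta the chord length is 2R sin(delta/2).
hits1 = 0
for _ in range(N):
    delta = abs(random.uniform(0, 2 * math.pi)
                - random.uniform(0, 2 * math.pi))
    if 2 * R * math.sin(delta / 2) > side:
        hits1 += 1

# Scheme 2: distance d of the chord from the center uniform on (0, R);
# the chord length is then 2*sqrt(R^2 - d^2).
hits2 = 0
for _ in range(N):
    d = random.uniform(0, R)
    if 2 * math.sqrt(R * R - d * d) > side:
        hits2 += 1

# Scheme 3: chord midpoint uniform in the disk (sampled by accept-reject).
hits3 = acc = 0
while acc < N:
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    d2 = x * x + y * y
    if d2 <= R * R:
        acc += 1
        if 2 * math.sqrt(R * R - d2) > side:
            hits3 += 1

print(hits1 / N, hits2 / N, hits3 / N)   # roughly 1/3, 1/2, 1/4
```

The three estimates settle near 1/3, 1/2, and 1/4, confirming that "uniform" over different sample spaces gives different answers.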
__________________________________________________________
© Shi-Chung Chang, Tzi-Dar Chiueh
_________________________________________________________________________________
§ 7.2 Normal Random Variable
• Theorem 7.1 (De Moivre-Laplace Theorem):
Let X be a binomial random variable with parameters n
and p. Then for any numbers a and b, a < b,
lim_{n→∞} P( a < (X-np)/[np(1-p)]^0.5 < b )
    = (1/√(2π)) ∫_a^b e^(-t²/2) dt.
Note that E(X) = np and σ_X = [np(1-p)]^0.5 .
* Note that as n approaches infinity, the standardized
binomial random variable has a distribution function
FX*(t) = 1/(2)0.5 - t e-x
2/2
dx
= (t).
* It can be shown that (see text) (-) = 0 and () = 1.
• Definition: A random variable X is called standard
normal if its distribution function is
FX(t) = Φ(t) = (1/√(2π)) ∫_{-∞}^t e^(-x²/2) dx.
* The density function is f(x) = (1/√(2π)) e^(-x²/2).
Fig. 7.5
_________________________________________________________________________________
• To approximate a probability based on a discrete
random variable by a probability based on a continuous
random variable, we have
P(i ≤ X ≤ j) ≈ ∫_{i-0.5}^{j+0.5} f(x) dx ;
P(X = k) ≈ ∫_{k-0.5}^{k+0.5} f(x) dx ;
P(X ≥ i) ≈ ∫_{i-0.5}^{∞} f(x) dx ;
P(X ≤ j) ≈ ∫_{-∞}^{j+0.5} f(x) dx .
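This continuity correction can be tried on a concrete case; a sketch with hypothetical numbers X ~ Binomial(100, 0.5) and the interval 45 ≤ X ≤ 55, using the error function for Φ:

```python
from math import comb, erf, sqrt

def phi(t):
    # Standard normal cdf via the error function:
    # Phi(t) = (1 + erf(t / sqrt(2))) / 2.
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Exact binomial probability P(45 <= X <= 55) for X ~ Bin(100, 0.5),
# versus the normal approximation with the half-unit correction.
n, p, i, j = 100, 0.5, 45, 55
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i, j + 1))
approx = phi((j + 0.5 - mu) / sigma) - phi((i - 0.5 - mu) / sigma)
print(exact, approx)   # both close to 0.73
```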
* Φ(x) cannot be calculated in closed form; often we look up
its value in a table (see Table 1 of the appendix in the
textbook). There is also a rough approximation formula
Φ(x) ≈ 0.5 + x(4.4-x)/10,   0 ≤ x ≤ 2.2.
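The quality of this table-free formula can be measured against an erf-based Φ; a sketch over a grid on [0, 2.2]:

```python
from math import erf, sqrt

def phi(t):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def phi_table_free(x):
    # The rough formula quoted in the notes, for 0 <= x <= 2.2.
    return 0.5 + x * (4.4 - x) / 10.0

# Largest absolute error over a fine grid on [0, 2.2].
worst = max(abs(phi(k / 100.0) - phi_table_free(k / 100.0))
            for k in range(0, 221))
print(worst)   # stays below 0.01 on this range
```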
* see Example 7.4
* It can be shown (see text) that the expectation and
variance of the standard normal random variable are 0
and 1, respectively.
• Definition: A random variable X is called normal, with
parameters μ and σ², if its density function is given by
f(x) = 1/[σ√(2π)] e^(-(x-μ)²/(2σ²))
and we write X ~ N(μ,σ²). The normal random variable is
sometimes called the Gaussian random variable.
• Lemma: If X ~ N(μ,σ²), then X* = (X-μ)/σ is N(0,1).
That is, if X ~ N(μ,σ²), then the standardized X is N(0,1).
_________________________________________________________________________________
Fig. 7.8
Fig. 7.9
* Example 7.5: Suppose that a man's chest size is
N(39.8, 2.05²). If we randomly choose 20 men, what is the
probability that exactly five have a chest size larger than 40?
Solution: First find the probability that a man's chest size
is larger than 40:
p = P(X ≥ 40) = P{(X-39.8)/2.05 ≥ 0.2/2.05} ≈ P(X* ≥ 0.1)
  = 1 - Φ(0.1) = 1 - 0.5398 ≈ 0.46.
Then the answer is
C(20,5) (0.46)⁵ (0.54)¹⁵ ≈ 0.03.
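The two stages of this example can be reproduced numerically, again using an erf-based Φ rather than the table:

```python
from math import comb, erf, sqrt

def phi(t):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Stage 1: p = P(chest size >= 40) for X ~ N(39.8, 2.05^2).
# Stage 2: P(exactly 5 of 20 exceed 40) = C(20,5) p^5 (1-p)^15.
p = 1.0 - phi((40.0 - 39.8) / 2.05)
answer = comb(20, 5) * p**5 * (1 - p)**15
print(p, answer)   # p close to 0.46, answer close to 0.03
```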
* Example 7.7: The score on a test is ~ N(500, 100²). What
should the score of a student be to place him in the top
10%?
Solution: We need
P{(X-500)/100 < (k-500)/100} = 0.9. From the
appendix, we have Φ(1.28) = 0.8997 ≈ 0.9. So
(k-500)/100 = 1.28
and k = 628.
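Instead of reading z = 1.28 from the table, the cutoff can be found by inverting Φ numerically; a sketch using bisection:

```python
from math import erf, sqrt

def phi(t):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Solve Phi((k - 500) / 100) = 0.9 for k by bisection on [500, 900].
lo, hi = 500.0, 900.0
for _ in range(60):
    k = (lo + hi) / 2
    if phi((k - 500.0) / 100.0) < 0.9:
        lo = k
    else:
        hi = k
print(k)   # about 628
```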
_________________________________________________________________________________
§ 7.3 Exponential Random Variable
• Let {N(t), t ≥ 0} be a Poisson process with rate λ and let
Xi be the time between the (i-1)st event and the ith event.
We have
P(X1 > t) = P(N(t) = 0) = e^(-λt)(λt)⁰/0! = e^(-λt)
and
P(X1 ≤ t) = 1 - e^(-λt).
Since the Poisson process is stationary, the same holds for
every Xn, so the cdf of Xn is
FXn(t) = 1 - e^(-λt)   t ≥ 0
       = 0             t < 0
and the density is
fXn(t) = F'Xn(t) = λe^(-λt)   t ≥ 0
       = 0                    t < 0.
It can also be shown that ∫_{-∞}^{∞} fXn(t) dt = 1.
• Definition: A continuous random variable X is called
exponential with parameter  > 0 if its probability
density function is as given above.
* Examples of exponential random variables:
1. The inter-arrival time between two customers.
2. The duration between two phone calls.
3. The time between two consecutive earthquakes.
4. The time between two accidents at an intersection.
_________________________________________________________________________________
• We know that λ is the average number of events in a
unit time interval, so we expect the average time
between two events to be 1/λ. Indeed,
E(X) = ∫_0^∞ x λe^(-λx) dx
     = [-xe^(-λx)]_0^∞ + ∫_0^∞ e^(-λx) dx
     = 0 - (1/λ)[e^(-λx)]_0^∞ = 1/λ.
• It can also be shown that for an exponential random
variable X,
Var(X) = (1/λ)² and σ_X = 1/λ.
Fig. 7.10
Fig. 7.11
_________________________________________________________________________________
• Memoryless Property: A nonnegative random variable
is called memoryless if for all s, t ≥ 0,
P(X > s+t | X > t) = P(X > s).
* It can be shown that the exponential random variable is
the only continuous random variable that is memoryless.
It can also be shown that the geometric random variable
is the only discrete memoryless random variable.
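The memoryless property can be checked by simulation; a sketch with an arbitrary rate λ = 1.5 and arbitrary s = 0.4, t = 0.7:

```python
import random

# Numerical check of P(X > s+t | X > t) = P(X > s) for an
# exponential random variable. Both sides should be near
# e^(-lam*s); all parameter values here are arbitrary.
random.seed(3)
lam, s, t, N = 1.5, 0.4, 0.7, 400_000
xs = [random.expovariate(lam) for _ in range(N)]

n_gt_t = sum(x > t for x in xs)
n_gt_st = sum(x > s + t for x in xs)
cond = n_gt_st / n_gt_t               # estimate of P(X > s+t | X > t)
uncond = sum(x > s for x in xs) / N   # estimate of P(X > s)
print(cond, uncond)   # both close to exp(-0.6), about 0.549
```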
* Example 7.10: Suppose that on average two earthquakes
occur in SF and two in LA every year. If the last
earthquake in SF was 10 months ago and the last
earthquake in LA was 2 months ago, what is the
probability that the next earthquake in SF occurs after the
next earthquake in LA?
Solution: By the memoryless property, the times until
the next earthquake in SF and in LA are both exponential
with parameter λ = 2, so the probability is 0.5 by
symmetry.
_________________________________________________________________________________
Q: How do we generate an exponential random variable
from U[0,1]?
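One standard answer is inverse-transform sampling: since F(t) = 1 - e^(-λt) has inverse F⁻¹(u) = -ln(1-u)/λ, the variable X = -ln(1-U)/λ is exponential with rate λ. A sketch with an arbitrary λ = 2:

```python
import math
import random

# Inverse-transform sampling: U ~ U[0,1) gives
# X = -ln(1-U)/lam with P(X > t) = P(1-U > e^(-lam*t)) = e^(-lam*t),
# i.e. X is exponential(lam). Using 1-U keeps the log argument in (0,1].
random.seed(4)
lam, N = 2.0, 400_000
xs = [-math.log(1.0 - random.random()) / lam for _ in range(N)]

mean = sum(xs) / N
print(mean)   # close to 1/lam = 0.5
```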
_________________________________________________________________________________
§ 7.4 Gamma Distribution
• Let X1, X2, X3, ... be exponential random variables as in
the previous section, and define a new random variable
Yn = X1 + X2 + ... + Xn.
In other words, Yn is the time it takes before the nth event
occurs. We have
FYn(t) = P(Yn ≤ t) = P(N(t) ≥ n) = Σ_{i=n}^∞ e^(-λt)(λt)^i/i!.
* Differentiating FYn with respect to t, we have
fYn(t) = λe^(-λt)(λt)^(n-1)/(n-1)!.
* The density function
f(t) = λe^(-λt)(λt)^(n-1)/(n-1)!   t ≥ 0
     = 0                           elsewhere
is called the n-Erlang density or the gamma density with
parameters (n, λ).
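The identity between the sum-of-exponentials construction and the Erlang cdf can be checked by simulation; a sketch with arbitrary values λ = 1, n = 3, evaluated at t = 2.5:

```python
import math
import random

# Compare the empirical cdf of Y_n = X_1 + ... + X_n (n independent
# exponentials) at one point t against the n-Erlang formula
# F(t) = 1 - sum_{i<n} e^(-lam t)(lam t)^i / i!.  Parameters arbitrary.
random.seed(5)
lam, n, N, t = 1.0, 3, 200_000, 2.5
ys = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(N)]

empirical = sum(y <= t for y in ys) / N
exact = 1.0 - sum(math.exp(-lam * t) * (lam * t)**i / math.factorial(i)
                  for i in range(n))
print(empirical, exact)   # both close to 0.456
```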
* We can extend the gamma density to parameters (r, λ),
where r is a positive real number. Let the gamma
function be defined as
Γ(r) = ∫_0^∞ t^(r-1) e^(-t) dt,   r > 0.
It can then be shown that for r = n, where n is an integer,
Γ(n+1) = n!. (p. 293)
_________________________________________________________________________________
• Definition: A random variable with probability
density function
f(x) = λe^(-λx)(λx)^(r-1)/Γ(r)   x ≥ 0
     = 0                         elsewhere
is said to have a gamma distribution with parameters
(r, λ), r, λ > 0.
* Note that the exponential random variable is a special
case of the gamma random variable with r=1.
Fig. 7.12, Fig. 7.13
* Finding the expectation and variance of a gamma
random variable directly is rather tedious, but it can be
shown that for a gamma random variable with
parameters (r, λ),
E(X) = r/λ and Var(X) = r/λ².
* Example 7.15 (p. 296)