http://statwww.epfl.ch
4. Continuous Random Variables
4.1: Definition. Density and distribution functions. Examples:
uniform, exponential, Laplace, gamma. Expectation, variance.
Quantiles.
4.2: New random variables from old.
4.3: Normal distribution. Use of normal tables. Continuity
correction. Normal approximation to binomial distribution.
4.4: Moment generating functions.
4.5: Mixture distributions.
References: Ross (Chapter 4); Ben Arous notes (IV.1, IV.3–IV.6).
Exercises: 79–88, 91–93, 107, 108 of the Recueil d’exercices.
Probabilité et Statistique I — Chapter 4
Petit Vocabulaire Probabiliste

Mathematics      English                                        Français
P(A | B)         probability of A given B                       la probabilité de A sachant B
                 independence                                   indépendance
                 (mutually) independent events                  les événements (mutuellement) indépendants
                 pairwise independent events                    les événements indépendants deux à deux
                 conditionally independent events               les événements conditionnellement indépendants
X, Y, . . .      random variable                                une variable aléatoire
I                indicator random variable                      une variable indicatrice
fX               probability mass/density function              fonction de masse/fonction de densité
FX               probability distribution function              fonction de répartition
E(X)             expected value/expectation of X                l’espérance de X
E(X^r)           rth moment of X                                rième moment de X
E(X | B)         conditional expectation of X given B           l’espérance conditionnelle de X, sachant B
var(X)           variance of X                                  la variance de X
MX(t)            moment generating function of X, or the        la fonction génératrice des moments ou la
                 Laplace transform of fX(x)                     transformée de Laplace de fX(x)
4.1 Continuous Random Variables
Up to now we have supposed that the support of X is countable, so
X is a discrete random variable. Now consider what happens when
D = {x ∈ R : X(ω) = x, ω ∈ Ω} is uncountable. Note that this
implies that Ω itself is uncountable.
Example 4.1: The time to the end of the lecture lies in (0, 45) min. •
Example 4.2: Our (height, weight) pairs lie in (0, ∞)². •
Definition: Let X be a random variable. Its cumulative distribution function (CDF) (fonction de répartition) is
FX(x) = P(X ≤ x) = P(Ax), x ∈ R,
where Ax is the event {ω : X(ω) ≤ x}, for x ∈ R.
Recall the following properties of FX :
Theorem : Let (Ω, F, P) be a probability space and X : Ω 7→ R a
random variable. Its cumulative distribution function FX satisfies:
(a) limx→−∞ FX (x) = 0;
(b) limx→∞ FX (x) = 1;
(c) FX is non-decreasing, that is, FX (x) ≤ FX (y) whenever x ≤ y;
(d) FX is continuous to the right, that is, lim_{t↓0} FX(x + t) = FX(x), x ∈ R;
(e) P(X > x) = 1 − FX (x);
(f) if x < y, then P(x < X ≤ y) = FX (y) − FX (x).
•
Definition: A random variable X is continuous if there exists a function fX(x), called the probability density function (la densité) of X, such that
P(X ≤ x) = FX(x) = ∫_{−∞}^{x} fX(u) du, x ∈ R.
The properties of FX imply (i) fX(x) ≥ 0, and (ii) ∫_{−∞}^{∞} fX(x) dx = 1.
Note: The fundamental theorem of calculus gives fX(x) = dFX(x)/dx.
Note: As P(x < X ≤ y) = ∫_{x}^{y} fX(u) du when x < y, for any x ∈ R,
P(X = x) = lim_{y↓x} P(x < X ≤ y) = lim_{y↓x} ∫_{x}^{y} fX(u) du = ∫_{x}^{x} fX(u) du = 0.
Note: If X is discrete, then its pmf fX(x) is also called its density.
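The relation fX(x) = dFX(x)/dx is easy to check numerically. A minimal sketch, assuming the exp(1) distribution introduced below, compares a centred difference quotient of FX with the density:

```python
import math

# For X ~ exp(1): F_X(x) = 1 - e^{-x} and f_X(x) = e^{-x}.
# A centred difference quotient of F_X should approximate f_X.
lam = 1.0

def F(x):
    return 1.0 - math.exp(-lam * x)

def f(x):
    return lam * math.exp(-lam * x)

h = 1e-6
x = 0.7
numerical_density = (F(x + h) - F(x - h)) / (2 * h)
assert abs(numerical_density - f(x)) < 1e-6
```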
Some Examples
Example 4.3 (Uniform distribution): The random variable U with density function
f(u) = 1/(b − a) for a < u < b (where a < b), and f(u) = 0 otherwise,
is called a uniform random variable. We write U ∼ U(a, b). •
Example 4.4 (Exponential distribution): The random variable X with density function
f(x) = λe^{−λx} for x > 0 (where λ > 0), and f(x) = 0 otherwise,
is called an exponential random variable with rate λ. We write X ∼ exp(λ). Establish the lack of memory property for X, that P(X > x + t | X > t) = P(X > x) for t, x > 0. •
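The lack of memory property follows directly from the survival function P(X > x) = e^{−λx}, and can be confirmed numerically; a short sketch with illustrative parameter values:

```python
import math

lam, x, t = 2.0, 0.4, 1.3               # illustrative values
S = lambda u: math.exp(-lam * u)        # survival function P(X > u)
lhs = S(x + t) / S(t)                   # P(X > x + t | X > t)
rhs = S(x)                              # P(X > x)
assert abs(lhs - rhs) < 1e-12
```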
Example 4.5 (Laplace distribution): The random variable X with density function
f(x) = (λ/2) e^{−λ|x−η|}, x ∈ R, η ∈ R, λ > 0,
is called a Laplace (or sometimes a double exponential) random variable. •
Example 4.6 (Gamma distribution): The random variable X with density function
f(x) = λ^α x^{α−1} e^{−λx} / Γ(α) for x > 0 (where λ, α > 0), and f(x) = 0 otherwise,
is called a gamma random variable with shape parameter α and rate parameter λ. Here Γ(α) = ∫_{0}^{∞} u^{α−1} e^{−u} du is the gamma function. Note that setting α = 1 yields the exponential density. •
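That setting α = 1 recovers the exponential density can be checked numerically; a small sketch (the rate value is illustrative):

```python
import math

def gamma_density(x, alpha, lam):
    # gamma density: lam^alpha x^(alpha-1) e^{-lam x} / Gamma(alpha), for x > 0
    return lam**alpha * x**(alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

lam = 1.5                               # illustrative rate
for x in (0.1, 1.0, 3.7):
    # alpha = 1 should give the exp(lam) density lam e^{-lam x}
    assert abs(gamma_density(x, 1.0, lam) - lam * math.exp(-lam * x)) < 1e-12
```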
[Figure: densities f(x) of the exp(1), Gamma(shape 5, rate 3), Gamma(shape 0.5, rate 0.5), and Gamma(shape 8, rate 2) distributions.]
Moments of Continuous Random Variables
Definition: Let g(x) be a real-valued function and X a continuous random variable with density function fX(x). Then the expectation of g(X) is defined to be
E{g(X)} = ∫_{−∞}^{∞} g(x) fX(x) dx,
provided E{|g(X)|} < ∞. In particular the mean and variance of X are
E(X) = ∫_{−∞}^{∞} x fX(x) dx,   var(X) = ∫_{−∞}^{∞} {x − E(X)}² fX(x) dx.
Example 4.7: Compute the mean and variance of (a) the U (a, b),
(b) the exp(λ), (c) the Laplace, and (d) the gamma distributions. •
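For part (a) of Example 4.7 the expected answers are E(U) = (a + b)/2 and var(U) = (b − a)²/12; a numerical sketch using a midpoint rule, with illustrative endpoints:

```python
# midpoint-rule check of E(U) and var(U) for U ~ U(a, b); endpoints illustrative
a, b, n = 2.0, 5.0, 100_000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]    # midpoints of the subintervals
density = 1.0 / (b - a)
mean = sum(x * density * h for x in xs)
var = sum((x - mean) ** 2 * density * h for x in xs)
assert abs(mean - (a + b) / 2) < 1e-6         # (a + b)/2 = 3.5
assert abs(var - (b - a) ** 2 / 12) < 1e-6    # (b - a)^2/12 = 0.75
```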
Quantiles
Definition: Let 0 < p < 1. The p quantile of distribution function
F (x) is defined as
xp = inf{x : F (x) ≥ p}.
For most continuous random variables, xp is unique and is found as
xp = F −1 (p), where F −1 is the inverse function of F . In particular,
the 0.5 quantile is called the median of F .
Example 4.8 (Uniform distribution): Let U ∼ U(0, 1). Show that xp = p. •
Example 4.9 (Exponential distribution): Let X ∼ exp(λ). Show that xp = −λ^{−1} log(1 − p). •
Exercise: Find the quantiles of the Laplace distribution. •
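Example 4.9 is easy to verify: applying the exponential CDF F(x) = 1 − e^{−λx} to the claimed quantile must return p. A sketch with illustrative values of λ and p:

```python
import math

lam, p = 0.5, 0.9                        # illustrative values
x_p = -math.log(1 - p) / lam             # claimed p quantile of exp(lam)
F = lambda x: 1 - math.exp(-lam * x)     # exponential CDF
assert abs(F(x_p) - p) < 1e-12
```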
4.2 New Random Variables From Old
Often in practice we consider Y = g(X), where g is a known
function, and want to find FY (y) and fY (y).
Theorem: Let Y = g(X) be a random variable. Then
FY(y) = P(Y ≤ y) = ∫_{Ay} fX(x) dx if X is continuous, and FY(y) = Σ_{x ∈ Ay} fX(x) if X is discrete,
where Ay = {x ∈ R : g(x) ≤ y}. When g is monotone increasing and has inverse function g^{−1}, we have
FY(y) = FX{g^{−1}(y)},   fY(y) = (d g^{−1}(y)/dy) fX{g^{−1}(y)},
with a similar result if g is monotone decreasing. •
Example 4.10: Let Y = X^β, where X ∼ exp(λ). Find FY(y) and fY(y). •
Example 4.11: Let Y = ⌈X⌉, where X ∼ exp(λ) (thus Y is the smallest integer no smaller than X). Find FY(y) and fY(y). •
Example 4.12: Let Y = −log(1 − U), where U ∼ U(0, 1). Find FY(y) and fY(y). Find also the density and distribution functions of W = −log U. Explain. •
Example 4.13: Let X1 and X2 be the results when two fair dice are rolled independently. Find the distribution of X1 − X2. •
Example 4.14: Let a, b be constants. Find the distribution and density functions of Y = a + bX in terms of FX, fX. •
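For Example 4.12, the change-of-variables theorem of this section gives fY(y) = e^{−y}, the exp(1) density; a numerical sketch of that computation:

```python
import math

# g(u) = -log(1 - u) is increasing on (0, 1) with inverse g^{-1}(y) = 1 - e^{-y},
# so f_Y(y) = (d g^{-1}/dy) f_U(g^{-1}(y)) = e^{-y} * 1 for y > 0: the exp(1) density.
def f_Y(y):
    g_inv = 1 - math.exp(-y)             # lies in (0, 1) for y > 0, where f_U = 1
    dg_inv = math.exp(-y)                # derivative of the inverse map
    return dg_inv if 0 < g_inv < 1 else 0.0

for y in (0.2, 1.0, 4.0):
    assert abs(f_Y(y) - math.exp(-y)) < 1e-12
```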
4.3 Normal Distribution
Definition: A random variable X with density function
f(x) = (2π)^{−1/2} σ^{−1} exp{−(x − µ)²/(2σ²)}, x ∈ R, µ ∈ R, σ > 0,
is a normal random variable with mean µ and variance σ²: we write X ∼ N(µ, σ²).
When µ = 0, σ² = 1, the corresponding random variable Z is standard normal, Z ∼ N(0, 1), with density φ(z) = (2π)^{−1/2} e^{−z²/2}, for z ∈ R. The corresponding cumulative distribution function is
P(Z ≤ x) = Φ(x) = ∫_{−∞}^{x} φ(z) dz = ∫_{−∞}^{x} (2π)^{−1/2} e^{−z²/2} dz.
This integral is tabulated in the formulaire and can be obtained electronically.
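Electronically, Φ can be obtained from the error function through the identity Φ(x) = {1 + erf(x/√2)}/2; the sketch below checks it against table values such as Φ(1.0) = .84134:

```python
import math

def Phi(x):
    # standard identity linking the normal CDF to the error function:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

assert abs(Phi(0.0) - 0.5) < 1e-12
assert abs(Phi(1.0) - 0.84134) < 5e-6            # table entry for z = 1.0
assert abs(Phi(2.0) - 0.97725) < 5e-6            # table entry for z = 2.0
assert abs(Phi(-1.0) - (1 - Phi(1.0))) < 1e-12   # symmetry of the normal density
```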
Standard Normal Density Function
[Figure: the N(0, 1) density φ(x), plotted for −3 ≤ x ≤ 3.]
Properties of the Normal Distribution
Theorem : The density function φ(z), cumulative distribution
function Φ(z), and quantiles zp of Z ∼ N (0, 1) satisfy:
(a) the density is symmetric about z = 0, φ(z) = φ(−z) for all z ∈ R;
(b) P(Z ≤ −z) = Φ(−z) = 1 − Φ(z) = 1 − P(Z ≤ z), for all z ∈ R;
(c) the standard normal quantiles zp satisfy zp = −z1−p , for all
0 < p < 1;
(d) z r φ(z) → 0 as z → ±∞, for all r > 0;
(e) φ′(z) = −zφ(z), φ″(z) = (z² − 1)φ(z), etc. •
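Properties (a) and (e) can be verified numerically with a difference quotient; a minimal sketch with illustrative evaluation points:

```python
import math

phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # N(0, 1) density

# (a) symmetry about z = 0
assert abs(phi(1.3) - phi(-1.3)) < 1e-15

# (e) phi'(z) = -z * phi(z), checked with a centred difference quotient
h = 1e-6
for z in (-1.5, 0.0, 0.8):
    deriv = (phi(z + h) - phi(z - h)) / (2 * h)
    assert abs(deriv - (-z * phi(z))) < 1e-8
```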
Example 4.15: Show that the mean and variance of X ∼ N(µ, σ²) are indeed µ and σ². •
Example 4.16: Find the p quantile of Y = µ + σZ, where Z ∼ N(0, 1). •
Example 4.17: Find the distribution and density functions of Y = |Z| and W = Z², where Z ∼ N(0, 1). •
Example 4.18: Find P(Z ≤ −2), P(Z ≤ 0.5), P(−2 < Z < 0.5), P(Z ≤ 1.75), z0.05, z0.95, z0.5, z0.8, and z0.15. •
Note: The table below gives an extract showing the function Φ(z) from the Formulaire.
z      0       1       2       3       4       5       6       7       8       9
0.0  .50000  .50399  .50798  .51197  .51595  .51994  .52392  .52790  .53188  .53586
0.1  .53983  .54380  .54776  .55172  .55567  .55962  .56356  .56750  .57142  .57535
0.2  .57926  .58317  .58706  .59095  .59483  .59871  .60257  .60642  .61026  .61409
0.3  .61791  .62172  .62552  .62930  .63307  .63683  .64058  .64431  .64803  .65173
0.4  .65542  .65910  .66276  .66640  .67003  .67364  .67724  .68082  .68439  .68793
0.5  .69146  .69497  .69847  .70194  .70540  .70884  .71226  .71566  .71904  .72240
0.6  .72575  .72907  .73237  .73565  .73891  .74215  .74537  .74857  .75175  .75490
0.7  .75804  .76115  .76424  .76730  .77035  .77337  .77637  .77935  .78230  .78524
0.8  .78814  .79103  .79389  .79673  .79955  .80234  .80511  .80785  .81057  .81327
0.9  .81594  .81859  .82121  .82381  .82639  .82894  .83147  .83398  .83646  .83891
1.0  .84134  .84375  .84614  .84850  .85083  .85314  .85543  .85769  .85993  .86214
1.1  .86433  .86650  .86864  .87076  .87286  .87493  .87698  .87900  .88100  .88298
1.2  .88493  .88686  .88877  .89065  .89251  .89435  .89617  .89796  .89973  .90147
1.3  .90320  .90490  .90658  .90824  .90988  .91149  .91309  .91466  .91621  .91774
1.4  .91924  .92073  .92220  .92364  .92507  .92647  .92786  .92922  .93056  .93189
1.5  .93319  .93448  .93574  .93699  .93822  .93943  .94062  .94179  .94295  .94408
1.6  .94520  .94630  .94738  .94845  .94950  .95053  .95154  .95254  .95352  .95449
1.7  .95543  .95637  .95728  .95818  .95907  .95994  .96080  .96164  .96246  .96327
1.8  .96407  .96485  .96562  .96638  .96712  .96784  .96856  .96926  .96995  .97062
1.9  .97128  .97193  .97257  .97320  .97381  .97441  .97500  .97558  .97615  .97670
2.0  .97725  .97778  .97831  .97882  .97932  .97982  .98030  .98077  .98124  .98169
Normal Approximation to Binomial Distribution
Before computers were widespread, one use of the normal
distribution was as an approximation to the binomial distribution.
Theorem (de Moivre–Laplace): Let Xn ∼ B(n, p), where 0 < p < 1, set µn = E(Xn) = np, σn² = var(Xn) = np(1 − p), and let Z ∼ N(0, 1). Then as n → ∞,
P{(Xn − µn)/σn ≤ z} → Φ(z), z ∈ R; that is, (Xn − µn)/σn → Z in distribution. •
This gives an approximation for the probability that Xn ≤ r:
P(Xn ≤ r) = P{(Xn − µn)/σn ≤ (r − µn)/σn} ≈ Φ{(r − µn)/σn}.
In practice this should be used only when min{np, n(1 − p)} ≥ 5.
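A quick numerical illustration of the approximation, with n, p, r chosen for illustration and Φ computed from the error function:

```python
import math

def Phi(x):
    # normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p, r = 100, 0.5, 55                   # illustrative; min{np, n(1-p)} = 50 >= 5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

# exact binomial probability P(X_n <= r)
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))
approx = Phi((r - mu) / sigma)           # de Moivre-Laplace approximation
assert abs(exact - approx) < 0.05        # crude agreement at this n
```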
Normal and Poisson Approximations to Binomial
[Figure: the B(16, 0.5) and B(16, 0.1) mass functions for r = 0, …, 15, with their normal approximations (top row) and Poisson approximations (bottom row).]
Continuity Correction
A better approximation to P(Xn ≤ r) is given by replacing r by r + 1/2; the 1/2 is known as a continuity correction.
[Figure: the Binomial(15, 0.4) mass function and its normal approximation.]
Example 4.19: Let X ∼ B(15, 0.4). Compute exact and approximate values of P(X ≤ r) for r = 1, 8, 10, with and without continuity correction. Comment. •
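The comparison in Example 4.19 can be sketched numerically; the case r = 8 is shown below (the same code applies for r = 1 and 10), and for this r the corrected value turns out to be the closer one:

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p = 15, 0.4
mu, sigma = n * p, math.sqrt(n * p * (1 - p))     # 6.0 and sqrt(3.6)

def exact_cdf(r):
    # exact binomial probability P(X <= r)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

r = 8
plain = Phi((r - mu) / sigma)             # without continuity correction
corrected = Phi((r + 0.5 - mu) / sigma)   # with continuity correction
exact = exact_cdf(r)
assert abs(corrected - exact) < abs(plain - exact)
```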
4.4 Moment Generating Functions
Recall that the moment generating function of a random variable
X is defined as MX (t) = E{exp(tX)}, for t ∈ R such that
MX (t) < ∞.
MX (t) is also called the Laplace transform of fX (x).
Example 4.20: Find MX(t) when X ∼ exp(λ). •
Example 4.21: Find the moment generating function of the Laplace distribution. •
Example 4.22: Find MX(t) when X ∼ N(µ, σ²). •
Example 4.23: Let X ∼ exp(λ). Find the moment generating functions of Y = 2X, of X conditional on the event X < a, and of W = min(X, 3). •
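For Example 4.20 one finds MX(t) = λ/(λ − t) for t < λ. A truncated numerical integral of e^{tx} fX(x) agrees; a sketch with illustrative values, truncating where the neglected tail is negligible:

```python
import math

lam, t = 2.0, 0.5                      # illustrative; need t < lam for M_X(t) < infinity
target = lam / (lam - t)               # closed form, 4/3 here

# truncated midpoint rule for E{exp(tX)} = int_0^inf e^{tx} lam e^{-lam x} dx;
# the integrand equals lam e^{-(lam - t)x}, so truncating at 40 loses almost nothing
upper, n = 40.0, 200_000
h = upper / n
mgf = sum(math.exp((t - lam) * (i + 0.5) * h) * lam * h for i in range(n))
assert abs(mgf - target) < 1e-3
```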
4.5 Mixture Distributions
In practice random variables are almost always either discrete or
continuous. Exceptions can arise, however.
Example 4.24 (Petrol): Describe the distribution of the money spent by motorists buying petrol at an automatic pump. •
Example 4.25 (Mixture): Let X1 ∼ Geom(p) and X2 ∼ exp(λ). Suppose that X = X1 with probability γ and X = X2 with probability 1 − γ. Find FX, fX, E(X) and var(X). •
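For the mean in Example 4.25, conditioning on which component is chosen gives E(X) = γE(X1) + (1 − γ)E(X2); a small sketch, assuming the Geom(p) convention on {1, 2, ...} with mean 1/p (as in Ross) and illustrative parameter values:

```python
# Mixture mean: E(X) = gamma * E(X1) + (1 - gamma) * E(X2), where
# E(Geom(p)) = 1/p on the support {1, 2, ...} and E(exp(lam)) = 1/lam.
gamma_w, p, lam = 0.3, 0.25, 2.0       # illustrative parameter values
mean_mixture = gamma_w * (1 / p) + (1 - gamma_w) * (1 / lam)
assert abs(mean_mixture - 1.55) < 1e-12   # 0.3 * 4 + 0.7 * 0.5
```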