Continuous Random Variables and Continuous Distributions

Expectation & Variance of Continuous Random Variables (§ 5.2)
The Uniform Random Variable (§ 5.3)
The Normal Random Variable (§ 5.4)
The Exponential Random Variable (§ 5.5)
The Gamma Random Variable (§ 5.6.1)
The Beta Random Variable (§ 5.6.4)
Transformation - Distributions of a Function of a Random Variable (§ 5.7)

Qihao Xie
Introduction to Probability and Basic Statistical Inference
Continuous Random Variables and Distributions ⇒ Expectation & Variance of Continuous Random Variables

♦ Expectation & Variance of Continuous Random Variables

Definition
X is said to be a continuous random variable if there exists a nonnegative function f, defined for all real values x ∈ (−∞, ∞), having the property that for any set B of real numbers

P{X ∈ B} = ∫_B f(x) dx.
Note 4.1: The function f is called the probability density function (p.d.f.) of the random variable X.

Note 4.2: A function f is a probability density function if it satisfies the conditions
1. f(x) ≥ 0 for every x ∈ R.
2. ∫_{−∞}^{∞} f(x) dx = 1.
Example 4.1
If f(x) = (3/4)(1 − x²) for −1 ≤ x ≤ 1, and 0 otherwise, is f(x) a probability density function?
Solution:
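The two pdf conditions of Note 4.2 can be checked numerically. The sketch below (not part of the original slides; the midpoint-rule integrator is an illustrative choice) verifies them for this density in pure Python:

```python
def f(x):
    # Density from Example 4.1: f(x) = (3/4)(1 - x^2) on [-1, 1], 0 otherwise.
    return 0.75 * (1.0 - x * x) if -1.0 <= x <= 1.0 else 0.0

def integrate(func, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of func over [a, b].
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

# Condition 1: f(x) >= 0 on a grid of points; Condition 2: total mass 1.
assert all(f(-2.0 + 4.0 * i / 1000) >= 0.0 for i in range(1001))
print(round(integrate(f, -1.0, 1.0), 6))  # 1.0
```

The exact integral is (3/4)[x − x³/3] evaluated from −1 to 1, which equals 1, so f is indeed a probability density function.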
Note 4.3: P{a ≤ X ≤ b} = ∫_a^b f(x) dx.
Example 4.2
Using the probability density function in Example 4.1, find P{0 ≤ X ≤ 1/2} and P{0 ≤ X ≤ 3}.
Solution:
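For comparison with the hand computation, both probabilities can be approximated by integrating the density numerically (a Python sketch under the same midpoint-rule assumption as before, not part of the slides):

```python
def f(x):
    # Density from Example 4.1.
    return 0.75 * (1.0 - x * x) if -1.0 <= x <= 1.0 else 0.0

def prob(a, b, n=100_000):
    # P{a <= X <= b} as a midpoint-rule integral of the density (Note 4.3).
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(round(prob(0.0, 0.5), 5))  # 0.34375, i.e. 11/32
print(round(prob(0.0, 3.0), 5))  # 0.5, since f vanishes outside [-1, 1]
```

Analytically, P{0 ≤ X ≤ 1/2} = (3/4)(x − x³/3) evaluated from 0 to 1/2 = 11/32, and P{0 ≤ X ≤ 3} = P{0 ≤ X ≤ 1} = 1/2.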
Note 4.4: The probability that a continuous random variable will assume any fixed value is zero, that is,

P{X = a} = ∫_a^a f(x) dx = 0.
Example 4.3
Using the probability density function in Example 4.1, find P{X = 1/2}.
Solution:
Definition
Given a continuous random variable X with probability density function f(x) for every x ∈ R, the cumulative distribution function of X is defined to be

F(x) = P{X ≤ x} = ∫_{−∞}^{x} f(t) dt.
Example 4.3 (Cont.)
Note 4.5: The probability density function of a continuous random variable X can be found by differentiating the cumulative distribution function:

f(x) = (d/dx) F(x).

That is,

f(x) = lim_{δ→0} [F(x + δ) − F(x)] / δ = slope of F(x) at the point x.
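This limit can be illustrated numerically. The sketch below (illustrative, not from the slides) uses the closed-form CDF obtained by integrating Example 4.1's density, then recovers the density at a point with a small difference quotient:

```python
def F(x):
    # CDF of Example 4.1's density, obtained by integrating (3/4)(1 - t^2).
    if x <= -1.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 0.75 * (x - x ** 3 / 3.0) + 0.5

def density_from_cdf(x, delta=1e-6):
    # Note 4.5: f(x) is approximately (F(x + delta) - F(x)) / delta for small delta.
    return (F(x + delta) - F(x)) / delta

print(round(density_from_cdf(0.3), 4))  # close to f(0.3) = 0.75 * (1 - 0.09) = 0.6825
```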
Example 4.3 (Cont.)
Note 4.6: Intuitive interpretation of the density function f(a)
The density f(a) is a measure of how likely the random variable X is to be close to "a".
Note 4.7: For any real values a < b, we have
P{a ≤ X ≤ b} = F (b) − F (a).
Expected value of a continuous random variable X
Given a continuous random variable X with probability density function f(x), the Expected Value (or Mean) of X is defined to be

E(X) = ∫_{−∞}^{∞} x f(x) dx.
Example 4.3 (Cont.): Find E(X).
Solution:
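As a numerical check (a Python sketch under the same midpoint-rule assumption, not part of the slides), E(X) for Example 4.1's density can be computed directly from the definition:

```python
def f(x):
    # Density from Example 4.1 (symmetric about 0).
    return 0.75 * (1.0 - x * x) if -1.0 <= x <= 1.0 else 0.0

def expectation(n=100_000):
    # E(X) = integral of x * f(x) over [-1, 1], midpoint rule.
    a, b = -1.0, 1.0
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n)) * h

print(round(abs(expectation()), 6))  # 0.0: the density is symmetric about 0
```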
Proposition 4.1
Given a continuous random variable X with probability density function f(x) and any real-valued function g, then

E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.
Proof:
Example 4.4
Using the probability density function in Example 4.1, and letting g(X) = X², find E[g(X)].
Solution:
Corollary 4.1
Given a continuous random variable X with probability density function f (x), then
E[aX + b] = aE(X ) + b,
where a and b are constants.
Proof:
Example 4.5
Using the probability density function in Example 4.1, and letting g(X) = 5X + 8, find E[g(X)].
Solution:
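The linearity claim E[aX + b] = aE(X) + b can be checked numerically for this example (an illustrative Python sketch, reusing Proposition 4.1's formula with a midpoint-rule integral):

```python
def f(x):
    # Density from Example 4.1.
    return 0.75 * (1.0 - x * x) if -1.0 <= x <= 1.0 else 0.0

def expect(g, n=100_000):
    # E[g(X)] = integral of g(x) * f(x) over [-1, 1] (Proposition 4.1).
    h = 2.0 / n
    return sum(g(-1.0 + (i + 0.5) * h) * f(-1.0 + (i + 0.5) * h) for i in range(n)) * h

e_x = expect(lambda x: x)
e_g = expect(lambda x: 5.0 * x + 8.0)
print(round(e_g, 6), round(5.0 * e_x + 8.0, 6))  # both 8.0, as Corollary 4.1 predicts
```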
Variance of a continuous random variable X
Given a continuous random variable X with probability density function f(x) and mean µ, the Variance of X is defined to be

Var(X) = E[(X − µ)²].

Proposition 4.2
Given a continuous random variable X with probability density function f(x) and mean µ, then

Var(X) = E(X²) − µ².

Proof:

Corollary 4.2
Given a continuous random variable X with probability density function f(x), then

Var(aX + b) = a² Var(X),

where a and b are constants.
Proof:
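Both results can be verified numerically for Example 4.1's density (a Python sketch under the midpoint-rule assumption; the constants a = 3, b = 7 are arbitrary choices for illustration):

```python
def f(x):
    # Density from Example 4.1.
    return 0.75 * (1.0 - x * x) if -1.0 <= x <= 1.0 else 0.0

def expect(g, n=100_000):
    # E[g(X)] via a midpoint-rule integral of g(x) * f(x) over [-1, 1].
    h = 2.0 / n
    return sum(g(-1.0 + (i + 0.5) * h) * f(-1.0 + (i + 0.5) * h) for i in range(n)) * h

mu = expect(lambda x: x)
var = expect(lambda x: x * x) - mu ** 2                  # Proposition 4.2
var_lin = expect(lambda x: (3 * x + 7 - (3 * mu + 7)) ** 2)  # Var(3X + 7)
print(round(var, 6), round(var_lin / var, 2))  # 0.2 and 9.0 = 3^2 (Corollary 4.2)
```

Here E(X²) = (3/4)∫(x² − x⁴) dx over [−1, 1] = 1/5, and µ = 0, so Var(X) = 0.2.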
Continuous Random Variables and Distributions ⇒ The Uniform Random Variable

♦ The Uniform Random Variable
Definition
A random variable X is said to be a uniform random variable on the interval (α, β) if and only if the probability density function of X is given by

f(x) = { 1/(β − α),  if α < x < β
       { 0,          otherwise,

and denoted by X ∼ U(α, β).
Proposition 4.3
If X ∼ U(α, β), then the cumulative distribution function of X is

F(x) = { 0,                 x ≤ α
       { (x − α)/(β − α),  α < x < β
       { 1,                 x ≥ β.

Proof:
Corollary 4.3
If X ∼ U(α, β), then

E(X) = (α + β)/2   and   Var(X) = (β − α)²/12.

Proof:
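A quick Monte Carlo check of the corollary (an illustrative Python sketch; the interval (2, 6) and the seed are arbitrary choices, and the tolerances are loose because this is simulation, not proof):

```python
import random

# For U(2, 6), Corollary 4.3 gives E(X) = 4 and Var(X) = 16/12 ≈ 1.333.
random.seed(0)
alpha, beta = 2.0, 6.0
xs = [random.uniform(alpha, beta) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 2), round(var, 2))  # near 4.0 and 1.33
```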
Note 4.7: If X ∼ U(0, 1), then X has probability density function

f(x) = { 1,  0 < x < 1
       { 0,  otherwise.

Note 4.8: If X ∼ U(0, 1), then the cumulative distribution function of X is

F(x) = { 0,  x ≤ 0
       { x,  0 < x < 1
       { 1,  x ≥ 1.

Note 4.9: If X ∼ U(0, 1), then

E(X) = 1/2   and   Var(X) = 1/12.
Continuous Random Variables and Distributions ⇒ The Normal Random Variable

♦ The Normal Random Variable
Definition
A random variable X is said to be a normal random variable with parameters µ and σ² if and only if the probability density function of X is given by

f(x) = (1/(σ√(2π))) exp{−(1/2)((x − µ)/σ)²},   −∞ < x, µ < ∞, σ > 0,

and denoted by X ∼ N(µ, σ²).
We often refer to the normal distribution as the Gaussian distribution.

Note 4.10: The normal probability density function has a bell-shaped curve, symmetric about µ.

Note 4.11: In particular, many random phenomena follow a normal probability distribution, at least approximately.
Note 4.12: If X ∼ N(µ, σ²) with probability density function f(x), then

∫_{−∞}^{∞} f(x) dx = 1.

Proof:

Corollary 4.4
If X ∼ N(µ, σ²), then

E(X) = µ   and   Var(X) = σ².
Proof:
Proposition 4.4
If X ∼ N(µ, σ²), then Y = αX + β ∼ N(αµ + β, α²σ²), where α > 0, −∞ < β < ∞.
Proof:
Note 4.13: If X ∼ N(µ, σ²), then Z = (X − µ)/σ is said to have a standard normal distribution, denoted by Z ∼ N(0, 1).
Proof:

Note 4.14: If Z ∼ N(0, 1), then we traditionally denote its probability density function by φ(z) and its cumulative distribution function by Φ(z), that is,

Φ(z) = P{Z ≤ z} = ∫_{−∞}^{z} φ(x) dx.
Note 4.15: Table 5.1 (page 201) presents values of Φ(z) for
non-negative z.
Example 4.5
Find P{Z ≤ 1.96} from Table 5.1.
Example 4.6
Find a such that P{Z ≤ a} = 0.95 from Table 5.1.
Solution:
Note 4.16: For any x ∈ R, we have
Φ(−x) = 1 − Φ(x).
Proof:
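Instead of Table 5.1, Φ can be evaluated in Python via the standard identity Φ(x) = (1 + erf(x/√2))/2 (an illustrative sketch, not part of the slides), which also confirms the symmetry Φ(−x) = 1 − Φ(x):

```python
import math

def Phi(x):
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x / sqrt(2))) / 2.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(round(Phi(1.96), 4))  # 0.975, matching the table lookup in Example 4.5
print(round(Phi(-2.55), 4))  # equals 1 - Phi(2.55), by Note 4.16
assert abs(Phi(-2.55) - (1.0 - Phi(2.55))) < 1e-12
```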
Example 4.7
Find P{Z ≤ −2.55} from Table 5.1.
Note 4.17: If X ∼ N(µ, σ²), then the cumulative distribution function of X can be written as

F(x) = Φ((x − µ)/σ).
Proof:
Example 4.8
If X ∼ N(5, 4), find P{4 ≤ X ≤ 7} from Table 5.1.
Solution:
Example 4.9
If X ∼ N(2, 16), find P{|X − 1| < 3} from Table 5.1.
Solution:
The Normal Approximation to the Binomial Distribution (The DeMoivre-Laplace Limit Theorem)
If Sn denotes the number of successes that occur when n independent trials, each resulting in a success with probability p, are performed, then for any a < b,

P{ a ≤ (Sn − np)/√(np(1 − p)) ≤ b } −→ Φ(b) − Φ(a)   as n → ∞.

Note 4.18: Since Sn ∼ Bin(n, p), we call (Sn − np)/√(np(1 − p)) the "standardized" Bin(n, p) random variable, i.e., its mean is 0 and its variance is 1.
Note 4.19: There are two possible approximations to the Binomial distribution.
(1) Poisson approximation – good when n is large and np is moderate.
(2) Normal approximation – good when np(1 − p) ≥ 10.
Example 4.10
Given X ∼ Bin(100, 1/2), find P(48 ≤ X ≤ 52).
Solution:
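The quality of the approximation can be checked by comparing it with the exact binomial sum (a Python sketch, not part of the slides; the half-unit continuity correction is the standard adjustment when approximating a discrete distribution by a continuous one):

```python
import math

def Phi(x):
    # Standard normal CDF via math.erf.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p = 100, 0.5
# Exact binomial probability P(48 <= X <= 52).
exact = sum(math.comb(n, k) for k in range(48, 53)) / 2 ** n
# DeMoivre-Laplace approximation with a continuity correction.
mu, sd = n * p, math.sqrt(n * p * (1 - p))
approx = Phi((52.5 - mu) / sd) - Phi((47.5 - mu) / sd)
print(round(exact, 4), round(approx, 4))  # both near 0.38
```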
Continuous Random Variables and Distributions ⇒ The Exponential Random Variable

♦ The Exponential Random Variable
Definition
A random variable X is said to be an exponential random variable with parameter λ > 0 if and only if the probability density function of X is given by

f(x) = { λe^{−λx},  if x ≥ 0
       { 0,         otherwise,

and denoted by X ∼ Exp(λ).
Note 4.20: In practice, the exponential random variable is often
used to model the length of time until an event occurs.
Proposition 4.5
If X ∼ Exp(λ), then the cumulative distribution function of X is

F(x) = { 0,            x < 0
       { 1 − e^{−λx},  x ≥ 0.

Proof:

Corollary 4.5
If X ∼ Exp(λ), then

E(X) = 1/λ   and   Var(X) = 1/λ².

Proof:
Definition
A nonnegative random variable X is said to be memoryless if
P{X > s + t|X > t} = P{X > s}, for all s, t ≥ 0.
Note 4.21: X ∼ Exp(λ ) is a memoryless random variable, and is
the only continuous memoryless random variable.
Proof:
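The memoryless identity follows from the survival function P{X > x} = e^{−λx}, since P{X > s + t}/P{X > t} = e^{−λ(s+t)}/e^{−λt} = e^{−λs}. A small Python check (illustrative; λ = 1.5 and the values of s, t are arbitrary choices):

```python
import math

def survival(x, lam=1.5):
    # P{X > x} for X ~ Exp(lam): 1 - F(x) = e^(-lam * x) for x >= 0.
    return math.exp(-lam * x)

s, t = 0.7, 2.3
lhs = survival(s + t) / survival(t)  # P{X > s + t | X > t}
rhs = survival(s)                    # P{X > s}
print(abs(lhs - rhs) < 1e-12)  # True: the exponential is memoryless
```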
Continuous Random Variables and Distributions ⇒ The Double Exponential Random Variable

♦ The Double Exponential Random Variable
Definition
A random variable X is said to have a double exponential (Laplace) distribution with parameter λ > 0 if and only if the probability density function of X is given by

f(x) = (1/2) λ e^{−λ|x|},   −∞ < x < ∞,

and denoted by X ∼ Laplace(λ).

Proposition 4.6
If X ∼ Laplace(λ), then the cumulative distribution function of X is

F(x) = { (1/2) e^{λx},       x ≤ 0
       { 1 − (1/2) e^{−λx},  x > 0.
Proof:
Corollary 4.6
If X ∼ Laplace(λ), then

E(X) = 0   and   Var(X) = 2/λ².
Proof:
Continuous Random Variables and Distributions
⇒
The Gamma Random Variable
♦ The Gamma Random Variable
Definition
A random variable X is said to have a gamma distribution with parameters α > 0 and λ > 0 if and only if the probability density function of X is given by

f(x) = { (λ^α / Γ(α)) x^{α−1} e^{−λx},  if x ≥ 0
       { 0,                              otherwise,

and denoted by X ∼ Gamma(α, λ), where Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt is called the gamma function.

Note 4.22: For an integer value of n, we have

Γ(n) = (n − 1)!.

Proof:
Note 4.23: When α = 1, the Gamma(α, λ) distribution is the Exp(λ) distribution.

Note 4.24: In practice, the Gamma(α, λ) distribution with integer α > 0 often arises as the distribution of the amount of time until a total of α events have occurred.

Note 4.25: If α = n/2 and λ = 1/2 (n > 0 an integer), the Gamma(α, λ) distribution is called the chi-square distribution with n degrees of freedom.
Corollary 4.7
If X ∼ Gamma(α, λ), then

E(X) = α/λ   and   Var(X) = α/λ².
Proof:
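The moment formulas can be verified numerically for particular parameter values (a Python sketch; α = 3, λ = 2 and the truncation point 40 are illustrative choices, the tail mass beyond 40 being negligible):

```python
import math

def gamma_pdf(x, a=3.0, lam=2.0):
    # Gamma(a, lam) density: lam^a / Gamma(a) * x^(a-1) * e^(-lam x) for x >= 0.
    return lam ** a / math.gamma(a) * x ** (a - 1) * math.exp(-lam * x) if x >= 0 else 0.0

def moment(k, upper=40.0, n=200_000):
    # Midpoint-rule approximation of E(X^k), truncating the (tiny) upper tail.
    h = upper / n
    return sum(((i + 0.5) * h) ** k * gamma_pdf((i + 0.5) * h) for i in range(n)) * h

mean = moment(1)
var = moment(2) - mean ** 2
print(round(mean, 4), round(var, 4))  # a/lam = 1.5 and a/lam^2 = 0.75
```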
Continuous Random Variables and Distributions
⇒
The Beta Random Variable
♦ The Beta Random Variable
Definition
A random variable X is said to have a beta distribution with parameters α > 0 and β > 0 if and only if the probability density function of X is given by

f(x) = { (Γ(α + β) / (Γ(α)Γ(β))) x^{α−1} (1 − x)^{β−1},  if 0 < x < 1
       { 0,                                                otherwise,

and denoted by X ∼ Beta(α, β).

Note 4.26: When α = 1 = β, the Beta(α, β) random variable is the same as the U(0, 1) random variable.

Note 4.27: When α = β, the beta distribution is symmetric about x = 0.5.
Corollary 4.8
If X ∼ Beta(α, β), then

E(X) = α/(α + β)   and   Var(X) = αβ / [(α + β)²(α + β + 1)].
Proof:
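As with the gamma case, the formulas can be checked numerically for a particular pair of parameters (a Python sketch; α = 2, β = 3 is an illustrative choice, for which E(X) = 2/5 and Var(X) = 6/(25·6) = 0.04):

```python
import math

def beta_pdf(x, a=2.0, b=3.0):
    # Beta(a, b) density on (0, 1); the constant is Gamma(a+b) / (Gamma(a) Gamma(b)).
    if not 0.0 < x < 1.0:
        return 0.0
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * x ** (a - 1) * (1.0 - x) ** (b - 1)

def moment(k, n=200_000):
    # Midpoint-rule approximation of E(X^k) on (0, 1).
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** k * beta_pdf((i + 0.5) * h) for i in range(n)) * h

mean = moment(1)
var = moment(2) - mean ** 2
print(round(mean, 4), round(var, 4))  # alpha/(alpha+beta) = 0.4 and 0.04
```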
Continuous Random Variables and Distributions
⇒
Transformation
♦ Transformation
Example 4.11
If X ∼ U(0, 1), find the probability density function of Y = 2X + 1.
Solution:
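By the change-of-variables formula, f_Y(y) = f_X((y − 1)/2) · (1/2) = 1/2 on (1, 3), so Y ∼ U(1, 3). A Monte Carlo sketch supporting this (illustrative; the seed and sample size are arbitrary):

```python
import random

# If X ~ U(0, 1), then Y = 2X + 1 should be U(1, 3), i.e. f_Y(y) = 1/2 on (1, 3).
random.seed(1)
ys = [2.0 * random.random() + 1.0 for _ in range(100_000)]
assert all(1.0 <= y <= 3.0 for y in ys)       # support is (1, 3)
mean = sum(ys) / len(ys)
frac_low = sum(1 for y in ys if y < 2.0) / len(ys)  # P{Y < 2} should be about 1/2
print(round(mean, 1), round(frac_low, 1))  # near 2.0 and 0.5
```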
Theorem 4.1
Let X be a continuous random variable with probability density function f_X(x). Suppose that g(x) is a strictly monotonic (increasing or decreasing), differentiable function of x. Then the random variable Y = g(X) has a probability density function given by

f_Y(y) = { f_X(g⁻¹(y)) · |(d/dy) g⁻¹(y)|,  if y = g(x) for some x
         { 0,                               if y ≠ g(x) for all x,

where g⁻¹(y) is defined to equal that value of x such that g(x) = y.
Proof:
Example 4.12
Prove Proposition 4.1 using Theorem 4.1.