TMS-062: Lecture 3. Part 1. Continuous Random Variables
Sergei Zuyev

Outline
Continuous Random Variables
Expectation
Important Continuous Distributions
Random variables
Recall, a random variable X is a function X : Ω → R on a sample space Ω.*
X is a discrete r.v. if it takes values in a discrete (finite or countable) set, and it is continuous if it takes values in an uncountable set, e.g. height, weight, length, time, etc. We must find a way to describe how the unit of probability is distributed over this infinite set.
Formally, X is continuous if its c.d.f. FX(t) = P{X ≤ t} is a continuous function.
* There are subtleties for uncountable Ω, however!

Distribution of a continuous r.v.
Since FX(t) is now continuous, for any interval I of the form [a, b], [a, b), (a, b] or (a, b) we have that
P{X ∈ I} = FX(b) − FX(a).
If, in addition, FX has a derivative fX(t) = FX′(t) for almost all t, such that
FX(b) − FX(a) = ∫_a^b fX(t) dt,
then fX(t) is called the probability density function (p.d.f.). Such r.v.'s are called continuous (more exactly, absolutely continuous). We then have
fX(t) ≥ 0 and ∫_{−∞}^{+∞} fX(t) dt = P{X ∈ R} = 1.
Since P(X ∈ (t − dt/2, t + dt/2)) ≈ fX(t) dt for small dt, the p.d.f. at each point t shows how densely the probability is packed there, much as the individual particles making up a stick weigh nothing, while the stick itself has a weight equal to the integral of the particles' density.

Discrete vs. Continuous R.V.'s
When FX(t) is continuous, P(X = t) = FX(t) − FX(t−) = 0 for any t ∈ R. Thus no probability is associated with specific values of X when X is continuous! In contrast, in the discrete case all probabilities are associated with specific values of X.
Example: a discrete r.v. X with Ω = {0, 1, 2, 3, 4}:
P(X ≤ 2) = p0 + p1 + p2, but P(X < 2) = p0 + p1.
For an (absolutely) continuous X with Ω = (0, 4) we have
P(X ≤ 2) = P(X < 2) = FX(2) = ∫_0^2 fX(t) dt
if a p.d.f. fX(t) exists.
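To make the contrast concrete, here is a small Python sketch (not part of the original slides) that estimates these probabilities by simulation, assuming equal weights p0 = … = p4 = 0.2 in the discrete case and the constant density fX(t) = 1/4 on (0, 4) in the continuous case.

# A minimal sketch: P(X <= 2) vs P(X < 2) for a hypothetical discrete r.v.
# uniform on {0,1,2,3,4} and for a continuous Unif(0,4) r.v.
import random

random.seed(1)
n = 100_000

# Discrete case: P(X <= 2) = p0 + p1 + p2 = 0.6, but P(X < 2) = p0 + p1 = 0.4
xs = [random.randrange(5) for _ in range(n)]
print(sum(x <= 2 for x in xs) / n)   # ~0.6
print(sum(x < 2 for x in xs) / n)    # ~0.4

# Continuous case: P(X <= 2) = P(X < 2) = F_X(2) = 2/4 = 0.5
ys = [random.uniform(0, 4) for _ in range(n)]
print(sum(y <= 2 for y in ys) / n)   # ~0.5
print(sum(y < 2 for y in ys) / n)    # ~0.5, the same: P(X = 2) = 0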

Density and Histogram
A histogram of relative frequencies shows how often values from each bin occur. But fX(t) also shows the relative frequency with which values around each t occur. So when the bin size diminishes and the number of realisations of X grows in such a way that the number of observations in each bin also grows, the histogram resembles the density function more and more closely.
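The following Python sketch illustrates this for an illustrative choice of distribution (Exp(1), introduced formally later in the lecture): the relative frequency in each bin, divided by the bin width, approaches the density fX(t) = e^{−t}.

# A minimal sketch (assumption: X ~ Exp(1), so f_X(t) = e^{-t}): compare the
# relative frequency per bin, divided by the bin width, with the density.
import math
import random

random.seed(2)
n, width = 200_000, 0.25                  # many realisations, small bins
xs = [random.expovariate(1.0) for _ in range(n)]

for k in range(8):                        # bins [0, 0.25), [0.25, 0.5), ...
    a, b = k * width, (k + 1) * width
    freq = sum(a <= x < b for x in xs) / n
    mid = (a + b) / 2
    print(f"bin [{a:.2f},{b:.2f}): hist={freq / width:.3f}  f_X(mid)={math.exp(-mid):.3f}")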
Expected value of a R.V.
The expected value (or expectation, or the average, or the mean value) of a r.v. X is
E X = Σ_k xk P(X = xk)              (discrete)
E X = ∫_{−∞}^{+∞} t fX(t) dt        (abs. continuous)
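As a quick numerical illustration (not from the slides), the Riemann sum below approximates E X = ∫ t fX(t) dt for a hypothetical density fX(t) = 2t on (0, 1), for which E X = 2/3.

# A minimal sketch: approximating E X by a Riemann sum for the
# hypothetical density f_X(t) = 2t on (0, 1).
def f(t):
    return 2 * t if 0 < t < 1 else 0.0

n = 100_000
dt = 1.0 / n
ex = sum((i * dt) * f(i * dt) * dt for i in range(n))
print(ex)   # ~0.6667 = 2/3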
For any function g(X) one has similarly
E g(X) = Σ_k g(xk) P(X = xk)              (discrete)
E g(X) = ∫_{−∞}^{+∞} g(t) fX(t) dt        (abs. continuous)
In particular, for a r.v. X with p.d.f. fX(t) we have
E X = ∫_{−∞}^{+∞} t fX(t) dt;    E X² = ∫_{−∞}^{+∞} t² fX(t) dt.

Variance
The variance of a continuous r.v. X is defined the same way:
σ² = var X = E(X − E X)² = E X² − (E X)²
and the standard deviation is σ = √(var X).
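Continuing the sketch above for the same hypothetical density fX(t) = 2t on (0, 1): approximating E g(X) = ∫ g(t) fX(t) dt for g(t) = t and g(t) = t² gives E X ≈ 2/3, E X² ≈ 1/2 and var X = E X² − (E X)² ≈ 1/18.

# A minimal sketch: E g(X), E X^2 and var X for the hypothetical
# density f_X(t) = 2t on (0, 1), by Riemann sums.
def f(t):
    return 2 * t if 0 < t < 1 else 0.0

def expect(g, n=100_000):
    dt = 1.0 / n
    return sum(g(i * dt) * f(i * dt) * dt for i in range(n))

ex = expect(lambda t: t)        # ~2/3
ex2 = expect(lambda t: t * t)   # ~1/2
print(ex2 - ex * ex)            # ~0.0556 = 1/18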

Example: A machine is overhauled. Suppose the time until breakdown T (in hours) has p.d.f.
fT(t) = (1/100) e^{−t/100},   t > 0.
Find the probabilities that the machine
(a) breaks down within 10 hours;
(b) runs for at least 50 hours.
What are the c.d.f. and E T, σ?

(a) P(T ≤ 10) = ∫_0^10 (1/100) e^{−t/100} dt = [−e^{−t/100}]_0^10 = 1 − e^{−10/100} ≈ 0.095
(b) P(T > 50) = 1 − P(T ≤ 50) = … = e^{−50/100} ≈ 0.607
The c.d.f. is FT(t) = 1 − e^{−t/100} for t > 0 (and 0 otherwise).
E T = ∫_0^∞ (t/100) e^{−t/100} dt = (integration by parts) = 100;
E T² = ∫_0^∞ (t²/100) e^{−t/100} dt = (integration by parts) = 20 000;
σ = √(20 000 − 100²) = 100.
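A quick check of the numbers in this example, using the lecture's rate λ = 1/100: exact values from the c.d.f., plus a Monte Carlo cross-check (a Python sketch, not the lecture's own code).

# Checking the worked example: T ~ Exp(1/100), i.e. mean 100 hours.
import math
import random

lam = 1 / 100
print(1 - math.exp(-lam * 10))    # P(T <= 10) ≈ 0.095
print(math.exp(-lam * 50))        # P(T > 50)  ≈ 0.607

random.seed(3)
ts = [random.expovariate(lam) for _ in range(100_000)]
print(sum(t <= 10 for t in ts) / len(ts))   # ≈ 0.095
print(sum(t > 50 for t in ts) / len(ts))    # ≈ 0.607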

Uniform distribution
X has the Uniform Unif(a, b) distribution on [a, b] if X falls with the same probability in any subset (c, d) ⊂ (a, b) of a given length d − c. This implies that the p.d.f. is constant:
fX(t) = 1/(b − a) for t ∈ (a, b), and 0 otherwise,
thus
P(X ∈ (c, d)) = (d − c)/(b − a),
E X = (a + b)/2,
var X = (b − a)²/12.
The c.d.f. is
FX(t) = 0 for t < a;   (t − a)/(b − a) for t ∈ [a, b];   1 for t > b.
[Figure: graph of FX(t), increasing linearly from 0 at t = a to 1 at t = b.]
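A brief simulation check of these formulas for an illustrative choice a = 0, b = 4 (a Python sketch; the parameter values are not from the slides):

# Checking P(X in (c, d)) = (d - c)/(b - a), E X = (a + b)/2,
# var X = (b - a)^2/12 for Unif(0, 4).
import random

a, b = 0.0, 4.0
random.seed(4)
xs = [random.uniform(a, b) for _ in range(200_000)]

c, d = 1.0, 2.5
print(sum(c < x < d for x in xs) / len(xs))       # ≈ (d - c)/(b - a) = 0.375
m = sum(xs) / len(xs)
print(m)                                          # ≈ (a + b)/2 = 2
print(sum((x - m) ** 2 for x in xs) / len(xs))    # ≈ (b - a)^2/12 ≈ 1.333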

Exponential distribution
A random variable T has the Exponential Exp(λ) distribution with parameter (or rate) λ if
FT(t) = 1 − e^{−λt},   fT(t) = λ e^{−λt}
for t ≥ 0, and 0 otherwise. Then
E T = 1/λ   and   var T = 1/λ².
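As a numerical sanity check (a sketch with an illustrative rate λ = 0.5, not from the slides), truncated Riemann sums for E T and E T² reproduce 1/λ and var T = 1/λ².

# Riemann sums for E T and E T^2 of Exp(λ), truncated at T = 60.
import math

lam = 0.5
dt, T = 0.001, 60.0
ts = [i * dt for i in range(int(T / dt))]
et = sum(t * lam * math.exp(-lam * t) * dt for t in ts)
et2 = sum(t * t * lam * math.exp(-lam * t) * dt for t in ts)
print(et, 1 / lam)                    # both ≈ 2.0
print(et2 - et ** 2, 1 / lam ** 2)    # both ≈ 4.0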
Normal distribution
One of the most important distributions is the Normal: N(m, σ²) has the bell-shaped density
fX(t) = (1/(√(2π) σ)) exp{−(t − m)²/(2σ²)},   t ∈ (−∞, +∞),
with
E X = m   and   var X = σ².
A Normal r.v. Z with parameters m = 0 and σ = 1 is called standard normal; it has density fZ(t) = (1/√(2π)) e^{−t²/2}.
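A small simulation sketch (with illustrative parameters m = 1.5, σ = 2, not from the slides) confirming E X = m and var X = σ²:

# Simulating X ~ N(m, σ²) and checking its mean and variance.
import random

m, sigma = 1.5, 2.0
random.seed(5)
xs = [random.gauss(m, sigma) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)   # ≈ 1.5 and ≈ 4.0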

Important property: the Normal distribution is stable and is preserved by linear transformations: if X1 ∼ N(m1, σ1²) and X2 ∼ N(m2, σ2²) are independent, then
X1 + X2 ∼ N(m1 + m2, σ1² + σ2²),
and if X ∼ N(m, σ²), then
aX + b ∼ N(am + b, a²σ²)   for any constants a, b.

There is no explicit expression for integrals of fX(t), so computer codes are used. The c.d.f. Φ(t) = P(Z < t) of the standard normal distribution Z ∼ N(0, 1) is one of the special functions, called the Laplace function.
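For instance, in Python (a sketch, not the lecture's Matlab code) Φ can be expressed through the error function math.erf, since Φ(t) = (1 + erf(t/√2))/2:

# Standard normal c.d.f. Φ(t) = P(Z < t) via the error function.
import math

def Phi(t: float) -> float:
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

print(Phi(0.0))               # 0.5
print(Phi(1.96))              # ≈ 0.975
print(Phi(-1.0) + Phi(1.0))   # ≈ 1.0, reflecting the symmetry Φ(t) = 1 − Φ(−t)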
Standardisation
If X ∼ N(m, σ²) then, by the above, Z = (X − m)/σ has the N(0, 1) distribution. Therefore
P(X < t) = P((X − m)/σ < (t − m)/σ) = P(Z < (t − m)/σ) = Φ((t − m)/σ),
and the latter can be computed in software, e.g. Matlab.
Note that the symmetry of the p.d.f. fZ(t) = e^{−t²/2}/√(2π) implies that Φ(t) = 1 − Φ(−t).

Example: Assuming that the weight of screws is normally distributed with mean 2.10 gm and standard deviation 0.15 gm, find the proportion of screws weighing more than 2.55 gm.
Weight X ∼ N(2.10, 0.15²); denote Z = (X − 2.10)/0.15. Then
P(X > 2.55) = P(Z > (2.55 − 2.10)/0.15) = P(Z > 3) = 1 − P(Z ≤ 3) ≈ 1 − 0.9987 = 0.0013.
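The same answer can be reproduced numerically; the sketch below evaluates 1 − Φ(3) with Φ expressed via math.erf (a Python illustration, not the lecture's Matlab code).

# Checking the screw example: P(X > 2.55) for X ~ N(2.10, 0.15²).
import math

def Phi(t: float) -> float:
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

z = (2.55 - 2.10) / 0.15   # standardisation: z = 3
print(1 - Phi(z))          # ≈ 0.00135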
The rule of sigmas
Let us compute the probability that X ∼ N(m, σ²) deviates from its mean m by more than twice its standard deviation σ. By the standardisation procedure and the symmetry of the normal p.d.f.,
P(|X − m| > 2σ) = P(X < m − 2σ) + P(X > m + 2σ) = P(Z < −2) + P(Z > 2) = 2Φ(−2) ≈ 0.0455,
i.e. less than 5%. For 3 sigmas, we already have
P(|X − m| > 3σ) = 2Φ(−3) ≈ 0.0027.
Thus, although a normal r.v. may take any real value, with probability at least 95% it does not deviate from its mean by more than 2 sigmas, and with probability at least 99.7% it is within 3 sigmas.
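A sketch checking the rule of sigmas both exactly (via the erf-based Φ) and by simulating a standard normal sample (m = 0, σ = 1 chosen for illustration):

# Exact tail probabilities via Φ, plus a Monte Carlo cross-check.
import math
import random

def Phi(t: float) -> float:
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

print(2 * Phi(-2))   # P(|X - m| > 2σ) ≈ 0.0455
print(2 * Phi(-3))   # P(|X - m| > 3σ) ≈ 0.0027

random.seed(6)
xs = [random.gauss(0, 1) for _ in range(200_000)]
print(sum(abs(x) > 2 for x in xs) / len(xs))   # ≈ 0.0455
print(sum(abs(x) > 3 for x in xs) / len(xs))   # ≈ 0.0027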