Name:
ID:
Homework for 1/8
1. [§4-6] Let X be a continuous random variable with probability density
function f(x) = 2x, 0 ≤ x ≤ 1.
(a) Find E[X].
(b) Find E[X²].
(c) Find Var[X].
(a) We have
E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_0^1 x · 2x dx = [2x³/3]_0^1 = 2/3.
(b) We have
E[X²] = ∫_{−∞}^{∞} x² f(x) dx = ∫_0^1 x² · 2x dx = [x⁴/2]_0^1 = 1/2.
(c) We have
Var[X] = E[X²] − E[X]² = 1/2 − (2/3)² = 1/18.
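These three moments can be sanity-checked by simulation. The sketch below is an illustration only (the seed and sample size are arbitrary choices); it samples from f(x) = 2x by inverting the CDF F(x) = x², which gives X = √U for U uniform on [0, 1].

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility (arbitrary choice)

# f(x) = 2x on [0, 1] has CDF F(x) = x^2, so X = sqrt(U) with U ~ Uniform(0, 1)
n = 200_000
samples = [math.sqrt(random.random()) for _ in range(n)]

mean = sum(samples) / n                    # should be close to E[X] = 2/3
second = sum(x * x for x in samples) / n   # should be close to E[X^2] = 1/2
var = second - mean ** 2                   # should be close to Var[X] = 1/18

print(round(mean, 3), round(second, 3), round(var, 4))
```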
2. [§4-31] Let X be uniformly distributed on the interval [1, 2]. Find E[1/X].
Is E[1/X] = 1/E[X]?
Since X is uniformly distributed on [1, 2], the pdf of X is
f(x) = 1 for 1 ≤ x ≤ 2, and f(x) = 0 otherwise.
On one hand, we have
E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_1^2 x · 1 dx = [x²/2]_1^2 = 3/2.
On the other hand, we have
E[1/X] = ∫_{−∞}^{∞} (1/x) f(x) dx = ∫_1^2 (1/x) · 1 dx = [ln x]_1^2 = ln 2 − ln 1 = ln 2.
Since ln 2 ≈ 0.693 while 1/E[X] = 2/3 ≈ 0.667, it follows immediately that E[1/X] ≠ 1/E[X].
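A quick numerical check of both expectations (midpoint-rule integration against the Uniform(1, 2) pdf; the grid size is an arbitrary choice):

```python
import math

# Midpoint-rule approximation of integrals against the Uniform(1, 2) pdf
N = 100_000
xs = [1 + (i + 0.5) / N for i in range(N)]   # midpoints of subintervals of [1, 2]

e_x = sum(xs) / N                    # E[X] = 3/2
e_inv = sum(1 / x for x in xs) / N   # E[1/X] = ln 2 ≈ 0.693

print(e_x, e_inv, 1 / e_x)           # E[1/X] differs from 1/E[X] ≈ 0.667
```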
3. [§4-42] Let X be an exponential random variable with standard deviation
σ. Find P(|X − E[X]| > kσ) for k = 2, 3, 4, and compare the results to
the bounds from Chebyshev’s inequality.
Since X is exponentially distributed and has variance σ², the pdf of X is
f(x) = (1/σ) e^{−x/σ} for x > 0, and f(x) = 0 otherwise.
It follows immediately that E[X] = σ. Thus we have
P(|X − E[X]| > kσ) = P(X > (k + 1)σ or X < (1 − k)σ)
= ∫_{(k+1)σ}^{∞} (1/σ) e^{−x/σ} dx    (for k = 2, 3, 4 we have (1 − k)σ < 0, so P(X < (1 − k)σ) = 0)
= ∫_{k+1}^{∞} e^{−y} dy    (change of variable y = x/σ)
= e^{−(k+1)}.
Therefore, we have
P(|X − E[X]| > 2σ) = e^{−3} ≈ 0.0498,
P(|X − E[X]| > 3σ) = e^{−4} ≈ 0.0183,
P(|X − E[X]| > 4σ) = e^{−5} ≈ 0.0067.
On the other hand, Chebyshev's inequality gives us
P(|X − E[X]| > kσ) ≤ Var[X]/(k²σ²) = 1/k².
In particular, we have
P(|X − E[X]| > 2σ) ≤ 1/2² = 0.25,
P(|X − E[X]| > 3σ) ≤ 1/3² ≈ 0.1111,
P(|X − E[X]| > 4σ) ≤ 1/4² = 0.0625.
In each case the exact probability is far below the Chebyshev bound.
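The comparison requested by the problem can be tabulated in a few lines (illustration only):

```python
import math

# Exact exponential tail probability e^{-(k+1)} vs. the Chebyshev bound 1/k^2
for k in (2, 3, 4):
    exact = math.exp(-(k + 1))
    bound = 1 / k ** 2
    print(k, round(exact, 4), round(bound, 4))
```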
4. [§4-80] Let X be a continuous random variable with density function
f(x) = 2x, 0 ≤ x ≤ 1. Find the moment-generating function of X, M(t),
and verify that E[X] = M′(0) and that E[X²] = M″(0).
The mgf of X is
M(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx = ∫_0^1 e^{tx} · 2x dx
= 2 [ (x/t) e^{tx} |_0^1 − (1/t) ∫_0^1 e^{tx} dx ]    (integration by parts)
= 2 ( e^t/t − (1/t²)(e^t − 1) )
= 2 ( e^t/t − e^t/t² + 1/t² ).
Note that by L'Hôpital's rule, we have
lim_{t→0} M(t) = 2 lim_{t→0} (te^t − e^t + 1)/t² = 2 lim_{t→0} (te^t + e^t − e^t)/(2t) = lim_{t→0} e^t = 1.
Furthermore, we have
M′(t) = 2 ( e^t/t − 2e^t/t² + 2e^t/t³ − 2/t³ ),
and
lim_{t→0} M′(t) = 2 lim_{t→0} (t²e^t − 2te^t + 2e^t − 2)/t³
= 2 lim_{t→0} (t²e^t + 2te^t − 2te^t − 2e^t + 2e^t)/(3t²)
= (2/3) lim_{t→0} e^t = 2/3,
and
M″(t) = 2 ( e^t/t − 3e^t/t² + 6e^t/t³ − 6e^t/t⁴ + 6/t⁴ ),
and
lim_{t→0} M″(t) = 2 lim_{t→0} (t³e^t − 3t²e^t + 6te^t − 6e^t + 6)/t⁴
= 2 lim_{t→0} (t³e^t + 3t²e^t − 3t²e^t − 6te^t + 6te^t + 6e^t − 6e^t)/(4t³)
= (1/2) lim_{t→0} e^t = 1/2.
Combining with the first problem (§4-6), we see that E[X] = M′(0) and
E[X²] = M″(0).
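One way to double-check M′(0) and M″(0) without the limit computations is to approximate M(t) by numerical integration and take finite differences; the step size h and grid size below are arbitrary choices.

```python
import math

def M(t, n=100_000):
    """Midpoint-rule approximation of M(t) = E[e^{tX}] = integral of e^{tx} * 2x over [0, 1]."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n          # midpoint of the i-th subinterval of [0, 1]
        total += math.exp(t * x) * 2 * x
    return total / n

h = 0.01                                 # finite-difference step (arbitrary choice)
d1 = (M(h) - M(-h)) / (2 * h)            # central difference ≈ M'(0) = E[X] = 2/3
d2 = (M(h) - 2 * M(0) + M(-h)) / h**2    # ≈ M''(0) = E[X^2] = 1/2

print(round(d1, 4), round(d2, 4))
```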
5. [§4-91] Use the mgf to show that if X follows an exponential distribution,
cX (c > 0) does also.
Let the pdf of X be
f(x) = λe^{−λx} for x > 0, and f(x) = 0 otherwise.
Then the mgf of X is
M_X(t) = ∫_0^∞ e^{tx} · λe^{−λx} dx = (λ/(λ − t)) ∫_0^∞ (λ − t) e^{−(λ−t)x} dx = λ/(λ − t),
for all t < λ. (The trick here is to recognize the pdf of Exp(λ − t) inside the integral.) Let
Y = cX. Then the mgf of Y is
M_Y(t) = E[e^{tY}] = E[e^{t(cX)}] = E[e^{(ct)X}] = M_X(ct) = λ/(λ − ct) = (λ/c)/((λ/c) − t),
for all t with ct < λ, that is, t < λ/c. Comparing M_Y(t) to M_X(t), we see
that Y is also exponentially distributed, with parameter λ/c.
Note that the result
M_{cX}(t) = M_X(ct)
holds in general as long as the mgfs are defined (for example, we need to
make sure ct < λ). More generally, we have
M_{aX+b}(t) = e^{bt} M_X(at).
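As a quick numerical illustration of the claim (the values λ = 2 and c = 3, the seed, and the sample size are all arbitrary choices), sampling X ∼ Exp(λ) and scaling by c should give the mean and standard deviation of Exp(λ/c), which are both c/λ:

```python
import random
import statistics

random.seed(0)         # fixed seed for reproducibility

lam, c = 2.0, 3.0      # illustrative parameter choices
n = 200_000

# X ~ Exp(lam); the claim is that c*X ~ Exp(lam / c), whose mean and
# standard deviation are both c / lam = 1.5
y = [c * random.expovariate(lam) for _ in range(n)]

print(round(statistics.mean(y), 2))    # close to 1.5
print(round(statistics.stdev(y), 2))   # close to 1.5 (exponential: sd = mean)
```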
Homework for 1/10
1. [§5-16] Suppose that X1, . . . , X20 are independent random variables with
common density function
f(x) = 2x, 0 ≤ x ≤ 1.
Let S = X1 + · · · + X20. Use the central limit theorem to approximate
P(S ≤ 10).
From Problem 1 of Homework 1/8, we have
µ = E[X] = 2/3 and σ² = Var[X] = 1/18.
Therefore, according to the central limit theorem, we have
P(S ≤ 10) = P( (S − nµ)/(σ√n) ≤ (10 − nµ)/(σ√n) )
= P( (S − 20 · (2/3))/√((1/18) · 20) ≤ (10 − 20 · (2/3))/√((1/18) · 20) )
≈ P(Z ≤ −3.16) = 0.0008,
where Z ∼ N (0, 1).
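The approximation can be reproduced numerically; here the standard normal CDF is expressed through math.erf:

```python
import math

def phi(z):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, var, n = 2 / 3, 1 / 18, 20
z = (10 - n * mu) / math.sqrt(n * var)   # = -sqrt(10) ≈ -3.16

print(round(z, 2), round(phi(z), 4))
```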
2. [§5-17] Suppose that a measurement has mean µ and variance σ 2 = 25.
Let X be the average of n such independent measurements. How large
should n be so that P(|X − µ| < 1) = .95?
By applying the central limit theorem, we have
0.95 = P(|X − µ| < 1) = P( |X − µ|/(σ/√n) < 1/(σ/√n) )
≈ P( |Z| < 1/(5/√n) ) = P( −√n/5 < Z < √n/5 ),
where Z ∼ N(0, 1). Since 0.95 = P(−1.96 < Z < 1.96), we have
√n/5 = 1.96,
which implies that
n = (5 · 1.96)² = 96.04 ≈ 96.
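The arithmetic in the last step, as a one-line check:

```python
# Solving sqrt(n) / 5 = 1.96 for the required sample size n
sigma, z95 = 5.0, 1.96   # z95 is the 97.5% standard normal quantile
n = (sigma * z95) ** 2   # ≈ 96.04

print(round(n, 2))
```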
3. [§5-18] Suppose that a company ships packages that are variable in weight,
with an average weight of 15 lb and a standard deviation of 10. Assuming
that the packages come from a large number of different customers so that
it is reasonable to model their weights as independent random variables,
find the probability that 100 packages will have a total weight exceeding
1700 lb.
Let X1, X2, . . . , X100 denote the weights of these 100 packages, and S100 =
X1 + · · · + X100 the total weight. Since µ = E[Xi] = 15 and σ² = Var[Xi] =
10² for 1 ≤ i ≤ 100, the central limit theorem gives
P(S100 > 1700) = P( (S100 − 100µ)/(σ√100) > (1700 − 100µ)/(σ√100) )
= P( (S100 − 100 · 15)/(10 · √100) > (1700 − 100 · 15)/(10 · √100) )
≈ P(Z > 2) = 0.0228,
where Z ∼ N(0, 1).
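The same z-score computation in code (standard normal CDF again via math.erf):

```python
import math

def phi(z):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma, n, cutoff = 15.0, 10.0, 100, 1700.0
z = (cutoff - n * mu) / (sigma * math.sqrt(n))   # (1700 - 1500) / 100 = 2.0
p = 1 - phi(z)                                   # upper-tail probability

print(z, round(p, 4))
```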