STAT/MA 519 Answers
Homework 4
October 29, 2008
Solutions by Mark Daniel Ward
PROBLEMS
We always let “Z” denote a standard normal random variable in the computations below. We simply use the chart from page 222 of the Ross book when working with a standard normal random variable; of course, a more accurate computation is possible using a computer.
3a. The function

    f(x) = C(2x − x^3) if 0 < x < 5/2;  f(x) = 0 otherwise

cannot be a probability density function. To see this, note 2x − x^3 = −x(x − √2)(x + √2).
If C = 0, then f(x) = 0 for all x, so ∫_{−∞}^{∞} f(x) dx = 0, but every probability density function
has ∫_{−∞}^{∞} f(x) dx = 1. If C > 0, then f(x) < 0 on the range x ∈ (√2, 5/2), but every probability
density function has f(x) ≥ 0 for all x. If C < 0, then f(x) < 0 on the range x ∈ (0, √2),
but every probability density function has f(x) ≥ 0 for all x. So no value of C will satisfy
all of the properties needed for a probability density function.
3b. The function

    f(x) = C(2x − x^2) if 0 < x < 5/2;  f(x) = 0 otherwise

cannot be a probability density function. To see this, note 2x − x^2 = −x(x − 2). If C = 0, then
f(x) = 0 for all x, so ∫_{−∞}^{∞} f(x) dx = 0, but every probability density function has ∫_{−∞}^{∞} f(x) dx = 1.
If C > 0, then f(x) < 0 on the range x ∈ (2, 5/2), but every probability density function has
f(x) ≥ 0 for all x. If C < 0, then f(x) < 0 on the range x ∈ (0, 2), but every probability
density function has f(x) ≥ 0 for all x. So no value of C will satisfy all of the properties
needed for a probability density function.
5. Let X denote the volume of sales in a week, given in thousands of gallons. We have the
density of X, and we want a with P(X > a) = .01. Note that we need 0 < a < 1. We have

    .01 = P(X > a) = ∫_a^∞ f(x) dx = ∫_a^1 5(1 − x)^4 dx + ∫_1^∞ 0 dx = −(1 − x)^5 |_{x=a}^{1} = (1 − a)^5

So .01 = (1 − a)^5, and thus (.01)^{1/5} = 1 − a, so a = 1 − (.01)^{1/5} ≈ .6019.
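As a quick numerical sanity check (not part of the original solution), the value of a can be verified in Python by recomputing the tail probability with a midpoint Riemann sum:

```python
# Check problem 5: a = 1 - (.01)^{1/5} should satisfy P(X > a) = .01,
# where the density is f(x) = 5(1 - x)^4 on (0, 1).
a = 1 - 0.01 ** (1 / 5)

# Midpoint Riemann sum of 5(1 - x)^4 over [a, 1].
n = 100_000
h = (1 - a) / n
p = sum(5 * (1 - (a + (i + 0.5) * h)) ** 4 * h for i in range(n))

print(round(a, 4))  # ≈ .6019
```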
6a. We compute

    E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_{−∞}^{0} (x)(0) dx + ∫_{0}^{∞} (x)(1/4)x e^{−x/2} dx = (1/4) ∫_{0}^{∞} x^2 e^{−x/2} dx

Integrating by parts twice,

    (1/4) ∫_{0}^{∞} x^2 e^{−x/2} dx = (1/4) [ x^2 (e^{−x/2}/(−1/2)) |_{x=0}^{∞} − ∫_{0}^{∞} 2x (e^{−x/2}/(−1/2)) dx ]
        = (1/4) [ x^2 (e^{−x/2}/(−1/2)) |_{x=0}^{∞} + 4 ∫_{0}^{∞} x e^{−x/2} dx ]
        = (1/4) [ x^2 (e^{−x/2}/(−1/2)) + 4x (e^{−x/2}/(−1/2)) + 8 (e^{−x/2}/(−1/2)) ] |_{x=0}^{∞}
        = (1/4)(16) = 4
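A numerical cross-check of E[X] = 4 (a sketch, assuming the density f(x) = (1/4) x e^{−x/2} for x > 0 used above):

```python
import math

# Midpoint Riemann sum of E[X] = ∫ x * (1/4) x e^{-x/2} dx, truncated at x = 80
# (the tail beyond 80 is negligible).
n = 200_000
h = 80 / n
mean = 0.0
for i in range(n):
    x = (i + 0.5) * h
    mean += x * (x * math.exp(-x / 2) / 4) * h
```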
6b. We compute

    1 = ∫_{−∞}^{∞} f(x) dx = ∫_{−∞}^{−1} 0 dx + ∫_{−1}^{1} c(1 − x^2) dx + ∫_{1}^{∞} 0 dx = c ( x − x^3/3 ) |_{x=−1}^{1} = c(4/3)

and thus c = 3/4. Now we compute

    E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_{−∞}^{−1} (x)(0) dx + ∫_{−1}^{1} x(3/4)(1 − x^2) dx + ∫_{1}^{∞} (x)(0) dx
         = (3/4) ( x^2/2 − x^4/4 ) |_{x=−1}^{1} = (3/4) ( 1/4 − 1/4 ) = 0
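A quick numerical check of both computations above, again by midpoint Riemann sums:

```python
# Check 6b: ∫_{-1}^{1} (1 - x^2) dx = 4/3 forces c = 3/4, and the mean of the
# density (3/4)(1 - x^2) on (-1, 1) is 0 by symmetry.
n = 100_000
h = 2 / n
total = 0.0
mean = 0.0
for i in range(n):
    x = -1 + (i + 0.5) * h
    total += (1 - x * x) * h
    mean += x * 0.75 * (1 - x * x) * h
c = 1 / total
```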
6c. We compute

    E[X] = ∫_{−∞}^{∞} x f(x) dx = ∫_{−∞}^{5} (x)(0) dx + ∫_{5}^{∞} (x)(5/x^2) dx = ∫_{5}^{∞} (5/x) dx = 5 ln x |_{x=5}^{∞} = +∞
10a. The times in which the passenger would go to destination A are 7:05–7:15, 7:20–7:30,
7:35–7:45, 7:50–8:00, i.e., a total of 40 out of 60 minutes. So 2/3 of the time, the passenger
goes to destination A.
10b. The times in which the passenger would go to destination A are 7:10–7:15, 7:20–7:30,
7:35–7:45, 7:50–8:00, 8:05–8:10, i.e., a total of 40 out of 60 minutes. So 2/3 of the time, the
passenger goes to destination A.
11. To interpret this statement, we say that the location of the point is X inches from the
left-hand edge of the line, with 0 ≤ X ≤ L. Since the point is chosen at random on the line,
with no further clarification, it is fair to assume that the distribution is uniform, i.e., X has
density function f(x) = 1/L for 0 ≤ x ≤ L, and f(x) = 0 otherwise.
The ratio of the shorter to the longer segment is less than 1/4 if X ≤ (1/5)L or X ≥ (4/5)L. So
the ratio of the shorter to the longer segment is less than 1/4 with probability

    ∫_{0}^{L/5} (1/L) dx + ∫_{4L/5}^{L} (1/L) dx = 1/5 + 1/5 = 2/5
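A small Monte Carlo check of this answer (not part of the original solution):

```python
import random

# Check problem 11: drop a point uniformly on (0, L) and estimate
# P(shorter/longer < 1/4); the answer should be near 2/5.
random.seed(0)
L = 1.0
trials = 200_000
hits = 0
for _ in range(trials):
    x = random.uniform(0, L)
    shorter, longer = min(x, L - x), max(x, L - x)
    if shorter / longer < 1 / 4:
        hits += 1
est = hits / trials
```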
1
13a. Let X denote the time (in minutes) until the bus arrives. Then X has density f(x) = 1/30
for 0 ≤ x ≤ 30, and f(x) = 0 otherwise.
The probability that we wait longer than 10 minutes is ∫_{10}^{∞} f(x) dx = ∫_{10}^{30} (1/30) dx = 2/3.
13b. Let E denote the event that the bus arrives after 10:25; let F denote the event that
the bus arrives after 10:15. Then the desired probability is

    P(E | F) = P(E ∩ F)/P(F) = (∫_{25}^{30} (1/30) dx) / (∫_{15}^{30} (1/30) dx) = (1/6)/(1/2) = 1/3
14. Using Proposition 2.1, we compute

    E[X^n] = ∫_{−∞}^{∞} x^n f_X(x) dx = ∫_{−∞}^{0} (x^n)(0) dx + ∫_{0}^{1} (x^n)(1) dx + ∫_{1}^{∞} (x^n)(0) dx
           = ∫_{0}^{1} x^n dx = x^{n+1}/(n+1) |_{x=0}^{1} = 1/(n+1)

To use the definition of expectation, we need to find the density of the random variable Y =
X^n. We first find the cumulative distribution function of Y. We know that P(Y ≤ a) = 0
for a ≤ 0, and P(Y ≤ a) = 1 for a ≥ 1. For 0 < a < 1, we have P(Y ≤ a) = P(X^n ≤ a) =
P(X ≤ a^{1/n}) = ∫_{0}^{a^{1/n}} 1 dx = a^{1/n}. Therefore, the cumulative distribution function of Y is

    F_Y(a) = 0 for a ≤ 0;  a^{1/n} for 0 < a < 1;  1 for a ≥ 1

So the density of Y is

    f_Y(x) = (1/n) x^{1/n − 1} if 0 < x < 1;  0 otherwise

So the expected value of Y is

    E[Y] = ∫_{−∞}^{∞} x f_Y(x) dx = ∫_{−∞}^{0} (x)(0) dx + ∫_{0}^{1} (x)(1/n) x^{1/n − 1} dx + ∫_{1}^{∞} (x)(0) dx
         = (1/n) ∫_{0}^{1} x^{1/n} dx = (1/n) · x^{1/n + 1}/(1/n + 1) |_{x=0}^{1} = (1/n)/(1/n + 1) = 1/(n + 1)
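The moment formula can be checked numerically (a sketch, not part of the original solution):

```python
# Check E[X^n] = 1/(n + 1) for X uniform on (0, 1), via midpoint Riemann sums
# of ∫_0^1 x^n dx for several values of n.
n_grid = 100_000
h = 1 / n_grid
moments = {}
for n in (1, 2, 3, 5):
    moments[n] = sum(((i + 0.5) * h) ** n * h for i in range(n_grid))
```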
16. Let X denote the rainfall in a given year. Then X has a normal (µ = 40, σ = 4)
distribution, so P(X ≤ 50) = P((X − 40)/4 ≤ (50 − 40)/4) = P(Z ≤ 2.5) = Φ(2.5) ≈ .9938, where Z
has a standard normal distribution. If the rainfall in each year is independent of all other
years, then it follows that the desired probability is (P(X ≤ 50))^{10} ≈ (.9938)^{10} ≈ .9397.
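The same computation can be done with the Python standard library instead of the normal table:

```python
from statistics import NormalDist

# Check problem 16: Φ(2.5), then the probability that all 10 independent
# years have at most 50 inches of rain.
phi = NormalDist().cdf((50 - 40) / 4)
ten_years = phi ** 10
```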
18. We can write X = σZ + 5 where Z is standard normal. Thus .2 = P(X > 9) =
P(σZ + 5 > 9) = P(σZ > 4) = P(Z > 4/σ) = 1 − Φ(4/σ), so Φ(4/σ) = .8, and thus
4/σ ≈ .8416, so σ ≈ 4.75, so Var(X) = σ^2 ≈ 22.59.
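A quick check of the table lookup and the resulting variance, using the inverse normal CDF:

```python
from statistics import NormalDist

# Check problem 18: solve Φ(4/σ) = .8 with the inverse normal CDF,
# then recover Var(X) = σ².
z = NormalDist().inv_cdf(0.8)   # the .8 quantile, ≈ .8416
sigma = 4 / z
var = sigma ** 2
```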
20a. We have n = 100 people, each of whom is in favor of a proposed rise in school taxes
with probability p = .65. The number of the 100 people who are in favor of the rise in taxes
is a Binomial (n = 100, p = .65) random variable, which is well-approximated by a normal
random variable X with mean np = 65 and variance np(1 − p) = 22.75. So the probability
that at least 50 are in favor of the proposition is approximately

    P(X > 49.5) = P( (X − 65)/√22.75 > (49.5 − 65)/√22.75 ) ≈ P(Z > −3.25) = Φ(3.25) ≈ .9994
20b. The desired probability is approximately

    P(59.5 < X < 70.5) = P( (59.5 − 65)/√22.75 < (X − 65)/√22.75 < (70.5 − 65)/√22.75 ) ≈ P(−1.15 < Z < 1.15)
        = P(Z < 1.15) − P(Z < −1.15) = Φ(1.15) − (1 − Φ(1.15)) ≈ .7498
20c. The desired probability is approximately

    P(X < 74.5) = P( (X − 65)/√22.75 < (74.5 − 65)/√22.75 ) ≈ P(Z < 1.99) = Φ(1.99) ≈ .9767
23. Let X denote the number of times “6” shows during 1000 independent rolls of a fair die.
Then X is a Binomial random variable with n = 1000 and p = 1/6. So X is approximately
normal with mean np = 1000/6 and variance np(1 − p) = 1000(1/6)(5/6). Thus

    P(150 ≤ X ≤ 200) = P(149.5 ≤ X ≤ 200.5)
        = P( (149.5 − 1000/6)/√(1000(1/6)(5/6)) ≤ (X − 1000/6)/√(1000(1/6)(5/6)) ≤ (200.5 − 1000/6)/√(1000(1/6)(5/6)) )
        ≈ P(−1.46 ≤ Z ≤ 2.87) = P(Z ≤ 2.87) − P(Z ≤ −1.46)
        = P(Z ≤ 2.87) − P(Z ≥ 1.46) = P(Z ≤ 2.87) − (1 − P(Z ≤ 1.46))
        = Φ(2.87) − (1 − Φ(1.46)) ≈ .9979 − (1 − .9279) = .9258

Given that “6” shows exactly 200 times, the remaining 800 rolls are all independent,
with possible outcomes 1, 2, 3, 4, 5, each appearing with probability 1/5 on each roll. Let
Y denote the number of times “5” shows during the 800 rolls. Then Y is a Binomial
random variable with n = 800 and p = 1/5. So Y is approximately normal with mean
np = 800/5 = 160 and variance np(1 − p) = 800(1/5)(4/5) = 128. Thus

    P(Y < 150) = P(Y ≤ 149.5) = P( (Y − 160)/√128 ≤ (149.5 − 160)/√128 )
        ≈ P(Z ≤ −.93) = P(Z ≥ .93) = 1 − P(Z ≤ .93) = 1 − Φ(.93) ≈ 1 − .8238 = .1762
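Both approximations can be checked numerically:

```python
from statistics import NormalDist

# Check problem 23: both normal approximations, with continuity correction.
Z = NormalDist()

# P(150 <= X <= 200), X ~ Binomial(1000, 1/6) approximated by a normal.
mu, sd = 1000 / 6, (1000 * (1 / 6) * (5 / 6)) ** 0.5
p1 = Z.cdf((200.5 - mu) / sd) - Z.cdf((149.5 - mu) / sd)

# P(Y < 150), Y ~ Binomial(800, 1/5) approximated by Normal(160, sqrt(128)).
p2 = Z.cdf((149.5 - 160) / 128 ** 0.5)
```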
25. Let X denote the number of acceptable items. Then X is a Binomial random variable
with n = 150 and p = .95. So X is approximately normal with mean np = (150)(.95) = 142.5
and variance np(1 − p) = (150)(.95)(.05) = 7.125. Thus

    P(150 − X ≤ 10) = P(140 ≤ X) = P(139.5 ≤ X) = P( (139.5 − 142.5)/√7.125 ≤ (X − 142.5)/√7.125 )
        ≈ P(−1.12 ≤ Z) = P(Z ≤ 1.12) = Φ(1.12) ≈ .8686
29. The number of times X that the stock price increases is Binomial with parameters
n = 1000 and p = .52. If X = i then the final price of the stock is u^i d^{1000−i} s. The condition
on i such that the final price is at least 30% above the original price is u^i d^{1000−i} s ≥ 1.30s,
or equivalently, (u/d)^i ≥ 1.30 d^{−1000}, i.e., i ln(u/d) ≥ ln(1.30 d^{−1000}), i.e., i ≥ 469.21. So the
final price of the stock is at least 30% above the original price if X ≥ 469.21. So the desired
probability is

    P(X ≥ 469.21) = P( (X − (1000)(.52))/√((1000)(.52)(.48)) ≥ (469.21 − (1000)(.52))/√((1000)(.52)(.48)) )
                  = P(Z ≥ −3.21) ≈ .9993
31a. We note that, for X uniform on (0, A),

    E[|X − a|] = ∫_{0}^{a} (a − x)(1/A) dx + ∫_{a}^{A} (x − a)(1/A) dx = (A^2 − 2aA + 2a^2)/(2A)

Three possibilities exist for the minimum of E[|X − a|], namely, the left endpoint a = 0, or
the right endpoint a = A, or the value of a such that (d/da) E[|X − a|] = 0.
Using a = 0 yields E[|X − a|] = A/2.
Using a = A yields E[|X − a|] = A/2.
Also, (d/da) E[|X − a|] = (2a − A)/A, so (d/da) E[|X − a|] = 0 when a = A/2, and this value of a yields
E[|X − a|] = A/4.
So the optimal location for the fire station is a = A/2, which yields E[|X − a|] = A/4.
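A grid-search check of this minimization (not part of the original solution; A = 1 is an arbitrary choice):

```python
# Check 31a: minimize E|X - a| = (A^2 - 2aA + 2a^2)/(2A) over a grid of
# a-values in [0, A]; the minimizer should be A/2 and the minimum A/4.
A = 1.0

def cost(a):
    return (A * A - 2 * a * A + 2 * a * a) / (2 * A)

grid = [k / 1000 * A for k in range(1001)]
best_a = min(grid, key=cost)
best_val = cost(best_a)
```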
31b. We note that

    E[|X − a|] = ∫_{0}^{a} (a − x)λe^{−λx} dx + ∫_{a}^{∞} (x − a)λe^{−λx} dx = (λa − 1 + 2e^{−λa})/λ

Two possibilities exist for the minimum of E[|X − a|], namely, the left endpoint a = 0, or
the value of a such that (d/da) E[|X − a|] = 0.
Using a = 0 yields E[|X − a|] = 1/λ.
Also, (d/da) E[|X − a|] = (λ − 2λe^{−λa})/λ, so (d/da) E[|X − a|] = 0 when a = ln(2)/λ, and this value of a
yields E[|X − a|] = ln(2)/λ, which is smaller than 1/λ.
So the optimal location for the fire station is a = ln(2)/λ, which yields E[|X − a|] = ln(2)/λ.
34a. We write X for the total mileage of the car, given in thousands of miles. If X is
exponentially distributed with λ = 1/20, then the desired conditional probability is

    P(X > 30 | X > 10) = P(X > 30 & X > 10)/P(X > 10) = P(X > 30)/P(X > 10)
        = (∫_{30}^{∞} (1/20) e^{−(1/20)x} dx) / (∫_{10}^{∞} (1/20) e^{−(1/20)x} dx) = e^{−3/2}/e^{−1/2} = e^{−1}

We could also have simply computed the line above by writing

    P(X > 30)/P(X > 10) = (1 − P(X ≤ 30))/(1 − P(X ≤ 10)) = (1 − F(30))/(1 − F(10))
        = (1 − (1 − e^{−(1/20)30}))/(1 − (1 − e^{−(1/20)10})) = e^{−3/2}/e^{−1/2} = e^{−1}

An alternative method is to simply compute the probability that the lifetime has at
least 20,000 more miles, which is P(X > 20) = ∫_{20}^{∞} (1/20) e^{−(1/20)x} dx = e^{−1} (or equivalently,
P(X > 20) = 1 − P(X ≤ 20) = 1 − F(20) = 1 − (1 − e^{−(1/20)20}) = e^{−1}). Either way, the
answer is e^{−1} ≈ .368.
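A one-line check of the equality of the two computations (this is the memoryless property of the exponential):

```python
import math

# Check 34a: P(X > 30 | X > 10) equals the unconditional P(X > 20)
# for an exponential variable with λ = 1/20.
lam = 1 / 20
cond = math.exp(-lam * 30) / math.exp(-lam * 10)
uncond = math.exp(-lam * 20)
```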
34b. If X is uniformly distributed on the interval (0, 40), then the desired probability is

    P(X > 30 | X > 10) = P(X > 30 & X > 10)/P(X > 10) = (∫_{30}^{40} (1/40) dx) / (∫_{10}^{40} (1/40) dx) = (1/4)/(3/4) = 1/3

As an alternate method, we could think of the reduced lifetime (given that the lifetime is over
10,000 miles) as being uniform between 10,000 and 40,000 miles, so the probability of the lifetime
being greater than 30,000 miles is 10/(40 − 10) = 10/30 = 1/3.
37a. We compute P(|X| > 1/2) = P(X > 1/2 or X < −1/2) = P(X > 1/2) + P(X < −1/2) = (1/2)/2 + (1/2)/2 = 1/4 + 1/4 = 1/2.
37b. We first compute the cumulative distribution function of Y = |X|. For a ≤ 0, we have
P(Y ≤ a) = 0, since |X| is never less than a in this case. For a ≥ 1, we have P(Y ≤ a) = 1,
since |X| is always less than a in this case. For 0 < a < 1, we compute

    P(Y ≤ a) = P(|X| ≤ a) = P(−a ≤ X ≤ a) = 2a/2 = a

or equivalently, P(Y ≤ x) = x for 0 < x < 1. In summary, the cumulative distribution
function of Y = |X| is

    F_Y(x) = 0 for x ≤ 0;  x for 0 < x < 1;  1 for x ≥ 1

Differentiating throughout with respect to x yields that the density of Y = |X| is

    f_Y(x) = 1 if 0 < x < 1;  0 otherwise

Intuitively, if X is uniformly distributed on (−1, 1), then Y = |X| is uniformly distributed
on (0, 1).
39. Since X is exponentially distributed with λ = 1, X has density

    f_X(x) = e^{−x} if x ≥ 0;  0 otherwise

and cumulative distribution function

    F_X(x) = 1 − e^{−x} if x ≥ 0;  0 otherwise

Next, we compute the cumulative distribution function of Y = ln X. We have

    P(Y ≤ a) = P(ln X ≤ a) = P(X ≤ e^a) = F_X(e^a) = 1 − e^{−e^a}

or equivalently, P(Y ≤ x) = 1 − e^{−e^x}. Differentiating throughout with respect to x yields
that the density of Y = ln X is

    f_Y(x) = e^x e^{−e^x}
40. Since X is uniformly distributed on (0, 1), X has density

    f_X(x) = 1 if 0 < x < 1;  0 otherwise

and cumulative distribution function

    F_X(x) = 0 for x ≤ 0;  x for 0 < x < 1;  1 for x ≥ 1

Next, we compute the cumulative distribution function of Y = e^X. For a ≤ 1, we have
P(Y ≤ a) = 0, since Y = e^X is never less than a in this case (i.e., X is never less than 0).
For a ≥ e, we have P(Y ≤ a) = 1, since Y = e^X is always less than a in this
case (i.e., X is always less than 1). For 1 < a < e (and thus 0 < ln a < 1), we
compute

    P(Y ≤ a) = P(e^X ≤ a) = P(X ≤ ln a) = F_X(ln a) = ln a

or equivalently, P(Y ≤ x) = ln x for 1 < x < e. In summary, the cumulative distribution
function of Y = e^X is

    F_Y(x) = 0 for x ≤ 1;  ln x for 1 < x < e;  1 for x ≥ e

Differentiating throughout with respect to x yields that the density of Y = e^X is

    f_Y(x) = 1/x if 1 < x < e;  0 otherwise
THEORETICAL EXERCISES
3. By theoretical exercise 2, we have

    E[g(X)] = ∫_{0}^{∞} P(g(X) > y) dy − ∫_{0}^{∞} P(g(X) < −y) dy

or equivalently

    E[g(X)] = ∫_{0}^{∞} ∫_{x : g(x)>y} f(x) dx dy − ∫_{0}^{∞} ∫_{x : g(x)<−y} f(x) dx dy

Switching the order of integration, we have

    E[g(X)] = ∫_{x : g(x)>0} ∫_{0}^{g(x)} f(x) dy dx − ∫_{x : g(x)<0} ∫_{0}^{−g(x)} f(x) dy dx

In the first integral, replace y by t and dy by dt (i.e., just change variables). In the second
integral, replace −y by t and −dy by dt and replace the limits of integration y = 0 by t = 0
and y = −g(x) by t = g(x), so we get

    E[g(X)] = ∫_{x : g(x)>0} ∫_{0}^{g(x)} f(x) dt dx + ∫_{x : g(x)<0} ∫_{0}^{g(x)} f(x) dt dx

Combining integrals, we get

    E[g(X)] = ∫_{x : g(x)≠0} ∫_{0}^{g(x)} f(x) dt dx = ∫_{x : g(x)≠0} ( ∫_{0}^{g(x)} dt ) f(x) dx = ∫_{x : g(x)≠0} g(x) f(x) dx

Without loss of generality, we can add on the integral ∫_{x : g(x)=0} g(x) f(x) dx (which is just
equal to 0), so we conclude that

    E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx

as desired.
6. Let X be uniform on the interval (0, 1). For each a in the range 0 < a < 1, define
E_a to be the event that X ≠ a. Then P(E_a) = 1 for each a. Also, ∩_{0<a<1} E_a = ∅, so
P(∩_{0<a<1} E_a) = P(∅) = 0.
8. We see that

    E[X^2] = ∫_{0}^{c} x^2 f(x) dx ≤ ∫_{0}^{c} cx f(x) dx = cE[X]

Thus

    Var(X) = E[X^2] − (E[X])^2 ≤ cE[X] − (E[X])^2 = E[X](c − E[X]) = c^2 (E[X]/c)(1 − E[X]/c)

or equivalently, writing α = E[X]/c, we have

    Var(X) ≤ c^2 α(1 − α)

We notice that 0 ≤ E[X] = ∫_{0}^{c} x f(x) dx ≤ ∫_{0}^{c} c f(x) dx = c, so 0 ≤ E[X] ≤ c, so 0 ≤ E[X]/c ≤ 1,
or equivalently, 0 ≤ α ≤ 1.
The largest value that α(1 − α) can achieve for 0 ≤ α ≤ 1 is 1/4, which happens when
α = 1/2 (to see this, just differentiate α(1 − α) with respect to α, set the result equal to
0, and solve for α, which yields α = 1/2; don’t forget to also check the endpoints, namely
α = 0 and α = 1).
Since Var(X) ≤ c^2 α(1 − α) and α(1 − α) ≤ 1/4, it follows that Var(X) ≤ c^2/4, as desired.
12a. If X is uniform on the interval (a, b), then the median is (a + b)/2. To see this, write
m for the median, and solve 1/2 = F(m) = ∫_{a}^{m} 1/(b − a) dx = (m − a)/(b − a), which immediately yields
m = (a + b)/2.
12b. If X is normal with mean µ and variance σ^2, then the median is µ. To see this, write m
for the median, and solve 1/2 = F(m) = P(X ≤ m) = P((X − µ)/σ ≤ (m − µ)/σ) = P(Z ≤ (m − µ)/σ) =
Φ((m − µ)/σ), where Z is standard normal, and thus (m − µ)/σ = 0, so m = µ.
12c. If X is exponential with parameter λ, then the median is ln(2)/λ. To see this, write m
for the median, and solve 1/2 = F(m) = 1 − e^{−λm}, so e^{−λm} = 1/2, so −λm = ln(1/2), so
λm = ln 2, and thus m = ln(2)/λ.
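All three medians can be checked by evaluating each CDF at the claimed median, which should give exactly 1/2. The particular parameter values (a, b, µ, σ, λ) below are arbitrary choices:

```python
import math
from statistics import NormalDist

# Check 12a-12c: each CDF evaluated at the claimed median should equal 1/2.
a, b = 2.0, 10.0
uniform_half = ((a + b) / 2 - a) / (b - a)          # uniform CDF at (a+b)/2

mu, sigma = 40.0, 4.0
normal_half = NormalDist(mu, sigma).cdf(mu)         # normal CDF at µ

lam = 3.0
exp_half = 1 - math.exp(-lam * math.log(2) / lam)   # exponential CDF at ln(2)/λ
```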
13a. If X is uniform on the interval (a, b), then any value of m between a and b is equally
valid to be used as the mode, since the density is constant on the interval (a, b).
13b. If X is normal with mean µ and variance σ^2, then the mode is µ. To see this, write
m for the mode, and solve

    0 = (d/dx) (1/(√(2π)σ)) e^{−(x−µ)^2/(2σ^2)},  i.e.,  0 = −(1/(√(2π)σ)) e^{−(x−µ)^2/(2σ^2)} (1/(2σ^2))(2)(x − µ),

so x = µ is the location of the mode.
13c. If X is exponential with parameter λ, then the mode is 0. To see this, note that
(d/dx) λe^{−λx} = −λ^2 e^{−λx} < 0 is the slope of the density for all x > 0, so the density is always
decreasing for x > 0. So the mode must occur on the boundary of the positive portion of
the density, i.e., at x = 0.
14. Consider an exponential random variable X with parameter λ. To show that Y = cX is
exponential with parameter λ/c, it suffices to show that Y has cumulative distribution function

    F_Y(a) = 1 − e^{−(λ/c)a} if a > 0;  0 otherwise

To see this, first we note that X is always nonnegative, so Y = cX is always nonnegative,
so F_Y(a) = 0 for a ≤ 0.
For a > 0, we check

    F_Y(a) = P(Y ≤ a) = P(cX ≤ a) = P(X ≤ a/c) = 1 − e^{−λ(a/c)} = 1 − e^{−(λ/c)a}

as desired.
Thus, Y = cX has the cumulative distribution function of an exponential random variable
with parameter λ/c, so Y = cX must indeed be exponential with parameter λ/c.
18. We prove that, for an exponential random variable X with parameter λ,

    E[X^k] = k!/λ^k  for k ≥ 1

To see this, we use proof by induction on k.
For k = 1, we use integration by parts with u = x and dv = λe^{−λx} dx, and thus du = dx
and v = e^{−λx}/(−λ) = −e^{−λx}, to see that

    E[X] = ∫_{0}^{∞} x λe^{−λx} dx = (x)(−e^{−λx}) |_{x=0}^{∞} + ∫_{0}^{∞} e^{−λx} dx

or, more simply,

    E[X] = −x/e^{λx} |_{x=0}^{∞} + ∫_{0}^{∞} e^{−λx} dx

We see that the integral evaluates to e^{−λx}/(−λ) |_{x=0}^{∞} = 1/λ. We also see that, in the first part of the
expression, plugging in x = 0 yields 0; using L’Hospital’s rule as x → ∞ yields

    lim_{x→∞} x/e^{λx} = lim_{x→∞} 1/(λe^{λx}) = 0

So we conclude that

    E[X] = 1/λ

This completes the base case, i.e., the case k = 1.
Now we do the inductive step of the proof. For k ≥ 2, we assume that E[X^{k−1}] = (k−1)!/λ^{k−1}
has already been proved, and we prove that E[X^k] = k!/λ^k.
To do this, we use integration by parts with u = x^k and dv = λe^{−λx} dx, and thus du =
kx^{k−1} dx and v = e^{−λx}/(−λ) = −e^{−λx}, to see that

    E[X^k] = ∫_{0}^{∞} x^k λe^{−λx} dx = (x^k)(−e^{−λx}) |_{x=0}^{∞} + ∫_{0}^{∞} kx^{k−1} e^{−λx} dx

or, more simply,

    E[X^k] = −x^k/e^{λx} |_{x=0}^{∞} + (k/λ) ∫_{0}^{∞} x^{k−1} λe^{−λx} dx

We see that the remaining term is (k/λ) E[X^{k−1}], which is (by the inductive assumption) equal to
(k/λ) · (k−1)!/λ^{k−1} = k!/λ^k. We also see that, in the first part of the expression, plugging in x = 0 yields
0; using L’Hospital’s rule k times as x → ∞ yields

    lim_{x→∞} x^k/e^{λx} = lim_{x→∞} kx^{k−1}/(λe^{λx}) = lim_{x→∞} k(k − 1)x^{k−2}/(λ^2 e^{λx}) = · · · = lim_{x→∞} k!/(λ^k e^{λx}) = 0

So we conclude that

    E[X^k] = k!/λ^k

This completes the inductive case, which completes the proof by induction.
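The moment formula can also be checked numerically (a sketch with λ = 2, an arbitrary choice):

```python
import math

# Check E[X^k] = k!/λ^k for an exponential(λ) variable, by a midpoint
# Riemann sum of ∫_0^∞ x^k λ e^{-λx} dx (λ = 2, truncated at x = 40,
# where the tail is negligible).
lam = 2.0
n = 200_000
h = 40 / n
moments = {}
for k in (1, 2, 3, 4):
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += x ** k * lam * math.exp(-lam * x) * h
    moments[k] = s
```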
24. Define Y = ((X − ν)/α)^β. We note that X is Weibull with parameters ν, α, β if and only if

    P(X ≤ x) = 1 − exp( −((x − ν)/α)^β ) if x > ν;  0 if x ≤ ν

Equivalently,

    P( ((X − ν)/α)^β ≤ ((x − ν)/α)^β ) = 1 − exp( −((x − ν)/α)^β ) if ((x − ν)/α)^β > 0;  0 if ((x − ν)/α)^β ≤ 0

Equivalently, replacing ((X − ν)/α)^β by Y and ((x − ν)/α)^β by t, we get

    P(Y ≤ t) = 1 − exp(−t) if t > 0;  0 if t ≤ 0

Equivalently, Y is exponential with parameter λ = 1.
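A simulation sketch of this transformation (the parameter values ν, α, β below are arbitrary choices):

```python
import math
import random

# Check exercise 24: sample X from a Weibull(ν, α, β) by CDF inversion,
# set Y = ((X - ν)/α)^β, and compare P(Y ≤ 1) with the exponential(1)
# value 1 - e^{-1}.
random.seed(1)
nu, alpha, beta = 2.0, 3.0, 1.5
trials = 200_000
hits = 0
for _ in range(trials):
    u = random.random()
    x = nu + alpha * (-math.log(1 - u)) ** (1 / beta)  # inverse-CDF sample of X
    y = ((x - nu) / alpha) ** beta
    if y <= 1:
        hits += 1
est = hits / trials
```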
28. We note that F is always between 0 and 1, so Y is always between 0 and 1. Thus, to
show that Y is uniformly distributed on the interval (0, 1), it suffices to show that, for each
value of a with 0 < a < 1, we have P(Y ≤ a) = a. Consider the set {x | F(x) = a}. Since
X is continuous, the set {x | F(x) = a} is either a single point or a closed interval. Either
way, let x0 = max{x | F(x) = a}; in particular, F(x0) = a too. Finally, we compute

    P(Y ≤ a) = P(F(X) ≤ a) = P(X ≤ x0) = F(x0) = a.
Thus, Y is uniformly distributed on the interval (0, 1).
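A simulation check of this probability integral transform, using an exponential(1) variable as the continuous X:

```python
import math
import random

# Check exercise 28: with X exponential(1), Y = F(X) = 1 - e^{-X} should be
# uniform on (0, 1); here we estimate P(Y ≤ .3), which should be near .3.
random.seed(2)
trials = 200_000
below = sum(1 - math.exp(-random.expovariate(1.0)) <= 0.3 for _ in range(trials))
est = below / trials
```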