ST521 – Review for Exam 02 Solutions
Fall 2016
1. (a) First, the marginal PDF for X is ∫_0^1 (x + y) dy = 0.5 + x. Therefore,
   P(X ≤ 0.5) = ∫_0^0.5 (0.5 + x) dx = 1/4 + 1/8 = 3/8.
   (b) Integrate the joint PDF over the triangle {(x, y) : x ≥ 0, y ≥ 0, x + y ≤ 1}.
   That is, P(X + Y ≤ 1) = ∫_0^1 ∫_0^{1−x} (x + y) dy dx = 1/3.
   (c) E(XY²) = ∫_0^1 ∫_0^1 xy²(x + y) dx dy = 1/9 + 1/8 = 17/72.
   (d) By linearity, E(7XY² + 5Y) = 7E(XY²) + 5E(Y). The first expectation is just
   17/72 from part (c). Since the joint PDF is symmetric in x and y, the marginal
   PDF of Y is easily seen to be fY(y) = 0.5 + y; hence, E(Y) = ∫_0^1 y(0.5 + y) dy =
   7/12. Finally, E(7XY² + 5Y) = 119/72 + 35/12 = 329/72.
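The probabilities in parts (a) and (b) can be verified numerically; below is a minimal sketch (in Python rather than the R used later in this document), assuming the joint PDF f(x, y) = x + y on the unit square, as used throughout this solution.

```python
# Midpoint-rule check of Problem 1(a)-(b), assuming the joint PDF
# f(x, y) = x + y on the unit square.
n = 400                      # grid resolution
h = 1.0 / n
p_a = 0.0                    # accumulates P(X <= 0.5)
p_b = 0.0                    # accumulates P(X + Y <= 1)
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        w = (x + y) * h * h  # probability mass of this grid cell
        if x <= 0.5:
            p_a += w
        if x + y <= 1.0:
            p_b += w

# p_a ≈ 3/8 = 0.375 and p_b ≈ 1/3
```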
2. By writing out the PMF table, say, it’s easy to check that
   E(XY) = Σ_{x=1,2} Σ_{y=1,2} xy · (x + y)/12 = 15/6.
   Similarly, from the table, one can find the marginal PMFs for X and Y—they’re
   both the same. Therefore, E(X) = E(Y) = 19/12. Clearly E(XY) ≠ E(X)E(Y), so
   X and Y are not independent; this can also be seen by the fact that the joint PMF
   does not factor into a function of x times a function of y.
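A direct enumeration confirms these moments; the sketch below assumes the joint PMF p(x, y) = (x + y)/12 for x, y ∈ {1, 2}, which is the PMF the sum above uses.

```python
# Enumeration check for Problem 2, assuming the joint PMF
# p(x, y) = (x + y)/12 for x, y in {1, 2}.
p = {(x, y): (x + y) / 12 for x in (1, 2) for y in (1, 2)}

total = sum(p.values())                              # should be 1
E_XY = sum(x * y * pr for (x, y), pr in p.items())   # 15/6 = 5/2
E_X = sum(x * pr for (x, y), pr in p.items())        # 19/12
E_Y = sum(y * pr for (x, y), pr in p.items())        # 19/12
# E_XY differs from E_X * E_Y, so X and Y are dependent
```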
3. (a) To get the CDF FZ(z), find P(X/Y ≤ z) by integrating the joint PDF for
   (X, Y) over the triangle with vertices (0, 0), (0, 1), and (z, 1). Doing so gives
   FZ(z) = z², 0 ≤ z ≤ 1. Then the PDF of Z is the derivative fZ(z) = F′Z(z) = 2z.
   Finally, E(Z) = ∫_0^1 z · 2z dz = 2/3.
   (b) E(X/Y) = ∫_0^1 ∫_x^1 (x/y) · 8xy dy dx = 2/3; this solution is clearly easier.
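Both numbers can be checked numerically; the sketch below assumes the joint PDF f(x, y) = 8xy on 0 < x < y < 1, the support consistent with the triangle used in part (a).

```python
# Midpoint-rule check for Problem 3, assuming the joint PDF
# f(x, y) = 8xy on 0 < x < y < 1.
n = 800
h = 1.0 / n
mass = 0.0     # total probability; should come out near 1
E_ratio = 0.0  # accumulates E(X/Y)
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x < y:
            w = 8.0 * x * y * h * h
            mass += w
            E_ratio += (x / y) * w

# mass ≈ 1 and E_ratio ≈ 2/3
```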
4. (a) If we let Y1 = X1/X2 and Y2 = X2, we get the joint PDF of (Y1, Y2) as
   follows. The Jacobian of the transformation is J(y1, y2) = y2, and applying
   the transformation formula we get
   fY1,Y2(y1, y2) = 10 y1 y2⁴ if 0 < y1 < 1 and 0 < y2 < 1, and 0 otherwise.
   It is clear that Y1 and Y2 are independent, so the marginal PDF of Y1 must be
   fY1(y1) = 2y1, 0 < y1 < 1.
   (b) Using integration-by-parts, MY1(t) = ∫_0^1 e^{ty1} · 2y1 dy1 = (2/t)e^t − (2/t²)(e^t − 1).
   (c) It is easy to check that E(Y1) = ∫_0^1 y1 · 2y1 dy1 = 2/3. Unfortunately, evaluating
   the derivative of MY1(t) at t = 0 is really messy. But it’s fairly easy to check
   numerically that M′Y1(0) = 2/3. Here’s some R code:
M <- function(t) 2 * exp(t) / t - 2 * (exp(t) - 1) / t^2
(M(0.0001) - 1) / 0.0001
# returns 0.6666917
5. It’s easiest to write out the table entry-by-entry. First note that Y2 can take values
   1, 2, 3, while Y1 can take values 1, 2, 3, 4, 6, 9.

   Y2 ↓, Y1 →      1      2      3      4      6      9
   1            1/36   2/36   3/36      0      0      0
   2               0   2/36      0   4/36   6/36      0
   3               0      0   3/36      0   6/36   9/36

   The marginal PMFs are found by summing over the rows (for Y2) or the columns (for Y1).
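The table can also be rebuilt by brute force. The entries above are consistent with X1, X2 independent with P(Xi = k) = k/6 on {1, 2, 3}, Y1 = X1·X2, and Y2 = X2; that reading is an assumption here, since the problem statement itself is not reproduced.

```python
# Rebuilding the Problem 5 table, assuming (consistently with the entries
# above) X1, X2 independent with P(Xi = k) = k/6 on {1, 2, 3},
# Y1 = X1 * X2, and Y2 = X2.
from collections import defaultdict

table = defaultdict(float)
for x1 in (1, 2, 3):
    for x2 in (1, 2, 3):
        table[(x1 * x2, x2)] += (x1 / 6) * (x2 / 6)

# marginal PMF of Y2, obtained by summing each row of the table
marg_Y2 = defaultdict(float)
for (y1, y2), pr in table.items():
    marg_Y2[y2] += pr
```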
6. (a) For (X1, X2) as given, set Y1 = X1 + X2 and Y2 = X1/(X1 + X2). The inverse
   transformation, from (Y1, Y2) to (X1, X2), is given by
   X1 = Y1Y2 and X2 = Y1(1 − Y2).
   The Jacobian term is J(y1, y2) = y1, so the joint PDF for (Y1, Y2) is
   fY1,Y2(y1, y2) = [1/Γ(α)](y1y2)^(α−1) e^(−y1y2) · [1/Γ(β)][y1(1 − y2)]^(β−1) e^(−y1(1−y2)) · y1
                  ∝ y1^(α+β−1) e^(−y1) · y2^(α−1) (1 − y2)^(β−1).
   Since the joint PDF of (Y1, Y2) factors as a product of a function of y1 only
   and a function of y2 only, Y1 and Y2 must be independent.
(b) We can see that the two functions are, up to proportionality constants, a
Gamma(α + β, 1) PDF and a Beta(α, β) PDF, respectively; therefore, these
must be the corresponding marginal distributions of Y1 and Y2 , respectively.
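A short Monte Carlo sketch of both conclusions, assuming Gamma(α, β) means shape α and scale β as in the derivation above:

```python
# Monte Carlo sketch for Problem 6: with X1 ~ Gamma(alpha, 1) and
# X2 ~ Gamma(beta, 1) independent, Y1 = X1 + X2 should be
# Gamma(alpha + beta, 1) (mean alpha + beta) and Y2 = X1/(X1 + X2)
# should be Beta(alpha, beta) (mean alpha/(alpha + beta)).
import random

random.seed(1)
alpha, beta = 2.0, 3.0
n = 200_000
y1_sum = y2_sum = 0.0
for _ in range(n):
    x1 = random.gammavariate(alpha, 1.0)  # shape alpha, scale 1
    x2 = random.gammavariate(beta, 1.0)
    y1_sum += x1 + x2
    y2_sum += x1 / (x1 + x2)

mean_y1 = y1_sum / n  # should be near alpha + beta = 5
mean_y2 = y2_sum / n  # should be near alpha/(alpha + beta) = 0.4
```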
7. (a) Let Z = X + Y . One way to get this is by creating another variable, say,
W = X, finding the joint PDF of (W, Z) and then integrating out W to get
the marginal PDF of Z. Another way to get it is to start by targeting the
CDF of Z. Let FY |X (y | x) and fY |X (y | x) denote the conditional CDF and
PDF of Y , given X = x, respectively. Using iterated expectation, we have
P(X + Y ≤ z) = ∫ P(X + Y ≤ z | X = x) fX(x) dx = ∫ FY|X(z − x | x) fX(x) dx.
Then we can get the PDF for Z = X + Y by differentiating with respect to
z and interchanging the derivative and integral; this is justified in all “nice”
examples, e.g., if fY|X(y | x) is uniformly bounded in (x, y). The result is
fZ(z) = ∫ fY|X(z − x | x) fX(x) dx = ∫ fX,Y(x, z − x) dx.
(b) If X and Y are independent, then the formula simplifies:
fZ(z) = ∫ fY(z − x) fX(x) dx.
(c) In the case where X and Y are iid Unif(0, 1), the PDFs are easy but we have
to be careful about the range of integration:
fZ(z) = ∫_0^z dx = z if z ∈ [0, 1], and fZ(z) = ∫_{z−1}^1 dx = 2 − z if z ∈ (1, 2].
The graph of this PDF looks like a triangle.
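The triangular formula can be checked by evaluating the convolution integral numerically; a minimal sketch:

```python
# Numeric check of Problem 7(c): evaluate f_Z(z) = ∫ f_Y(z - x) f_X(x) dx
# with Unif(0, 1) marginals and compare against the triangular formula
# (z on [0, 1], 2 - z on (1, 2]).
def f_unif(u):
    """Unif(0, 1) density."""
    return 1.0 if 0.0 <= u <= 1.0 else 0.0

def f_Z(z, n=20_000):
    # midpoint rule over x in (0, 1), where f_X is supported
    h = 1.0 / n
    return sum(f_unif(z - (i + 0.5) * h) * h for i in range(n))

# f_Z(0.3) ≈ 0.3 and f_Z(1.6) ≈ 2 - 1.6 = 0.4
```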
8. The inverse transformation, (R², Θ) in terms of (X, Y), is
   R² = X² + Y² and Θ = arctan(Y/X).
I had to look up the formula for the derivative of arctan but, given that, it’s easy to
check that the Jacobian of the transformation is J(x, y) = 2. Therefore, the joint
PDF of (X, Y) is
fX,Y(x, y) = 2 · (1/2)e^(−(x² + y²)/2) · 1/(2π) = (2π)^(−1/2) e^(−x²/2) · (2π)^(−1/2) e^(−y²/2).
This is a product of two standard normal PDFs, one for x and one for y, so we can
conclude that X and Y are independent N(0, 1).
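Running the transformation in reverse gives a quick simulation check; a sketch:

```python
# Monte Carlo sketch for Problem 8: draw R^2 ~ Exp(rate 1/2) and
# Theta ~ Unif(0, 2*pi) independently, set X = R*cos(Theta); the sample
# mean and variance of X should be near 0 and 1, as for N(0, 1) draws.
import math
import random

random.seed(2)
n = 200_000
s = sq = 0.0
for _ in range(n):
    r = math.sqrt(random.expovariate(0.5))  # R^2 has mean 2
    theta = random.uniform(0.0, 2.0 * math.pi)
    x = r * math.cos(theta)
    s += x
    sq += x * x

mean_x = s / n                 # ≈ 0
var_x = sq / n - mean_x ** 2   # ≈ 1
```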
9. As a first step, we can derive a general formula for the CDF and PDF of a ratio Z =
X1 /X2 of two independent positive random variables. Write Fi and fi , respectively,
for the CDF and PDF of Xi , i = 1, 2. Using iterated expectation as above,
P(X1/X2 ≤ z) = ∫_0^∞ P(X1/X2 ≤ z | X2 = x2) f2(x2) dx2 = ∫_0^∞ F1(zx2) f2(x2) dx2.
Differentiating with respect to z, interchanging derivative and integral, leads to
fZ(z) = ∫_0^∞ x2 f1(zx2) f2(x2) dx2.
In the special case of gamma random variables here, we have
fZ(z) = [1/(β^(α1+α2) Γ(α1)Γ(α2))] z^(α1−1) ∫_0^∞ x2^(α1+α2−1) e^(−(z+1)x2/β) dx2.
Factoring out z^(α1−1), what remains looks a lot like a gamma PDF with shape parameter α1 + α2 and scale parameter β/(1 + z), except that it is missing the normalizing
constant. Putting the normalizing constant in, we have
fZ(z) = [Γ(α1 + α2)/(Γ(α1)Γ(α2))] · z^(α1−1)/(1 + z)^(α1+α2), z > 0.
This PDF does not depend on the original scale parameter β; this fact could have
been deduced from the very beginning, without doing any calculations, since the
ratio X1 /X2 will cancel out the common scale.
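As a sanity check that no constants were lost, the density can be verified to integrate to 1 for sample shape values; substituting u = z/(1 + z) turns the integral over (0, ∞) into a Beta(α1, α2) integral over (0, 1).

```python
# Check that the Problem 9 ratio density integrates to 1 for sample
# shapes, via the substitution u = z/(1 + z) and the midpoint rule.
import math

alpha1, alpha2 = 2.5, 1.5
const = math.gamma(alpha1 + alpha2) / (math.gamma(alpha1) * math.gamma(alpha2))
n = 100_000
h = 1.0 / n
total = 0.0
for i in range(n):
    u = (i + 0.5) * h
    total += const * u ** (alpha1 - 1) * (1 - u) ** (alpha2 - 1) * h

# total ≈ 1
```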
10. The marginal PDF of X, from Problem 1 above, is fX (x) = x + 1/2. Therefore,
the conditional PDF of Y given X = x is
fY|X(y|x) = fX,Y(x, y)/fX(x) = (x + y)/(x + 1/2).
Then the conditional mean is
E(Y | X = x) = ∫_0^1 y fY|X(y|x) dy = · · · = (3x + 2)/(6x + 3).
To get the conditional variance, first find E(Y² | X = x):
E(Y² | X = x) = ∫_0^1 y² fY|X(y|x) dy = · · · = (4x + 3)/(12x + 6).
Then the conditional variance is
V(Y | X = x) = E(Y² | X = x) − [E(Y | X = x)]² = (4x + 3)/(12x + 6) − [(3x + 2)/(6x + 3)]².
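A quick spot check of the conditional-mean formula at one value of x, by direct numerical integration:

```python
# Spot check of the Problem 10 conditional mean at x = 0.5:
# the formula gives E(Y | X = 0.5) = (3*0.5 + 2)/(6*0.5 + 3) = 7/12.
x = 0.5
n = 100_000
h = 1.0 / n
cond_mean = 0.0
for i in range(n):
    y = (i + 0.5) * h
    cond_mean += y * (x + y) / (x + 0.5) * h  # y * f_{Y|X}(y|x)

# cond_mean ≈ 7/12 ≈ 0.5833
```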
11. Write ϕ(x) = E{g(X)h(Y) | X = x}. Then
    ϕ(x) = ∫ g(x)h(y) fY|X(y | x) dy = g(x) ∫ h(y) fY|X(y | x) dy = g(x)E{h(Y) | X = x}.
    It follows immediately that ϕ(X) = g(X)E{h(Y) | X}. The same argument holds
    for the other conditional expectation.
12. The two marginal PDFs are the same: fX(x) = x + 1/2 and fY(y) = y + 1/2.
    Therefore, V(X) = V(Y), so we only need to calculate one. From Problem 1 we
    have E(Y) = 7/12. Also, E(Y²) = ∫_0^1 y²(y + 1/2) dy = 5/12, so the variance is
    V(Y) = 5/12 − (7/12)² = 11/144. For the covariance, we first evaluate E(XY):
    E(XY) = ∫_0^1 ∫_0^1 xy(x + y) dx dy = · · · = 1/3.
    Since C(X, Y) = E(XY) − E(X)E(Y), and E(X) = E(Y) = 7/12, we finally have
    ρX,Y = C(X, Y)/√(V(X)V(Y)) = (1/3 − 49/144)/(11/144) = −1/11.
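These moments can be verified numerically; a midpoint-rule sketch for the joint PDF f(x, y) = x + y on the unit square (by symmetry E(Y) = E(X) and V(Y) = V(X)):

```python
# Midpoint-rule check of the Problem 12 moments for f(x, y) = x + y
# on the unit square.
n = 400
h = 1.0 / n
E_X = E_X2 = E_XY = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        w = (x + y) * h * h
        E_X += x * w
        E_X2 += x * x * w
        E_XY += x * y * w

var_X = E_X2 - E_X ** 2   # 11/144
cov = E_XY - E_X ** 2     # E(XY) - E(X)E(Y), with E(Y) = E(X)
rho = cov / var_X         # correlation, since V(Y) = V(X)
```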
13. (a) By the theorem, E(XY ) = E(X)E(Y ), so C(X, Y ) = E(XY ) − E(X)E(Y ) = 0.
    (b) Use the definition of variance:
    V(X + Y) = E{[(X + Y) − E(X + Y)]²} = E{[(X − E(X)) + (Y − E(Y))]²}
             = E{(X − E(X))² + (Y − E(Y))² + 2(X − E(X))(Y − E(Y))}
             = V(X) + V(Y) + 2C(X, Y).
Since X and Y are independent, by part (a) we know that C(X, Y ) = 0,
proving the claim.
14. The conditional PDF of Y given X = x is the ratio of fX,Y (x, y) and fX (x). Since
X and Y are independent, fX,Y (x, y) = fX (x)fY (y), so the fX (x) cancels in the
numerator and denominator, leaving just fY (y). We can interpret this as follows:
if X and Y are independent, then knowing the value of X shouldn’t influence our
uncertainty about Y and, therefore, fY |X (y | x) = fY (y) for all x.
15. Using iterated expectation, we can get to
P(X1 ≤ X2) = ∫_{−∞}^∞ P(X1 ≤ x2 | X2 = x2) f(x2) dx2
           = ∫_{−∞}^∞ P(X1 ≤ x2) f(x2) dx2 (by independence) = ∫_{−∞}^∞ F(x2) f(x2) dx2.
If we let u = F(x2), then du = f(x2) dx2; therefore, by substitution,
P(X1 ≤ X2) = ∫_{−∞}^∞ F(x2) f(x2) dx2 = ∫_0^1 u du = 1/2.
16. Let Y = X², so that X and Y are clearly not independent. By symmetry, E(X) = 0.
    So to get zero covariance we need to show that E(XY) = E(X³) = 0. This is easy
    to do: ∫_{−1}^1 x³ dx = [1⁴ − (−1)⁴]/4 = 0. Therefore, in general, C(X, Y) = 0 is a
    necessary but not sufficient condition for X and Y to be independent.
17. For Y1 , Y2 , Y3 as given, the inverse transformation is X1 = Y1 , X2 = Y2 − Y1 , and
X3 = Y3 − Y2 . Clearly the Jacobian is a constant; the particular constant is 1. Then
the joint PDF for (Y1, Y2, Y3) is
fY1,Y2,Y3(y1, y2, y3) = e^(−y1) e^(−(y2−y1)) e^(−(y3−y2)) = e^(−y3).
The range for (Y1 , Y2 , Y3 ) is {(y1 , y2 , y3 ) : 0 < y1 < y2 < y3 < ∞}. And, incidentally,
the marginal PDF for Y3 is
fY3(y3) = ∫_0^{y3} ∫_{y1}^{y3} e^(−y3) dy2 dy1 = · · · = y3² e^(−y3)/2.
Careful inspection reveals that this is the PDF of a Gamma(α = 3, β = 1) random
variable. However, this should not be surprising since X1 , X2 , X3 are independent
Gamma(1, 1) random variables and Y3 is their sum. Another proof of this result
could be given based on MGFs.
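A simulation sketch of the same fact, using that Gamma(1, 1) is Exp(1) and that Gamma(3, 1) has mean 3 and variance 3:

```python
# Monte Carlo sketch for Problem 17: Y3 = X1 + X2 + X3 with iid Exp(1)
# components should be Gamma(3, 1), which has mean 3 and variance 3.
import random

random.seed(3)
n = 200_000
s = sq = 0.0
for _ in range(n):
    y3 = sum(random.expovariate(1.0) for _ in range(3))
    s += y3
    sq += y3 * y3

mean_y3 = s / n                  # ≈ 3
var_y3 = sq / n - mean_y3 ** 2   # ≈ 3
```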
18. If we denote by “success” the event that a single random point is inside the circle,
then X is counting the number of “successes” in ten Bernoulli trials. Therefore,
X is a binomial random variable with n = 10 and p to be determined. Since the
radius of the inscribed circle is 1, the area is π. Similarly, the area of the square is
4, so the probability of a random point in the square being inside the circle is π/4;
this is the parameter p. Thus, X ∼ Bin(n = 10, p = π/4), and
E(X) = 10π/4 and V(X) = 10π(4 − π)/16.
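The success probability p = π/4 is itself easy to estimate by simulation; a sketch:

```python
# Monte Carlo sketch of Problem 18's success probability: a uniform point
# in the square [-1, 1]^2 lands in the inscribed unit circle with
# probability (area of circle)/(area of square) = pi/4.
import math
import random

random.seed(4)
n = 200_000
hits = 0
for _ in range(n):
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    if x * x + y * y <= 1.0:
        hits += 1

p_hat = hits / n  # ≈ pi/4 ≈ 0.785
```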
19. For the “middle value” Y to be less than or equal to some number y, at least two
of X1 , X2 , X3 must be less than or equal to y. The probability that a single X is
less than or equal to y is defined as G(y). If we let Z denote the number of Xi ’s
less than or equal to y, then Z ∼ Bin(n = 3, p = G(y)). Therefore,
P(Y ≤ y) = P(Z ≥ 2) = pZ(2) + pZ(3) = 3G(y)²[1 − G(y)] + G(y)³.
This is the CDF FY(y) of Y. Differentiating the CDF with respect to y gives
fY(y) = 6g(y)G(y)[1 − G(y)].
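A simulation check in the Unif(0, 1) special case, where G(y) = y and g(y) = 1, so the middle value has PDF 6y(1 − y), with P(Y ≤ 0.5) = 0.5 and E(Y) = 0.5:

```python
# Monte Carlo sketch of Problem 19 for iid Unif(0, 1) variables:
# the middle of three has PDF 6y(1 - y) on (0, 1).
import random

random.seed(5)
n = 200_000
below = 0
total = 0.0
for _ in range(n):
    y = sorted(random.random() for _ in range(3))[1]  # middle of three
    total += y
    if y <= 0.5:
        below += 1

p_half = below / n  # ≈ 0.5
mean_y = total / n  # ≈ 0.5
```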
20. (a) The CDF is FX(x) = 0 for x < −1, FX(x) = 1/2 + 5x/12 − x²/12 for −1 ≤ x ≤ 2,
    and FX(x) = 1 for x > 2.
(b) Let Y denote the number of Xi ’s greater than zero. Then Y is a binomial
random variable with parameters n = 4 and p = 1 − FX (0) = 1/2. The
probability that at least three Xi ’s are greater than zero is the probability
    that Y is greater than or equal to three; that is,
    P(Y ≥ 3) = pY(3) + pY(4) = (4 choose 3)(1/2)⁴ + (1/2)⁴ = 5/16.
    (c) Let Y denote the number of Xi’s before the first one greater than one. Each
    Xi exceeds one with probability p = 1 − FX(1) = 1 − 5/6 = 1/6, so Y is a geometric
    random variable counting the failures before the first success. Its expected
    value is, therefore, E(Y) = (1 − p)/p = 5.
21. Let X denote the number of patients arriving between 6:00 and 6:45pm; then X
is a Poisson random variable with µ = (3/4) · 8 = 6, where the factor 3/4 appears
because the interval of time is only 3/4-ths of a full hour. Then
P(X ≥ 2) = 1 − pX(0) − pX(1) = 1 − e^(−6)(6⁰ + 6¹) = 1 − 0.017 = 0.983.
22. Since the Xi’s are independent, the MGF for Y = Σ_{i=1}^n Xi is
    MY(t) = ∏_{i=1}^n MXi(t) = ∏_{i=1}^n e^(μi(e^t − 1)) = e^((e^t − 1) Σ_{i=1}^n μi) = e^(μ(e^t − 1)),
    where μ = Σ_{i=1}^n μi; the last term is the MGF for a Pois(μ) random variable.
23. (a) Since the system can operate only when all its components are operational,
the system fails as soon as one of the components fails. Therefore, the failure
time for the system must be the shortest of the component failure times, i.e.,
Y = min{X1 , . . . , Xn }.
(b) We first find the complement of the CDF of Y ; that is,
P(Y > y) = P(min{X1 , . . . , Xn } > y) = P(all Xi ’s > y)
= P(X1 > y, . . . , Xn > y) = ∏_{i=1}^n P(Xi > y) = ∏_{i=1}^n e^(−λi y) = e^(−λy),
where λ = Σ_{i=1}^n λi and independence was used in the fourth equality. Then
the CDF of Y is FY (y) = 1 − P(Y > y) = 1 − e−λy , which we easily recognize
as the CDF of an exponential random variable with parameter λ.
(c) Since λ = 0.2 + 0.3 + 0.5 = 1, P(Y ≤ 1) = 1 − e^(−1) ≈ 0.63.
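A closing simulation sketch of parts (b) and (c) with the given rates:

```python
# Monte Carlo sketch for Problem 23: with component rates 0.2, 0.3, 0.5,
# the system lifetime Y = min(X1, X2, X3) should be Exp(1), so
# P(Y <= 1) should be near 1 - exp(-1) ≈ 0.632.
import math
import random

random.seed(6)
n = 200_000
count = 0
for _ in range(n):
    y = min(random.expovariate(0.2),
            random.expovariate(0.3),
            random.expovariate(0.5))
    if y <= 1.0:
        count += 1

p_hat = count / n  # ≈ 1 - exp(-1)
```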