Problem Set #5
Econ 103
Part I – Problems from the Textbook
Chapter 4: 1, 3, 5, 7, 9, 11, 13, 15, 25, 27, 29
Chapter 5: 1, 3, 5, 9, 11, 13, 17
Part II – Additional Problems
1. Suppose X is a random variable with support {−1, 0, 1} where p(−1) = q and p(1) = p.
(a) What is p(0)?
Solution: By the complement rule p(0) = 1 − p − q.
(b) Calculate the CDF, F (x0 ), of X.
Solution:
F(x0) = 0        for x0 < −1
F(x0) = q        for −1 ≤ x0 < 0
F(x0) = 1 − p    for 0 ≤ x0 < 1
F(x0) = 1        for x0 ≥ 1
(c) Calculate E[X].
Solution: E[X] = −1 · q + 0 · (1 − p − q) + p · 1 = p − q
(d) What relationship must hold between p and q to ensure E[X] = 0?
Solution: p = q
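As a quick numerical check (not part of the original solution), the following Python sketch evaluates the pmf, CDF, and mean for illustrative values p = 0.3 and q = 0.2 (these values are assumptions chosen only for the example):

```python
# Numerical check of Problem 1 with illustrative values p = 0.3, q = 0.2.
p, q = 0.3, 0.2

# pmf over the support {-1, 0, 1}; p(0) = 1 - p - q by the complement rule.
pmf = {-1: q, 0: 1 - p - q, 1: p}

# CDF at each support point: F(x0) = sum of pmf over x <= x0.
cdf = {x0: sum(prob for x, prob in pmf.items() if x <= x0) for x0 in (-1, 0, 1)}
print(cdf)   # approximately {-1: 0.2, 0: 0.7, 1: 1.0}, i.e. q, 1 - p, 1

# E[X] = sum of x * p(x) should equal p - q = 0.1 (up to floating point).
mean = sum(x * prob for x, prob in pmf.items())
print(mean, p - q)
```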
2. Fill in the missing details from class to calculate the variance of a Bernoulli Random
Variable directly, that is, without using the shortcut formula.
Solution: Recalling that µ = E[X] = p for a Bernoulli(p) random variable,
σ² = Var(X) = Σ_{x ∈ {0,1}} (x − µ)² p(x)
            = Σ_{x ∈ {0,1}} (x − p)² p(x)
            = (0 − p)²(1 − p) + (1 − p)²p
            = p²(1 − p) + (1 − p)²p
            = p² − p³ + p − 2p² + p³
            = p − p²
            = p(1 − p)
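The direct calculation can be verified numerically. The sketch below is illustrative, with an assumed value p = 0.3; it computes the sum Σ (x − µ)² p(x) over {0, 1} and compares it with p(1 − p):

```python
# Verify Var(X) = p(1 - p) for a Bernoulli(p) random variable; p = 0.3 is assumed.
p = 0.3
mu = p  # E[X] for a Bernoulli(p)

# Direct definition: sum of (x - mu)^2 * p(x) over the support {0, 1}.
var_direct = (0 - mu) ** 2 * (1 - p) + (1 - mu) ** 2 * p
print(var_direct, p * (1 - p))  # both 0.21
```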
3. Prove that the Bernoulli Random Variable is a special case of the Binomial Random
variable for which n = 1. (Hint: compare pmfs.)
Solution: The pmf for a Binomial(n, p) random variable is
p(x) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x)
with support {0, 1, 2, . . . , n}. Setting n = 1 gives
p(x) = [1!/(x!(1 − x)!)] p^x (1 − p)^(1−x)
with support {0, 1}. Plugging in each realization in the support, and recalling that
0! = 1, we have
p(0) = [1!/(0!(1 − 0)!)] p^0 (1 − p)^(1−0) = 1 − p
and
p(1) = [1!/(1!(1 − 1)!)] p^1 (1 − p)^0 = p
which is exactly how we defined the Bernoulli Random Variable.
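As an illustrative check (assuming SciPy is available), the Binomial(1, p) pmf matches the Bernoulli(p) pmf at both support points; the value p = 0.4 below is chosen only for the example:

```python
# Compare the Binomial(1, p) pmf with the Bernoulli(p) pmf; p = 0.4 is illustrative.
from scipy.stats import binom, bernoulli

p = 0.4
for x in (0, 1):
    print(x, binom.pmf(x, n=1, p=p), bernoulli.pmf(x, p))
# Both columns give 0.6 at x = 0 and 0.4 at x = 1.
```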
4. Suppose that X is a random variable with support {1, 2} and Y is a random variable
with support {0, 1} where X and Y have the following joint distribution:
pXY (1, 0) = 0.20,
pXY (1, 1) = 0.30
pXY (2, 0) = 0.25,
pXY (2, 1) = 0.25
(a) Express the joint distribution in a 2 × 2 table.
Solution:

            Y = 0    Y = 1
  X = 1      0.20     0.30
  X = 2      0.25     0.25
(b) Using the table, calculate the marginal probability distributions of X and Y .
Solution:
pX (1) = pXY (1, 0) + pXY (1, 1) = 0.20 + 0.30 = 0.50
pX (2) = pXY (2, 0) + pXY (2, 1) = 0.25 + 0.25 = 0.50
pY (0) = pXY (1, 0) + pXY (2, 0) = 0.20 + 0.25 = 0.45
pY (1) = pXY (1, 1) + pXY (2, 1) = 0.30 + 0.25 = 0.55
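A short numpy sketch (illustrative, not part of the original solution) stores the joint table from part (a) and recovers both marginals by summing across rows and columns:

```python
import numpy as np

# Joint pmf from part (a): rows index X in {1, 2}, columns index Y in {0, 1}.
joint = np.array([[0.20, 0.30],
                  [0.25, 0.25]])

p_X = joint.sum(axis=1)  # marginal of X: [0.50, 0.50]
p_Y = joint.sum(axis=0)  # marginal of Y: [0.45, 0.55]
print(p_X, p_Y)
```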
(c) Calculate the conditional probability distribution of Y |X = 1 and Y |X = 2.
Solution: The distribution of Y |X = 1 is
P(Y = 0|X = 1) = pXY(1, 0)/pX(1) = 0.2/0.5 = 0.4
P(Y = 1|X = 1) = pXY(1, 1)/pX(1) = 0.3/0.5 = 0.6
while the distribution of Y |X = 2 is
P(Y = 0|X = 2) = pXY(2, 0)/pX(2) = 0.25/0.5 = 0.5
P(Y = 1|X = 2) = pXY(2, 1)/pX(2) = 0.25/0.5 = 0.5
(d) Calculate E[Y |X].
Solution:
E[Y |X = 1] = 0 × 0.4 + 1 × 0.6 = 0.6
E[Y |X = 2] = 0 × 0.5 + 1 × 0.5 = 0.5
Hence, E[Y |X] takes the value 0.6 with probability 0.5 and the value 0.5 with probability 0.5,
since pX (1) = 0.5 and pX (2) = 0.5.
(e) What is E[E[Y |X]]?
Solution: E[E[Y |X]] = 0.5 × 0.6 + 0.5 × 0.5 = 0.3 + 0.25 = 0.55. Note that
this equals the expectation of Y calculated from its marginal distribution, since
E[Y ] = 0 × 0.45 + 1 × 0.55 = 0.55. This illustrates the so-called “Law of Iterated
Expectations.”
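Continuing the numpy sketch from part (b) (illustrative only), the conditional distributions, E[Y |X], and the law of iterated expectations can all be checked directly from the joint table:

```python
import numpy as np

joint = np.array([[0.20, 0.30],   # X = 1 row, Y in {0, 1}
                  [0.25, 0.25]])  # X = 2 row
y_vals = np.array([0, 1])

p_X = joint.sum(axis=1)                 # marginal of X: [0.5, 0.5]
cond_Y_given_X = joint / p_X[:, None]   # row x gives P(Y = y | X = x)
print(cond_Y_given_X)                   # [[0.4, 0.6], [0.5, 0.5]]

E_Y_given_X = cond_Y_given_X @ y_vals   # [0.6, 0.5]
E_E_Y_given_X = p_X @ E_Y_given_X       # 0.55
E_Y = joint.sum(axis=0) @ y_vals        # 0.55, matching the iterated expectation
print(E_Y_given_X, E_E_Y_given_X, E_Y)
```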
(f) Calculate the covariance between X and Y using the shortcut formula.
Solution: First, from the marginal distributions, E[X] = 1 · 0.5 + 2 · 0.5 = 1.5
and E[Y ] = 0 · 0.45 + 1 · 0.55 = 0.55. Hence E[X]E[Y ] = 1.5 · 0.55 = 0.825.
Second,
E[XY ] = (0 · 1) · 0.2 + (0 · 2) · 0.25 + (1 · 1) · 0.3 + (1 · 2) · 0.25
= 0.3 + 0.5 = 0.8
Finally Cov(X, Y ) = E[XY ] − E[X]E[Y ] = 0.8 − 0.825 = −0.025
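The covariance computation can likewise be checked from the joint table; this is an illustrative sketch, not part of the original solution:

```python
import numpy as np

joint = np.array([[0.20, 0.30],
                  [0.25, 0.25]])
x_vals = np.array([1, 2])
y_vals = np.array([0, 1])

E_X = joint.sum(axis=1) @ x_vals                 # 1.5
E_Y = joint.sum(axis=0) @ y_vals                 # 0.55
E_XY = (np.outer(x_vals, y_vals) * joint).sum()  # 0.8
print(E_XY - E_X * E_Y)                          # -0.025 (up to floating point)
```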
5. Let X and Y be discrete random variables and a, b, c, d be constants. Prove the following:
(a) Cov(a + bX, c + dY ) = bdCov(X, Y )
Solution: Let µX = E[X] and µY = E[Y ]. By the linearity of expectation,
E[a + bX] = a + bµX
E[c + dY ] = c + dµY
Thus, we have
(a + bx) − E[a + bX] = b(x − µX )
(c + dy) − E[c + dY ] = d(y − µY )
Substituting these into the formula for the covariance between two discrete
random variables,
Cov(a + bX, c + dY) = Σ_x Σ_y [b(x − µX)] [d(y − µY)] p(x, y)
                    = bd Σ_x Σ_y (x − µX)(y − µY) p(x, y)
                    = bd Cov(X, Y)
(b) Corr(a + bX, c + dY) = Corr(X, Y) provided that b, d are positive.
Solution:
Corr(a + bX, c + dY) = Cov(a + bX, c + dY) / √(Var(a + bX) · Var(c + dY))
                     = bd Cov(X, Y) / √(b² Var(X) · d² Var(Y))
                     = Cov(X, Y) / √(Var(X) · Var(Y))
                     = Corr(X, Y)
where the second-to-last step uses √(b²d²) = bd, which holds because b and d are positive.
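Both properties can be illustrated with a quick Monte Carlo sketch. The variables, constants, and sample size below are all assumptions chosen for the example; the simulated values agree with bd · Cov(X, Y) and Corr(X, Y) up to floating-point and sampling error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two correlated random variables, for illustration only.
x = rng.integers(0, 3, size=n).astype(float)
y = x + rng.integers(0, 2, size=n)

a, b, c, d = 2.0, 3.0, -1.0, 0.5   # arbitrary constants with b, d > 0

cov_xy = np.cov(x, y)[0, 1]
cov_trans = np.cov(a + b * x, c + d * y)[0, 1]
print(cov_trans, b * d * cov_xy)   # approximately equal (part a)

corr_xy = np.corrcoef(x, y)[0, 1]
corr_trans = np.corrcoef(a + b * x, c + d * y)[0, 1]
print(corr_trans, corr_xy)         # equal up to floating point (part b)
```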
6. Fill in the missing steps from lecture to prove the shortcut formula for covariance:
Cov(X, Y ) = E[XY ] − E[X]E[Y ]
Solution: By the Linearity of Expectation,
Cov(X, Y ) = E[(X − µX )(Y − µY )]
= E[XY − µX Y − µY X + µX µY ]
= E[XY ] − µX E[Y ] − µY E[X] + µX µY
= E[XY ] − µX µY − µY µX + µX µY
= E[XY ] − µX µY
= E[XY ] − E[X]E[Y ]
7. Let X1 be a random variable denoting the returns of stock 1, and X2 be a random variable
denoting the returns of stock 2. Accordingly let µ1 = E[X1], µ2 = E[X2], σ1² = Var(X1),
σ2² = Var(X2) and ρ = Corr(X1, X2). A portfolio, Π, is a linear combination of X1 and
X2 with weights that sum to one, that is Π(ω) = ωX1 + (1 − ω)X2 , indicating the
proportions of stock 1 and stock 2 that an investor holds. In this example, we require
ω ∈ [0, 1], so that negative weights are not allowed. (This rules out short-selling.)
(a) Calculate E[Π(ω)] in terms of ω, µ1 and µ2 .
Solution:
E[Π(ω)] = E[ωX1 + (1 − ω)X2 ] = ωE[X1 ] + (1 − ω)E[X2 ]
= ωµ1 + (1 − ω)µ2
(b) If ω ∈ [0, 1] is it possible to have E[Π(ω)] > µ1 and E[Π(ω)] > µ2 ? What about
E[Π(ω)] < µ1 and E[Π(ω)] < µ2 ? Explain.
Solution: No to both. With ω ∈ [0, 1], E[Π(ω)] = ωµ1 + (1 − ω)µ2 is a weighted
average of µ1 and µ2, so it must lie between them: when short-selling is disallowed,
the portfolio's expected return can neither exceed both µ1 and µ2 nor fall below both.
(c) Express Cov(X1 , X2 ) in terms of ρ and σ1 , σ2 .
Solution: Cov(X1, X2) = ρσ1σ2
(d) What is Var[Π(ω)]? (Your answer should be in terms of ρ, σ1² and σ2².)
Solution:
Var[Π(ω)] = Var[ωX1 + (1 − ω)X2]
          = ω² Var(X1) + (1 − ω)² Var(X2) + 2ω(1 − ω) Cov(X1, X2)
          = ω²σ1² + (1 − ω)²σ2² + 2ω(1 − ω)ρσ1σ2
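A simulation sketch can confirm the variance formula. The parameter values and the use of normally distributed returns below are assumptions made purely for illustration:

```python
import numpy as np

# Illustrative parameters (not from the problem): sigma1, sigma2, rho, omega.
sigma1, sigma2, rho, omega = 0.2, 0.3, 0.4, 0.6
cov = np.array([[sigma1**2, rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2, sigma2**2]])

rng = np.random.default_rng(1)
returns = rng.multivariate_normal(mean=[0.05, 0.08], cov=cov, size=500_000)
portfolio = omega * returns[:, 0] + (1 - omega) * returns[:, 1]

formula = omega**2 * sigma1**2 + (1 - omega)**2 * sigma2**2 \
          + 2 * omega * (1 - omega) * rho * sigma1 * sigma2
print(portfolio.var(), formula)   # close, up to sampling error
```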
(e) Using part (d), show that the value of ω that minimizes Var[Π(ω)] is
ω* = (σ2² − ρσ1σ2) / (σ1² + σ2² − 2ρσ1σ2)
In other words, Π(ω*) is the minimum variance portfolio.
Solution: The First Order Condition is:
2ωσ1² − 2(1 − ω)σ2² + (2 − 4ω)ρσ1σ2 = 0
Dividing both sides by two and rearranging:
ωσ1² − (1 − ω)σ2² + (1 − 2ω)ρσ1σ2 = 0
ωσ1² − σ2² + ωσ2² + ρσ1σ2 − 2ωρσ1σ2 = 0
ω(σ1² + σ2² − 2ρσ1σ2) = σ2² − ρσ1σ2
So we have
ω* = (σ2² − ρσ1σ2) / (σ1² + σ2² − 2ρσ1σ2)
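The closed-form ω* can also be checked numerically. The sketch below uses assumed parameter values (σ1 = 0.2, σ2 = 0.3, ρ = 0.4, chosen for illustration) and compares the formula with a grid search over ω ∈ [0, 1]:

```python
import numpy as np

# Illustrative parameters (assumed): sigma1 = 0.2, sigma2 = 0.3, rho = 0.4.
sigma1, sigma2, rho = 0.2, 0.3, 0.4

def port_var(w):
    """Portfolio variance from part (d): Var[w*X1 + (1 - w)*X2]."""
    return (w**2 * sigma1**2 + (1 - w)**2 * sigma2**2
            + 2 * w * (1 - w) * rho * sigma1 * sigma2)

# Closed-form minimum-variance weight from part (e).
w_star = (sigma2**2 - rho * sigma1 * sigma2) / \
         (sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)

# Grid search over [0, 1] as an independent check.
grid = np.linspace(0, 1, 100_001)
w_grid = grid[np.argmin(port_var(grid))]
print(w_star, w_grid)   # both approximately 0.805
```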
(f) If you want a challenge, check the second order condition from part (e).
Solution: The second derivative is
2σ1² + 2σ2² − 4ρσ1σ2
and, since ρ = 1 is the largest possible value for ρ,
2σ1² + 2σ2² − 4ρσ1σ2 ≥ 2σ1² + 2σ2² − 4σ1σ2 = 2(σ1 − σ2)² ≥ 0
so the second derivative is nonnegative, confirming a minimum. This is a global
minimum since Var[Π(ω)] is quadratic in ω.