Mathematics 50
Probability and Statistical Inference
Winter 1999
2. Multivariate Continuous Random Variables
Week 4
January 25-29
1. Distribution and density functions
Two CRVs are considered, $X$ and $Y$. The joint cumulative (bivariate) distribution function (d.f.) of $(X, Y)$ is
$$F_{X,Y}(x, y) = F(x, y) = \Pr(X \le x,\ Y \le y).$$
The joint density is defined as
$$f(x, y) = \frac{\partial^2 F(x, y)}{\partial x\, \partial y}.$$
Back representation, by the Fundamental Theorem of Calculus:
$$F(x, y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f(u, v)\, dv\, du.$$
What is the probability that $(X, Y) \in A$? Answer:
$$\Pr(A) = \iint_A f(x, y)\, dx\, dy.$$
How to find $\Pr(X + Y \le c)$?
$$\Pr(X + Y \le c) = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{c - x} f(x, y)\, dy \right) dx.$$
Properties of multivariate distributions:
1. $F(-\infty, -\infty) = 0$, $F(\infty, \infty) = 1$.
2. $F(x, y)$ is an increasing function in both arguments: if $x_1 \le x_2$ and $y_1 \le y_2$ then $F(x_1, y_1) \le F(x_2, y_2)$.
3. $f(x, y) \ge 0$.
4. $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\, dx\, dy = 1$.
To prove that a certain function is a distribution function we must show that 1 and 2 hold. To prove that a certain function is a density function we must show that 3 and 4 hold.
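For instance, criteria 3 and 4 can be checked numerically before attempting an analytic proof. Below is a minimal Python sketch (our own addition, not from the lecture); the candidate density $f(x, y) = e^{-x-y}$ on the positive quadrant is an illustrative choice:

```python
import numpy as np
from scipy import integrate

# Candidate joint density: f(x, y) = exp(-x - y) for x, y > 0 (illustrative choice).
f = lambda y, x: np.exp(-x - y)

# Criterion 3: nonnegativity, spot-checked on a grid of the support.
xs = np.linspace(0, 10, 101)
assert all(f(y, x) >= 0 for x in xs for y in xs)

# Criterion 4: the density must integrate to 1 over the whole support.
total, _ = integrate.dblquad(f, 0, np.inf, 0, np.inf)
print(total)  # ~1.0
```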
[Figure: typical distribution function of a bivariate CRV. Shown: the distribution function of a bivariate logistic distribution, $F(x, y) = \exp(x + y)/(1 + \exp(x + y))$.]
Problem (Rice, p. 74). The bivariate density function is defined as
$$f(x, y) = \begin{cases} c(x^2 + xy) & \text{if } 0 \le x \le 1,\ 0 \le y \le 1 \\ 0 & \text{elsewhere.} \end{cases}$$
Find $c$ and $\Pr(X > Y)$.
Solution. We find $c$ from the condition
$$\int_0^1 \int_0^1 f(x, y)\, dy\, dx = \int_0^1 \left( \int_0^1 f(x, y)\, dy \right) dx = 1.$$
We first evaluate the inner integral: if $x$ is fixed then
$$\int_0^1 (x^2 + xy)\, dy = x^2 + x \int_0^1 y\, dy = x^2 + \frac{1}{2} x.$$
Now we evaluate the outer integral:
$$\int_0^1 \left( x^2 + \frac{1}{2} x \right) dx = \frac{1}{3} + \frac{1}{4} = \frac{7}{12}.$$
Therefore,
$$c = \frac{12}{7}.$$
The needed probability is
$$\Pr(X > Y) = \int_0^1 \int_0^x f(x, y)\, dy\, dx = \frac{12}{7} \int_0^1 \left( \int_0^x (x^2 + xy)\, dy \right) dx.$$
Again, calculate first the inner integral ($x$ fixed):
$$\int_0^x (x^2 + xy)\, dy = x^3 + x \int_0^x y\, dy = x^3 + \frac{1}{2} x^3 = \frac{3}{2} x^3.$$
Finally,
$$\Pr(X > Y) = \frac{12}{7} \cdot \frac{3}{2} \int_0^1 x^3\, dx = \frac{12}{7} \cdot \frac{3}{2} \cdot \frac{1}{4} = \frac{9}{14}.$$
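As a cross-check of both answers, the two integrals can be evaluated numerically; a minimal Python sketch (our own addition, using scipy's dblquad):

```python
from scipy import integrate

# Joint density kernel x^2 + x*y on the unit square; c must make it integrate to 1.
kernel = lambda y, x: x**2 + x*y
mass, _ = integrate.dblquad(kernel, 0, 1, 0, 1)
print(mass, 1/mass)  # 7/12 ~ 0.5833, so c = 12/7 ~ 1.7143

# Pr(X > Y): integrate c*(x^2 + x*y) over the triangle 0 < y < x < 1.
c = 1/mass
p, _ = integrate.dblquad(lambda y, x: c*(x**2 + x*y), 0, 1, 0, lambda x: x)
print(p)             # 9/14 ~ 0.6429
```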
2. Marginal distribution
If the joint distribution is known, what is the distribution of each component (regardless of the other)? Reasonably, we define the marginal distribution function of $X$ as
$$F_X(x) = \Pr(X \le x,\ -\infty < Y < \infty) = F(x, \infty) = \Pr(X \le x),$$
or
$$F_X(x) = \lim_{y \to \infty} F(x, y).$$
According to the above,
$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\, dy,$$
i.e. $y$ is integrated out.
Problem (continued). Find the marginal densities of $X$ and $Y$.
Solution. By definition ($x$ fixed and $y$ integrated out),
$$f_X(x) = \int_0^1 f(x, y)\, dy = \frac{12}{7} \int_0^1 (x^2 + xy)\, dy = \frac{12}{7}\, x \left( x + \frac{1}{2} \right) \quad \text{for } 0 < x < 1.$$
Analogously ($y$ fixed and $x$ integrated out),
$$f_Y(y) = \int_0^1 f(x, y)\, dx = \frac{12}{7} \int_0^1 x^2\, dx + \frac{12}{7}\, y \int_0^1 x\, dx = \frac{12}{7} \cdot \frac{1}{3} + \frac{12}{7}\, y \cdot \frac{1}{2} = \frac{2}{7}(2 + 3y) \quad \text{for } 0 < y < 1.$$
3. Independence
Independence is an important concept in probability and statistics.
Recall, two events $A$ and $B$ are independent if and only if
$$\Pr(A \cap B) = \Pr(A) \times \Pr(B).$$
Similarly, we say that $X$ and $Y$ are independent if
$$F(x, y) = \Pr(X \le x,\ Y \le y) = \Pr(X \le x) \times \Pr(Y \le y) = F_X(x) F_Y(y). \tag{3.1}$$
Therefore, two RVs are independent if the joint d.f. is the product of the marginal d.f.'s.
Taking the derivative, we find the condition for independence expressed in densities:
$$f(x, y) = f_X(x) f_Y(y). \tag{3.2}$$
The conditions (3.1) and (3.2) are equivalent.
Problem (continued). Are $X$ and $Y$ independent?
Solution. $X$ and $Y$ are independent iff $f(x, y) = f_X(x) f_Y(y)$. We have
$$f_X(x) = \frac{12}{7}\, x \left( x + \frac{1}{2} \right), \quad f_Y(y) = \frac{2}{7}(2 + 3y), \quad f(x, y) = \frac{12}{7}\, x(x + y).$$
We see that
$$f(x, y) \ne f_X(x) \times f_Y(y).$$
Answer: $X$ and $Y$ are dependent.
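The conclusion is easy to confirm numerically: compare the joint density with the product of the marginals at any interior point (a short sketch, our own addition):

```python
# Formulas from the solution above, on the unit square.
fX = lambda x: (12/7) * x * (x + 1/2)
fY = lambda y: (2/7) * (2 + 3*y)
f  = lambda x, y: (12/7) * x * (x + y)

x0, y0 = 0.5, 0.25
print(f(x0, y0))        # ~0.643
print(fX(x0) * fY(y0))  # ~0.673, so f != fX * fY and X, Y are dependent
```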
The definition with two RVs is easy to extend to the case of several RVs. A very important case is when $n$ RVs are independent and identically distributed (iid). We write: $X_1, X_2, \ldots, X_n$ are iid.
Example. Uniform distribution on the plane (bivariate uniform distribution).
A pair of CRVs has the bivariate uniform distribution on $(0, a) \times (0, a)$ if the d.f. is specified as
$$F(x, y) = \frac{1}{a^2} \times \begin{cases} xy & \text{if } 0 < x < a,\ 0 < y < a \\ 0 & \text{otherwise.} \end{cases}$$
Find the joint density, the marginal distribution and density functions. Are $X$ and $Y$ independent?
Solution. The joint density is
$$f(x, y) = \frac{\partial^2 F(x, y)}{\partial x\, \partial y} = \frac{1}{a^2} = \text{const}.$$
The marginal distribution function of $X$ is
$$F_X(x) = \lim_{y \to \infty} F(x, y) = F(x, y = a) = \frac{1}{a^2}\, xa = \frac{x}{a}.$$
The marginal distribution function of $Y$ is
$$F_Y(y) = \lim_{x \to \infty} F(x, y) = F(x = a, y) = \frac{1}{a^2}\, ay = \frac{y}{a}.$$
Both have the uniform distribution on $(0, a)$. They are independent because $F(x, y) = F_X(x) F_Y(y)$:
$$\frac{1}{a^2}\, xy = \frac{x}{a} \cdot \frac{y}{a}.$$
Problem. Let $f(x)$ be a density. A function of two arguments is defined as
$$w(x, y) = f(x + y).$$
Is $w$ a joint density function?
Solution. The marginal density would be
$$f_X(x) = \int_{-\infty}^{\infty} f(x + y)\, dy.$$
But
$$\int_{-\infty}^{\infty} f(x + y)\, dy = \int_{-\infty}^{\infty} f(u)\, du = 1.$$
So we obtain that the marginal density is $f_X(x) = 1$ for all $x$. Therefore, $f_X(x)$ cannot be a density, and $w$ is not a joint density function.
4. Conditional distribution
Given the joint distribution of the pair of RVs $(X, Y)$, what is the conditional distribution of $Y$ when $X$ is fixed at the value $X = x$? Recall that the conditional probability is defined as
$$\Pr(A \mid B) = \frac{\Pr(A \cap B)}{\Pr(B)}.$$
We say "the conditional probability of $A$ given $B$."
Similarly, the conditional density of $Y$ given $X = x$ is
$$f_{Y|X}(y) = \frac{f(x, y)}{f_X(x)}.$$
Interpretation: we take a section of $f(x, y)$ at $x$. Since in general
$$\int f(x, y)\, dy \ne 1,$$
we divide by $\int f(x, y)\, dy$ to make up a density function (the area under a density must be 1). But $\int f(x, y)\, dy = f_X(x)$. So, given a joint density, the conditional density is a section at one fixed argument, renormalized.
If $X$ and $Y$ are independent then
$$\text{conditional} = \text{marginal}.$$
We can treat $f_{Y|X}(y)$ as a usual density, i.e. we can find its expectation (mean), variance, etc.
Example. A two-dimensional (bivariate) distribution is defined on the square $[0, 1] \times [0, 1]$ as
$$f(x, y) = \begin{cases} c(x + y - xy) & \text{if } 0 < x < 1,\ 0 < y < 1 \\ 0 & \text{elsewhere.} \end{cases}$$
Find the appropriate constant $c$, the marginal density $f_X(x)$, and the conditional density $f_{Y|X}(y)$. Are $X$ and $Y$ independent? What is the probability $\Pr(Y < 1/2 \mid X = 2/3)$?
Solution. We have
$$\int_0^1 \int_0^1 (x + y - xy)\, dx\, dy = \int_0^1 \left( \int_0^1 (x + y - xy)\, dx \right) dy.$$
But
$$\int_0^1 (x + y - xy)\, dx = y + (1 - y) \int_0^1 x\, dx = y + \frac{1 - y}{2} = \frac{y + 1}{2}.$$
Then,
$$\frac{1}{2} \int_0^1 (y + 1)\, dy = \frac{1}{2} \times \frac{3}{2} = \frac{3}{4}.$$
Therefore, $c$ must be $4/3$ to satisfy
$$\iint f(x, y)\, dx\, dy = 1.$$
[Figure: the joint density $\frac{4}{3}(x + y - xy)$ on the unit square.]
The marginal density is
$$f_X(x) = \int_0^1 f(x, y)\, dy = \frac{4}{3} \int_0^1 (x + y - xy)\, dy = \frac{4}{3} \left( x + (1 - x) \int_0^1 y\, dy \right) = \frac{4}{3} \left( x + \frac{1 - x}{2} \right) = \frac{2}{3}(1 + x)$$
for $0 < x < 1$. By symmetry, $f_Y(y) = \frac{2}{3}(1 + y)$; since $f(x, y) \ne f_X(x) f_Y(y)$, the variables $X$ and $Y$ are dependent.
The conditional density is
$$f_{Y|X}(y) = \frac{f(x, y)}{f_X(x)} = 2\, \frac{x + y - xy}{1 + x},$$
where $x$ is fixed and $0 < y < 1$.
To find
$$\Pr\left( Y < \frac{1}{2} \,\Big|\, X = \frac{2}{3} \right)$$
we need to find the conditional d.f. $F_{Y|X}(y)$, which is ($x$ fixed)
$$F(y \mid x) = \int_0^y f_{Y|X}(t)\, dt = \frac{2}{1 + x} \int_0^y (x + t - xt)\, dt = y\, \frac{2x + y(1 - x)}{1 + x}.$$
If $x = 2/3$ we have
$$F(y \mid x = 2/3) = \frac{1}{5}\, y (4 + y).$$
[Figure: the conditional d.f. $F(y \mid x = 2/3) = \frac{1}{5} y(4 + y)$ for $0 < y < 1$.]
The needed probability is
$$\Pr(Y < 1/2 \mid X = 2/3) = \frac{1}{5} \times \frac{1}{2} \times \left( 4 + \frac{1}{2} \right) = 0.45.$$
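The value 0.45 can be confirmed by integrating the conditional density directly (a short Python sketch, our own addition):

```python
from scipy import integrate

# Conditional density f_{Y|X}(y) = 2*(x + y - x*y)/(1 + x) for fixed x.
x = 2/3
f_cond = lambda y: 2 * (x + y - x*y) / (1 + x)

# It integrates to 1 over (0, 1); Pr(Y < 1/2 | X = 2/3) is its mass on (0, 1/2).
print(integrate.quad(f_cond, 0, 1)[0])    # 1.0
print(integrate.quad(f_cond, 0, 0.5)[0])  # 0.45
```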
5. Distribution of maximum and minimum
Let $X_1, X_2, \ldots, X_n$ be iid. It is easy to find the distribution functions of the minimum, $\min X_i$, and the maximum, $\max X_i$.
Indeed, let $F$ be the common d.f. Then the d.f. of the maximum is
$$F_{\max}(x) = \Pr(\max X_i \le x) = \Pr(X_1 \le x,\ X_2 \le x,\ \ldots,\ X_n \le x) = \Pr(X_1 \le x) \cdot \Pr(X_2 \le x) \cdots \Pr(X_n \le x) = F^n(x).$$
The density is
$$f_{\max}(x) = n F^{n-1}(x) f(x).$$
Similarly, for the minimum,
$$F_{\min}(x) = \Pr(\min X_i \le x) = 1 - \Pr(\min X_i > x) = 1 - \Pr(X_1 > x) \cdot \Pr(X_2 > x) \cdots \Pr(X_n > x) = 1 - (1 - F(x))^n,$$
with the density
$$f_{\min}(x) = n (1 - F(x))^{n-1} f(x).$$
Problem. $X_i$, $i = 1, 2, \ldots, 10$, are iid uniformly distributed on $(0, 1)$. What is the density of $\min X_i$?
Solution. Since $F(x) = x$, we have
$$f_{\min}(x) = 10 (1 - x)^9, \quad 0 < x < 1.$$
[Figure: the density of the minimum, $10(1 - x)^9$.]
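A quick simulation agrees with the formula; a minimal sketch (our own addition; the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000

# Minimum of 10 iid U(0,1) variables, repeated many times.
mins = rng.uniform(0, 1, size=(reps, n)).min(axis=1)

# Theoretical mean of the minimum of n uniforms: E(min) = 1/(n+1).
print(mins.mean(), 1/(n + 1))

# Compare the empirical d.f. with F_min(x) = 1 - (1-x)^n at a few points.
for x in (0.05, 0.1, 0.2):
    print(x, (mins <= x).mean(), 1 - (1 - x)**n)
```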
Party problem. You are invited to a party held in Keene, NH (approximately 50 miles from Hanover) starting at 7 P.M. Assume the speeds of drivers on I-91 follow the normal distribution with mean 60 miles/hour and SD 10 miles/hour, and you cannot pass. You left Hanover at 6 P.M., and there are 5 drivers on I-91 moving toward Keene. What is the probability you will be late to the party, assuming you go at the maximum possible speed?
Solution. Let $Z$ be the maximum speed you can go without passing other cars. This speed is the minimum of the speeds of the other drivers. Let $X_i$ denote the speed of the $i$th driver. It is assumed that $X_i \sim N(60, 10^2)$ and $X_1, \ldots, X_5$ are independent. Thus, your speed is
$$Z = \min_{i = 1, \ldots, 5} X_i.$$
You will be late if $50/Z > 1$, i.e. if $Z < z = 50$. This probability is
$$F_Z(z) = 1 - (1 - \Phi(z; 60, 10^2))^5,$$
where $\Phi(z; 60, 10^2)$ denotes the normal d.f. with mean 60 and variance $10^2$. Use Table 2, p. A7 of Rice, to find
$$\Phi(50; 60, 10^2) = \Phi\left( \frac{50 - 60}{10} \right) = \Phi(-1) = 1 - 0.8413.$$
Hence, you will be late with probability
$$1 - 0.8413^5 = 0.57854.$$
Oh-oh!
How many cars should you pass to be on time with probability 0.7?
Clearly, if there are $n$ cars ahead, your probability of being on time is $0.8413^n$, so we have to find $n$ such that
$$0.8413^n = 0.7,$$
which gives
$$n = \frac{\ln(0.7)}{\ln(0.8413)} = 2.064.$$
Answer: you have to pass 3 cars, so that only 2 cars remain ahead, to be on time.
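The same numbers come out of a few lines of Python; a sketch (our own addition) using scipy.stats.norm for $\Phi$:

```python
import numpy as np
from scipy.stats import norm

# Pr(late) = Pr(min of 5 iid N(60, 10^2) speeds < 50) = 1 - (1 - Phi(-1))^5.
p_slow = norm.cdf(50, loc=60, scale=10)  # Phi(-1) ~ 0.1587
print(1 - (1 - p_slow)**5)               # ~0.5785

# Cars that may remain ahead for Pr(on time) >= 0.7: solve 0.8413^n = 0.7.
print(np.log(0.7) / np.log(1 - p_slow))  # ~2.06, so at most 2 cars ahead
```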
6. Expectation of multivariate RVs
Let $g$ be any function of two arguments; its expectation (or expected value) is defined as
$$E(g(X, Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f(x, y)\, dx\, dy.$$
The most important expected (marginal) values are the mean and the variance. The marginal mean is defined as
$$E(X) = \mu_X = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f(x, y)\, dx\, dy = \int_{-\infty}^{\infty} x f_X(x)\, dx,$$
and the marginal variance is defined as
$$\mathrm{var}(X) = \sigma_X^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - \mu_X)^2 f(x, y)\, dx\, dy = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\, dx.$$
The same holds for $\mu_Y$ and $\sigma_Y^2$.
All properties of expectation remain (expectation behaves as a linear function).
Theorem 6.1. If $X$ and $Y$ are independent and $r$ and $s$ are any functions, then $r(X)$ and $s(Y)$ are independent.
Theorem 6.2. If $X$ and $Y$ are independent RVs, then for any functions $g$ and $h$ we have
$$E(g(X) \cdot h(Y)) = E(g(X)) \cdot E(h(Y)).$$
In particular, for independent variables the expectation of the product is the product of the expectations:
$$E(XY) = E(X) E(Y).$$
We prove the latter:
$$E(XY) = \iint xy f(x, y)\, dx\, dy = \iint xy f_X(x) f_Y(y)\, dx\, dy = \int x \left( \int y f_Y(y)\, dy \right) f_X(x)\, dx = E(Y) \int x f_X(x)\, dx = E(Y) E(X).$$
Definition 6.3. The covariance of two RVs $X$ and $Y$ is defined as
$$\mathrm{cov}(X, Y) = E(X - \mu_X)(Y - \mu_Y),$$
where $\mu_X$ and $\mu_Y$ are the means of $X$ and $Y$ respectively.
If the means are zero,
$$\mathrm{cov}(X, Y) = E(XY).$$
Also, it is easy to see that
$$\mathrm{cov}(X, Y) = \mathrm{cov}(Y, X), \quad \mathrm{cov}(\alpha X, Y) = \alpha \cdot \mathrm{cov}(X, Y), \quad \mathrm{var}(X) = \mathrm{cov}(X, X).$$
Theorem 6.4. The covariance of two independent variables is zero:
$$\mathrm{cov}(X, Y) = 0.$$
In this case $X$ and $Y$ are called uncorrelated.
Proof. Since $X$ and $Y$ are independent, $X - \mu_X$ and $Y - \mu_Y$ are independent as well. Thus,
$$\mathrm{cov}(X, Y) = E(X - \mu_X)(Y - \mu_Y) = E(X - \mu_X) \cdot E(Y - \mu_Y) = 0.$$
Theorem 6.5. For any RVs $X, Y, Z$ we have
$$\mathrm{cov}(X, Y + Z) = \mathrm{cov}(X, Y) + \mathrm{cov}(X, Z),$$
i.e. covariance behaves as a linear function.
Proof.
$$\mathrm{cov}(X, Y + Z) = E\left( (X - \mu_X)(Y + Z - \mu_{Y+Z}) \right).$$
But
$$\mu_{Y+Z} = E(Y + Z) = E(Y) + E(Z) = \mu_Y + \mu_Z.$$
Thus,
$$E\left( (X - \mu_X)(Y + Z - \mu_{Y+Z}) \right) = E\left( (X - \mu_X)((Y - \mu_Y) + (Z - \mu_Z)) \right) = E\left( (X - \mu_X)(Y - \mu_Y) \right) + E\left( (X - \mu_X)(Z - \mu_Z) \right) = \mathrm{cov}(X, Y) + \mathrm{cov}(X, Z).$$
Problem. Find the variance of $XY$ where $X$ and $Y$ are independent, $E(X) = E(Y) = 0$, and $\mathrm{var}(X) = \mathrm{var}(Y) = \sigma^2$.
Solution. Let us denote $Z = XY$. We have
$$E(Z) = E(XY) = E(X) E(Y) = 0 \cdot 0 = 0.$$
Find the second (noncentral) moment:
$$E(Z^2) = E\left[(XY)^2\right] = E(X^2 Y^2) = E(X^2) E(Y^2).$$
But
$$E(X^2) = \mathrm{var}(X) + (E(X))^2 = \sigma^2 + 0^2 = \sigma^2.$$
Therefore,
$$E(Z^2) = \sigma^2 \cdot \sigma^2 = \sigma^4,$$
and
$$\mathrm{var}(Z) = E(Z^2) - (E(Z))^2 = \sigma^4 - 0^2 = \sigma^4.$$
For any $X$ and $Y$,
$$\mathrm{var}(X + Y) = \mathrm{var}(X) + 2\,\mathrm{cov}(X, Y) + \mathrm{var}(Y).$$
Proof. Let $Z = X + Y$. Then
$$\mu_Z = E(Z) = E(X + Y) = E(X) + E(Y) = \mu_X + \mu_Y.$$
Then,
$$\mathrm{var}(X + Y) = \mathrm{var}(Z) = E(Z - \mu_Z)^2 = E\left[ (X + Y) - (\mu_X + \mu_Y) \right]^2 = E\left[ (X - \mu_X) + (Y - \mu_Y) \right]^2$$
$$= E\left[ (X - \mu_X)^2 + 2(X - \mu_X)(Y - \mu_Y) + (Y - \mu_Y)^2 \right] = \mathrm{var}(X) + 2\,\mathrm{cov}(X, Y) + \mathrm{var}(Y).$$
Fact to remember: if $X$ and $Y$ are uncorrelated (or independent), then
$$\mathrm{var}(X + Y) = \mathrm{var}(X) + \mathrm{var}(Y).$$
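Both facts, $\mathrm{var}(XY) = \sigma^4$ in the zero-mean independent case and the decomposition of $\mathrm{var}(X + Y)$, can be verified by simulation; a minimal sketch (our own addition) with standard normal inputs (our choice of distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 1_000_000)  # sigma^2 = 1
y = rng.normal(0, 1, 1_000_000)

# Independent, zero-mean: var(XY) should be sigma^4 = 1.
print(np.var(x * y))

# General identity: var(X+Y) = var(X) + 2*cov(X,Y) + var(Y).
cov_xy = np.cov(x, y)[0, 1]
print(np.var(x + y), np.var(x) + 2*cov_xy + np.var(y))
```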
7. Application to finance: optimal investment
Two independent stocks have the same expected performance (return) but different risk. To diversify, what should be the proportion of each stock in your portfolio to minimize the overall risk?
We denote by $X$ the return of the first stock and by $Y$ the return of the second stock; thus, it is assumed that a stock return is a RV. Expected performance is the expected value,
$$E(X) = E(Y) = \mu_X = \mu_Y = \mu.$$
The variance is associated with risk. Denote
$$\mathrm{var}(X) = \sigma_X^2, \quad \mathrm{var}(Y) = \sigma_Y^2.$$
Independence implies
$$\mathrm{cov}(X, Y) = 0.$$
We diversify, which is formalized as follows:
$$Z = \alpha X + (1 - \alpha) Y,$$
a new portfolio, where $\alpha$ is the proportion to find, $0 < \alpha < 1$.
Question #1. Does $Z$ have the same expected return? Yes:
$$E(Z) = E(\alpha X + (1 - \alpha) Y) = \alpha E(X) + (1 - \alpha) E(Y) = \alpha \mu + (1 - \alpha) \mu = \mu.$$
What about $Z$'s risk?
$$\mathrm{var}(Z) = \mathrm{var}(\alpha X + (1 - \alpha) Y) = \mathrm{var}(\alpha X) + \mathrm{var}((1 - \alpha) Y) = \alpha^2 \sigma_X^2 + (1 - \alpha)^2 \sigma_Y^2.$$
We need to minimize the risk, i.e.
$$\mathrm{var}(Z) = \alpha^2 \sigma_X^2 + (1 - \alpha)^2 \sigma_Y^2 \to \min_{\alpha}.$$
Denote
$$R(\alpha) = \alpha^2 \sigma_X^2 + (1 - \alpha)^2 \sigma_Y^2.$$
[Figure: $R(\alpha)$ for different stock variances. Solid: $\sigma_X^2 = \sigma_Y^2 = 1$; dash: $\sigma_X^2 = 1$, $\sigma_Y^2 = 2$; dot: $\sigma_X^2 = 3$, $\sigma_Y^2 = 1$.]
Find the minimum of $R(\alpha)$:
$$\frac{dR}{d\alpha} = 2\alpha \sigma_X^2 - 2(1 - \alpha) \sigma_Y^2 = 0,$$
which gives
$$\alpha_{\mathrm{opt}} = \frac{\sigma_Y^2}{\sigma_X^2 + \sigma_Y^2}. \tag{7.1}$$
The minimum risk is attained at (7.1) and is equal to
$$R_{\min} = \frac{\sigma_Y^4}{(\sigma_X^2 + \sigma_Y^2)^2}\, \sigma_X^2 + \frac{\sigma_X^4}{(\sigma_X^2 + \sigma_Y^2)^2}\, \sigma_Y^2 = \frac{\sigma_X^2 \sigma_Y^2 \left( \sigma_X^2 + \sigma_Y^2 \right)}{(\sigma_X^2 + \sigma_Y^2)^2} = \frac{\sigma_X^2 \sigma_Y^2}{\sigma_X^2 + \sigma_Y^2}.$$
Show that
$$R_{\min} < R_X = \sigma_X^2 \quad \text{and} \quad R_{\min} < R_Y = \sigma_Y^2.$$
The best proportion satisfies
$$\frac{\alpha_{\mathrm{opt}}}{1 - \alpha_{\mathrm{opt}}} = \frac{1/\sigma_X^2}{1/\sigma_Y^2}.$$
Rules:
1. Always diversify.
2. If the stocks have the same performance, are independent, and are equally risky, use the 50/50 rule.
3. The optimal stock proportions are reciprocal to the variances.
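Formula (7.1) translates directly into code; a minimal sketch (our own addition; the function name is ours):

```python
def optimal_mix(var_x, var_y):
    """Return (alpha_opt, R_min) for Z = alpha*X + (1 - alpha)*Y, X and Y independent."""
    alpha = var_y / (var_x + var_y)          # equation (7.1)
    r_min = var_x * var_y / (var_x + var_y)  # minimal portfolio variance
    return alpha, r_min

print(optimal_mix(1.0, 1.0))  # (0.5, 0.5): the 50/50 rule, risk halved
print(optimal_mix(3.0, 1.0))  # riskier stock X gets the smaller weight, 0.25
```

Note that in both cases the returned $R_{\min}$ is below both individual variances, illustrating the "show that" claim above.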
8. Homework (due February 3)
The maximum number of points is indicated for each problem.
1. (5 points). Solve 3.8.6 from Rice (p. 104).
Solution. The joint density of $(X, Y)$ is constant ($c$) within the ellipse
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} \le 1$$
and zero outside.
[Figure: the ellipse for $a = 2$ and $b = 1$.]
The constant $c$ must satisfy
$$1 = \iint_{x^2/a^2 + y^2/b^2 \le 1} c\, dx\, dy = c \iint_{x^2/a^2 + y^2/b^2 \le 1} dx\, dy = c \cdot 4\, \frac{b}{a} \int_0^a \sqrt{a^2 - x^2}\, dx.$$
To find the integral we change the variable: $x = a \sin t$. The upper limit of $t$ is $\pi/2$ and the lower limit is 0. Then we notice that $dx = a \cos t\, dt$ and $\sqrt{a^2 - x^2} = a \sqrt{1 - \sin^2 t} = a \cos t$. Hence,
$$\int_0^a \sqrt{a^2 - x^2}\, dx = a^2 \int_0^{\pi/2} \cos^2 t\, dt = \frac{a^2}{2} \int_0^{\pi/2} (1 + \cos 2t)\, dt = \frac{a^2}{2} \cdot \frac{\pi}{2}.$$
Thus, the area of the ellipse is $\pi a b$ and we obtain $c = 1/(\pi a b)$.
First, we find the marginal density of $X$. When $X = x$ is fixed, $y$ runs from $-\frac{b}{a}\sqrt{a^2 - x^2}$ to $\frac{b}{a}\sqrt{a^2 - x^2}$. Hence, to find the marginal density we integrate out $y$ as follows:
$$f_X(x) = \frac{1}{\pi a b} \int_{-\frac{b}{a}\sqrt{a^2 - x^2}}^{\frac{b}{a}\sqrt{a^2 - x^2}} dy = \frac{2\, \frac{b}{a} \sqrt{a^2 - x^2}}{\pi a b} = \frac{2 \sqrt{a^2 - x^2}}{\pi a^2}$$
for $-a < x < a$ and zero outside $(-a, a)$. Similarly,
$$f_Y(y) = \frac{1}{\pi a b} \int_{-\frac{a}{b}\sqrt{b^2 - y^2}}^{\frac{a}{b}\sqrt{b^2 - y^2}} dx = \frac{2\, \frac{a}{b} \sqrt{b^2 - y^2}}{\pi a b} = \frac{2 \sqrt{b^2 - y^2}}{\pi b^2}, \quad -b < y < b.$$
2. (6 points). Solve 3.8.19 from Rice (p. 105).
Solution. RV $T_1$ has the density function $\alpha e^{-\alpha x}$ and $T_2$ has the density $\beta e^{-\beta y}$. The joint density is the product, $f(x, y) = \alpha \beta e^{-\alpha x} e^{-\beta y}$. The first probability is
$$\Pr(T_1 > T_2) = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{x} f(x, y)\, dy \right) dx.$$
Since the exponential distribution is defined for positive values, we obtain
$$\Pr(T_1 > T_2) = \int_0^{\infty} \left( \alpha e^{-\alpha x} \int_0^x \beta e^{-\beta y}\, dy \right) dx = \int_0^{\infty} \alpha e^{-\alpha x} \left( 1 - e^{-\beta x} \right) dx = \int_0^{\infty} \alpha e^{-\alpha x}\, dx - \alpha \int_0^{\infty} e^{-(\alpha + \beta) x}\, dx = 1 - \frac{\alpha}{\alpha + \beta} = \frac{\beta}{\alpha + \beta}.$$
Similarly,
$$\Pr(T_1 > 2 T_2) = \int_0^{\infty} \left( \alpha e^{-\alpha x} \int_0^{x/2} \beta e^{-\beta y}\, dy \right) dx = \int_0^{\infty} \alpha e^{-\alpha x} \left( 1 - e^{-\beta x / 2} \right) dx = 1 - \frac{\alpha}{\alpha + \beta/2} = \frac{\beta}{2\alpha + \beta}.$$
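A simulation agrees with both closed-form answers; a minimal sketch (our own addition; the values of $\alpha$ and $\beta$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, reps = 1.5, 0.5, 1_000_000

# numpy parameterizes the exponential by the scale 1/rate.
t1 = rng.exponential(1/alpha, reps)
t2 = rng.exponential(1/beta, reps)

print((t1 > t2).mean(),   beta / (alpha + beta))    # ~0.25
print((t1 > 2*t2).mean(), beta / (2*alpha + beta))  # ~0.143
```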
3. (7 points). If $X$ is the amount of money (in dollars) that a salesperson spends on gasoline during a day, and $Y$ is the corresponding amount of money for which he/she is reimbursed, and the joint density of these two random variables is given by
$$f(x, y) = \begin{cases} \frac{1}{25} \left( \frac{20 - x}{x} \right) & \text{for } 10 < x < 20,\ 0.5 x < y < x \\ 0 & \text{elsewhere,} \end{cases}$$
find: (a) the marginal density of $X$; (b) the conditional density of $Y$ given $X = 12$; (c) the probability that the salesperson will be reimbursed at least $8 when spending $12.
Solution. To calculate the marginal density we fix $x$; then $y$ belongs to the interval $(x/2, x)$. Thus, the marginal density is
$$f_X(x) = \int_{x/2}^{x} f(x, y)\, dy = \frac{1}{25} \left( \frac{20 - x}{x} \right) \int_{x/2}^{x} dy = \frac{1}{25} \left( \frac{20 - x}{x} \right) \frac{x}{2} = \frac{20 - x}{50} \quad \text{for } 10 < x < 20.$$
The conditional density is $f(x, y)/f_X(x)$, which leads to a uniform distribution with the density
$$f_{Y|X}(y) = \frac{2}{x} \quad \text{for } 0.5 x < y < x.$$
If $X = 12$ then the conditional density becomes $1/6$ for $6 < y < 12$. The needed probability is $\Pr(Y \ge 8 \mid X = 12)$. Since $f_{Y|X}(y) = 1/6$, this probability is $(12 - 8)/6 = 4/6 = 2/3$.
4. (4 points). Let $X$ and $Y$ be two independent RVs with means and variances $\mu_X, \sigma_X^2$ and $\mu_Y, \sigma_Y^2$ respectively. Find the variance of the product $XY$.
Solution. By definition, $\mathrm{var}(XY) = E\left[(XY)^2\right] - (E(XY))^2$. But due to independence,
$$E\left[(XY)^2\right] = E(X^2 Y^2) = E(X^2) E(Y^2) = (\sigma_X^2 + \mu_X^2)(\sigma_Y^2 + \mu_Y^2).$$
But again due to independence $E(XY) = \mu_X \mu_Y$, so we obtain
$$\mathrm{var}(XY) = (\sigma_X^2 + \mu_X^2)(\sigma_Y^2 + \mu_Y^2) - \mu_X^2 \mu_Y^2 = \sigma_X^2 \sigma_Y^2 + \sigma_X^2 \mu_Y^2 + \sigma_Y^2 \mu_X^2.$$
5. (5 points). You are taking part in an auction and there are 10 other people. Each person is willing to pay from $1,000 to $2,000 with equal preference (uniform distribution). You can spend a maximum of $1,990. What is the probability of getting the object of the auction?
Solution. For each participant, $X_i \sim U(1000, 2000)$ with the distribution function $F(x) = 1000^{-1}(x - 1000) = (x/1000 - 1)$. You get the object of the auction if $\max X_i < 1990$. But the distribution function of the maximum is
$$F^{10}(x) = (x/1000 - 1)^{10}.$$
Substituting $x = 1990$, we obtain the asked probability,
$$(1990/1000 - 1)^{10} = 0.99^{10} \approx 0.904.$$
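For an independent check by simulation (a sketch, our own addition):

```python
import numpy as np

rng = np.random.default_rng(3)
bids = rng.uniform(1000, 2000, size=(1_000_000, 10))

# You win if all 10 other bidders stay below 1990: F^10(1990) = 0.99^10.
print((bids.max(axis=1) < 1990).mean(), 0.99**10)  # both ~0.904
```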
6. (3 points). Prove that the minimum of iid exponentially distributed RVs is again an exponentially distributed RV. Is this true for the maximum?
Solution. The distribution function of each $X_i$ is $1 - \exp(-\lambda x)$. The distribution function of $\min X_i$ is
$$1 - (1 - F(x))^n = 1 - \exp(-\lambda n x),$$
which is again an exponential distribution, with the rate $\lambda n$. It is not true for the maximum because $(1 - \exp(-\lambda x))^n$ is not of the form $1 - \exp(-\alpha x)$.
7. (6 points). How can one minimize risk in the optimal investment problem if the expected stock returns are not the same?
Solution. The optimal rule remains the same as for equal returns, because $\mathrm{var}(\alpha X + (1 - \alpha) Y) = \alpha^2 \mathrm{var}(X) + (1 - \alpha)^2 \mathrm{var}(Y)$, which does not contain the means of $X$ and $Y$.
8. (3 points). There are three independent RVs $X, Y, Z$. One generates a couple of new RVs as follows: $U = X + Y$ and $V = Y + Z$. Are $U$ and $V$ independent? Show the work.
Solution. No, $U$ and $V$ are dependent. To prove this we calculate the covariance:
$$\mathrm{cov}(U, V) = \mathrm{cov}(X + Y, Y + Z) = \mathrm{cov}(X, Y) + \mathrm{cov}(X, Z) + \mathrm{cov}(Y, Y) + \mathrm{cov}(Y, Z) = 0 + 0 + \mathrm{var}(Y) + 0 = \mathrm{var}(Y) > 0.$$
Therefore, $\mathrm{cov}(U, V) > 0$, and $U$ and $V$ cannot be independent.
9. (3 points). RVs $X$ and $Y$ are independent; are $X - Y$ and $X + Y$ independent? Show the work.
Solution. Again, we calculate the covariance:
$$\mathrm{cov}(X - Y, X + Y) = \mathrm{cov}(X, X) + \mathrm{cov}(X, Y) - \mathrm{cov}(Y, X) - \mathrm{cov}(Y, Y) = \mathrm{var}(X) - \mathrm{var}(Y).$$
Generally, $X - Y$ and $X + Y$ are dependent. They are uncorrelated only if $\mathrm{var}(X) = \mathrm{var}(Y)$.
10. (4 points). Prove that if two RVs are uncorrelated then linear functions of them are uncorrelated as well (if $X$ and $Y$ are uncorrelated then $aX + b$ and $cY + d$ are uncorrelated).
Solution. Calculate the covariance:
$$\mathrm{cov}(aX + b, cY + d) = \mathrm{cov}(aX, cY) + \mathrm{cov}(aX, d) + \mathrm{cov}(b, cY) + \mathrm{cov}(b, d).$$
But the covariance between an RV and a constant is zero. It implies that
$$\mathrm{cov}(aX + b, cY + d) = \mathrm{cov}(aX, cY) = ac \times \mathrm{cov}(X, Y) = 0.$$
11. (5 points). $X_i$ are iid with variance $\sigma^2$. Find the variance of the average, $\sum_{i=1}^n X_i / n$.
Solution. Since the $X_i$ are iid,
$$\mathrm{var}\left( \frac{1}{n} \sum_{i=1}^n X_i \right) = \frac{1}{n^2} \sum_{i=1}^n \mathrm{var}(X_i) = \frac{1}{n^2} \sum_{i=1}^n \sigma^2 = \frac{\sigma^2}{n}.$$
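A simulation check of $\sigma^2/n$ (a sketch, our own addition, with standard normal $X_i$ as our choice of distribution):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 25, 200_000

# Variance of the sample mean of n iid variables with sigma^2 = 1 is 1/n.
means = rng.normal(0, 1, size=(reps, n)).mean(axis=1)
print(np.var(means), 1/n)  # both ~0.04
```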