Student Number
Queen’s University
Department of Mathematics and Statistics
STAT 353
Final Examination April 21, 2011
Instructor: G. Takahara
• “Proctors are unable to respond to queries about the interpretation of exam questions. Do your best to answer exam questions as written.”
• “The candidate is urged to submit with the answer paper a clear statement of
any assumptions made if doubt exists as to the interpretation of any question that
requires a written answer.”
• Formulas and tables are attached.
• An 8.5 × 11 inch sheet of notes (both sides) is permitted.
• Simple calculators are permitted. HOWEVER, do reasonable simplifications.
• Write the answers in the space provided, continue on the backs of pages if needed.
• SHOW YOUR WORK CLEARLY. Correct answers without clear work showing
how you got there will not receive full marks.
• Marks per part question are shown in brackets at the right margin.
Marks: Please do not write in the space below.
Problem 1 [10]
Problem 2 [10]
Problem 3 [10]
Problem 4 [10]
Problem 5 [10]
Problem 6 [10]
Total: [60]
1. Let X and Y be independent Exponential random variables with mean 1. Let ε > 0 be a
given positive number. Compute P(|X − Y| > ε) by setting up and evaluating the appropriate
double integral. [10]
Solution: The appropriate double integral would be
\begin{align*}
P(|X - Y| > \varepsilon)
&= \iint_{\{x,y > 0:\, |x - y| > \varepsilon\}} e^{-(x+y)}\,dx\,dy \\
&= \int_0^\infty \int_{y+\varepsilon}^\infty e^{-(x+y)}\,dx\,dy
 + \int_\varepsilon^\infty \int_0^{y-\varepsilon} e^{-(x+y)}\,dx\,dy \\
&= \int_0^\infty e^{-y} e^{-(y+\varepsilon)}\,dy
 + \int_\varepsilon^\infty e^{-y}\bigl(1 - e^{-(y-\varepsilon)}\bigr)\,dy \\
&= \tfrac{1}{2}e^{-\varepsilon} + e^{-\varepsilon} - \tfrac{1}{2}e^{\varepsilon}e^{-2\varepsilon} \\
&= e^{-\varepsilon}.
\end{align*}
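As a quick numerical sanity check (not part of the required solution), the closed form e^{−ε} can be compared with a Monte Carlo estimate; the sketch below is minimal, and the chosen ε, seed, and sample size are arbitrary.

```python
import numpy as np

# Monte Carlo check of P(|X - Y| > eps) = exp(-eps) for independent Exponential(1) X, Y.
rng = np.random.default_rng(0)
eps = 0.7                      # arbitrary epsilon
n = 1_000_000                  # number of simulated (X, Y) pairs
x = rng.exponential(1.0, n)
y = rng.exponential(1.0, n)
estimate = np.mean(np.abs(x - y) > eps)
print(estimate, np.exp(-eps))  # the two values should agree to about three decimals
```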
2. (a) Let X and Y be two zero mean random variables, each with the same variance σ².
Note that we are not assuming that X and Y are independent or identically distributed.
Let ρ(X, Y) denote the correlation coefficient between X and Y. Show that
\[
\rho(X, Y) \ge -1 + P\bigl(|X + Y| \ge \sqrt{2}\,\sigma\bigr).
\]
Hint: Use Chebyshev’s inequality to bound P(|X + Y| ≥ √2 σ). [5]
Solution: Since the mean of X + Y is 0, by Chebyshev’s inequality we have
\begin{align*}
P\bigl(|X + Y| \ge \sqrt{2}\,\sigma\bigr)
&\le \frac{\operatorname{Var}(X + Y)}{2\sigma^2} \\
&= \frac{\operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y)}{2\sigma^2} \\
&= \frac{2\sigma^2 + 2\sigma^2\rho(X, Y)}{2\sigma^2} \\
&= 1 + \rho(X, Y).
\end{align*}
Therefore,
\[
\rho(X, Y) \ge -1 + P\bigl(|X + Y| \ge \sqrt{2}\,\sigma\bigr).
\]
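As an optional numerical illustration of this bound (not part of the solution), one can simulate a specific model; the bivariate normal choice below and all parameter values are assumptions made only for the example.

```python
import numpy as np

# Illustration of rho(X, Y) >= -1 + P(|X + Y| >= sqrt(2)*sigma) for one example model.
# The bivariate normal distribution, sigma, rho, seed, and sample size are arbitrary.
rng = np.random.default_rng(1)
sigma, rho = 2.0, -0.6
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
x, y = xy[:, 0], xy[:, 1]
prob = np.mean(np.abs(x + y) >= np.sqrt(2) * sigma)
print(np.corrcoef(x, y)[0, 1], -1 + prob)  # the first number should be >= the second
```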
(b) As a generalization of part (a), suppose that X1, . . . , Xn are zero mean random
variables, each with the same variance σ². Also assume that the correlation coefficient
between any pair of the X's is the same, and given by ρ. Show that
\[
\rho \ge -\frac{1}{n-1} + P\Bigl(|\overline{X}| \ge \sigma\sqrt{1 - 1/n}\Bigr),
\]
where $\overline{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$. [5]
Solution: Proceeding as in part (a), since $\overline{X}$ has mean 0 we have, again by
Chebyshev’s inequality, that
\begin{align*}
P\Bigl(|\overline{X}| \ge \sigma\sqrt{1 - 1/n}\Bigr)
&\le \frac{\operatorname{Var}(\overline{X})}{\sigma^2(1 - 1/n)} \\
&= \frac{\operatorname{Cov}\bigl(\tfrac{1}{n}\sum_{i=1}^{n} X_i,\ \tfrac{1}{n}\sum_{i=1}^{n} X_i\bigr)}{\sigma^2(1 - 1/n)} \\
&= \frac{\tfrac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}(X_i, X_j)}{\sigma^2(1 - 1/n)} \\
&= \frac{\tfrac{1}{n^2}\bigl(n\sigma^2 + n(n-1)\sigma^2\rho\bigr)}{\sigma^2(1 - 1/n)} \\
&= \frac{\tfrac{1}{n}\bigl(1 + (n-1)\rho\bigr)}{1 - 1/n} \\
&= \frac{1 + (n-1)\rho}{n - 1} \\
&= \frac{1}{n-1} + \rho.
\end{align*}
Therefore, we obtain
\[
\rho \ge -\frac{1}{n-1} + P\Bigl(|\overline{X}| \ge \sigma\sqrt{1 - 1/n}\Bigr).
\]
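Again as an optional check (not part of the solution), the bound can be illustrated with equicorrelated zero-mean normals; that model and every parameter value below are assumptions made only for the example.

```python
import numpy as np

# Illustration of rho >= -1/(n-1) + P(|Xbar| >= sigma*sqrt(1 - 1/n)) for one example model.
# Equicorrelated zero-mean normals, n, sigma, rho, seed, and sample size are arbitrary.
rng = np.random.default_rng(2)
n, sigma, rho = 5, 1.5, 0.3            # need rho >= -1/(n-1) for a valid covariance matrix
cov = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
samples = rng.multivariate_normal(np.zeros(n), cov, size=500_000)
xbar = samples.mean(axis=1)
prob = np.mean(np.abs(xbar) >= sigma * np.sqrt(1 - 1/n))
print(rho, -1/(n - 1) + prob)          # the first number should be >= the second
```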
3. Let X be a random variable with 0 mean and let Y be a random variable with mean µ
and variance σ². Assume that X and Y are independent.
(a) If µ = 0 show that X and XY are uncorrelated.
[3]
Solution: We compute
\begin{align*}
\operatorname{Cov}(X, XY) &= E[X^2 Y] - E[X]E[XY] \\
&= E[X^2]E[Y] - E[X]E[X]E[Y] \qquad \text{(by independence)} \\
&= 0 - 0 = 0
\end{align*}
if E[Y] = µ = 0. Hence, X and XY are uncorrelated.
(b) In general, show that the square of the correlation coefficient between X and XY
is ρ²(X, XY) = µ²/(σ² + µ²). [7]
Solution: Computing Cov(X, XY) and Var(XY), again using independence, we have
\[
\operatorname{Cov}(X, XY) = E[X^2 Y] - E[X]E[XY] = E[X^2]\mu - E[X]^2\mu = \mu\operatorname{Var}(X)
\]
and
\begin{align*}
\operatorname{Var}(XY) &= E[X^2 Y^2] - E[XY]^2 = E[X^2]E[Y^2] - E[X]^2\mu^2 \\
&= E[X^2](\sigma^2 + \mu^2) - E[X]^2\mu^2 \\
&= \operatorname{Var}(X)(\sigma^2 + \mu^2),
\end{align*}
since E[X] = 0 (and so Var(X) = E[X²]). Therefore,
\[
\rho^2(X, XY) = \frac{\operatorname{Cov}^2(X, XY)}{\operatorname{Var}(X)\operatorname{Var}(XY)}
= \frac{\mu^2\operatorname{Var}^2(X)}{\operatorname{Var}^2(X)(\sigma^2 + \mu^2)}
= \frac{\mu^2}{\sigma^2 + \mu^2}.
\]
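A brief Monte Carlo check of this identity (not required for the exam); the particular distributions chosen for X and Y below are arbitrary, beyond X having mean 0 and Y having mean µ and variance σ².

```python
import numpy as np

# Check that corr(X, XY)^2 ~= mu^2 / (sigma^2 + mu^2) for independent X and Y.
# The uniform/normal choices, mu, sigma, seed, and sample size are arbitrary.
rng = np.random.default_rng(3)
mu, sigma = 2.0, 1.5
size = 1_000_000
x = rng.uniform(-1.0, 1.0, size)                 # any zero-mean X works
y = rng.normal(mu, sigma, size)                  # Y with mean mu and variance sigma^2
rho = np.corrcoef(x, x * y)[0, 1]
print(rho**2, mu**2 / (sigma**2 + mu**2))        # should agree to a few decimal places
```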
4. We have 2 coins and n flips are performed as follows. For the first flip we pick a coin at
random and flip it. For i = 2, . . . , n we flip coin 1 if the (i − 1)st flip was heads and we
flip coin 2 if the (i − 1)st flip was tails. Suppose the probabilities of heads for coins 1 and
2 are, respectively, p1 and p2. Let X denote the number of heads that are flipped in the
n flips. Find E[X].
[10]
Solution: Let Xi = 1 if the ith flip is heads and Xi = 0 if the ith flip is tails, for
i = 1, . . . , n. Then we have X = X1 + . . . + Xn and E[X] = E[X1] + . . . + E[Xn]. Since
we pick the first coin at random we have (using the law of total probability) E[X1] =
P(X1 = 1) = (1/2)(p1 + p2). For i = 2, . . . , n we may condition on Xi−1 to obtain
\begin{align*}
E[X_i] &= P(X_i = 1) \\
&= P(X_i = 1 \mid X_{i-1} = 1)P(X_{i-1} = 1) + P(X_i = 1 \mid X_{i-1} = 0)P(X_{i-1} = 0) \\
&= p_1 E[X_{i-1}] + p_2(1 - E[X_{i-1}]) \\
&= p_2 + (p_1 - p_2)E[X_{i-1}].
\end{align*}
Recursively, then, we obtain
\begin{align*}
E[X_i] &= p_2 + (p_1 - p_2)E[X_{i-1}] \\
&= p_2 + p_2(p_1 - p_2) + (p_1 - p_2)^2 E[X_{i-2}] \\
&\;\;\vdots \\
&= p_2 + p_2(p_1 - p_2) + \ldots + p_2(p_1 - p_2)^{i-2} + (p_1 - p_2)^{i-1}E[X_1] \\
&= p_2 + p_2(p_1 - p_2) + \ldots + p_2(p_1 - p_2)^{i-2} + (p_1 - p_2)^{i-1}\,\tfrac{1}{2}(p_1 + p_2) \\
&= p_2\,\frac{1 - (p_1 - p_2)^{i-1}}{1 - (p_1 - p_2)} + (p_1 - p_2)^{i-1}\,\frac{p_1 + p_2}{2} \\
&= \frac{p_2}{1 - (p_1 - p_2)} - (p_1 - p_2)^{i-1}\left(\frac{p_2}{1 - (p_1 - p_2)} - \frac{p_1 + p_2}{2}\right).
\end{align*}
The above holds also for i = 1. Summing over i from 1 to n we obtain
\[
E[X] = \frac{n p_2}{1 - (p_1 - p_2)}
- \frac{1 - (p_1 - p_2)^n}{1 - (p_1 - p_2)}
\left(\frac{p_2}{1 - (p_1 - p_2)} - \frac{p_1 + p_2}{2}\right).
\]
The answer is fine in this form.
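A short simulation of the flipping scheme (not required) can be compared with this closed form; the values of p1, p2, n, the seed, and the number of trials below are arbitrary.

```python
import numpy as np

# Simulate the two-coin scheme and compare the average head count with the formula.
rng = np.random.default_rng(4)
p1, p2, n, trials = 0.7, 0.4, 10, 100_000

total_heads = 0
for _ in range(trials):
    p = p1 if rng.random() < 0.5 else p2      # first flip: pick a coin at random
    prev_heads = rng.random() < p
    heads = int(prev_heads)
    for _ in range(n - 1):
        p = p1 if prev_heads else p2          # coin 1 after a head, coin 2 after a tail
        prev_heads = rng.random() < p
        heads += int(prev_heads)
    total_heads += heads

d = p1 - p2
formula = n * p2 / (1 - d) - (1 - d**n) / (1 - d) * (p2 / (1 - d) - (p1 + p2) / 2)
print(total_heads / trials, formula)          # the two values should be close
```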
5. Let X1, X2, . . . be a sequence of random variables satisfying E[Xn] = µn and Var(Xn) ≤
M/n^p for n ≥ 1, where M and p are positive constants. Suppose that µn → µ as n → ∞.
(a) If p > 0 show that Xn → µ in probability as n → ∞. Hint: Use the triangle
inequality |a + b| ≤ |a| + |b| and Chebyshev’s inequality. [7]
Solution: Let ε > 0 be given. Then
\[
P(|X_n - \mu| > \varepsilon) = P(|X_n - \mu_n + \mu_n - \mu| > \varepsilon)
\le P(|X_n - \mu_n| + |\mu_n - \mu| > \varepsilon),
\]
since by the triangle inequality |Xn − µn + µn − µ| > ε implies |Xn − µn| + |µn − µ| > ε.
Since µn → µ as n → ∞, choose N so that |µn − µ| < ε for all n ≥ N. For such n, using
Chebyshev’s inequality, we have
\begin{align}
P(|X_n - \mu| > \varepsilon) &\le P(|X_n - \mu_n| + |\mu_n - \mu| > \varepsilon) \nonumber\\
&= P\bigl(|X_n - \mu_n| > \varepsilon - |\mu_n - \mu|\bigr) \nonumber\\
&\le \frac{\operatorname{Var}(X_n)}{(\varepsilon - |\mu_n - \mu|)^2} \nonumber\\
&\le \frac{M}{n^p(\varepsilon - |\mu_n - \mu|)^2}, \tag{1}
\end{align}
which converges to 0 as n → ∞ if p > 0. Hence Xn → µ in probability.
(b) If p = 2 show that Xn → µ almost surely.
[3]
Solution: If p = 2 then the last term in Eq. (1) above is convergent when summed over n,
and since it upper bounds P(|Xn − µ| > ε), we have, for every ε > 0,
\[
\sum_{n=1}^{\infty} P(|X_n - \mu| > \varepsilon) < \infty.
\]
This implies that Xn → µ almost surely (from a theorem in class; the first Borel-Cantelli
lemma yields this criterion).
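To visualize the bound (1) and the shrinking tail probabilities (not required for the exam), one can pick a concrete model; the normal distribution for Xn, the sequence µn, and every numeric value below are assumptions made only for the illustration.

```python
import numpy as np

# Illustrate P(|X_n - mu| > eps) -> 0 when Var(X_n) = M / n^p and mu_n -> mu.
# The normal model for X_n, the choice mu_n = mu + 1/n, and all constants are arbitrary.
rng = np.random.default_rng(5)
M, p, mu, eps = 4.0, 1.0, 1.0, 0.3
for n in [10, 100, 1000, 10000]:
    mu_n = mu + 1.0 / n
    xn = rng.normal(mu_n, np.sqrt(M / n**p), size=200_000)
    tail = np.mean(np.abs(xn - mu) > eps)
    bound = M / (n**p * (eps - abs(mu_n - mu))**2)   # right-hand side of (1)
    print(n, tail, min(bound, 1.0))                  # empirical tail vs. bound (capped at 1)
```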
6. Let X1, X2, . . . be independent and identically distributed random variables with mean
µ and variance σ² < ∞. Let $Z_n = (\overline{X}_n - \mu)/(\sigma/\sqrt{n})$ for n ≥ 1, where
$\overline{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. Let Z be another random variable with a
N(0, 1) distribution and suppose that Z is independent of Zn for all n. Show that Zn does
not converge to Z in probability. [10]
Solution: Let ε > 0 be given. We need to show that P(|Zn − Z| > ε) does not converge
to 0 as n → ∞. One way to see this is to first condition on Z, giving
\begin{align*}
P(|Z_n - Z| > \varepsilon)
&= \int_{-\infty}^{\infty} P(|Z_n - Z| > \varepsilon \mid Z = z)\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz \\
&= \int_{-\infty}^{\infty} P(|Z_n - z| > \varepsilon)\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz \\
&= \int_{-\infty}^{\infty} \bigl[P(Z_n < z - \varepsilon) + P(Z_n > z + \varepsilon)\bigr]\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz \\
&= \int_{-\infty}^{\infty} \bigl[P(Z_n \le z - \varepsilon) - P(Z_n = z - \varepsilon) + P(Z_n > z + \varepsilon)\bigr]\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz,
\end{align*}
where the second equality drops the conditioning on Z because Zn is assumed to be
independent of Z. By the central limit theorem, as n → ∞ we have P(Zn ≤ z − ε) →
Φ(z − ε) and P(Zn > z + ε) → 1 − Φ(z + ε) (and P(Zn = z − ε) → 0), where Φ is the
standard normal cdf. Hence, passing the limit inside the integral (justified by bounded
convergence, since the probabilities are bounded by 1), we obtain
\begin{align*}
P(|Z_n - Z| > \varepsilon)
&\to \int_{-\infty}^{\infty} \bigl[\Phi(z - \varepsilon) + 1 - \Phi(z + \varepsilon)\bigr]\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz \\
&= \int_{-\infty}^{\infty} P(|Y - z| > \varepsilon)\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz,
\end{align*}
where Y has a N(0, 1) distribution. The above is clearly nonzero.
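A Monte Carlo illustration (not part of the solution): since Zn converges in distribution to N(0, 1) and Z is an independent N(0, 1), the limiting value equals P(|W| > ε) with W ∼ N(0, 2). The Exponential(1) choice for the Xi and the values of n, ε, the seed, and the number of trials are arbitrary.

```python
import math
import numpy as np

# Estimate P(|Z_n - Z| > eps) for increasing n and compare with the nonzero limit
# P(|W| > eps), W ~ N(0, 2), which equals 1 - erf(eps / 2).
rng = np.random.default_rng(6)
eps, trials = 0.5, 20_000
mu, sigma = 1.0, 1.0                              # mean and st. dev. of Exponential(1)
limit = 1 - math.erf(eps / 2)                     # = 2 * (1 - Phi(eps / sqrt(2)))
for n in [10, 100, 1000]:
    xbar = rng.exponential(mu, size=(trials, n)).mean(axis=1)
    zn = (xbar - mu) / (sigma / math.sqrt(n))
    z = rng.normal(size=trials)                   # independent N(0, 1)
    print(n, np.mean(np.abs(zn - z) > eps), limit)
```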
Formula Sheet
Special Distributions
Exponential with parameter λ:
\[
f(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0 \\ 0 & \text{otherwise,} \end{cases}
\qquad
E[X] = \frac{1}{\lambda}, \qquad \operatorname{Var}(X) = \frac{1}{\lambda^2}.
\]
Normal (Gaussian) with mean µ and variance σ²:
\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\qquad \text{and} \qquad
F(x) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,dt.
\]