170B Note
Sangchul Lee
November 3, 2015

1 Week 6

1.1 Summary
• (Various modes of convergence) Let (X_n) be a sequence of RVs.

  (a) We say X_n → c in probability if

      ∀ε > 0,  lim_{n→∞} P(|X_n − c| > ε) = 0.

  (b) We say X_n → c almost surely (a.k.a. with probability 1) if

      P(lim_{n→∞} X_n = c) = 1.

• Convergence with probability 1 implies convergence in probability, but not conversely.

• (Law of large numbers) If (X_n) is an i.i.d. sequence of RVs and EX_n = µ exists, then

  (a) (Weak LLN) (X_1 + ··· + X_n)/n → µ in probability.

  (b) (Strong LLN) (X_1 + ··· + X_n)/n → µ almost surely.
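Here is a quick numerical illustration of both laws (a minimal sketch assuming NumPy is available; the Exponential(1) distribution is an illustrative choice, not part of the summary): the running sample means (X_1 + ··· + X_k)/k settle near µ = 1.

import numpy as np

rng = np.random.default_rng(0)

# i.i.d. Exponential(1) sample, so mu = 1 (illustrative choice).
n = 100_000
x = rng.exponential(scale=1.0, size=n)

# Running sample means (X_1 + ... + X_k)/k for k = 1, ..., n.
running_mean = np.cumsum(x) / np.arange(1, n + 1)

# The SLLN says the whole trajectory settles at mu = 1; the WLLN says
# each fixed k far out is close to 1 with high probability.
print(running_mean[[99, 9_999, 99_999]])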
Remark 1.1. Here is a general remark on establishing convergence.

• Establishing convergence in probability is in general not hard. You can appeal to the definition and estimate the probability P(|X_n − c| > ε). Even the WLLN is proved directly from the definition, so this is a very versatile approach.
  – You may want to identify the limit value c first. In many cases, although not always, c is the limit of EX_n. (See Exercise 1.1.)
  – Next we need to estimate P(|X_n − c| > ε). Often we use Markov's inequality or Chebyshev's inequality; much less often we make a direct estimate.
• Establishing convergence with probability 1 is usually much harder. Except for some lucky cases, the only handy tool is the SLLN. Consequently, many problems are designed to utilize it.
1.2 Problems
Exercise 1.1 (A useful observation). Let (X_n) be such that

    lim_{n→∞} EX_n = c    and    lim_{n→∞} Var(X_n) = 0.

Show that X_n → c in probability.
Solution. By Markov's inequality,

    P(|X_n − c| > ε) = P(|X_n − c|² > ε²) ≤ (1/ε²) E[(X_n − c)²].

Now using the identity

    E[(X_n − c)²] = Var(X_n) + (EX_n − c)²

and the assumptions, we find that lim_{n→∞} P(|X_n − c| > ε) = 0. Therefore X_n → c in probability.
Remark 1.2. X_n → c in probability implies neither EX_n → c nor Var(X_n) → 0. Consider an i.i.d. sequence (U_n) having Uniform([0, 1]) distribution, and define

    X_n = n if U_n ∈ [0, n⁻¹], and X_n = 0 otherwise.

Then X_n → 0 in probability since P(|X_n| > ε) ≤ n⁻¹. On the other hand, EX_n = 1 for every n and Var(X_n) = n − 1 → ∞.
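A short simulation of this counterexample (a sketch assuming NumPy; the tolerance ε = 0.5 is an arbitrary illustrative choice): the empirical P(|X_n| > ε) shrinks like n⁻¹ while the sample mean stays near 1.

import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
eps = 0.5

for n in [10, 100, 1000]:
    u = rng.uniform(size=trials)           # U_n ~ Uniform([0, 1])
    x = np.where(u <= 1.0 / n, n, 0.0)     # X_n = n on [0, 1/n], else 0
    # P(|X_n| > eps) ~ 1/n shrinks, yet the sample mean stays near E[X_n] = 1.
    print(n, (np.abs(x) > eps).mean(), x.mean())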
Exercise 1.2. Let (X_n) be a sequence of independent RVs satisfying X_n ∼ Exponential(3). Show that

    (e^{X_1} + ··· + e^{X_n})/n

converges almost surely as n → ∞ and compute its limit.
Solution. Since (e^{X_n}) is also an i.i.d. sequence, by the SLLN we have

    lim_{n→∞} (e^{X_1} + ··· + e^{X_n})/n = E[e^{X_1}]

almost surely. But E[e^{X_1}] = M_{X_1}(1) = 3/(3 − 1) = 3/2. Therefore the limit is 3/2.
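A numerical sanity check (a sketch assuming NumPy; recall that Exponential(3) here means rate 3, i.e. scale 1/3 in NumPy's convention):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Exponential(3) = rate 3, so scale = 1/3 in NumPy's parametrization.
x = rng.exponential(scale=1.0 / 3.0, size=n)

# SLLN: the sample mean of e^{X_i} approaches E[e^{X_1}] = 3/2.
print(np.exp(x).mean())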
Exercise 1.3. Let (X_n) and (Y_n) be two i.i.d. sequences of RVs such that EX_n = α, Y_n > 0, and EY_n = β > 0. Show that

    (X_1 + ··· + X_n)/(Y_1 + ··· + Y_n)

converges almost surely and find the limit.

Solution. Divide both the numerator and the denominator by n. Then by the SLLN,

    (X_1 + ··· + X_n)/(Y_1 + ··· + Y_n) = ((X_1 + ··· + X_n)/n) / ((Y_1 + ··· + Y_n)/n) → α/β    as n → ∞

almost surely.
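For a concrete check, here is a sketch assuming NumPy, with illustrative distributions of my own choosing (not part of the exercise): X_i ∼ Normal(2, 1), so α = 2, and Y_i ∼ Exponential with mean 4, so β = 4; the ratio should approach 1/2.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative choices: alpha = 2, beta = 4, so the limit is 1/2.
x = rng.normal(loc=2.0, scale=1.0, size=n)
y = rng.exponential(scale=4.0, size=n)

print(x.sum() / y.sum())   # close to alpha/beta = 0.5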
Exercise 1.4. Let 0 < p < 1 and (X_n) be a sequence of i.i.d. RVs satisfying X_n ∼ Bernoulli(p). In each case, show that Z_n converges with probability 1 and compute the limit.

(a) Z_n = (1/n) X_n.

(b) Z_n = X_1 ··· X_n.

(c) Z_n = (1/n)(X_1 + ··· + X_n).

(d) Z_n = (1/n²) Σ_{1≤i<j≤n} X_iX_j.

(e) Z_n = (1/n)(X_1X_2 + X_2X_3 + ··· + X_nX_{n+1}).

Solution.
(a) Since 0 ≤ X_n ≤ 1, we have

    0 ≤ Z_n = (1/n) X_n ≤ 1/n.

By the squeezing lemma, we have lim_{n→∞} Z_n = 0 almost surely.
(b) Notice that

    Z_n = X_1 ··· X_n = 1 if X_1 = 1, ···, X_n = 1, and Z_n = 0 otherwise.

Consequently, we have

    lim_{n→∞} Z_n = 1 if X_1 = 1, X_2 = 1, ···, and lim_{n→∞} Z_n = 0 otherwise.

(For meticulous readers, I remark that 1 ≥ Z_1 ≥ Z_2 ≥ ··· ≥ 0, hence (Z_n) is monotone and bounded. Hence this sequence converges by the completeness of ℝ.) By independence, we find that

    P(X_1 = 1, X_2 = 1, ···) = P(X_1 = 1) P(X_2 = 1) ··· = p × p × ··· = 0.

Therefore lim_{n→∞} Z_n = 0 with probability 1.
(c) By the SLLN, Z_n = (1/n)(X_1 + ··· + X_n) converges to EX_1 = p almost surely.
(d) Notice that

    2Z_n = (1/n²) Σ_{i,j} X_iX_j − (1/n²) Σ_i X_i²
         = ((X_1 + ··· + X_n)/n)² − (1/n) · (X_1² + ··· + X_n²)/n.

By the SLLN, we find that

    (X_1 + ··· + X_n)/n → EX_1 = p    and    (X_1² + ··· + X_n²)/n → E[X_1²] = p

almost surely. Therefore we have lim_{n→∞} Z_n = p²/2 almost surely.
(e) Group odd terms and even terms:

    Z_n = (1/n) Σ_{k=1}^{⌊(n+1)/2⌋} X_{2k−1}X_{2k} + (1/n) Σ_{k=1}^{⌊n/2⌋} X_{2k}X_{2k+1}.

Notice that both the proportion of odd terms and that of even terms converge to 1/2:

    lim_{n→∞} ⌊(n+1)/2⌋/n = 1/2,    lim_{n→∞} ⌊n/2⌋/n = 1/2.

Since (X_1X_2, X_3X_4, ···) and (X_2X_3, X_4X_5, ···) are i.i.d. sequences, by the SLLN we have

    Z_n = (⌊(n+1)/2⌋/n) · (1/⌊(n+1)/2⌋) Σ_{k=1}^{⌊(n+1)/2⌋} X_{2k−1}X_{2k}
          + (⌊n/2⌋/n) · (1/⌊n/2⌋) Σ_{k=1}^{⌊n/2⌋} X_{2k}X_{2k+1}
        → (1/2) E[X_1X_2] + (1/2) E[X_2X_3] = p²    as n → ∞

almost surely.
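All five limits can be checked numerically at once (a sketch assuming NumPy; p = 0.3 and n = 100000 are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 100_000
x = rng.binomial(1, p, size=n + 1)   # X_1, ..., X_{n+1} ~ Bernoulli(p)
s = x[:n]                            # X_1, ..., X_n

print(s[-1] / n)                                   # (a): -> 0
print(s.prod())                                    # (b): -> 0
print(s.mean())                                    # (c): -> p = 0.3
print((s.sum()**2 - (s**2).sum()) / (2 * n**2))    # (d): -> p^2/2 = 0.045
print((x[:n] * x[1:]).mean())                      # (e): -> p^2 = 0.09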
Exercise 1.5. Let (X_n) be a sequence of independent RVs satisfying X_i ∼ Poisson(2i − 1). Let

    M_n = (X_1 + ··· + X_n)/n².

(a) Show that (M_n) converges in probability and find its limit.

(b*) Show that (M_n) converges almost surely and find its limit.
Solution. (a) As usual, computing EM_n helps us anticipate the limit in probability. Indeed,

    EM_n = (EX_1 + ··· + EX_n)/n² = (1 + 3 + ··· + (2n − 1))/n² = 1,

and we expect that M_n → 1 in probability. Then by Chebyshev's inequality,

    P(|M_n − 1| > ε) ≤ (1/ε²) Var(M_n) = (1/(ε²n⁴)) (Var(X_1) + ··· + Var(X_n)) = 1/(ε²n²)

(here Var(X_i) = 2i − 1, so Var(X_1) + ··· + Var(X_n) = n²), which converges to 0. Therefore M_n → 1 in probability.
(b*) We utilize the SLLN. Recall that the sum of two independent Poisson RVs is again Poisson. Using this, we may find an i.i.d. sequence (Y_n) of Poisson(1) RVs and write

    X_1 = Y_1,    X_2 = Y_2 + Y_3 + Y_4,    ···,    X_n = Y_{(n−1)²+1} + ··· + Y_{n²}.

Then it follows that

    M_n = (Y_1 + ··· + Y_{n²})/n².

By the SLLN (applied along the subsequence of sample sizes n²), M_n → 1 almost surely.
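A numerical check of both parts (a sketch assuming NumPy):

import numpy as np

rng = np.random.default_rng(0)
n = 500

# X_i ~ Poisson(2i - 1), independent but not identically distributed.
lams = 2.0 * np.arange(1, n + 1) - 1.0
x = rng.poisson(lams)

print(x.sum() / n**2)   # M_n, close to 1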
1.3 Hard problems*
If you feel that the previous problems are easy enough, you may try the following problems:
Exercise 1.6 (Law of large numbers may fail if EX does not exist). The Cauchy distribution of scale γ, denoted by Cauchy(0, γ), is defined by the PDF

    f(x) = γ / (π(γ² + x²)).
Show that
(a) If X ∼ Cauchy(0, γ), then cX ∼ Cauchy(0, cγ) for c > 0.
(b) If X ∼ Cauchy(0, α) and Y ∼ Cauchy(0, β) are independent, then X + Y ∼ Cauchy(0, α + β).
(c) Let (X_n) be a sequence of i.i.d. RVs having Cauchy(0, 1) distribution. Then

    (X_1 + ··· + X_n)/n ∼ Cauchy(0, 1).

Consequently, the law of large numbers does not hold in this case. (This does not contradict the LLN, since EX_n does not exist.)
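An empirical illustration of (c) (a sketch assuming NumPy): in contrast with the LLN simulations above, the running means of Cauchy(0, 1) samples never settle down, since (X_1 + ··· + X_n)/n is again Cauchy(0, 1) for every n.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.standard_cauchy(size=n)
running_mean = np.cumsum(x) / np.arange(1, n + 1)

# These values keep fluctuating no matter how far out we look.
print(running_mean[[999, 99_999, 999_999]])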
Exercise 1.7 (Limit laws for convergence in probability). Let (X_n), (Y_n) be sequences of RVs. If X_n → α and Y_n → β in probability, then show that

(a) X_n + Y_n → α + β in probability.

(b) X_n − Y_n → α − β in probability.

(c) X_nY_n → αβ in probability.

(d) X_n/Y_n → α/β in probability, if P(Y_n = 0) = 0 and β ≠ 0.
Exercise 1.8 (Squeezing lemma for convergence in probability). Let (X_n), (Y_n), (Z_n) be sequences of RVs. If

• X_n ≤ Y_n ≤ Z_n,
• X_n → c and Z_n → c in probability,

then show that Y_n → c in probability as well.