Qualifying Examination Subject: Probability Theory
National Sun Yat-sen University, Department of Applied Mathematics, PhD Program (Statistics Group)
ID number:

Scoring: Problems 1-10 are worth 10 points each; total 100 points.
1. Let $X_1, X_2, \ldots$ be a sequence of random variables that converges in probability to a constant $a$. Assume that $P(X_i > 0) = 1$ for all $i$.
(a) Verify that the sequences defined by $Y_i = \sqrt{X_i}$ and $Y_i' = a/X_i$ converge in probability.
(b) If $S_n^2 \to \sigma^2$ in probability, then prove that $\sigma/S_n$ converges in probability to 1, where
\[ S_n^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \bar{X}_n)^2. \]
Solution:
(a) For any $\varepsilon > 0$,
\begin{align*}
P(|\sqrt{X_n} - \sqrt{a}| > \varepsilon)
&= P\big(|\sqrt{X_n} - \sqrt{a}|(\sqrt{X_n} + \sqrt{a}) > \varepsilon(\sqrt{X_n} + \sqrt{a})\big) \\
&= P\big(|X_n - a| > \varepsilon(\sqrt{X_n} + \sqrt{a})\big) \\
&\le P(|X_n - a| > \varepsilon\sqrt{a}) \to 0
\end{align*}
as $n \to \infty$, since $\sqrt{X_n} + \sqrt{a} \ge \sqrt{a}$ (recall $X_n > 0$ a.s.) and $X_n \to a$ in probability. Thus $\sqrt{X_n} \to \sqrt{a}$ in probability.
For any $\varepsilon \in (0, 1)$,
\begin{align*}
P\left(\left|\frac{a}{X_n} - 1\right| \le \varepsilon\right)
&= P\left(\frac{a}{1+\varepsilon} \le X_n \le \frac{a}{1-\varepsilon}\right) \\
&= P\left(a - \frac{a\varepsilon}{1+\varepsilon} \le X_n \le a + \frac{a\varepsilon}{1-\varepsilon}\right) \\
&\ge P\left(a - \frac{a\varepsilon}{1+\varepsilon} \le X_n \le a + \frac{a\varepsilon}{1+\varepsilon}\right) \\
&= P\left(|X_n - a| \le \frac{a\varepsilon}{1+\varepsilon}\right) \to 1, \quad \text{as } n \to \infty,
\end{align*}
since $X_n \to a$ in probability. Thus $a/X_n \to 1$ in probability.
(b) $S_n^2 \to \sigma^2$ in probability. By (a), $S_n = \sqrt{S_n^2} \to \sqrt{\sigma^2} = \sigma$ in probability, and hence $\sigma/S_n \to 1$ in probability.
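As a quick numerical sanity check of (b), here is a Monte Carlo sketch; the normal population and the values of $\sigma$ and $n$ are illustrative assumptions, not part of the problem:

```python
import math
import random

random.seed(0)

# Monte Carlo sketch of part (b): for i.i.d. draws with true standard
# deviation sigma, the sample variance S_n^2 is close to sigma^2 for
# large n, so sigma/S_n should be close to 1. The normal population and
# the values sigma = 2, n = 100000 are illustrative choices.
sigma = 2.0
n = 100_000
x = [random.gauss(0.0, sigma) for _ in range(n)]
xbar = sum(x) / n
s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
ratio = sigma / math.sqrt(s2)
print(ratio)  # close to 1
```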
2. Let $X_1, X_2, \ldots$ be a sequence of random variables that converges in distribution to a random variable $X$. Let $Y_n$ be a sequence of random variables with the property that for any finite number $c$,
\[ \lim_{n\to\infty} P(Y_n > c) = 1. \]
Show that for any finite number $c$,
\[ \lim_{n\to\infty} P(X_n + Y_n > c) = 1. \]
2007-09-12
Solution: Fix $\varepsilon > 0$; we show there exists $N$ such that $P(X_n + Y_n > c) > 1 - \varepsilon$ for all $n > N$. Since $X_n$ converges in distribution, the sequence is tight, so there exist a finite $m$ and $N_1$ such that $P(X_n > -m) > 1 - \varepsilon/2$ for all $n > N_1$. Choose $N_2$ such that $P(Y_n > c + m) > 1 - \varepsilon/2$ for all $n > N_2$. Then for $n > \max(N_1, N_2)$,
\begin{align*}
P(X_n + Y_n > c) &\ge P(X_n > -m,\ Y_n > c + m) \\
&\ge P(X_n > -m) + P(Y_n > c + m) - 1 > 1 - \varepsilon.
\end{align*}
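The claim can also be seen empirically. A small sketch, with all concrete choices ($X_n$ standard normal, $Y_n = \sqrt{n}$ plus noise) being illustrative assumptions:

```python
import random

random.seed(1)

# Illustrative sketch: take X_n ~ N(0,1) (trivially convergent in
# distribution) and Y_n = sqrt(n) + N(0,1), so P(Y_n > c) -> 1 for every
# fixed c. The empirical frequency of {X_n + Y_n > c} should be near 1
# once sqrt(n) is far above c.
n, c, trials = 10_000, 10.0, 10_000
hits = sum(
    random.gauss(0.0, 1.0) + n ** 0.5 + random.gauss(0.0, 1.0) > c
    for _ in range(trials)
)
frac = hits / trials
print(frac)  # essentially 1
```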
3. Given that $N = n$, the conditional distribution of $Y$ is $\chi^2_{2n}$. The unconditional distribution of $N$ is Poisson($\theta$).
(a) Calculate $E[Y]$ and $\mathrm{Var}(Y)$ (unconditional moments).
(b) Show that, as $\theta \to \infty$, $(Y - E[Y])/\sqrt{\mathrm{Var}(Y)} \to N(0, 1)$ in distribution.
Solution:
(a) $E[Y] = E(E(Y|N)) = E(2N) = 2\theta$, and $\mathrm{Var}(Y) = \mathrm{Var}(E(Y|N)) + E(\mathrm{Var}(Y|N)) = \mathrm{Var}(2N) + E(4N) = 4\theta + 4\theta = 8\theta$.
(b) Since $\chi^2_{2n}$ has mgf $(1-2t)^{-n}$, the moment generating function of $Y$ is
\[ M_Y(t) = E(E(e^{tY}|N)) = E\big((1-2t)^{-N}\big) = e^{\theta\left(\frac{1}{1-2t}-1\right)} = e^{2\theta t/(1-2t)}, \qquad t < 1/2. \]
Note that $M_{aY+b}(t) = e^{bt} M_Y(at)$. Thus, with $E[Y] = 2\theta$ and $\sqrt{\mathrm{Var}(Y)} = \sqrt{8\theta}$,
\begin{align*}
M_{(Y-E[Y])/\sqrt{\mathrm{Var}(Y)}}(t)
&= e^{-\sqrt{\theta/2}\,t}\; e^{2\theta (t/\sqrt{8\theta})/(1 - 2t/\sqrt{8\theta})} \\
&= e^{-\sqrt{\theta/2}\,t}\; e^{\sqrt{\theta/2}\,t\,\left(1 + t/\sqrt{2\theta} + o(\theta^{-1/2})\right)} \to e^{t^2/2}
\end{align*}
as $\theta \to \infty$, which is the mgf of $N(0,1)$. Then the desired property is proved.
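The moments in (a) can be checked by simulation; the value $\theta = 50$, the trial count, and the Poisson helper below are assumptions made for this sketch (Python's standard library has no Poisson sampler):

```python
import math
import random

random.seed(2)

# Simulation sketch of part (a): N ~ Poisson(theta) and, given N = n >= 1,
# Y ~ chi^2_{2n} = Gamma(shape n, scale 2); when N = 0, Y = 0. We check
# E[Y] = 2*theta and Var(Y) = 8*theta.
def sample_poisson(lam):
    # Inversion sampling; adequate for moderate lam. Helper written for
    # this sketch only.
    u = random.random()
    p = math.exp(-lam)
    f, k = p, 0
    while u > f and k < 10 * int(lam) + 100:
        k += 1
        p *= lam / k
        f += p
    return k

theta, trials = 50.0, 50_000
ys = []
for _ in range(trials):
    n = sample_poisson(theta)
    ys.append(random.gammavariate(n, 2.0) if n > 0 else 0.0)
mean = sum(ys) / trials
var = sum((y - mean) ** 2 for y in ys) / (trials - 1)
print(mean, var)  # near 2*theta = 100 and 8*theta = 400
```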
4. Let $f(t)$ be a real-valued continuous function, defined for all real $t$, which satisfies the following conditions:
(i) $f(0) = 1$;
(ii) $f(-t) = f(t)$;
(iii) $f(t)$ is convex for $t > 0$;
(iv) $\lim_{t\to\infty} f(t) = 0$.
Show that $f(t)$ is the characteristic function of an absolutely continuous distribution function.
Solution: Since $f(t)$ is a convex function, it has everywhere a right-hand derivative, which we denote by $f'(t)$. The function $f'(t)$ is non-decreasing for $t > 0$. It follows from (iv) that $f'(t) \le 0$ for $t > 0$ and that
\[ \lim_{t\to\infty} f'(t) = 0. \]
It is easily seen that the integral $\int_{-\infty}^{\infty} e^{-itx} f(t)\,dt$ exists for all $x \ne 0$. We write
\[ p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} f(t)\,dt. \tag{4.3.1} \]
We see from (ii) and (4.3.1) that
\[ p(x) = \frac{1}{\pi} \int_0^{\infty} f(t) \cos tx\,dt. \tag{4.3.2} \]
The conditions of Fourier's inversion theorem are satisfied and we obtain
\[ f(t) = \int_{-\infty}^{\infty} e^{itx} p(x)\,dx. \]
It follows from (i) that $\int_{-\infty}^{\infty} p(x)\,dx = 1$, and the proof is completed as soon as we show that $p(x)$ is non-negative.
Integrating by parts and writing $g(t) = -f'(t)$ we get
\[ p(x) = \frac{1}{\pi x} \int_0^{\infty} g(t) \sin xt\,dt \tag{4.3.3} \]
where $g(t)$ is a non-increasing, non-negative function for $t > 0$ with
\[ \lim_{t\to\infty} g(t) = 0. \]
Then
\[ p(x) = \frac{1}{\pi x} \int_0^{\pi/x} \left[ \sum_{j=0}^{\infty} (-1)^j\, g\!\left(t + \frac{j\pi}{x}\right) \right] \sin tx\,dt. \]
Let $x > 0$; the series $\sum_{j=0}^{\infty} (-1)^j g(t + j\pi/x)$ is an alternating series whose terms are non-increasing in absolute value; since the first term of the series is non-negative, one sees that the integrand is non-negative. Thus $p(x) \ge 0$ for $x > 0$. Formula (4.3.2) indicates that $p(x)$ is an even function of $x$, so that $p(x) \ge 0$ if $x \ne 0$. Therefore $p(x)$ is a frequency function and $f(t)$ is the characteristic function of the absolutely continuous distribution
\[ F(x) = \int_{-\infty}^{x} p(y)\,dy. \]
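The construction can be checked numerically for a concrete $f$. Taking $f(t) = e^{-|t|}$, which satisfies (i)-(iv), the formula above recovers the Cauchy density $1/(\pi(1+x^2))$; the truncation point and step count below are numerical choices for this sketch:

```python
import math

# Numeric sanity check: f(t) = exp(-|t|) satisfies conditions (i)-(iv),
# and p(x) = (1/pi) * int_0^inf f(t) cos(tx) dt should be the Cauchy
# density 1/(pi*(1+x^2)) -- in particular, non-negative.
def p(x, T=50.0, steps=200_000):
    # Trapezoidal rule on [0, T]; the neglected tail is O(e^{-T}).
    dt = T / steps
    total = 0.5 * (1.0 + math.exp(-T) * math.cos(T * x))
    for k in range(1, steps):
        t = k * dt
        total += math.exp(-t) * math.cos(t * x)
    return total * dt / math.pi

val = p(1.0)
print(val)  # close to 1/(2*pi) = 0.159154...
```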
5. Let X be a nonnegative random variable. Prove that
\[ E[X] \le (E[X^2])^{1/2} \le (E[X^3])^{1/3} \le \cdots \]
Solution: $g(x) = x^{n/(n-1)}$ is convex for $x \ge 0$. Hence, by Jensen's inequality,
\[ E[Y^{n/(n-1)}] \ge (E[Y])^{n/(n-1)}. \]
Now set $Y = X^{n-1}$, so that
\[ E[X^n] \ge (E[X^{n-1}])^{n/(n-1)}, \quad \text{or} \quad (E[X^n])^{1/n} \ge (E[X^{n-1}])^{1/(n-1)}. \]
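The chain of inequalities can be checked in closed form for a concrete example. With $X \sim \mathrm{Uniform}(0,1)$ (a convenient choice, not part of the problem), $E[X^n] = 1/(n+1)$, so the claim says $(1/(n+1))^{1/n}$ is non-decreasing in $n$:

```python
# Deterministic check with X ~ Uniform(0, 1), for which E[X^n] = 1/(n+1):
# the moment norms (E[X^n])^(1/n) = (1/(n+1))^(1/n) should be
# non-decreasing in n.
norms = [(1.0 / (n + 1)) ** (1.0 / n) for n in range(1, 11)]
print(norms)  # 0.5, 0.577..., 0.629..., ... increasing
```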
6. Evaluate
\[ \lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}. \]
Solution: Let $X_1, X_2, \ldots$ be i.i.d. Poisson random variables with mean 1. Then $\sum_{i=1}^n X_i$ is Poisson with mean $n$, and
\[ e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = P\left(\sum_{i=1}^n X_i \le n\right) = P\left(\frac{\sum_{i=1}^n X_i - n}{\sqrt{n}} \le 0\right). \]
But for $n$ large, $(\sum_{i=1}^n X_i - n)/\sqrt{n}$ has approximately a standard normal distribution by the central limit theorem, and so
\[ \lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \frac{1}{2}. \]
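The limit can be observed directly by evaluating the partial sums (the specific values of $n$ below are illustrative):

```python
import math

# Direct numeric check: e^{-n} * sum_{k=0}^n n^k/k! is P(Poisson(n) <= n).
# Terms are built with the recurrence t_k = t_{k-1} * n/k to avoid
# overflowing n^k and k! separately.
def poisson_cdf_at_mean(n):
    term = math.exp(-n)  # k = 0 term; representable in a double for n < ~700
    total = term
    for k in range(1, n + 1):
        term *= n / k
        total += term
    return total

for n in (10, 100, 500):
    print(n, poisson_cdf_at_mean(n))  # decreases toward 1/2
```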
7. Suppose that a population consists of a fixed number, say, m, of genes in any generation. Each gene is one of two possible genetic types. If any generation has exactly i
(of its m) genes being type 1, then the next generation will have j type 1 (and m − j
type 2) genes with probability
\[ \binom{m}{j} \left(\frac{i}{m}\right)^{j} \left(\frac{m-i}{m}\right)^{m-j}, \qquad j = 0, 1, \ldots, m. \]
Let Xn denote the number of type 1 genes in the nth generation, and assume that
X0 = i.
(a) Find E[Xn ].
(b) What is the probability that eventually all the genes will be type 1?
Solution:
(a) Given $X_n$, $X_{n+1}$ is binomial with parameters $m$ and $p = X_n/m$. Hence $E[X_{n+1}|X_n] = m(X_n/m) = X_n$, and so $E[X_{n+1}] = E[X_n]$. So $E[X_n] = E[X_0] = i$ for all $n$.
(b) Note that since all states but 0 and $m$ are transient, $X_n$ converges to either 0 or $m$. Hence, letting $n \to \infty$,
\[ E[X_n] \to m \cdot P\{\text{hits } m\} + 0 \cdot P\{\text{hits } 0\} = m \cdot P\{\text{hits } m\}. \]
But $E[X_n] = i$ for all $n$, and thus $P\{\text{hits } m\} = i/m$.
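The absorption probability $i/m$ can be checked by simulating the chain; the values $m = 10$, $i = 3$ and the trial count are illustrative assumptions:

```python
import random

random.seed(3)

# Monte Carlo sketch of part (b): starting from i type-1 genes out of m,
# the chain should be absorbed at m with probability i/m.
m, i, trials = 10, 3, 20_000

def next_generation(x):
    # Binomial(m, x/m) by direct counting; m is small here.
    p = x / m
    return sum(random.random() < p for _ in range(m))

absorbed_at_m = 0
for _ in range(trials):
    x = i
    while 0 < x < m:
        x = next_generation(x)
    absorbed_at_m += (x == m)
frac = absorbed_at_m / trials
print(frac)  # close to i/m = 0.3
```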
8. Consider a Markov chain with states 0, 1, 2, 3, 4. Suppose $p_{0,4} = 1$, and suppose that when the chain is in state $i$, $i > 0$, the next state is equally likely to be any of the states $0, 1, \ldots, i-1$. Find the limiting probabilities of this Markov chain.
Solution: The stationary equations are
\begin{align*}
\pi_0 &= \pi_1 + \tfrac{1}{2}\pi_2 + \tfrac{1}{3}\pi_3 + \tfrac{1}{4}\pi_4 \\
\pi_1 &= \tfrac{1}{2}\pi_2 + \tfrac{1}{3}\pi_3 + \tfrac{1}{4}\pi_4 \\
\pi_2 &= \tfrac{1}{3}\pi_3 + \tfrac{1}{4}\pi_4 \\
\pi_3 &= \tfrac{1}{4}\pi_4 \\
\pi_4 &= \pi_0 \\
\pi_0 + \pi_1 + \pi_2 + \pi_3 + \pi_4 &= 1
\end{align*}
The solution is
\[ \pi_0 = \pi_4 = \frac{12}{37}, \quad \pi_1 = \frac{6}{37}, \quad \pi_2 = \frac{4}{37}, \quad \pi_3 = \frac{3}{37}. \]
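The limiting probabilities can be verified numerically by iterating the transition matrix (a deterministic check; the iteration count is a numerical choice):

```python
# Deterministic check of the limiting probabilities: iterate v <- vP for
# the 5-state chain. The chain is irreducible and aperiodic (0 -> 4 -> 0
# has length 2, 0 -> 4 -> 1 -> 0 has length 3), so v P^k converges to the
# stationary distribution.
P = [
    [0.0, 0.0, 0.0, 0.0, 1.0],   # from 0: go to 4 with probability 1
    [1.0, 0.0, 0.0, 0.0, 0.0],   # from 1: uniform on {0}
    [1/2, 1/2, 0.0, 0.0, 0.0],   # from 2: uniform on {0, 1}
    [1/3, 1/3, 1/3, 0.0, 0.0],   # from 3: uniform on {0, 1, 2}
    [1/4, 1/4, 1/4, 1/4, 0.0],   # from 4: uniform on {0, 1, 2, 3}
]
v = [1.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(5000):
    v = [sum(v[i] * P[i][j] for i in range(5)) for j in range(5)]
print(v)  # ~ [12/37, 6/37, 4/37, 3/37, 12/37]
```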
9. Let $X_1, X_2, \ldots, X_n$ be independent and identically distributed exponential random variables. Show that the probability that the largest of them is greater than the sum of the others is $n/2^{n-1}$. That is, if $M = \max_j X_j$, then show
\[ P\left\{ M > \sum_{i=1}^n X_i - M \right\} = \frac{n}{2^{n-1}}. \]
Hint: What is $P\{X_1 > \sum_{i=2}^n X_i\}$?
Solution: To begin, note that
\begin{align*}
P\left\{X_1 > \sum_{i=2}^n X_i\right\}
&= P\{X_1 > X_2\}\, P\{X_1 - X_2 > X_3 \mid X_1 > X_2\} \\
&\quad \times P\{X_1 - X_2 - X_3 > X_4 \mid X_1 > X_2 + X_3\} \cdots \\
&\quad \times P\{X_1 - X_2 - \cdots - X_{n-1} > X_n \mid X_1 > X_2 + \cdots + X_{n-1}\} \\
&= (1/2)^{n-1},
\end{align*}
by the memoryless property of the exponential distribution.
Hence, since the events $\{X_i > \sum_{j \ne i} X_j\}$, $i = 1, \ldots, n$, are mutually exclusive,
\[ P\left\{ M > \sum_{i=1}^n X_i - M \right\} = \sum_{i=1}^n P\left\{ X_i > \sum_{j \ne i} X_j \right\} = \frac{n}{2^{n-1}}. \]
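A Monte Carlo sketch of the result; the choice $n = 3$ (probability $3/4$) and the trial count are illustrative:

```python
import random

random.seed(4)

# Monte Carlo sketch: for n i.i.d. exponentials, the largest exceeds the
# sum of the others with probability n/2^(n-1); for n = 3 that is 3/4.
n, trials = 3, 100_000
hits = 0
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]
    m = max(xs)
    hits += m > sum(xs) - m
frac = hits / trials
print(frac)  # close to 3/4
```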
10. Let $\{N(t), t \ge 0\}$ be a Poisson process with rate $\lambda$ that is independent of the sequence $X_1, X_2, \ldots$ of independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$. Find
\[ \mathrm{Cov}\left( N(t), \sum_{i=1}^{N(t)} X_i \right). \]
Solution:
\[ E\left[\sum_{i=1}^{N(t)} X_i\right] = E\left[ E\left[\sum_{i=1}^{N(t)} X_i \,\Big|\, N(t)\right] \right] = E[\mu N(t)] = \mu\lambda t, \]
\[ E\left[ N(t) \sum_{i=1}^{N(t)} X_i \right] = E\left[ E\left[ N(t) \sum_{i=1}^{N(t)} X_i \,\Big|\, N(t) \right] \right] = E[\mu N^2(t)] = \mu(\lambda t + \lambda^2 t^2). \]
Therefore,
\[ \mathrm{Cov}\left( N(t), \sum_{i=1}^{N(t)} X_i \right) = \mu(\lambda t + \lambda^2 t^2) - \lambda t\, (\mu \lambda t) = \mu\lambda t. \]