STAT 5352
Probability and Statistics II
Spring 2012
Homework #1
Due on Thursday, February 9, 2012
Instructor: Larry Ammann
Course Book: John E. Freund's Mathematical Statistics with Applications (7th Edition)
Emrah Cem (exc103320)
Contents

Exercise 8.2
    (a)
    (b)
Exercise 8.3
Exercise 8.21
Exercise 8.22
Exercise 8.23
Exercise 8.24
Exercise 8.31
Exercise 8.37
Exercise 8.64
    (a)
    (b)
Exercise 8.2
Use Theorem 4.14 and its corollary to show that if $X_{11}, X_{12}, \ldots, X_{1n_1}, X_{21}, X_{22}, \ldots, X_{2n_2}$ are independent random variables, with the first $n_1$ constituting a random sample from an infinite population with mean $\mu_1$ and variance $\sigma_1^2$ and the other $n_2$ constituting a random sample from an infinite population with mean $\mu_2$ and variance $\sigma_2^2$, then
(a)
\[
E(\bar{X}_1 - \bar{X}_2) = \mu_1 - \mu_2;
\]
Answer:
\[
\begin{aligned}
E(\bar{X}_1 - \bar{X}_2)
&= E\!\left(\frac{1}{n_1}\sum_{i=1}^{n_1} X_{1i} - \frac{1}{n_2}\sum_{i=1}^{n_2} X_{2i}\right) \\
&= \frac{1}{n_1}\sum_{i=1}^{n_1} E(X_{1i}) - \frac{1}{n_2}\sum_{i=1}^{n_2} E(X_{2i}) \\
&= \frac{1}{n_1}\sum_{i=1}^{n_1} \mu_1 - \frac{1}{n_2}\sum_{i=1}^{n_2} \mu_2 \\
&= \frac{1}{n_1}\,n_1\mu_1 - \frac{1}{n_2}\,n_2\mu_2 \\
&= \mu_1 - \mu_2
\end{aligned}
\]
(b)
\[
\operatorname{var}(\bar{X}_1 - \bar{X}_2) = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}
\]
Answer: If $X_1, \ldots, X_n$ are independent random variables, then
\[
\operatorname{var}\!\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2 \operatorname{var}(X_i). \tag{1}
\]
Since $\bar{X}_1$ and $\bar{X}_2$ are independent,
\[
\begin{aligned}
\operatorname{var}(\bar{X}_1 - \bar{X}_2)
&= (1)^2\operatorname{var}(\bar{X}_1) + (-1)^2\operatorname{var}(\bar{X}_2) \\
&= \operatorname{var}(\bar{X}_1) + \operatorname{var}(\bar{X}_2) \\
&= \operatorname{var}\!\left(\frac{1}{n_1}\sum_{i=1}^{n_1} X_{1i}\right) + \operatorname{var}\!\left(\frac{1}{n_2}\sum_{i=1}^{n_2} X_{2i}\right) \\
&= \left(\frac{1}{n_1}\right)^2\sum_{i=1}^{n_1}\operatorname{var}(X_{1i}) + \left(\frac{1}{n_2}\right)^2\sum_{i=1}^{n_2}\operatorname{var}(X_{2i}) \\
&= \left(\frac{1}{n_1}\right)^2 n_1\sigma_1^2 + \left(\frac{1}{n_2}\right)^2 n_2\sigma_2^2 \\
&= \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}
\end{aligned}
\]
Note that the fourth line follows from the third by equality (1).
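The two identities above lend themselves to a quick numerical sanity check. The sketch below is a Monte Carlo illustration; the values of $\mu_1, \sigma_1, n_1, \mu_2, \sigma_2, n_2$ and the replicate count are arbitrary choices, not part of the exercise. It simulates many replicates of $\bar{X}_1 - \bar{X}_2$ and compares the empirical mean and variance with $\mu_1 - \mu_2$ and $\sigma_1^2/n_1 + \sigma_2^2/n_2$:

```python
import random

random.seed(0)
mu1, sigma1, n1 = 5.0, 2.0, 40   # population 1 (illustrative values)
mu2, sigma2, n2 = 3.0, 1.0, 60   # population 2 (illustrative values)
reps = 20000                     # number of simulated differences

diffs = []
for _ in range(reps):
    xbar1 = sum(random.gauss(mu1, sigma1) for _ in range(n1)) / n1
    xbar2 = sum(random.gauss(mu2, sigma2) for _ in range(n2)) / n2
    diffs.append(xbar1 - xbar2)

mean = sum(diffs) / reps
var = sum((d - mean) ** 2 for d in diffs) / (reps - 1)
# Empirical mean should be near mu1 - mu2 = 2 and the empirical
# variance near sigma1^2/n1 + sigma2^2/n2 = 4/40 + 1/60 ≈ 0.1167.
print(mean, var)
```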
Exercise 8.3
With reference to Exercise 8.2, show that if the two samples come from normal populations, then $\bar{X}_1 - \bar{X}_2$ is a random variable having a normal distribution with mean $\mu_1 - \mu_2$ and variance $\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}$. (Hint: Proceed as in the proof of Theorem 8.4.)
Answer: The moment generating functions (MGFs) of the random variables $X_{11}, X_{12}, \ldots, X_{1n_1}$ are
\[
M_{X_{1i}}(\theta) = e^{\mu_1\theta + \frac{1}{2}\sigma_1^2\theta^2}, \quad i = 1, 2, \ldots, n_1.
\]
Similarly, the MGFs of the random variables $X_{21}, X_{22}, \ldots, X_{2n_2}$ are
\[
M_{X_{2i}}(\theta) = e^{\mu_2\theta + \frac{1}{2}\sigma_2^2\theta^2}, \quad i = 1, 2, \ldots, n_2.
\]
Then the MGFs of $\bar{X}_1$ and $\bar{X}_2$ are
\[
M_{\bar{X}_1}(\theta) = e^{\mu_1\theta + \frac{1}{2}\frac{\sigma_1^2}{n_1}\theta^2}, \qquad
M_{\bar{X}_2}(\theta) = e^{\mu_2\theta + \frac{1}{2}\frac{\sigma_2^2}{n_2}\theta^2}.
\]
Theorem 1: If $X_1, X_2, \ldots, X_n$ is a sequence of independent (and not necessarily identically distributed) random variables, and
\[
S_n = \sum_{i=1}^{n} a_i X_i
\]
where the $a_i$ are constants, then the probability density function of $S_n$ is the convolution of the probability density functions of the $X_i$, and the moment-generating function of $S_n$ is given by
\[
M_{S_n}(t) = M_{X_1}(a_1 t)\,M_{X_2}(a_2 t)\cdots M_{X_n}(a_n t).
\]
Using the above theorem:
\[
\begin{aligned}
M_{\bar{X}_1 - \bar{X}_2}(\theta)
&= M_{\bar{X}_1}(\theta)\,M_{\bar{X}_2}(-\theta) \\
&= e^{\mu_1\theta + \frac{1}{2}\frac{\sigma_1^2}{n_1}\theta^2}\, e^{-\mu_2\theta + \frac{1}{2}\frac{\sigma_2^2}{n_2}(-\theta)^2} \\
&= e^{(\mu_1 - \mu_2)\theta + \frac{1}{2}\left(\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\right)\theta^2}
\end{aligned}
\]
This is the MGF of a normal distribution; therefore $\bar{X}_1 - \bar{X}_2$ has a normal distribution with mean $\mu_1 - \mu_2$ and variance $\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}$.
Exercise 8.21
Use Theorem 8.11 to show that, for random samples of size $n$ from a population with variance $\sigma^2$, the sampling distribution of $S^2$ has mean $\sigma^2$ and variance $\frac{2\sigma^4}{n-1}$.
Answer: Let $Y = \frac{(n-1)S^2}{\sigma^2}$. Then, by Theorem 8.11, $Y$ has a chi-square distribution with $n-1$ degrees of freedom. The chi-square distribution with $n-1$ degrees of freedom has mean $n-1$ and variance $2(n-1)$. Let's write the sampling distribution in terms of $Y$, which has known mean and variance:
\[
S^2 = \frac{\sigma^2}{n-1}\,Y.
\]
By the linearity of expectation,
\[
E[S^2] = \frac{\sigma^2}{n-1}\,E[Y] = \frac{\sigma^2}{n-1}(n-1) = \sigma^2.
\]
Similarly, for the variance,
\[
\operatorname{var}(S^2) = \left(\frac{\sigma^2}{n-1}\right)^2 \operatorname{var}(Y) = \frac{\sigma^4}{(n-1)^2}\,2(n-1) = \frac{2\sigma^4}{n-1}.
\]
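The mean and variance just derived can be checked by simulation. This sketch is my addition; the normal population, $\sigma = 2$, $n = 10$, and the replicate count are arbitrary choices. It computes $S^2$ for many samples and compares the empirical moments with $\sigma^2$ and $2\sigma^4/(n-1)$:

```python
import random
import statistics

random.seed(1)
sigma, n, reps = 2.0, 10, 40000   # illustrative choices

# Sample variance S^2 for each of `reps` normal samples of size n.
s2 = [statistics.variance([random.gauss(0.0, sigma) for _ in range(n)])
      for _ in range(reps)]

mean_s2 = statistics.mean(s2)     # should be near sigma^2 = 4
var_s2 = statistics.variance(s2)  # should be near 2*sigma^4/(n-1) ≈ 3.556
print(mean_s2, var_s2)
```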
Exercise 8.22
Show that if $X_1, X_2, \ldots, X_n$ are independent random variables having a chi-square distribution with $v = 1$ and $Y_n = X_1 + X_2 + \ldots + X_n$, then the distribution of
\[
Z = \frac{\frac{Y_n}{n} - 1}{\sqrt{2/n}}
\]
as $n \to \infty$ is the standard normal distribution.
Answer:
\[
Z = \frac{\frac{Y_n}{n} - 1}{\sqrt{2/n}} = \frac{Y_n - n}{\sqrt{2}\sqrt{n}}.
\]
The chi-square distribution with 1 degree of freedom has $\mu = 1$ and $\sigma^2 = 2$ (hence $\sigma = \sqrt{2}$), so $Y_n$ is a sum of i.i.d. random variables with $\mu = 1$ and $\sigma = \sqrt{2}$. Hence, by the central limit theorem,
\[
\lim_{n\to\infty} P\!\left(\frac{Y_n - n}{\sqrt{2}\sqrt{n}} \le x\right) = \lim_{n\to\infty} P(Z \le x) = \Phi(x).
\]
Central Limit Theorem: Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables having mean $\mu$ and finite nonzero variance $\sigma^2$, and let $Y_n = X_1 + X_2 + \ldots + X_n$. Then
\[
\lim_{n\to\infty} P\!\left(\frac{Y_n - n\mu}{\sigma\sqrt{n}} \le x\right) = \Phi(x),
\]
where $\Phi(x)$ is the probability that a standard normal random variable is less than $x$. Therefore, by the CLT, $Z$ has the standard normal distribution as $n \to \infty$.
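The convergence claimed above can be illustrated numerically. Assuming an arbitrary $n = 200$ and 5000 replicates (my choices, not from the exercise), the sketch below draws $Y_n$ as a sum of $n$ squared standard normals and checks that $Z$ places roughly 95% of its mass in $(-1.96, 1.96)$, as a standard normal should:

```python
import math
import random

random.seed(4)
n, reps = 200, 5000   # illustrative choices

def z_draw(n):
    # Y_n: sum of n chi-square(1) variates, i.e. squared standard normals.
    yn = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))
    return (yn / n - 1.0) / math.sqrt(2.0 / n)

coverage = sum(-1.96 < z_draw(n) < 1.96 for _ in range(reps)) / reps
print(coverage)   # should be close to 0.95
```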
Exercise 8.23
Based on the result of Exercise 8.22, show that if $X$ is a random variable having a chi-square distribution with $v$ degrees of freedom and $v$ is large, then the distribution of $\frac{X - v}{\sqrt{2v}}$ can be approximated with the standard normal distribution.

Answer: In the previous question, $Y_n$ is a sum of i.i.d. chi-square random variables with $\mu = 1$ and $\sigma^2 = 2$. A sum of independent chi-square random variables also has a chi-square distribution, with degrees of freedom equal to the sum of the individual degrees of freedom. Therefore, if $X_1, X_2, \ldots, X_v$ are i.i.d. random variables having a chi-square distribution with 1 degree of freedom and $X = X_1 + X_2 + \ldots + X_v$, then $X$ has a chi-square distribution with $v$ degrees of freedom. We also know that $X$ has mean $v$ and variance $2v$. By the central limit theorem, $\frac{X - v}{\sqrt{2v}}$ is approximately $N(0, 1)$.
Exercise 8.24
Use the method of Exercise 8.23 to find the approximate probability that a random variable having a chi-square distribution with $v = 50$ will take on a value greater than 68.

Answer: Let $Y$ be a random variable having a chi-square distribution with 50 degrees of freedom. Then,
\[
\begin{aligned}
P(Y > 68) &= P\!\left(\frac{Y - 50}{\sqrt{100}} > \frac{68 - 50}{\sqrt{100}}\right) \\
&\approx P(Z > 1.8) \\
&= 1 - \Phi(1.8) \\
&= 1 - 0.9641 \\
&= 0.0359
\end{aligned}
\]
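The approximation above can be compared against the exact chi-square tail. For even degrees of freedom $v = 2k$ the chi-square survival function has the closed form $e^{-x/2}\sum_{j=0}^{k-1}(x/2)^j/j!$; the sketch below (my addition) uses it alongside the normal approximation $P(Z > 1.8) = \tfrac{1}{2}\operatorname{erfc}(1.8/\sqrt{2})$:

```python
import math

v, x = 50, 68.0

# Normal approximation from the exercise: P(Z > (x - v)/sqrt(2v)).
z = (x - v) / math.sqrt(2 * v)                # = 1.8
approx = 0.5 * math.erfc(z / math.sqrt(2))    # ≈ 0.0359

# Exact tail for even df v = 2k via the Poisson/chi-square identity.
k, half = v // 2, x / 2
exact = math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(k))

# The exact tail is somewhat larger because the chi-square is right-skewed.
print(approx, exact)
```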
Exercise 8.31
Show that for $v > 2$ the variance of the t distribution with $v$ degrees of freedom is $\frac{v}{v-2}$.

Answer: We can use the variance formula $\operatorname{var}(X) = E[X^2] - (E[X])^2$. Since the t density is symmetric about 0, $E[X] = 0$, so $\operatorname{var}(X) = E[X^2]$. Let's compute $E[X^2]$, writing the t density as $f_X(t) = c\left(1 + \frac{t^2}{v}\right)^{-\frac{v+1}{2}}$ with $c = \frac{1}{\sqrt{v}\,B(\frac{v}{2}, \frac{1}{2})}$:
\[
\begin{aligned}
E[X^2] &= \int_{-\infty}^{\infty} t^2 f_X(t)\,dt \\
&= \int_{-\infty}^{0} t^2 f_X(t)\,dt + \int_{0}^{\infty} t^2 f_X(t)\,dt \\
&= \int_{0}^{\infty} t^2 f_X(-t)\,dt + \int_{0}^{\infty} t^2 f_X(t)\,dt \\
&= 2\int_{0}^{\infty} t^2 f_X(t)\,dt \\
&= 2c\int_{0}^{\infty} t^2 \left(1 + \frac{t^2}{v}\right)^{-\frac{v+1}{2}} dt \\
&= 2c\int_{0}^{\infty} vu\,(1+u)^{-\frac{v}{2}-\frac{1}{2}}\,\frac{\sqrt{v}}{2\sqrt{u}}\,du
\qquad \left(u = \tfrac{t^2}{v},\ t = \sqrt{vu},\ dt = \tfrac{\sqrt{v}}{2\sqrt{u}}\,du\right) \\
&= c\,v^{3/2}\int_{0}^{\infty} u^{3/2-1}(1+u)^{-\frac{3}{2}-\left(\frac{v}{2}-1\right)}\,du \\
&= c\,v^{3/2}\,B\!\left(\tfrac{3}{2}, \tfrac{v}{2}-1\right) \\
&= \frac{1}{\sqrt{v}\,B(\frac{v}{2}, \frac{1}{2})}\,v^{3/2}\,B\!\left(\tfrac{1}{2}+1, \tfrac{v}{2}-1\right) \\
&= v\,\frac{\Gamma(\frac{v}{2}+\frac{1}{2})}{\Gamma(\frac{v}{2})\Gamma(\frac{1}{2})}
     \cdot \frac{\Gamma(\frac{1}{2}+1)\Gamma(\frac{v}{2}-1)}{\Gamma(\frac{v}{2}+\frac{1}{2})} \\
&= v\,\frac{\frac{1}{2}\Gamma(\frac{1}{2})\cdot\frac{2}{v-2}\Gamma(\frac{v}{2})}{\Gamma(\frac{v}{2})\Gamma(\frac{1}{2})} \\
&= \frac{v}{v-2}
\end{aligned}
\]
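The result $\operatorname{var}(X) = v/(v-2)$ can be checked by simulation. This sketch (the parameter values are arbitrary choices) builds t-distributed draws from the definition $T = Z/\sqrt{V/v}$, with $Z$ standard normal and $V$ chi-square with $v$ degrees of freedom, and compares the sample variance with $v/(v-2)$:

```python
import math
import random
import statistics

random.seed(2)
v, reps = 8, 60000   # illustrative choices; need v > 2 for a finite variance

def t_draw(v):
    z = random.gauss(0.0, 1.0)
    # Chi-square with v df as a sum of v squared standard normals.
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(v))
    return z / math.sqrt(chi2 / v)

sample = [t_draw(v) for _ in range(reps)]
var = statistics.variance(sample)
print(var)   # should be near v/(v-2) = 8/6 ≈ 1.333
```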
Exercise 8.37
Verify that if $X$ has an F distribution with $\nu_1$ and $\nu_2$ degrees of freedom and $\nu_2 \to \infty$, the distribution of $Y = \nu_1 X$ approaches the chi-square distribution with $\nu_1$ degrees of freedom.
Answer: Note that for degrees of freedom $v$,
\[
\frac{v s^2}{\sigma^2} \sim \chi^2_v.
\]
By the Weak Law of Large Numbers,
\[
s^2 = \frac{1}{n}\sum_{k=1}^{n}(x_k - \bar{x})^2 \xrightarrow{p} \sigma^2,
\]
and combining this with the display above, we get
\[
\frac{s^2}{\sigma^2} \xrightarrow{p} 1,
\]
i.e., a chi-square random variable divided by its degrees of freedom converges to 1 in probability as the degrees of freedom grow. Now write the F random variable as a ratio of independent chi-squares, each divided by its degrees of freedom: $X = \frac{\chi^2_{\nu_1}/\nu_1}{\chi^2_{\nu_2}/\nu_2}$. As $\nu_2 \to \infty$, the denominator converges to 1 in probability, so $Y = \nu_1 X$ converges in distribution to $\chi^2_{\nu_1}$ by Slutsky's theorem.

Slutsky's theorem: If
\[
X_n \xrightarrow{d} X, \qquad Y_n \xrightarrow{d} c,
\]
then $X_n Y_n \xrightarrow{d} cX$.
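The limit can be illustrated numerically. Assuming the standard construction of an F variate as a ratio of independent chi-squares each divided by its degrees of freedom (the parameter values below are arbitrary), $\nu_1 X$ should have mean and variance close to those of $\chi^2_{\nu_1}$, namely $\nu_1$ and $2\nu_1$, once $\nu_2$ is large:

```python
import random
import statistics

random.seed(3)
v1, v2, reps = 5, 400, 6000   # illustrative choices; v2 is "large"

def chi2(df):
    # Chi-square with df degrees of freedom: sum of df squared normals.
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

y = [v1 * (chi2(v1) / v1) / (chi2(v2) / v2) for _ in range(reps)]

m, s = statistics.mean(y), statistics.variance(y)
print(m, s)   # should be near v1 = 5 and 2*v1 = 10
```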
Exercise 8.64
A random sample of size $n = 81$ is taken from an infinite population with mean $\mu = 128$ and standard deviation $\sigma = 6.3$. With what probability can we assert that the value we obtain for $\bar{X}$ will not fall between 126.6 and 129.4 if we use

(a)
Chebyshev's theorem;

Answer: Chebyshev's theorem, applied to the sampling distribution of $\bar{X}$, states that
\[
\Pr(|\bar{X} - \mu| \ge k\sigma_{\bar{X}}) \le \frac{1}{k^2}.
\]
Here $\sigma_{\bar{X}} = \sigma/\sqrt{n} = 6.3/\sqrt{81} = 0.7$, and 126.6 and 129.4 each lie 1.4 from $\mu = 128$, so $k = 1.4/0.7 = 2$. Hence
\[
\Pr(|\bar{X} - 128| \ge 1.4) \le \frac{1}{2^2} = 0.25,
\]
so we can assert that $\bar{X}$ will not fall between 126.6 and 129.4 with probability at most 0.25.
(b)
the central limit theorem?
Answer: The central limit theorem states that if a random sample of $n$ observations is selected from a population (any population), then when $n$ is sufficiently large, the sampling distribution of $\bar{X}$ will be approximately normal. (The larger the sample size, the better the normal approximation to the sampling distribution of $\bar{X}$.) Let $Z = \frac{\bar{X} - 128}{6.3/\sqrt{81}}$. By the CLT, $Z$ has approximately a standard normal distribution.
\[
\begin{aligned}
P(126.6 < \bar{X} < 129.4)
&= P(126.6 - 128 < \bar{X} - 128 < 129.4 - 128) \\
&= P(-1.4 < \bar{X} - 128 < 1.4) \\
&= P\!\left(\frac{-1.4}{6.3/\sqrt{81}} < \frac{\bar{X} - 128}{6.3/\sqrt{81}} < \frac{1.4}{6.3/\sqrt{81}}\right) \\
&= P(-2 < Z < 2) \\
&= \Phi(2) - \Phi(-2) \\
&= 0.9772 - 0.0228 \\
&= 0.9544
\end{aligned}
\]
The question asks for the probability that the value will not fall between 126.6 and 129.4, so the answer is $1 - 0.9544 = 0.0456$.
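The same computation can be reproduced directly with the error function, using the standard identity $\Phi(x) = \tfrac{1}{2}\operatorname{erfc}(-x/\sqrt{2})$; the code below is a minimal sketch of part (b):

```python
import math

mu, sigma, n = 128.0, 6.3, 81
se = sigma / math.sqrt(n)   # standard error of the mean = 0.7

def phi(x):
    # Standard normal CDF via the complementary error function.
    return 0.5 * math.erfc(-x / math.sqrt(2))

inside = phi((129.4 - mu) / se) - phi((126.6 - mu) / se)
outside = 1.0 - inside
print(round(inside, 4), round(outside, 4))   # 0.9545 0.0455
```

The erf-based values 0.9545/0.0455 differ from the table-based 0.9544/0.0456 above only by table rounding.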