Stat 366 Lab 2 Solutions (September 21, 2006)
TA: Yury Petrachenko, CAB 484, [email protected], http://www.ualberta.ca/∼yuryp/
Review Questions, Chapters 8, 9
8.15 Suppose that Y1 , Y2 , . . . , Yn denote a random sample of size n from a population with an
exponential distribution whose density is given by

f(y) = (1/θ)e^{−y/θ} for y > 0, and 0 elsewhere.
If Y(1) = min(Y1 , Y2 , . . . , Yn ) denotes the smallest order statistic, show that θ̂ = nY(1) is an
unbiased estimator for θ and find MSE(θ̂).
Solution. Let’s find the distribution function of Y :

F(y) = 1 − e^{−y/θ} for y > 0, and 0 elsewhere.
Now we can use the formula F_Y(1)(y) = 1 − [1 − F(y)]^n, or f_Y(1)(y) = n[1 − F(y)]^{n−1} f(y), to find the density function for Y(1): for y > 0,

f_Y(1)(y) = n(e^{−y/θ})^{n−1} · (1/θ)e^{−y/θ} = (n/θ)e^{−ny/θ}.
We can recognize this density function to be the density of the exponential distribution with parameter θ/n; that is, Y(1) ∼ Exp(θ/n).
Knowing the distribution of Y(1) allows us to compute the expectation of θ̂ = nY(1) :
E[θ̂] = nE[Y(1)] = nθ/n = θ.
So, E[θ̂] = θ, and θ̂ is an unbiased estimator of θ.
To find MSE(θ̂), use the formula MSE(θ̂) = V[θ̂] + (B(θ̂))². Since the estimator is unbiased, its bias B(θ̂) equals zero. For the variance, remember that Y(1) is exponential with parameter θ/n. We have

MSE(θ̂) = V[θ̂] + 0 = n² V[Y(1)] = n² · θ²/n² = θ². ∎
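As a quick numerical illustration (not part of the original solution), the following Python sketch simulates the estimator; the true θ, the sample size n, and the replication count are arbitrary choices. The simulated mean should be close to θ and the simulated MSE close to θ².

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000  # arbitrary true value, sample size, replications

# Each row is a sample of size n from Exp(theta); theta_hat = n * Y_(1).
samples = rng.exponential(scale=theta, size=(reps, n))
theta_hat = n * samples.min(axis=1)

print(theta_hat.mean())                   # close to theta = 2 (unbiasedness)
print(((theta_hat - theta) ** 2).mean())  # close to theta^2 = 4 (MSE)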
9.7 Suppose that Y1 , Y2 , . . . , Yn denote a random sample of size n from an exponential distribution
with density function given by

f(y) = (1/θ)e^{−y/θ} for y > 0, and 0 elsewhere.
In Exercise 8.15 we determined that θ̂1 = nY(1) is an unbiased estimator of θ with MSE(θ̂1) = θ².
Consider the estimator θ̂2 = Ȳ , and find the efficiency of θ̂1 relative to θ̂2 .
Solution. First compute the variance of θ̂2 :
V[θ̂2] = V[Ȳ] = V[(Y1 + · · · + Yn)/n]
= (1/n²)(V[Y1] + · · · + V[Yn])
= (1/n²)(θ² + · · · + θ²)   (n times)
= nθ²/n² = θ²/n.
To find the relative efficiency, we need to find the ratio of the two variances:

eff(θ̂1, θ̂2) = V(θ̂2)/V(θ̂1) = (θ²/n) · (1/θ²) = 1/n.

We conclude that θ̂2 is preferable to θ̂1. ∎
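As an illustrative check (again with arbitrary θ, n, and replication count, not from the original document), a short simulation confirms that the variance ratio is about 1/n:

import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000  # arbitrary choices for the check

samples = rng.exponential(scale=theta, size=(reps, n))
theta1 = n * samples.min(axis=1)  # theta_hat_1 = n * Y_(1)
theta2 = samples.mean(axis=1)     # theta_hat_2 = Y_bar

print(theta2.var() / theta1.var())  # close to 1/n = 0.1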
9.61 Let Y1 , Y2 , . . . , Yn denote a random sample from the probability density function

f(y) = (θ + 1)y^θ for 0 < y < 1 (θ > −1), and 0 elsewhere.
Find an estimator for θ by the method of moments.
Solution. Let’s find the first moment of this distribution:
μ = ∫_0^1 y · (θ + 1)y^θ dy = (θ + 1) [y^{θ+2}/(θ + 2)]_0^1 = (θ + 1)/(θ + 2).
The method of moments implies

Ȳ = (θ̂ + 1)/(θ̂ + 2), ∴ θ̂ = (2Ȳ − 1)/(1 − Ȳ). ∎
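To illustrate the estimator numerically (an added sketch, with an assumed true θ and sample size), one can sample from this density by inverse transform, since its distribution function is F(y) = y^{θ+1} on (0, 1):

import numpy as np

rng = np.random.default_rng(2)
theta, n = 1.5, 100_000  # arbitrary true value and sample size

# Inverse-transform sampling: F(y) = y^(theta+1), so Y = U^(1/(theta+1)).
y = rng.uniform(size=n) ** (1.0 / (theta + 1.0))

ybar = y.mean()
print((2 * ybar - 1) / (1 - ybar))  # method-of-moments estimate, close to 1.5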
9.72 Suppose that Y1 , Y2 , . . . , Yn denote a random sample from the Poisson distribution with
mean λ.
(a) Find the maximum-likelihood estimator λ̂ for λ.
(d) What is the MLE for P(Y = 0) = e^{−λ}?
Solution. Let’s define the likelihood function L(λ | y1 , y2 , . . . , yn ):
L = ∏_{i=1}^n p(yi) = ∏_{i=1}^n λ^{yi} e^{−λ}/yi! = λ^{Σ_{i=1}^n yi} e^{−nλ} / ∏_{i=1}^n yi!.
The problem now is to find the maximum value of this function of λ. Let’s make a simplifying
transformation:
ln L = (Σ_{i=1}^n yi) ln λ − nλ − Σ_{i=1}^n ln(yi!).
Differentiation with respect to λ yields:
d/dλ ln L = (1/λ) Σ_{i=1}^n yi − n = 0.
Solving this equation:
λ = (Σ_{i=1}^n yi)/n, or λ̂ = (Σ_{i=1}^n Yi)/n = Ȳ.
The latter is the MLE for λ.
To answer (d), recall the invariance principle for MLEs: if t(θ) is a one-to-one function of θ, then the MLE of t(θ) is t(θ̂). In our case t(λ) = e^{−λ}, so the MLE of e^{−λ} is e^{−λ̂} = e^{−Ȳ}. ∎
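A minimal simulation sketch (with an assumed λ and sample size, not from the original document) of both the MLE λ̂ = Ȳ and the MLE of P(Y = 0):

import numpy as np

rng = np.random.default_rng(3)
lam, n = 3.0, 100_000  # arbitrary true mean and sample size

y = rng.poisson(lam=lam, size=n)
lam_hat = y.mean()        # MLE of lambda

print(lam_hat)            # close to lambda = 3
print(np.exp(-lam_hat))   # MLE of P(Y = 0), close to exp(-3) ≈ 0.0498
print((y == 0).mean())    # empirical frequency of zeros, for comparison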
9.75a Suppose that Y1 , Y2 , . . . , Yn constitute a random sample from a uniform distribution with
probability density function


f(y) = 1/(2θ + 1) for 0 ≤ y ≤ 2θ + 1, and 0 elsewhere.
Obtain the maximum-likelihood estimator of θ.
Solution. This is a somewhat different problem from the previous one because the support
of the density function depends on θ. Recall the indicator function I(A). It is equal to one
when A is true, and zero if A is false.
We can write the likelihood function in the following way:
L = ∏_{i=1}^n f(yi) = ∏_{i=1}^n [1/(2θ + 1)] I(0 ≤ yi ≤ 2θ + 1) = [1/(2θ + 1)^n] ∏_{i=1}^n I(0 ≤ yi ≤ 2θ + 1).
We can simplify this even further if we note that the product of indicators is non-zero only when all of the underlying conditions are fulfilled, that is, when all yi are non-negative and less than 2θ + 1. Notice that this statement is equivalent to the following: 0 ≤ y(1) and y(n) ≤ 2θ + 1. (We use the order statistics y(1) = min(y1 , . . . , yn ) and y(n) = max(y1 , . . . , yn ).) We have
L = [1/(2θ + 1)^n] I(0 ≤ y(1)) · I(y(n) ≤ 2θ + 1).
Now look at the first part of the likelihood function L, (2θ + 1)^{−n}. Notice that this is a decreasing (and continuous) function of θ. If we want to maximize L, we should choose the value of θ as small as possible. Notice that if 2θ + 1 is smaller than y(n), then the value of L(θ) is zero. So the smallest admissible value of 2θ + 1 is y(n). This gives the minimum value for θ and maximizes the likelihood L(θ). We conclude (provided at least one observation in the sample is positive)
Y(n) = 2θ̂ + 1, ∴ θ̂ = (Y(n) − 1)/2. ∎
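As a quick check (with an assumed θ and sample size), simulating from this uniform density recovers θ̂ = (Y(n) − 1)/2:

import numpy as np

rng = np.random.default_rng(4)
theta, n = 2.0, 100_000  # arbitrary true value and sample size

y = rng.uniform(low=0.0, high=2 * theta + 1, size=n)
print((y.max() - 1) / 2)  # theta_hat, just below theta = 2 since Y_(n) < 2*theta+1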
9.80 Let Y1 , Y2 , . . . , Yn denote a random sample from the probability density function

f(y) = (θ + 1)y^θ for 0 < y < 1 (θ > −1), and 0 elsewhere.
Find the maximum-likelihood estimator for θ. Compare your answer to the method of moments estimator found in Exercise 9.61.
Solution. Define the likelihood function:
L = ∏_{i=1}^n (θ + 1)yi^θ = (θ + 1)^n (∏_{i=1}^n yi)^θ.
Take the logarithms:
ln L = n ln(θ + 1) + θ Σ_{i=1}^n ln yi.
Find critical points:
d/dθ ln L = n/(θ + 1) + Σ_{i=1}^n ln yi = 0,
so

θ = −n/(Σ_{i=1}^n ln yi) − 1,

and finally

θ̂ = −n/(Σ_{i=1}^n ln Yi) − 1.
This is quite different from the method of moments estimator found in Exercise 9.61. ∎
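To see the difference concretely (an added illustration with an assumed θ and sample size), the sketch below computes both estimators on one simulated sample; they give different numbers, yet both land near the true θ:

import numpy as np

rng = np.random.default_rng(5)
theta, n = 1.5, 100_000  # arbitrary true value and sample size

# Inverse-transform sampling as in Exercise 9.61: Y = U^(1/(theta+1)).
y = rng.uniform(size=n) ** (1.0 / (theta + 1.0))

theta_mle = -n / np.log(y).sum() - 1             # MLE from this exercise
theta_mom = (2 * y.mean() - 1) / (1 - y.mean())  # MoM from Exercise 9.61

print(theta_mle, theta_mom)  # distinct estimates, both close to 1.5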