1. It is known that the probability p of tossing heads on an unbalanced coin is either 1/4 or 3/4. The
coin is tossed twice and a value for Y , the number of heads, is observed.
(a) What are the possible values of Y ? Y can be either 0, 1, or 2.
(b) For each possible value of Y , which of the two values for p, 1/4 or 3/4, maximizes the probability
that Y = y? For Y = 0, Pr[Y = 0|p = .25] = 9/16 > 1/16 = Pr[Y = 0|p = .75], so p = .25 is the
maximizer. For Y = 1, both values of p yield the same probability, 3/8; there is not a unique
maximizer. For Y = 2, Pr[Y = 2|p = .75] = 9/16 > 1/16 = Pr[Y = 2|p = .25], so p = .75 is the maximizer.
(c) Depending on the value of y actually observed, which of the two possible values for p is the
MLE? Note: for Y = 1, the MLE is not unique. For Y = 0, p̂ = .25 is the MLE. For Y = 1, p̂ is
either .25 or .75; the MLE is not unique. For Y = 2, p̂ = .75.
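As a quick numerical check of problem 1 (a sketch I added, not part of the original solution), the following Python code tabulates Pr[Y = y|p] for both candidate values of p and reports which one maximizes each probability; the helper name binom_pmf is my own.

    from math import comb

    n = 2
    candidates = (0.25, 0.75)

    def binom_pmf(y, n, p):
        # P(Y = y) for Y ~ Binomial(n, p)
        return comb(n, y) * p**y * (1 - p)**(n - y)

    for y in range(n + 1):
        probs = {p: binom_pmf(y, n, p) for p in candidates}
        best = [p for p in candidates if probs[p] == max(probs.values())]
        print(f"y={y}: Pr[.25]={probs[0.25]:.4f}, Pr[.75]={probs[0.75]:.4f}, maximizer(s): {best}")

Running it prints 9/16 vs 1/16 for y = 0, a tie at 3/8 for y = 1, and 1/16 vs 9/16 for y = 2, matching parts (b) and (c).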
2. Let X1 , X2 ,..., Xn denote n independent and identically distributed Bernoulli random variables such
that P (Xi = 1) = p and P (Xi = 0) = 1 − p, for each i = 1, 2, ..., n.
(a) Show that Σᵢ₌₁ⁿ Xi is sufficient for p. p(X1 , . . . , Xn |p) = p^(Σ Xi) (1 − p)^(n − Σ Xi), which shows by
Theorem 9.4 that Σᵢ₌₁ⁿ Xi is sufficient for p.
(b) Find p̂, the MLE of p. The MLE is the value of p that maximizes p^(Σ Xi) (1 − p)^(n − Σ Xi). Take the
logarithm, take the derivative w.r.t. p, and set it to zero: 0 = Σ Xi /p − (n − Σ Xi )/(1 − p), which gives p̂ = Σ Xi /n.
(c) Find the MLE of the variance of Xi . The variance of Xi is p(1 − p). We get the MLE of the
variance by plugging in the MLE of p. (See page 480.) V̂(Xi ) = p̂(1 − p̂) = Σ Xi (n − Σ Xi )/n².
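To illustrate problem 2 numerically (my own sketch; the sample size and true p are arbitrary choices), the code below simulates Bernoulli data and checks that the closed-form MLE Σ Xi /n agrees with a grid maximization of the log-likelihood.

    import math
    import random

    random.seed(1)
    n, true_p = 100, 0.3
    x = [1 if random.random() < true_p else 0 for _ in range(n)]
    s = sum(x)  # the sufficient statistic, the sum of the Xi

    def loglik(p):
        # log of p^s * (1 - p)^(n - s)
        return s * math.log(p) + (n - s) * math.log(1 - p)

    p_grid = max((i / 1000 for i in range(1, 1000)), key=loglik)
    print("closed form:", s / n, " grid maximizer:", p_grid)
    print("MLE of variance:", s * (n - s) / n**2)

The two maximizers agree to the resolution of the grid, and the last line is the plug-in estimate p̂(1 − p̂) from part (c).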
3. Let Y1 , Y2 ,..., Yn denote a random sample from a Poisson distribution with parameter λ.
(a) Show that Σᵢ₌₁ⁿ Yi is sufficient for λ. p(Y1 , . . . , Yn |λ) = Πᵢ₌₁ⁿ λ^Yi e^(−λ) /Yi ! = λ^(Σ Yi) e^(−nλ) /Π Yi !,
which shows by Theorem 9.4 that Σ Yi is sufficient for λ.
(b) Find the MLE λ̂. The log-likelihood is (Σ Yi ) log λ − nλ − Σ log(Yi !). Take the derivative w.r.t. λ and set it to zero: 0 = Σ Yi /λ − n, which gives λ̂ = Ȳ .
(c) Show that λ̂ is unbiased. E(λ̂) = E(Ȳ ) = E(Yi ) = λ, which means that λ̂ is unbiased.
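As an illustration of problem 3(c) (my own sketch; the true λ, sample size, and replication count are arbitrary choices), the following simulation averages the MLE Ȳ over many replications and recovers approximately the true λ, as unbiasedness predicts.

    import numpy as np

    rng = np.random.default_rng(0)
    lam_true, n, reps = 4.0, 50, 20000

    samples = rng.poisson(lam_true, size=(reps, n))  # each row is one sample of size n
    lam_hat = samples.mean(axis=1)                   # the MLE for each sample is the row mean
    print("average of the MLEs:", lam_hat.mean())    # close to lam_true = 4.0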
4. Let X1 , . . . , Xn be a random sample from the Uniform(0, θ) distribution.
(a) Show that X(n) , the largest Xi , is sufficient for θ. Let 1(0,θ) (Xi ) be the function that is 1 if
Xi is in the interval (0, θ) and 0 otherwise; 1(0,θ) (Xi ) is called the indicator function of (0, θ).
p(X1 , . . . , Xn |θ) = Πᵢ₌₁ⁿ (1/θ) 1(0,θ) (Xi ) = (1/θ)ⁿ 1(0,θ) (X(n) ). (I assume all Xi s are positive, because
that's implied by every value of θ, so I don't need to check whether X(1) > 0. But whether X(n) < θ
holds depends on θ, so I do need to check it.) We've written the joint density as a function of θ and X(n) ,
which shows by Theorem 9.4 that X(n) is sufficient for θ.
(b) Find the MLE θ̂. For θ < X(n) , p(X1 , . . . , Xn |θ) = 0. For θ ≥ X(n) , p(X1 , . . . , Xn |θ) = (1/θ)ⁿ ,
which is decreasing in θ and so is maximized by θ̂ = X(n) .
(c) Show that the MLE is a biased estimator of θ. From an earlier problem we had E(X(n) ) = (n/(n + 1))θ ≠ θ.
(d) Show that the bias decreases as n → ∞. The bias is θ − E(X(n) ) = θ/(n + 1), which goes to 0 as n → ∞.
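A simulation of problem 4(c)-(d) (my own sketch, with arbitrary θ and n): the sample maximum systematically underestimates θ, and the average shortfall matches the bias θ/(n + 1).

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 2.0, 10, 100000

    # The MLE in each replication is the sample maximum X_(n).
    theta_hat = rng.uniform(0, theta, size=(reps, n)).max(axis=1)

    print("mean of the MLE:", theta_hat.mean())     # about n/(n+1)*theta = 1.818
    print("theoretical mean:", n / (n + 1) * theta)
    print("bias theta/(n+1):", theta / (n + 1))     # shrinks as n grows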
5. Let X1 , X2 , X3 , . . . be independent Bernoulli random variables such that P (Xi = 1) = p and P (Xi =
0) = 1 − p for each i = 1, 2, 3, . . . . Suppose we don’t observe the Xi s. Instead we observe the random
variable Y , which is the number of trials necessary to obtain the first success, that is, the value of i
for which Xi = 1 first occurs. Then Y has a geometric distribution with P (Y = y) = p(1 − p)^(y−1) , for
y = 1, 2, 3, . . . .
(a) Find the MLE p̂. For a given y, we want to find the p that maximizes p(1 − p)^(y−1) . Take logs to
get log(p) + (y − 1) log(1 − p). Differentiate w.r.t. p and set to 0 to get 0 = 1/p − (y − 1)/(1 − p), or p̂ = 1/y.
(b) Is the MLE biased? Explain. Take an example, say p = .5. Then E(p̂) = E(1/Y ) = 1 · Pr[Y =
1] + (1/2) · Pr[Y = 2] + other terms = 1 · (1/2) + (1/2) · (1/4) + other terms > 1/2 = p. That's bias.
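The bias in problem 5(b) can also be seen numerically (my own sketch, not part of the original solution): truncating the series for E(1/Y ) at a large y gives roughly 0.693 when p = .5, well above p.

    # E(p_hat) = E(1/Y) for Y ~ Geometric(p), with P(Y = y) = p(1-p)^(y-1)
    p = 0.5
    e_phat = sum((1 / y) * p * (1 - p) ** (y - 1) for y in range(1, 200))
    print("E(p_hat) is about", e_phat, "which exceeds p =", p)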
6. Suppose that Y1 , Y2 , ..., Yn denote a random sample from an exponentially distributed population with
mean θ. Find the MLE of the population variance θ². The exponential distribution has mean θ and
variance θ², so if we can find the MLE of θ, we can just square it to get the MLE of the variance θ²
(by the invariance property of MLEs). p(Y1 , . . . , Yn |θ) = Πᵢ₌₁ⁿ (1/θ) e^(−Yi /θ) = (1/θ)ⁿ e^(−Σ Yi /θ) .
Take logs, differentiate, set to 0, to get θ̂ = Ȳ and, therefore, θ̂² = Ȳ ².
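A final numerical sketch for problem 6 (my own, with arbitrary θ and n): simulate exponential data, take Ȳ as the MLE of θ, and square it for the MLE of the variance.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n = 3.0, 200

    y = rng.exponential(scale=theta, size=n)  # exponential with mean theta
    theta_hat = y.mean()                      # MLE of theta
    print("theta_hat:", theta_hat, " MLE of the variance:", theta_hat**2)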