Solutions to 2014 MAS Exam: Theory
1.
a) $P(\text{gray} \mid \text{litter 2}) = \frac{2}{5}$.

b)
\[
P(\text{brown}) = P(\text{brown} \mid \text{litter 1})P(\text{litter 1}) + P(\text{brown} \mid \text{litter 2})P(\text{litter 2})
= \left(\tfrac{2}{3}\right)\left(\tfrac{1}{2}\right) + \left(\tfrac{3}{5}\right)\left(\tfrac{1}{2}\right) = \tfrac{19}{30}.
\]

c)
\[
P(\text{litter 1} \mid \text{brown}) = \frac{P(\text{brown} \mid \text{litter 1})P(\text{litter 1})}{P(\text{brown})}
= \frac{\left(\tfrac{2}{3}\right)\left(\tfrac{1}{2}\right)}{\tfrac{19}{30}} = \frac{10}{19}.
\]
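As a quick numerical check (an illustration only, not required for the solution), the short Python sketch below verifies the fractions exactly:

    from fractions import Fraction

    p_brown_l1, p_l1 = Fraction(2, 3), Fraction(1, 2)   # P(brown | litter 1), P(litter 1)
    p_brown_l2, p_l2 = Fraction(3, 5), Fraction(1, 2)   # P(brown | litter 2), P(litter 2)

    p_brown = p_brown_l1 * p_l1 + p_brown_l2 * p_l2
    print(p_brown)                                      # 19/30
    print(p_brown_l1 * p_l1 / p_brown)                  # 10/19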
2.
A negative binomial random variable has mean $\frac{r}{p}$ and variance $\frac{r(1-p)}{p^2}$.
a) By the weak law of large numbers (the sample mean of a sequence of independent and identically distributed random variables, each with mean $\mu$ and variance $\sigma^2$, converges in probability to $\mu$), we have that $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ converges in probability to $\frac{r}{p}$. That is, for any given small value $\epsilon > 0$, the probability that $\bar{X}$ is more than a distance $\epsilon$ from $\frac{r}{p}$ goes to zero as $n$ increases to $\infty$.
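As a numerical illustration of this convergence (the values $r = 10$, $p = 0.5$ and the sample sizes below are arbitrary choices, not from the exam statement), a minimal simulation sketch:

    import numpy as np

    # Illustrative parameters (assumed for this sketch only).
    r, p = 10, 0.5
    rng = np.random.default_rng(0)

    for n in (10, 100, 10_000):
        # numpy's negative_binomial counts failures before the r-th success,
        # so add r to get the "number of trials" version with mean r/p.
        x = rng.negative_binomial(r, p, size=n) + r
        print(n, x.mean())   # sample mean should approach r/p = 20 as n grows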
b) The central limit theorem says: "For a sequence of independent and identically distributed random variables, each with mean $\mu$ and variance $\sigma^2$, $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ converges in distribution to $N(0,1)$." By the central limit theorem, $\frac{\sqrt{n}\,(\bar{X} - r/p)}{\sqrt{r(1-p)/p^2}}$ converges in distribution to $N(0,1)$.
c) For a negative binomial random variable with $n = 30$, $r = 10$, $p = 0.5$, the mean is $r/p = 20$ and the variance is $r(1-p)/p^2 = 20$. Therefore
\[
P(\bar{X} \le 19) = P\left(\frac{\sqrt{30}\,(\bar{X} - 20)}{\sqrt{20}} \le \frac{\sqrt{30}\,(19 - 20)}{\sqrt{20}}\right) \approx P(Z \le -1.225) = 0.1103.
\]
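A hedged numerical companion to part (c): the snippet below evaluates the normal approximation above and compares it with a direct Monte Carlo estimate of $P(\bar{X} \le 19)$; the seed and simulation size are arbitrary.

    import numpy as np
    from scipy import stats

    r, p, n = 10, 0.5, 30
    rng = np.random.default_rng(1)

    # Normal approximation: P(Z <= sqrt(30)(19 - 20)/sqrt(20))
    z = np.sqrt(n) * (19 - 20) / np.sqrt(r * (1 - p) / p**2)
    print("normal approx:", stats.norm.cdf(z))          # ~0.1103

    # Monte Carlo: sample mean of 30 negative binomial draws (trials version)
    xbar = (rng.negative_binomial(r, p, size=(200_000, n)) + r).mean(axis=1)
    print("simulation  :", (xbar <= 19).mean())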
3.
a) The marginal density of $X$ is
\[
f_X(x) = \int_0^1 f_{X,Y}(x,y)\,dy = \int_0^1 \frac{1}{4}(x + 2y)\,dy = \frac{1}{4}\left(xy + y^2\right)\Big|_{y=0}^{y=1} = \frac{1}{4}(x + 1), \quad 0 \le x < 2.
\]
b) The conditional density of $Y$ given $X$ is
\[
f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)} = \frac{\frac{1}{4}(x + 2y)}{\frac{1}{4}(x + 1)} = \frac{x + 2y}{x + 1}, \quad 0 \le x < 2,\ 0 \le y < 1.
\]
$X$ and $Y$ are not independent because the conditional density of $Y$ given $X$ depends on $x$.
c) $Z = \frac{9}{X+1} \Rightarrow X = \frac{9}{Z} - 1$. So $3 < z \le 9$ and $J = \frac{dx}{dz} = -\frac{9}{z^2}$. Therefore, the density of $Z$ is
\[
f_Z(z) = \frac{1}{4}\left(\frac{9}{z} - 1 + 1\right)|J| = \frac{81}{4}\,z^{-3}, \quad 3 < z \le 9.
\]
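As a sanity check on parts (a) and (c) (an illustration only), the sketch below verifies numerically that $f_X$ and $f_Z$ integrate to one, and compares a simulated probability for $Z = 9/(X+1)$ with the derived density; the cutoff $z = 6$ is an arbitrary test point.

    import numpy as np
    from scipy import integrate

    f_X = lambda x: 0.25 * (x + 1)          # derived marginal density on [0, 2)
    f_Z = lambda z: 81 / (4 * z**3)         # derived density of Z on (3, 9]

    print(integrate.quad(f_X, 0, 2)[0])     # ~1.0
    print(integrate.quad(f_Z, 3, 9)[0])     # ~1.0

    # Simulate X by inverse-CDF sampling, then transform to Z = 9/(X+1).
    rng = np.random.default_rng(2)
    u = rng.uniform(size=100_000)
    x = -1 + np.sqrt(1 + 8 * u)             # solves F_X(x) = (x^2/2 + x)/4 = u
    z = 9 / (x + 1)
    print(np.mean(z <= 6), integrate.quad(f_Z, 3, 6)[0])   # empirical vs. exact P(Z <= 6)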
4.
a) The density $f(x|\theta) = \theta x^{\theta - 1}$, $0 < x < 1$, is that of a beta distribution with parameters $a = \theta$ and $b = 1$. Therefore the mean of the distribution is $\frac{\theta}{\theta + 1}$. Solving $\frac{\hat{\theta}}{\hat{\theta}+1} = \bar{X}$, the method of moments estimator of $\theta$ is
\[
\hat{\theta} = \frac{\bar{X}}{1 - \bar{X}}.
\]
b) The log-likelihood function of $\theta$ is
\[
l(\theta) = \log\left(\prod_{i=1}^{n} f(x_i|\theta)\right) = \log\left(\prod_{i=1}^{n} \theta x_i^{\theta - 1}\right) = \log\left(\theta^n \Big(\prod_{i=1}^{n} x_i\Big)^{\theta - 1}\right) = n\log\theta + (\theta - 1)\sum_{i=1}^{n}\log x_i.
\]
Then $\frac{\partial l(\theta)}{\partial \theta} = \frac{n}{\theta} + \sum_{i=1}^{n}\log x_i$. Solving $\frac{\partial l(\theta)}{\partial \theta} = 0$, we have
\[
\hat{\theta} = \frac{n}{-\sum_{i=1}^{n}\log x_i}.
\]
Because the second derivative of $l(\theta)$ is $-\frac{n}{\theta^2}$, which is always negative, the solution above is a maximizer of $l(\theta)$. Therefore $\hat{\theta} = \frac{n}{-\sum_{i=1}^{n}\log x_i}$ is the MLE of $\theta$.
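As a numerical illustration of parts (a) and (b) (the values $\theta = 2$ and $n = 5000$ are assumptions chosen for this sketch), the snippet simulates from the Beta($\theta$, 1) density and computes both estimators; both should land near the true $\theta$.

    import numpy as np

    theta, n = 2.0, 5_000                    # assumed true parameter and sample size
    rng = np.random.default_rng(3)
    x = rng.beta(theta, 1.0, size=n)         # density theta * x^(theta - 1) on (0, 1)

    xbar = x.mean()
    mom = xbar / (1 - xbar)                  # method of moments estimator, part (a)
    mle = n / (-np.log(x).sum())             # maximum likelihood estimator, part (b)
    print(mom, mle)                          # both should be close to theta = 2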
c) By large-sample theory, the asymptotic variance of the MLE is the inverse of the Fisher information, which is
\[
I(\theta)^{-1} = \left[-E\left(\frac{\partial^2 l(\theta)}{\partial \theta^2}\right)\right]^{-1} = \left[-E\left(-\frac{n}{\theta^2}\right)\right]^{-1} = \left(\frac{n}{\theta^2}\right)^{-1} = \frac{\theta^2}{n}.
\]
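A hedged numerical companion to part (c): the sketch below estimates the sampling variance of the MLE over repeated simulated samples and compares it with $\theta^2/n$; the replication count and parameter values are arbitrary choices.

    import numpy as np

    theta, n, reps = 2.0, 200, 5_000
    rng = np.random.default_rng(4)

    x = rng.beta(theta, 1.0, size=(reps, n))
    mle = n / (-np.log(x).sum(axis=1))       # MLE computed on each replicate
    print(mle.var(), theta**2 / n)           # empirical variance vs. asymptotic theta^2 / n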
5.
a) The joint probability mass function of the six random variables $X_1, \ldots, X_6$ is given by
\[
f(x_1, x_2, x_3, x_4, x_5, x_6 | \theta) = \frac{e^{-6\theta}\,\theta^{\sum_{i=1}^{6} x_i}}{\prod_{i=1}^{6} x_i!}.
\]
The prior distribution of $\theta$ is Gamma(2, 1.5),
\[
\pi(\theta) = \frac{1.5^2}{\Gamma(2)}\,\theta^{2-1} e^{-1.5\theta}.
\]
Thus for the posterior distribution we have the following:
\[
\pi(\theta|x) \propto f(x|\theta)\,\pi(\theta) \propto e^{-6\theta}\,\theta^{\sum_{i=1}^{6} x_i} \cdot \theta^{2-1} e^{-1.5\theta} \propto \theta^{\sum_{i=1}^{6} x_i + 2 - 1}\,e^{-7.5\theta},
\]
which is the kernel of the Gamma$\left(\sum_{i=1}^{6} x_i + 2,\ 7.5\right)$ distribution. Therefore, the posterior distribution of $\theta$ is Gamma$\left(\sum_{i=1}^{6} x_i + 2,\ 7.5\right)$.
b) A Bayes estimator is the mean of the posterior distribution. Therefore, a Bayes estimator of $\theta$ is
\[
\hat{\theta} = \frac{\sum_{i=1}^{6} X_i + 2}{7.5}.
\]
c) A 90% credible interval for $\theta$ is given by the 5th and the 95th percentiles of the Gamma$\left(\sum_{i=1}^{6} x_i + 2,\ 7.5\right)$ distribution.
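For concreteness (the counts below are hypothetical, not data from the exam), the sketch computes the Bayes estimate from part (b) and the 90% credible interval from part (c).

    import numpy as np
    from scipy import stats

    x = np.array([3, 1, 4, 2, 5, 3])          # hypothetical observed counts
    shape = x.sum() + 2                        # posterior shape parameter
    rate = 7.5                                 # posterior rate parameter

    print("Bayes estimate:", shape / rate)     # posterior mean
    # scipy's gamma is parameterized by shape a and scale = 1/rate
    lo, hi = stats.gamma.ppf([0.05, 0.95], a=shape, scale=1/rate)
    print("90% credible interval:", (lo, hi))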
6.
a) The common probability density function of $X_1, X_2, \cdots, X_5$ is given by
\[
f(x) = \theta e^{-\theta x}, \quad x \ge 0.
\]
To derive the distribution of $Y$, we calculate
\[
P(Y > y) = P(X_1 > y, X_2 > y, \cdots, X_5 > y) = \left[P(X_i > y)\right]^5 = \left[\int_y^{\infty} \theta e^{-\theta x}\,dx\right]^5 = e^{-5\theta y}.
\]
Hence $f_Y(y) = 5\theta e^{-5\theta y}$.
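As an added check on the derived distribution of $Y$ (with $\theta = 0.2$ and $y = 1$ as arbitrary illustrative values), the snippet compares the simulated survival probability of the minimum of five Exponential($\theta$) draws with $e^{-5\theta y}$.

    import numpy as np

    theta, y0 = 0.2, 1.0                       # arbitrary illustrative values
    rng = np.random.default_rng(5)

    # Y is the minimum of five independent Exponential(theta) variables.
    y = rng.exponential(scale=1/theta, size=(200_000, 5)).min(axis=1)
    print((y > y0).mean(), np.exp(-5 * theta * y0))   # empirical vs. e^{-5*theta*y}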
b) The power function of the test is given as follows:
\[
\pi(\theta) = P_\theta(\text{rejecting } H_0) = P_\theta(Y > 18) = e^{-5\theta \cdot 18} = e^{-90\theta}.
\]
Note that
\[
\theta_1 > \theta_2 \Rightarrow -90\theta_1 < -90\theta_2 \Rightarrow e^{-90\theta_1} < e^{-90\theta_2}.
\]
Therefore the power function is a decreasing function of $\theta$.
c) The size (the significance level) of the test is
\[
\sup_{\theta \in \Omega_0} \pi(\theta) = \sup_{\theta \ge \frac{1}{20}} e^{-90\theta} = e^{-90/20} = 0.011109.
\]
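To make parts (b) and (c) concrete (an illustration only; the grid of $\theta$ values is arbitrary), the snippet evaluates the power function $\pi(\theta) = e^{-90\theta}$ on a small grid, confirming it is decreasing, and evaluates the size at the boundary $\theta = 1/20$.

    import numpy as np

    power = lambda theta: np.exp(-90 * theta)   # power function from part (b)

    grid = np.array([0.01, 0.05, 0.10, 0.20])
    print(power(grid))                          # strictly decreasing in theta
    print("size:", power(1/20))                 # sup over theta >= 1/20 is at the boundary, ~0.011109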