ST2334 PROBLEM SHEET 10
Question 1
Let X_1, . . . , X_n be independent identically distributed random variables. Suppose
we observe realizations x_1, . . . , x_n. Find the MLE of θ assuming the following PDFs:
(1) f_θ(x) = θ x^{θ−1}, x ∈ X = [0, 1], θ > 0.
(2) f_θ(x) = (θ + 1) x^{−θ−2}, x ∈ X = (1, ∞), θ > 0.
(3) f_θ(x) = θ² x exp{−θx}, x ∈ X = R+, θ > 0.
(4) f_θ(x) = (θ/2) exp{−θ|x|}, x ∈ X = R, θ > 0.
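A derived MLE can be sanity-checked by maximizing the log-likelihood numerically on simulated data. Below is a minimal Python sketch for PDF (1), assuming a simulated sample with a hypothetical true value θ = 3 (sampling uses the fact that the CDF of PDF (1) is x^θ); the numerical maximizer should match whatever closed form you obtain by hand:

```python
# Minimal sketch: numerically maximize the log-likelihood for PDF (1),
# f_theta(x) = theta * x^(theta - 1) on [0, 1], using simulated data.
# theta_true = 3.0 is an arbitrary illustrative choice.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 3.0
# The CDF of PDF (1) is x^theta, so inverse-CDF sampling gives U^(1/theta).
x = rng.uniform(size=1000) ** (1.0 / theta_true)

def neg_log_lik(theta):
    # -log L(theta) = -(n log theta + (theta - 1) sum(log x_i))
    return -(x.size * np.log(theta) + (theta - 1.0) * np.log(x).sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
print("numerical MLE:", res.x)  # compare with your hand-derived estimator
```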
Question 2
Let X_1, . . . , X_n be independent identically distributed random variables, with
X_i ∈ X = (η, ∞), where η > 0. We observe realizations x_1, . . . , x_n. The PDF is
f_θ(x) = λ e^{−λ(x−η)}, x ∈ X,
and θ = (λ, η). Find the MLE of θ. For this question you need not deal with the
Hessian matrix; assume any (well-constructed) estimators are maximizers of the
likelihood. [Hint: Think very carefully about X and its relationship to the
likelihood.]
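To see what the hint is pointing at, one can evaluate the log-likelihood on a grid of η values for simulated data, with λ held fixed; the following minimal sketch (illustrative values only) shows how the likelihood behaves as η approaches the smallest observation:

```python
# Minimal sketch: evaluate the Question 2 log-likelihood over a grid of
# eta values, with lambda fixed, on simulated data (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
lam, eta_true = 2.0, 1.5
x = eta_true + rng.exponential(scale=1.0 / lam, size=200)

def log_lik(lam, eta):
    if np.any(x <= eta):   # some x_i falls outside X = (eta, infinity),
        return -np.inf     # so the likelihood is zero
    return x.size * np.log(lam) - lam * np.sum(x - eta)

for eta in [1.0, 1.4, x.min() - 1e-9, x.min() + 1e-9]:
    print(f"eta = {eta:.6f}: log-likelihood = {log_lik(lam, eta)}")
```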
Question 3
Suppose one observes n independent and identically distributed binary trials. Denote
the number of 1's as x, with the corresponding random variable written as X; it is
assumed that X ∼ B(n, θ). Show that the MLE of θ is:
θ̂_n = x/n.
It is now supposed that, a priori to observing the data, θ ∼ Be(a, b), a, b > 0.
That is:
π(θ) = B(a, b)^{−1} θ^{a−1} (1 − θ)^{b−1}, θ ∈ [0, 1],
where
B(a, b)^{−1} = Γ(a + b) / (Γ(a)Γ(b)).
Note also that
B(a, b) = ∫_0^1 u^{a−1} (1 − u)^{b−1} du.
Show that the posterior mean is:
E[Θ|x] = (x + a) / (n + a + b).
Compare this estimator with the MLE.
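The stated posterior mean can be verified numerically. The following minimal sketch (with arbitrary illustrative values for n, x, a, b) integrates θ against the posterior density and compares the result with (x + a)/(n + a + b):

```python
# Minimal sketch: check the stated posterior mean by quadrature.
# n, x, a, b below are arbitrary illustrative values.
import numpy as np
from scipy.integrate import quad
from scipy.special import betaln

n, x, a, b = 20, 7, 2.0, 3.0

def posterior(theta):
    # Posterior density: proportional to likelihood times Be(a, b) prior,
    # normalized by the beta function B(x + a, n - x + b).
    return (theta ** (x + a - 1) * (1.0 - theta) ** (n - x + b - 1)
            / np.exp(betaln(x + a, n - x + b)))

mean, _ = quad(lambda t: t * posterior(t), 0.0, 1.0)
print(mean, (x + a) / (n + a + b))  # the two values should agree
```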
Question 4
Let X_1, . . . , X_n be independent identically distributed random variables, with
X_i ∈ X = R. The PDF is
f(x|θ) = (θ/2) e^{−θ|x|}, x ∈ X,
with θ > 0. A priori to observing the data, we assume that
θ ∼ G(α, β),
that is
π(θ) = (β^α / Γ(α)) θ^{α−1} e^{−βθ}, θ ∈ R+.
Find the posterior PDF of θ.
The posterior predictive is defined to be the PDF:
f(x_{n+1} | x_1, . . . , x_n) = ∫_0^∞ f(x_{n+1}|θ) π(θ | x_1, . . . , x_n) dθ.
Show that the posterior predictive PDF is:
f(x_{n+1} | x_1, . . . , x_n) = ((α + n)/2) (β + Σ_{i=1}^n |x_i|)^{α+n} / (β + Σ_{i=1}^n |x_i| + |x_{n+1}|)^{α+n+1}, x_{n+1} ∈ X.
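The stated posterior predictive can be checked against a direct quadrature of f(x_{n+1}|θ) π(θ | x_1, . . . , x_n), with the posterior itself computed numerically so that no closed-form posterior is needed. A minimal sketch, assuming simulated Laplace data and arbitrary illustrative α, β:

```python
# Minimal sketch: check the stated posterior predictive by quadrature.
# The posterior is handled numerically (unnormalized likelihood x prior,
# normalized by quadrature). alpha, beta and the data are illustrative.
import numpy as np
from scipy.integrate import quad

alpha, beta = 2.0, 1.0
rng = np.random.default_rng(2)
x = rng.laplace(scale=0.5, size=30)  # Laplace data matches f(x|theta)
S = np.abs(x).sum()
n, x_new = x.size, 0.7

def unnorm_post(t):
    # likelihood x prior, dropping factors that do not involve theta
    return t ** n * np.exp(-t * S) * t ** (alpha - 1) * np.exp(-beta * t)

Z, _ = quad(unnorm_post, 0.0, np.inf)
numeric, _ = quad(lambda t: 0.5 * t * np.exp(-t * abs(x_new))
                  * unnorm_post(t) / Z, 0.0, np.inf)
closed = (0.5 * (alpha + n) * (beta + S) ** (alpha + n)
          / (beta + S + abs(x_new)) ** (alpha + n + 1))
print(numeric, closed)  # the two values should agree
```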
Question 5
Let X_1, . . . , X_n be independent identically distributed random variables, with
X_i ∈ X = R+. The Bayes factor is a way to decide between two competing models.
Suppose we assume that the data can come from two competing models with equal
probability, and denote the joint PDF of the data under the two different models as
f(x_1, . . . , x_n) = ∫_Θ [Π_{i=1}^n f(x_i|θ)] π(θ) dθ
and
g(x_1, . . . , x_n) = ∫_Θ [Π_{i=1}^n g(x_i|θ)] π(θ) dθ.
Note that the unknown parameters are the same for each model (as is the prior on
the parameters); it is only the functional form of the PDFs f(·|θ) and g(·|θ)
which differs. The Bayes factor is then:
f(x_1, . . . , x_n) / g(x_1, . . . , x_n).
There is then a well-defined rule for whether one chooses model f or model g, which
we shall not discuss.
Suppose that
f(x|θ) = θ e^{−θx}, x ∈ X,
and
g(x|θ) = θ² x e^{−θx}, x ∈ X.
Show that the Bayes factor, when a priori θ ∼ G(α, β), is:
[Γ(α + n) / (Γ(α + 2n) Π_{i=1}^n x_i)] (β + Σ_{i=1}^n x_i)^n.
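The stated Bayes factor can be checked by computing both marginal likelihoods with numerical quadrature. A minimal sketch, with data simulated from model f and arbitrary illustrative α, β:

```python
# Minimal sketch: check the stated Bayes factor by quadrature of the
# two marginal likelihoods. alpha, beta and the data are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

alpha, beta = 2.0, 1.0
rng = np.random.default_rng(3)
x = rng.exponential(scale=0.5, size=10)  # data from model f, theta = 2
n, s, logprod = x.size, x.sum(), np.log(x).sum()

def prior(t):
    # G(alpha, beta) density (rate parameterization)
    return (beta ** alpha / np.exp(gammaln(alpha))
            * t ** (alpha - 1) * np.exp(-beta * t))

f_marg, _ = quad(lambda t: t ** n * np.exp(-t * s) * prior(t), 0.0, np.inf)
g_marg, _ = quad(lambda t: t ** (2 * n) * np.exp(logprod - t * s) * prior(t),
                 0.0, np.inf)

closed = np.exp(gammaln(alpha + n) - gammaln(alpha + 2 * n)
                - logprod + n * np.log(beta + s))
print(f_marg / g_marg, closed)  # the two values should agree
```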