Midterm #1 – Practice Exam
Question #1: Let $X_1$ and $X_2$ be independent and identically distributed random variables with probability density function $f_X(x) = e^{-x}$ if $x > 0$ and $0$ if $x \le 0$, such that they come from the distribution $X \sim \text{EXP}(1)$. Compute the joint density function of $U = X_1$ and $V = X_1 X_2$.
•	Since $X_1$ and $X_2$ are independent, their joint density function is given by $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = e^{-x_1} e^{-x_2} = e^{-(x_1 + x_2)}$ whenever $x_1 > 0$ and $x_2 > 0$. We have the transformation $u = x_1$ and $v = x_1 x_2$, so $x_1 = u$ and $x_2 = v/x_1 = v/u$, and we can compute the Jacobian as
$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial u} & \frac{\partial x_1}{\partial v} \\ \frac{\partial x_2}{\partial u} & \frac{\partial x_2}{\partial v} \end{bmatrix} = \det \begin{bmatrix} 1 & 0 \\ -\frac{v}{u^2} & \frac{1}{u} \end{bmatrix} = \frac{1}{u}.$
This implies that the joint density is $f_{UV}(u, v) = f_{X_1 X_2}\left(u, \frac{v}{u}\right)|J| = \frac{1}{u} e^{-\left(u + \frac{v}{u}\right)}$ whenever $u > 0$ and $v > 0$.
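This result can be sanity-checked numerically; the following is a minimal sketch (assuming Python with numpy and scipy; the seed and the cutoffs a and b are arbitrary choices) that compares an empirical probability against the corresponding integral of the derived density.

import numpy as np
from scipy.integrate import dblquad

rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, size=1_000_000)
x2 = rng.exponential(1.0, size=1_000_000)
u, v = x1, x1 * x2                                   # U = X1, V = X1 * X2

a, b = 1.5, 2.0
empirical = np.mean((u <= a) & (v <= b))             # Monte Carlo estimate of P(U <= a, V <= b)
analytic, _ = dblquad(lambda vv, uu: np.exp(-(uu + vv / uu)) / uu, 0, a, 0, b)
print(empirical, analytic)                           # the two values should agree to a few decimals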
Question #2: Let $Z$ be a standard normal random variable. Compute the Moment Generating Function (MGF) of $X = |Z|$ using the standard normal distribution function $\Phi(z)$.
•	For a continuous random variable $X$ with density $f_X(x)$, the moment generating function is given by $M_X(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx$. Since $Z \sim N(0,1)$, we know that its density function is $f_Z(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$. We can therefore compute
$M_{|Z|}(t) = E\left(e^{t|Z|}\right) = \int_{-\infty}^{\infty} e^{t|x|} \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} e^{tx - \frac{x^2}{2}}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} e^{\frac{2tx - x^2}{2}}\,dx = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} e^{\frac{t^2 - (x - t)^2}{2}}\,dx = 2 e^{\frac{t^2}{2}} \int_0^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{(x - t)^2}{2}}\,dx.$
Separating out the $t$ variable and making the substitution $u = x - t$, $du = dx$, then yields
$2 e^{\frac{t^2}{2}} \int_{-t}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{u^2}{2}}\,du = 2 e^{\frac{t^2}{2}}\left[\Phi(\infty) - \Phi(-t)\right] = 2 e^{\frac{t^2}{2}}\left[1 - \Phi(-t)\right] = 2 e^{\frac{t^2}{2}}\Phi(t).$
Thus, the Moment Generating Function of $X = |Z|$ is given by $M_{|Z|}(t) = 2\Phi(t)e^{t^2/2}$.
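A quick Monte Carlo check of the closed form (a minimal sketch assuming Python with numpy and scipy; the grid of t values is arbitrary) compares the simulated expectation $E(e^{t|Z|})$ against $2\Phi(t)e^{t^2/2}$.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)
for t in [0.0, 0.5, 1.0]:
    mc = np.mean(np.exp(t * np.abs(z)))              # Monte Carlo estimate of E[e^{t|Z|}]
    closed = 2 * norm.cdf(t) * np.exp(t ** 2 / 2)    # derived MGF 2*Phi(t)*exp(t^2/2)
    print(t, mc, closed)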
Question #3: Let $X_1$ and $X_2$ be independent random variables with respective probability density functions $f_{X_1}(x) = e^{-x}$ if $x > 0$ (and $0$ if $x \le 0$) and $f_{X_2}(x) = \frac{1}{2} e^{-\frac{x-1}{2}}$ if $x > 1$ (and $0$ if $x \le 1$). Calculate the probability density function of the transformed random variable $Y = X_1 + X_2$.
•	Since $X_1$ and $X_2$ are independent random variables, their joint density is $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \frac{1}{2} e^{-x_1} e^{-\frac{x_2 - 1}{2}} = \frac{1}{2} e^{-\frac{2x_1 + x_2 - 1}{2}}$ if $x_1 > 0$ and $x_2 > 1$. We have $y = x_1 + x_2$ and generate $z = x_1$, so $x_1 = z$ and $x_2 = y - x_1 = y - z$, and we can compute the Jacobian as
$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial y} & \frac{\partial x_1}{\partial z} \\ \frac{\partial x_2}{\partial y} & \frac{\partial x_2}{\partial z} \end{bmatrix} = \det \begin{bmatrix} 0 & 1 \\ 1 & -1 \end{bmatrix} = -1.$
This implies that the joint density is $f_{YZ}(y, z) = f_{X_1 X_2}(z, y - z)|J| = \frac{1}{2} e^{-\frac{2z + y - z - 1}{2}} = \frac{1}{2} e^{-\frac{z + y - 1}{2}}$ whenever $z > 0$ and $y - z > 1$, that is $0 < z < y - 1$. We then integrate out $z$ to find the PDF
$f_Y(y) = \int_{-\infty}^{\infty} f_{YZ}(y, z)\,dz = \frac{1}{2} \int_0^{y-1} e^{-\frac{z + y - 1}{2}}\,dz = \frac{1}{2} e^{-\frac{y-1}{2}} \int_0^{y-1} e^{-\frac{z}{2}}\,dz = \frac{1}{2} e^{-\frac{y-1}{2}} \left[-2 e^{-\frac{z}{2}}\right]_0^{y-1} = e^{-\frac{y-1}{2}}\left(1 - e^{-\frac{y-1}{2}}\right).$
Thus, we find that $f_Y(y) = e^{\frac{1-y}{2}}\left(1 - e^{\frac{1-y}{2}}\right)$ whenever $y > 1$ and zero otherwise.
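As a numerical sanity check (a minimal sketch assuming Python with numpy and scipy; the evaluation points are arbitrary), note that $X_2$ can be simulated as $1$ plus an exponential with mean $2$, and the empirical CDF of $Y$ can be compared to the integral of the derived density.

import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, size=1_000_000)
x2 = 1.0 + rng.exponential(2.0, size=1_000_000)      # density (1/2) exp(-(x-1)/2) on x > 1
y = x1 + x2

f_Y = lambda s: np.exp((1 - s) / 2) * (1 - np.exp((1 - s) / 2))
for y0 in [2.0, 3.0, 5.0]:
    print(y0, np.mean(y <= y0), quad(f_Y, 1, y0)[0]) # empirical vs analytic P(Y <= y0)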
Question #4: Let $X_1, \ldots, X_n$ be independent and identically distributed random variables from a cumulative distribution function $F_X(x) = 1 - \frac{1}{x}$ if $x > 1$ and $0$ if $x \le 1$. Find the limiting distribution of the transformed first order statistic given by $X_{1:n}^n$.
•	We can calculate that
$F_{X_{1:n}^n}(x) = P\left(X_{1:n}^n \le x\right) = P\left(X_{1:n} \le x^{1/n}\right) = 1 - P\left(X_{1:n} > x^{1/n}\right) = 1 - \left[P\left(X_i > x^{1/n}\right)\right]^n = 1 - \left[1 - F_X\left(x^{1/n}\right)\right]^n = 1 - \left[1 - \left(1 - \frac{1}{x^{1/n}}\right)\right]^n = 1 - \frac{1}{x}$ when $x > 1$,
where the $n$-th power appears because the minimum exceeds $x^{1/n}$ only if all $n$ independent observations do. Since there is no dependence on $n$, we may simply compute the desired limiting distribution as
$\lim_{n \to \infty} F_{X_{1:n}^n}(x) = \begin{cases} \lim_{n \to \infty}\left(1 - \frac{1}{x}\right) & \text{if } x > 1 \\ \lim_{n \to \infty} 0 & \text{if } x \le 1 \end{cases} = \begin{cases} 1 - \frac{1}{x} & \text{if } x > 1 \\ 0 & \text{if } x \le 1 \end{cases} = F_X(x).$
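The fact that the distribution of $X_{1:n}^n$ does not depend on $n$ can be illustrated by simulation (a minimal sketch assuming Python with numpy; the choice $n = 50$ and the evaluation points are arbitrary), drawing from $F_X$ by inverse-CDF sampling.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 200_000
x = 1.0 / (1.0 - rng.random((reps, n)))              # inverse CDF of F_X(x) = 1 - 1/x, x > 1
t = np.min(x, axis=1) ** n                           # transformed first order statistic X_{1:n}^n

for x0 in [1.5, 2.0, 5.0]:
    print(x0, np.mean(t <= x0), 1 - 1 / x0)          # empirical CDF vs F_X(x0)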
Question #5: Let $X_1, \ldots, X_{90}$ be independent and identically distributed random variables with density function $f_X(x) = 1$ if $x \in (0,1)$ and $0$ if $x \notin (0,1)$. That is, they are sampled from a random variable distributed as $X \sim \text{UNIF}(0,1)$. Approximate $P\left(\sum_{i=1}^{90} X_i \le 77\right)$ using $Z \sim N(0,1)$.
•	Since $X \sim \text{UNIF}(0,1)$, we know that $E(X) = \frac{0+1}{2} = \frac{1}{2}$ and $Var(X) = \frac{(1-0)^2}{12} = \frac{1}{12}$. Then if we let $S = \sum_{i=1}^{90} X_i$, we have $E(S) = \sum_{i=1}^{90} E(X_i) = \frac{90}{2} = 45$ and $Var(S) = \sum_{i=1}^{90} Var(X_i) = \frac{90}{12} = 7.5$, so that $SD(S) = \sqrt{7.5}$. Thus
$P\left(\sum_{i=1}^{90} X_i \le 77\right) = P(S \le 77) = P\left(\frac{S - E(S)}{SD(S)} \le \frac{77 - E(S)}{SD(S)}\right) = P\left(\frac{S - 45}{\sqrt{7.5}} \le \frac{77 - 45}{\sqrt{7.5}}\right) \approx P\left(Z \le \frac{32}{\sqrt{7.5}}\right) = P(Z \le 11.68) = \Phi(11.68) \approx 1,$
where $Z \sim N(0,1)$.
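The approximation can be checked directly by simulation (a minimal sketch assuming Python with numpy and scipy); both the Monte Carlo probability and the normal approximation are essentially $1$.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sums = rng.random((200_000, 90)).sum(axis=1)         # 200,000 replications of the sum of 90 UNIF(0,1)
print(np.mean(sums <= 77))                           # Monte Carlo probability, essentially 1.0
print(norm.cdf((77 - 45) / np.sqrt(7.5)))            # CLT approximation Phi(32 / sqrt(7.5))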
Question #6: If $X \sim \text{EXP}(1)$ such that $f_X(x) = e^{-x}$ if $x > 0$ and $0$ if $x \le 0$, what transformation of $X$ will yield a random variable $Y \sim \text{UNIF}(0,1)$? What about a random variable $Z \sim \text{UNIF}(-1,1)$?
•	Let $Y = 1 - e^{-X}$, so we can compute $F_Y(y) = P(Y \le y) = P(1 - e^{-X} \le y) = P(X \le -\ln(1-y)) = F_X(-\ln(1-y))$, and then by differentiating we obtain $f_Y(y) = \frac{d}{dy}\left[F_X(-\ln(1-y))\right] = f_X(-\ln(1-y)) \cdot \frac{1}{1-y} = (1-y) \cdot \frac{1}{1-y} = 1$ for $y \in (0,1)$, so that $Y \sim \text{UNIF}(0,1)$.
•	Since we need to end up with $f_Z(z) = \frac{1}{2}$ on $(-1,1)$ to conclude that $Z \sim \text{UNIF}(-1,1)$, it is clear that we must make the transformation $Z = 1 - 2e^{-X}$ to obtain this result. We can therefore compute $F_Z(z) = P(Z \le z) = P(1 - 2e^{-X} \le z) = P\left(X \le -\ln\left(\frac{1-z}{2}\right)\right) = F_X\left(-\ln\left(\frac{1-z}{2}\right)\right)$, so after differentiating we have $f_Z(z) = \frac{d}{dz}\left[F_X\left(-\ln\left(\frac{1-z}{2}\right)\right)\right] = f_X\left(-\ln\left(\frac{1-z}{2}\right)\right) \cdot \frac{1}{1-z} = \frac{1-z}{2} \cdot \frac{1}{1-z} = \frac{1}{2}$ for $z \in (-1,1)$, so that we conclude $Z \sim \text{UNIF}(-1,1)$.
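Both transformations can be verified with a Kolmogorov-Smirnov test (a minimal sketch assuming Python with numpy and scipy); non-extreme p-values are consistent with the claimed uniform distributions.

import numpy as np
from scipy.stats import kstest, uniform

rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=100_000)
y = 1 - np.exp(-x)                                   # should be UNIF(0,1)
z = 1 - 2 * np.exp(-x)                               # should be UNIF(-1,1)

print(kstest(y, uniform(loc=0, scale=1).cdf))        # test against UNIF(0,1)
print(kstest(z, uniform(loc=-1, scale=2).cdf))       # test against UNIF(-1,1)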
Question #7: Does convergence in probability imply convergence in distribution? Does
convergence in distribution imply convergence in probability? Give a counterexample if not.
•	Convergence in probability implies convergence in distribution, but convergence in distribution does not imply convergence in probability. As a counterexample, consider $X_n = -Z$ where $Z \sim N(0,1)$. Then $X_n \sim N(0,1)$ by symmetry, so $X_n \xrightarrow{d} Z$. However, we have $P(|X_n - Z| > \epsilon) = P(|-Z - Z| > \epsilon) = P(2|Z| > \epsilon) = P\left(|Z| > \frac{\epsilon}{2}\right) > 0$. This expression will not converge to zero as $n \to \infty$ since there is no dependence on $n$ in the probability expression. This implies that there is no convergence in probability, that is, $X_n \not\xrightarrow{p} Z$.
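The counterexample can be illustrated numerically (a minimal sketch assuming Python with numpy; $\epsilon = 0.5$ is an arbitrary choice): the probability $P(|X_n - Z| > \epsilon)$ stays at the constant value $P(|Z| > \epsilon/2)$ no matter how large $n$ is.

import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
for n in [10, 100, 10_000]:
    z = rng.standard_normal(500_000)
    xn = -z                                          # X_n = -Z has the same N(0,1) distribution for every n
    print(n, np.mean(np.abs(xn - z) > eps))          # stays near P(|Z| > eps/2) ~ 0.80, never shrinks to 0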
Question #8: State the law of large numbers (LLN), including its assumptions. Then state
the central limit theorem (CLT), including its assumptions.
•	Law of Large Numbers: If $X_1, \ldots, X_n$ are independent and identically distributed with finite mean $\mu$ and variance $\sigma^2$, then the sample mean $\bar{X}_n \xrightarrow{p} \mu$.
•	Central Limit Theorem: If $X_1, \ldots, X_n$ are independent and identically distributed with finite mean $\mu$ and variance $\sigma^2$, then $\frac{\left[\sum_{i=1}^{n} X_i\right] - n\mu}{\sigma\sqrt{n}} \xrightarrow{d} N(0,1)$ and $\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0,1)$.
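Both statements can be illustrated by simulation (a minimal sketch assuming Python with numpy and scipy; the EXP(1) population and the sample sizes are arbitrary choices): the sample means concentrate around $\mu$ and the standardized means behave like $N(0,1)$.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0                                 # EXP(1) has mean 1 and variance 1
for n in [10, 100, 1_000]:
    xbar = rng.exponential(1.0, size=(10_000, n)).mean(axis=1)
    z = (xbar - mu) / (sigma / np.sqrt(n))
    # LLN: the average of the sample means is close to mu; CLT: P(Z_n <= 1) approaches Phi(1)
    print(n, xbar.mean(), np.mean(z <= 1.0), norm.cdf(1.0))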
Question #10: Let $X_1$ and $X_2$ be independent random variables with Cumulative Distribution Function (CDF) given by $F_X(x) = 1 - e^{-x^2/2}$ for $x > 0$. Find the joint probability density function $f_{Y_1 Y_2}(y_1, y_2)$ of the random variables $Y_1 = \sqrt{X_1 + X_2}$ and $Y_2 = \frac{X_2}{X_1 + X_2}$.
•	We first find the density $f_X(x) = \frac{d}{dx} F_X(x) = -e^{-\frac{x^2}{2}}(-x) = x e^{-\frac{x^2}{2}}$ for $x > 0$. Then since $X_1$ and $X_2$ are independent random variables, their joint density function is $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \left[x_1 e^{-\frac{x_1^2}{2}}\right]\left[x_2 e^{-\frac{x_2^2}{2}}\right] = x_1 x_2 e^{-\frac{x_1^2 + x_2^2}{2}}$ for $x_1 > 0$, $x_2 > 0$. We then have $y_1 = \sqrt{x_1 + x_2}$ and $y_2 = \frac{x_2}{x_1 + x_2}$, so that $x_2 = y_2(x_1 + x_2) = y_2 y_1^2$ and $x_1 = y_1^2 - x_2 = y_1^2 - y_2 y_1^2$. These computations allow us to compute the Jacobian as
$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} \\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} \end{bmatrix} = \det \begin{bmatrix} 2y_1 - 2y_1 y_2 & -y_1^2 \\ 2y_1 y_2 & y_1^2 \end{bmatrix} = 2y_1^3 - 2y_1^3 y_2 + 2y_1^3 y_2 = 2y_1^3.$
This implies that the joint density function is
$f_{Y_1 Y_2}(y_1, y_2) = f_{X_1 X_2}\left(y_1^2 - y_2 y_1^2,\, y_2 y_1^2\right)|J| = (y_1^2 - y_2 y_1^2)(y_2 y_1^2)\, e^{-\frac{(y_1^2 - y_2 y_1^2)^2 + (y_2 y_1^2)^2}{2}} \cdot 2y_1^3$
whenever $y_1^2 - y_2 y_1^2 > 0$ and $y_2 y_1^2 > 0$. This work also reveals that $Y_1$ and $Y_2$ are not independent since their joint probability density function $f_{Y_1 Y_2}(y_1, y_2)$ cannot be factored into two functions of $y_1$ and $y_2$.
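The derived joint density can be checked numerically (a minimal sketch assuming Python with numpy and scipy; the cutoffs a and b are arbitrary): it should integrate to $1$ over the support $y_1 > 0$, $0 < y_2 < 1$, and its integral over a rectangle should match a Monte Carlo probability, noting that the CDF $1 - e^{-x^2/2}$ is that of a Rayleigh(1) variable.

import numpy as np
from scipy.integrate import dblquad

def f_joint(y2, y1):
    x1, x2 = y1 ** 2 * (1 - y2), y1 ** 2 * y2
    return x1 * x2 * np.exp(-(x1 ** 2 + x2 ** 2) / 2) * 2 * y1 ** 3

print(dblquad(f_joint, 0, np.inf, 0, 1)[0])          # total mass, should be 1

rng = np.random.default_rng(0)
x1 = rng.rayleigh(1.0, size=500_000)                 # draws with CDF 1 - exp(-x^2/2)
x2 = rng.rayleigh(1.0, size=500_000)
y1, y2 = np.sqrt(x1 + x2), x2 / (x1 + x2)
a, b = 1.3, 0.5
print(np.mean((y1 <= a) & (y2 <= b)), dblquad(f_joint, 0, a, 0, b)[0])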
Question #11: Let $X_1, X_2, X_3$ be independent random variables and suppose that we know $X_1 \sim N(2,4)$, $X_2 \sim N(3,9)$, and $X_1 + X_2 + X_3 \sim N(4,16)$. What is the distribution of $X_3$?
•	We use the Moment Generating Function (MGF) technique. Since the three random variables are independent, we have $M_{X_1 + X_2 + X_3}(t) = M_{X_1}(t) M_{X_2}(t) M_{X_3}(t)$, so
$M_{X_3}(t) = \frac{M_{X_1 + X_2 + X_3}(t)}{M_{X_1}(t) M_{X_2}(t)} = \frac{e^{4t + 16t^2/2}}{\left(e^{2t + 4t^2/2}\right)\left(e^{3t + 9t^2/2}\right)} = \frac{e^{4t + 16t^2/2}}{e^{5t + 13t^2/2}} = e^{-t + 3t^2/2}.$
This reveals that the random variable $X_3 \sim N(-1, 3)$ since $M_A(t) = e^{\mu t + \sigma^2 t^2/2}$ when $A \sim N(\mu, \sigma^2)$.
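The conclusion can be verified by simulation (a minimal sketch assuming Python with numpy): adding an $N(-1,3)$ variable to $X_1$ and $X_2$ reproduces the stated mean and variance of the sum.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.normal(2, 2, n)                             # N(2, 4), standard deviation 2
x2 = rng.normal(3, 3, n)                             # N(3, 9), standard deviation 3
x3 = rng.normal(-1, np.sqrt(3), n)                   # candidate N(-1, 3)
s = x1 + x2 + x3
print(s.mean(), s.var())                             # should be close to 4 and 16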
Question #12: Let $X_1, \ldots, X_5$ be independent $N(\mu, \sigma^2)$ random variables. Give examples of random variables $Y_1, \ldots, Y_4$ defined in terms of the random variables $X_1, \ldots, X_5$ such that we have $Y_1 \sim N(0,1)$, $Y_2 \sim \chi^2(3)$, $Y_3 \sim F(1,3)$, and $Y_4 \sim t(3)$.
•	Let $Z_i = \frac{X_i - \mu}{\sigma}$ for $i = 1, \ldots, 5$, so that each $Z_i \sim N(0,1)$. Then we can take $Y_1 = Z_1 = \frac{X_1 - \mu}{\sigma} \sim N(0,1)$, $Y_2 = \sum_{i=2}^{4} Z_i^2 = \sum_{i=2}^{4} \left(\frac{X_i - \mu}{\sigma}\right)^2 \sim \chi^2(3)$, $Y_3 = \frac{Z_1^2 / 1}{\left[\sum_{i=2}^{4} Z_i^2\right] / 3} \sim F(1,3)$, and $Y_4 = \frac{Z_1}{\sqrt{\left[\sum_{i=2}^{4} Z_i^2\right] / 3}} \sim t(3)$, where the chi-squared sums are built from $X_2, X_3, X_4$ so that they are independent of the $Z_1$ appearing in the numerators. These random variables are constructed since if some $X \sim N(\mu, \sigma^2)$, then $Z = \frac{X - \mu}{\sigma} \sim N(0,1)$. This implies that $W = Z^2 \sim \chi^2(1)$, so that $V = \sum_{i=1}^{k} W_i \sim \chi^2(k)$ when the $W_i$ are independent. Finally, we note that $F = \frac{U/p}{V/q} \sim F(p, q)$ when $U \sim \chi^2(p)$ and $V \sim \chi^2(q)$ are independent, and that $T = \frac{Z}{\sqrt{V/k}} \sim t(k)$ when $Z \sim N(0,1)$ and $V \sim \chi^2(k)$ are independent.
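These constructions can be checked against the reference distributions with Kolmogorov-Smirnov tests (a minimal sketch assuming Python with numpy and scipy; the values of mu and sigma are arbitrary choices).

import numpy as np
from scipy.stats import kstest, norm, chi2, f, t

rng = np.random.default_rng(0)
mu, sigma, reps = 5.0, 2.0, 200_000
x = rng.normal(mu, sigma, size=(reps, 5))            # rows are samples X1, ..., X5
z = (x - mu) / sigma

y1 = z[:, 0]
y2 = (z[:, 1:4] ** 2).sum(axis=1)                    # built from X2, X3, X4 only
y3 = (z[:, 0] ** 2 / 1) / (y2 / 3)
y4 = z[:, 0] / np.sqrt(y2 / 3)

print(kstest(y1, norm.cdf).pvalue)                   # against N(0,1)
print(kstest(y2, chi2(3).cdf).pvalue)                # against chi-squared(3)
print(kstest(y3, f(1, 3).cdf).pvalue)                # against F(1,3)
print(kstest(y4, t(3).cdf).pvalue)                   # against t(3)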
Question #13: If $X_1$ and $X_2$ are independent $\text{EXP}(1)$ random variables such that their densities are $f(x) = e^{-x} \mathbf{1}\{x > 0\}$, then find the joint density of $U = \frac{X_1}{X_2}$ and $V = X_1 + X_2$.
•	By independence, $f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \left[e^{-x_1}\right]\left[e^{-x_2}\right] = e^{-(x_1 + x_2)}$ if $x_1 > 0$ and $x_2 > 0$. We have $u = \frac{x_1}{x_2}$ and $v = x_1 + x_2$, so $x_1 = u x_2 = u(v - x_1)$, which implies that $x_1 + u x_1 = uv$, so $x_1 = \frac{uv}{1+u}$. Then $x_2 = v - x_1 = v - \frac{uv}{1+u} = \frac{v + uv - uv}{1+u} = \frac{v}{1+u}$. This allows us to compute the Jacobian as
$J = \det \begin{bmatrix} \frac{\partial x_1}{\partial u} & \frac{\partial x_1}{\partial v} \\ \frac{\partial x_2}{\partial u} & \frac{\partial x_2}{\partial v} \end{bmatrix} = \det \begin{bmatrix} \frac{v(1+u) - uv}{(1+u)^2} & \frac{u}{1+u} \\ \frac{-v}{(1+u)^2} & \frac{1}{1+u} \end{bmatrix} = \frac{v(1+u) - uv}{(1+u)^3} + \frac{uv}{(1+u)^3} = \frac{v(1+u)}{(1+u)^3} = \frac{v}{(1+u)^2}.$
Thus, the joint density is $f_{UV}(u, v) = f_{X_1 X_2}\left(\frac{uv}{1+u}, \frac{v}{1+u}\right)|J| = \frac{v}{(1+u)^2} e^{-\left(\frac{uv}{1+u} + \frac{v}{1+u}\right)} = \frac{v}{(1+u)^2} e^{-v}$ whenever we have that $u > 0$, $v > 0$ and zero otherwise.
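Since the joint density factors as $\frac{1}{(1+u)^2} \cdot v e^{-v}$, it follows that $U$ and $V$ are independent with $P(U \le u) = \frac{u}{1+u}$ and $V \sim \text{GAMMA}(2,1)$; a minimal simulation sketch (assuming Python with numpy and scipy; the cutoffs are arbitrary) checks these consequences.

import numpy as np
from scipy.stats import kstest, gamma

rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, size=500_000)
x2 = rng.exponential(1.0, size=500_000)
u, v = x1 / x2, x1 + x2

print(kstest(v, gamma(2).cdf).pvalue)                # V should be Gamma(2, 1) with density v*exp(-v)
print(kstest(u, lambda q: q / (1 + q)).pvalue)       # U should have CDF u / (1 + u) on u > 0
a, b = 1.0, 2.0
print(np.mean((u <= a) & (v <= b)), np.mean(u <= a) * np.mean(v <= b))   # equal if U and V are independent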
Question #14: If $X_1, X_2, X_3$ are a random sample of size $n = 3$ from the $\text{UNIF}(0,1)$ distribution, then a) find the joint density of the order statistics $Y_1, Y_2, Y_3$, b) find the marginal densities of $Y_1$ and $Y_3$, and c) find the mean of the sample range $R = Y_3 - Y_1$.
a) We have $f_{Y_1 Y_2 Y_3}(y_1, y_2, y_3) = 3! \prod_{i=1}^{3} f_X(y_i) = 6 \prod_{i=1}^{3} 1 = 6$ if $0 < y_1 < y_2 < y_3 < 1$.
b) We have $f_{Y_1}(y_1) = 3 f_X(y_1)\left[1 - F_X(y_1)\right]^2 = 3(1 - y_1)^2$ if $0 < y_1 < 1$, and $f_{Y_3}(y_3) = 3 f_X(y_3)\left[F_X(y_3)\right]^2 = 3y_3^2$ whenever $0 < y_3 < 1$.
c) We have $E(R) = E(Y_3) - E(Y_1)$, so we compute $E(Y_3) = \int_0^1 y_3 \cdot 3y_3^2\,dy_3 = \frac{3}{4}\left[y_3^4\right]_0^1 = \frac{3}{4}$ and $E(Y_1) = \int_0^1 y_1 \cdot 3(1 - y_1)^2\,dy_1 = 3\int_0^1 \left(y_1 - 2y_1^2 + y_1^3\right)dy_1 = 3\left[\frac{y_1^2}{2} - \frac{2y_1^3}{3} + \frac{y_1^4}{4}\right]_0^1 = \frac{1}{4}$. Then, we have that $E(R) = E(Y_3) - E(Y_1) = \frac{3}{4} - \frac{1}{4} = \frac{1}{2}$. Alternatively, we could have found $f_{Y_1 Y_3}(y_1, y_3)$ and computed $E(R)$ as a double integral.
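A Monte Carlo check of the answer (a minimal sketch assuming Python with numpy) estimates $E(R)$ directly from simulated samples of size $3$.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((500_000, 3))                   # 500,000 samples of size n = 3 from UNIF(0,1)
r = samples.max(axis=1) - samples.min(axis=1)        # sample range R = Y3 - Y1
print(r.mean())                                      # should be close to E(R) = 1/2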