The Trick of One-Step Conditioning
STAT253/317 Winter 2014 Lecture 7
Yibi Huang
January 24, 2014

Outline
• The Trick of One-Step Conditioning
• 4.5.3 Random Walk w/ Reflective Boundary at 0
• 4.7 Branching Processes

Many Markov chains {Xn} have an iterative relationship between consecutive terms, e.g.,

    Xn+1 = g(Xn, ξn+1)   for all n,

where {ξn, n = 0, 1, 2, . . .} are i.i.d. random variables and Xn is independent of {ξk : k > n}.

In many cases, we can use the iterative relationship to find E[Xn] and Var(Xn) without knowing the distribution of Xn, by conditioning on the previous step:

    E[Xn+1] = E[E[Xn+1 | Xn]]
    Var(Xn+1) = E[Var(Xn+1 | Xn)] + Var(E[Xn+1 | Xn])
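The two identities above (the tower property and the conditional variance formula) can be checked numerically by exact enumeration; the two-state chain and its numbers below are purely illustrative choices, not from the lecture.

```python
from itertools import product

# Toy joint distribution: X0 ∈ {0, 1} with P(X0=0)=0.6, and a
# transition matrix P; X1 | X0=i has distribution P[i].
p0 = {0: 0.6, 1: 0.4}
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}

joint = {(i, j): p0[i] * P[i][j] for i, j in product([0, 1], repeat=2)}

# Direct moments of X1
EX1 = sum(pr * j for (i, j), pr in joint.items())
VarX1 = sum(pr * (j - EX1) ** 2 for (i, j), pr in joint.items())

# Conditional moments given X0 = i
cond_mean = {i: sum(P[i][j] * j for j in (0, 1)) for i in (0, 1)}
cond_var = {i: sum(P[i][j] * (j - cond_mean[i]) ** 2 for j in (0, 1))
            for i in (0, 1)}

# Tower property: E[X1] = E[E[X1|X0]]
tower = sum(p0[i] * cond_mean[i] for i in (0, 1))

# Conditional variance formula: Var(X1) = E[Var(X1|X0)] + Var(E[X1|X0])
E_cond_var = sum(p0[i] * cond_var[i] for i in (0, 1))
Var_cond_mean = sum(p0[i] * (cond_mean[i] - tower) ** 2 for i in (0, 1))

assert abs(EX1 - tower) < 1e-12
assert abs(VarX1 - (E_cond_var + Var_cond_mean)) < 1e-12
```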
Example 1: Simple Random Walk

    Xn+1 = { Xn + 1  with prob p
           { Xn − 1  with prob q = 1 − p

So

    E[Xn+1 | Xn] = p(Xn + 1) + q(Xn − 1) = Xn + p − q
    Var(Xn+1 | Xn) = 4pq

(the step Xn+1 − Xn has mean p − q and second moment 1, so its variance is 1 − (p − q)² = 4pq). Thus

    E[Xn+1] = E[E[Xn+1 | Xn]] = E[Xn] + p − q

and

    Var(Xn+1) = E[Var(Xn+1 | Xn)] + Var(E[Xn+1 | Xn])
              = E[4pq] + Var(Xn + p − q) = 4pq + Var(Xn)

Iterating,

    E[Xn] = n(p − q) + E[X0],    Var(Xn) = 4npq + Var(X0)
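These closed forms can be verified by propagating the distribution of Xn exactly; p = 0.7, n = 25, and X0 = 0 are arbitrary illustrative choices (X0 fixed, so Var(X0) = 0).

```python
# Exact check of E[Xn] = n(p−q) + X0 and Var(Xn) = 4npq
# for the simple random walk with a constant start X0.
p, q, n, x0 = 0.7, 0.3, 25, 0
dist = {x0: 1.0}  # distribution of X0
for _ in range(n):
    nxt = {}
    for x, pr in dist.items():
        nxt[x + 1] = nxt.get(x + 1, 0.0) + pr * p  # step up w.p. p
        nxt[x - 1] = nxt.get(x - 1, 0.0) + pr * q  # step down w.p. q
    dist = nxt
mean = sum(x * pr for x, pr in dist.items())
var = sum((x - mean) ** 2 * pr for x, pr in dist.items())
assert abs(mean - (n * (p - q) + x0)) < 1e-9
assert abs(var - 4 * n * p * q) < 1e-9
```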
Example 2: Ehrenfest Urn Model with M Balls

Recall that

    Xn+1 = { Xn + 1  with probability (M − Xn)/M
           { Xn − 1  with probability Xn/M

We have

    E[Xn+1 | Xn] = (Xn + 1) × (M − Xn)/M + (Xn − 1) × Xn/M
                 = 1 + (1 − 2/M) Xn

Then

    E[Xn+1] = E[E[Xn+1 | Xn]] = 1 + (1 − 2/M) E[Xn]

Subtracting M/2 from both sides of the equation above, we have

    E[Xn+1] − M/2 = (1 − 2/M)(E[Xn] − M/2)

Thus

    E[Xn] − M/2 = (1 − 2/M)^n (E[X0] − M/2)
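The closed form above can be checked against direct iteration of the mean recursion; M = 10, E[X0] = 2, and n = 30 are illustrative values, not from the lecture.

```python
# Checks E[Xn] − M/2 = (1 − 2/M)^n (E[X0] − M/2) for the Ehrenfest
# urn against direct iteration of E[Xn+1] = 1 + (1 − 2/M) E[Xn].
M, e0, n = 10, 2.0, 30
e = e0
for _ in range(n):
    e = 1 + (1 - 2 / M) * e  # one-step mean recursion
closed = M / 2 + (1 - 2 / M) ** n * (e0 - M / 2)
assert abs(e - closed) < 1e-12
```

Note that the mean relaxes geometrically toward the fixed point M/2, the balanced configuration.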
Example 3: Branching Processes (Section 4.7)

Consider a population of individuals.

◮ All individuals have the same lifetime
◮ Each individual will produce a random number of offspring at the end of its life

Let Xn = size of the n-th generation, n = 0, 1, 2, . . . If Xn−1 = k, the k individuals in the (n − 1)-th generation will independently produce Zn,1, Zn,2, . . . , Zn,k new offspring, and Zn,1, Zn,2, . . . , Zn,Xn−1 are i.i.d. such that

    P(Zn,i = j) = Pj,   j ≥ 0.

We suppose that Pj < 1 for all j ≥ 0. Then

    Xn = Σ_{i=1}^{Xn−1} Zn,i    (1)

and {Xn} is a Markov chain with state space {0, 1, 2, . . .}.

Mean of a Branching Process

Let µ = E[Zn,i] = Σ_{j=0}^{∞} j Pj. Since Xn = Σ_{i=1}^{Xn−1} Zn,i, we have

    E[Xn | Xn−1] = E[Σ_{i=1}^{Xn−1} Zn,i | Xn−1] = Xn−1 E[Zn,i] = Xn−1 µ

So

    E[Xn] = E[E[Xn | Xn−1]] = E[Xn−1 µ] = µ E[Xn−1]

If X0 = 1, then

    E[Xn] = µ E[Xn−1] = µ² E[Xn−2] = . . . = µ^n E[X0] = µ^n

◮ If µ < 1 ⇒ E[Xn] → 0 as n → ∞ ⇒ limn→∞ P(Xn ≥ 1) = 0: the branching process will eventually die out.
◮ What if µ = 1 or µ > 1?
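The mean formula E[Xn] = µ^n can be checked by Monte Carlo; the offspring distribution below (P0 = 0.25, P1 = 0.25, P2 = 0.5, so µ = 1.25) is an illustrative assumption, and the seed and tolerance are chosen so the sampling error stays well inside the asserted bound.

```python
import random

# Monte Carlo check of E[Xn] = µ^n for X0 = 1, under an assumed
# offspring distribution P0 = 0.25, P1 = 0.25, P2 = 0.5.
vals, weights = [0, 1, 2], [0.25, 0.25, 0.5]
mu = sum(v * w for v, w in zip(vals, weights))  # µ = 1.25

def generation_size(n, rng):
    x = 1  # X0 = 1
    for _ in range(n):
        # each of the x current individuals reproduces independently
        x = sum(rng.choices(vals, weights=weights, k=x)) if x else 0
    return x

rng = random.Random(2024)
n, reps = 6, 40000
avg = sum(generation_size(n, rng) for _ in range(reps)) / reps
# E[X6] = 1.25^6 ≈ 3.815; Monte Carlo, so allow sampling error
assert abs(avg - mu ** n) < 0.15
```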
Variance of a Branching Process

Let σ² = Var(Zn,i) = Σ_{j=0}^{∞} (j − µ)² Pj. Var(Xn) may be obtained using the conditional variance formula

    Var(Xn) = E[Var(Xn | Xn−1)] + Var(E[Xn | Xn−1]).

Again from Xn = Σ_{i=1}^{Xn−1} Zn,i, we have

    E[Xn | Xn−1] = Xn−1 µ,    Var(Xn | Xn−1) = Xn−1 σ²

and hence (with X0 = 1)

    Var(E[Xn | Xn−1]) = Var(Xn−1 µ) = µ² Var(Xn−1)
    E[Var(Xn | Xn−1)] = σ² E[Xn−1] = σ² µ^{n−1}.

So

    Var(Xn) = σ² µ^{n−1} + µ² Var(Xn−1)
            = σ² (µ^{n−1} + µ^n + . . . + µ^{2n−2}) + µ^{2n} Var(X0)
            = { σ² µ^{n−1} [1 − µ^n]/(1 − µ) + µ^{2n} Var(X0)   if µ ≠ 1
              { n σ² + µ^{2n} Var(X0)                           if µ = 1
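Both branches of the variance formula can be checked against the one-step recursion Var(Xn) = σ²µ^{n−1} + µ²Var(Xn−1); µ = 1.25, σ² = 0.6875, and n = 12 are illustrative values, with X0 = 1 so Var(X0) = 0.

```python
# Checks the geometric-series closed form for Var(Xn) against the
# recursion Var(Xn) = σ²µ^(n−1) + µ²Var(Xn−1), with Var(X0) = 0.
mu, sigma2, n = 1.25, 0.6875, 12
v = 0.0  # Var(X0)
for k in range(1, n + 1):
    v = sigma2 * mu ** (k - 1) + mu ** 2 * v
closed = sigma2 * mu ** (n - 1) * (1 - mu ** n) / (1 - mu)
assert abs(v - closed) < 1e-9

# µ = 1 case: the same recursion gives Var(Xn) = nσ²
v1 = 0.0
for k in range(1, n + 1):
    v1 = sigma2 + v1
assert abs(v1 - n * sigma2) < 1e-12
```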
4.5.3 Random Walk w/ Reflective Boundary at 0

◮ State space = {0, 1, 2, . . .}
◮ P01 = 1, Pi,i+1 = p, Pi,i−1 = 1 − p = q, for i = 1, 2, 3, . . .
◮ Only one class, irreducible
◮ For i < j, define

      Nij = min{m > 0 : Xm = j | X0 = i}
          = time to reach state j starting in state i

◮ Observe that N0n = N01 + N12 + . . . + Nn−1,n
◮ By the Markov property, N01, N12, . . . , Nn−1,n are indep.
◮ Given X0 = i,

      Ni,i+1 = { 1                          if X1 = i + 1
               { 1 + N*i−1,i + N*i,i+1      if X1 = i − 1    (2)

  where N*i−1,i ∼ Ni−1,i, N*i,i+1 ∼ Ni,i+1, and N*i−1,i, N*i,i+1 are independent.
4.5.3 Random Walk w/ Reflective Boundary at 0 (Cont'd)

Let mi = E(Ni,i+1). Taking expected values in Equation (2), we get

    mi = E[Ni,i+1] = 1 + q E[N*i−1,i] + q E[N*i,i+1] = 1 + q(mi−1 + mi)

Rearranging terms, we get p mi = 1 + q mi−1, or

    mi = 1/p + (q/p) mi−1
       = 1/p + (q/p)(1/p + (q/p) mi−2)
       = (1/p)[1 + q/p + (q/p)² + . . . + (q/p)^{i−1}] + (q/p)^i m0

Since N01 = 1, we have m0 = 1. Hence

    mi = { [1 − (q/p)^i]/(p − q) + (q/p)^i   if p ≠ 0.5
         { 2i + 1                            if p = 0.5
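The closed form for mi can be checked against the recursion mi = 1/p + (q/p)mi−1 with m0 = 1; the values of p below are illustrative.

```python
# Checks the closed form for m_i = E[N_{i,i+1}] against the
# recursion m_i = 1/p + (q/p) m_{i−1}, m_0 = 1.
def m_closed(i, p):
    q = 1 - p
    if p == 0.5:
        return 2 * i + 1
    return (1 - (q / p) ** i) / (p - q) + (q / p) ** i

for p in (0.7, 0.4, 0.5):
    q = 1 - p
    m = 1.0  # m_0 = 1 since N_01 = 1
    for i in range(1, 20):
        m = 1 / p + (q / p) * m
        assert abs(m - m_closed(i, p)) < 1e-6
```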
Mean of N0,n

Recall that N0n = N01 + N12 + . . . + Nn−1,n, so

    E[N0n] = m0 + m1 + . . . + mn−1
           = { n/(p − q) − [2pq/(p − q)²][1 − (q/p)^n]   if p ≠ 0.5
             { n²                                        if p = 0.5

When

    p > 0.5:  E[N0n] ≈ n/(p − q) − 2pq/(p − q)²       linear in n
    p = 0.5:  E[N0n] = n²                             quadratic in n
    p < 0.5:  E[N0n] = O([2pq/(p − q)²] (q/p)^n)      exponential in n
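The formula for E[N0n] can be checked by summing the mi closed forms directly; the values of p and n below are illustrative.

```python
# Checks E[N_0n] = m_0 + ... + m_{n−1} against the closed form
# n/(p−q) − [2pq/(p−q)²][1 − (q/p)^n], and n² when p = 0.5.
def m(i, p):
    q = 1 - p
    if p == 0.5:
        return 2 * i + 1
    return (1 - (q / p) ** i) / (p - q) + (q / p) ** i

def EN0n_closed(n, p):
    q = 1 - p
    if p == 0.5:
        return n ** 2
    return n / (p - q) - (2 * p * q / (p - q) ** 2) * (1 - (q / p) ** n)

for p in (0.8, 0.6, 0.5, 0.3):
    for n in (1, 5, 15):
        s = sum(m(i, p) for i in range(n))
        assert abs(s - EN0n_closed(n, p)) < 1e-6
```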