Tutorial 8
Markov Chains
Markov Chains
• Consider a sequence of random variables
X_0, X_1, …, where the set of possible values of
these random variables is {0, 1, …, M}.
• X_n : the state of some system at time n
X_n = i, where 0 ≤ i ≤ M
⇒ the system is in state i at time n
Markov Chains
• X_0, X_1, … form a Markov chain if
P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_1 = i_1, X_0 = i_0}
  = P{X_{n+1} = j | X_n = i}
  = P_{ij}
• P_{ij} = transition probability
= the probability that, given the system is in state i,
it will next be in state j
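To make the Markov property concrete, here is a minimal Python sketch (NumPy assumed; the two-state weather matrix from Example 1 below is used for illustration) in which the next state is drawn using only the current state:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two-state weather chain from Example 1 below:
    # state 0 = rainy day, state 1 = sunny day.
    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])

    def simulate(P, start, n_steps):
        """Simulate a path X_0, X_1, ..., X_n: the next state is drawn
        from row P[current], using only the current state (Markov property)."""
        states = [start]
        for _ in range(n_steps):
            states.append(int(rng.choice(len(P), p=P[states[-1]])))
        return states

    print(simulate(P, start=0, n_steps=10))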
Transition Matrix
• Transition probabilities P_{ij} satisfy
P_{ij} ≥ 0  and  Σ_{j=0}^{M} P_{ij} = 1,  i = 0, 1, …, M
• Transition matrix P:
P = [ P_{00}  P_{01}  …  P_{0M} ]
    [ P_{10}  P_{11}  …  P_{1M} ]
    [   ⋮       ⋮            ⋮  ]
    [ P_{M0}  P_{M1}  …  P_{MM} ]
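A quick sketch (again assuming NumPy) of checking both conditions, nonnegative entries and unit row sums, for a candidate matrix:

    import numpy as np

    def is_transition_matrix(P, tol=1e-9):
        """Return True if all entries are >= 0 and every row sums to 1."""
        P = np.asarray(P, dtype=float)
        return bool(np.all(P >= 0)) and np.allclose(P.sum(axis=1), 1.0, atol=tol)

    print(is_transition_matrix([[0.7, 0.3],
                                [0.4, 0.6]]))  # True
    print(is_transition_matrix([[0.5, 0.4],
                                [0.4, 0.6]]))  # False: first row sums to 0.9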
Example 1
• Suppose that whether or not it rains tomorrow
depends on previous weather conditions only
through whether or not it is raining today.
If it rains today, then it will rain tomorrow with
probability 0.7; and if it does not rain today, then it
will not rain tomorrow with probability 0.6.
Example 1
• Let state 0 be a rainy day and
state 1 be a sunny day.
• The above is a two-state Markov chain having
transition probability matrix
P = [ 0.7  0.3 ]
    [ 0.4  0.6 ]
• If the starting distribution on Day 0 is u = [1 0], then the
distribution on Day 1 is u^(1) = uP = [1 0] P = [0.7 0.3], and the
distribution on Day 2 is u^(2) = u^(1) P = uP^2 = [0.61 0.39].
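A short sketch (NumPy assumed) that reproduces these distributions by left-multiplying the starting vector by powers of P, i.e. the general formula u^(n) = uP^n of the next slide:

    import numpy as np

    P = np.array([[0.7, 0.3],   # state 0 = rainy
                  [0.4, 0.6]])  # state 1 = sunny
    u = np.array([1.0, 0.0])    # Day 0: raining for sure

    print(u @ P)                              # Day 1: [0.7 0.3]
    print(u @ np.linalg.matrix_power(P, 2))   # Day 2: [0.61 0.39]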
Transition matrix
• The probability that the chain is in state i
after n steps is the ith entry in the vector
u^(n) = uP^n
• where
P: transition matrix of the Markov chain
u: probability vector representing the
starting distribution
Ergodic Markov Chains
• A Markov chain is called an ergodic chain
(irreducible chain) if it is possible to go
from every state to every state (not
necessarily in one move).
• A Markov chain is called a regular chain if
some power of the transition matrix has
only positive elements.
Regular Markov Chains
• For a regular Markov chain with transition
matrix P, let W = lim_{n→∞} P^n. Then
πP = π,
where π is the common row of W
and π = [π_0 π_1 …].
• The ith entry of the vector π is the long-run
probability of state i.
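One way to see this numerically (NumPy assumed): raise the Example 1 matrix to a high power and observe that the rows agree, each converging to π:

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])

    # For a regular chain, P^n converges to W, whose rows all equal pi.
    W = np.linalg.matrix_power(P, 50)
    print(W)
    # [[0.57142857 0.42857143]
    #  [0.57142857 0.42857143]]  -> every row is [4/7, 3/7]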
Example 2
• From Example 1, the transition matrix is
P = [ 0.7  0.3 ]
    [ 0.4  0.6 ]
• π = [π_1 π_2] satisfies πP = π:
[π_1 π_2] = [π_1 π_2] [ 0.7  0.3 ]
                      [ 0.4  0.6 ]
• π_1 = 0.7π_1 + 0.4π_2  ⇒ π_1 = 4/7
  π_2 = 0.3π_1 + 0.6π_2  ⇒ π_2 = 3/7,
where π_1 + π_2 = 1.
• The long-run probability of a rainy day is 4/7.
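The same system can be solved directly (NumPy assumed): stack the balance equations πP = π, rewritten as (Pᵀ − I)πᵀ = 0, with the normalization Σπ_i = 1, and solve by least squares:

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    n = P.shape[0]

    # pi P = pi  <=>  (P^T - I) pi^T = 0; append the row sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)  # [0.57142857 0.42857143] = [4/7, 3/7]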
Markov chain with absorption state
• Example: consider the transition matrix
[  1    0    0    0  ]
[ 0.1  0.3  0.2  0.4 ]
[ 0.4  0.3  0.1  0.2 ]
[  0    0    0    1  ]
in which the first and last states are absorbing.
• Calculate
(i) the expected time to absorption
(ii) the absorption probabilities
MC with absorption state
• First rewrite the transition matrix in canonical form,
listing the transient states first and the absorbing states last:
P = [ 0.3  0.2  0.1  0.4 ]     [ Q  R ]
    [ 0.3  0.1  0.4  0.2 ]  =  [ 0  I ]
    [  0    0    1    0  ]
    [  0    0    0    1  ]
• N = (I − Q)^{-1} is called the fundamental matrix for P
• Entries of N:
n_{ij} = E(time in transient state j | start at transient state i)
MC with absorption state
1.5789 0.3509 

N  ( I  Q)  
 0.5263 1.2281 
1
1
 
1
 (i) E(time to absorb |start at i)= (( I  Q)  ...)i
1
 
1  1.5789 0.3509 1
 
( I  Q)    
1  0.5263 1.2281 1
1.9298 

 
1.7544 
1
13
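A sketch of the same computation (NumPy assumed), taking Q from the canonical form above:

    import numpy as np

    Q = np.array([[0.3, 0.2],
                  [0.3, 0.1]])

    # Fundamental matrix N = (I - Q)^{-1}.
    N = np.linalg.inv(np.eye(2) - Q)
    print(N)
    # [[1.5789 0.3509]
    #  [0.5263 1.2281]]

    # Expected time to absorption from each transient state: N @ 1.
    print(N @ np.ones(2))  # [1.9298 1.7544]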
MC with absorption state
• (ii) Absorption probabilities: B = NR, where
b_{ij} = P(absorbed in absorbing state j | start at transient state i)
B = (I − Q)^{-1} R = [ 1.5789  0.3509 ] [ 0.1  0.4 ] = [ 0.2983  0.7017 ]
                     [ 0.5263  1.2281 ] [ 0.4  0.2 ]   [ 0.5439  0.4561 ]
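And the matching computation for the absorption probabilities (NumPy assumed); note that each row of B sums to 1, since absorption in one of the two absorbing states is certain:

    import numpy as np

    Q = np.array([[0.3, 0.2],
                  [0.3, 0.1]])
    R = np.array([[0.1, 0.4],
                  [0.4, 0.2]])

    # Absorption probabilities: B = N R = (I - Q)^{-1} R.
    B = np.linalg.inv(np.eye(2) - Q) @ R
    print(B)
    # [[0.2983 0.7017]
    #  [0.5439 0.4561]]  -> each row sums to 1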