# MARKOV CHAIN

Transcript
```
DISCRETE-TIME MARKOV CHAIN (CONTINUATION)
PROBABILITY OF ABSORPTION
If state j is an absorbing state, what is the probability of going
from state i to state j? Let us denote this probability by A_ij.
Finding these probabilities is not straightforward, especially
when there are two or more absorbing states in the chain.
What we can do is consider all the possibilities for the first
transition and then, given that first transition, the conditional
probability of absorption into state j. Conditioning on the first
step gives

    A_ij = Σ_{k=0}^{M} p_ik A_kj
We can obtain the probabilities by solving the system of linear
equations

    A_ij = Σ_{k=0}^{M} p_ik A_kj    for i = 0, 1, …, M

subject to

    A_jj = 1
    A_kj = 0 if state k is recurrent and k ≠ j
EXERCISE: FIND A_13

[Transition diagram: four states 0, 1, 2, 3. States 0 and 3 appear
to be absorbing; the interior transitions are labeled p = 0.7 and 0.3.]
ENDING SLIDES ABOUT MARKOV CHAINS
TIME REVERSIBLE MARKOV CHAINS
Consider a stationary (i.e., one that has been in operation for a
long time) ergodic Markov chain having transition probabilities p_ij
and stationary probabilities π_i. Suppose that, starting at some
time, we trace the sequence of states going backward in time.
Starting at time n, the reversed process X_n, X_{n-1}, X_{n-2}, …, X_0
is also a Markov chain! Its transition probabilities are

    q_ij = π_j p_ji / π_i

If q_ij = p_ij for all i, j, then the Markov chain is time reversible.
Equivalently, π_i p_ij = π_j p_ji, which means the rate at which the
process goes from i to j equals the rate at which it goes from j to i.
FOR REPORTING:
• Hidden Markov chains applied to data analytics/mining
• Markov chain Monte Carlo in data fitting
```
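The absorption-probability system above reduces to a linear solve over the transient states. Here is a minimal sketch in Python using numpy; the four-state transition matrix is an assumption modeled on the exercise diagram (states 0 and 3 taken as absorbing, with the interior states moving up with probability 0.7 and down with probability 0.3), not a matrix given in the slides.

```python
import numpy as np

def absorption_probabilities(P, j):
    """Probabilities A_kj of eventual absorption into absorbing state j.

    Solves A_ij = sum_k p_ik A_kj subject to A_jj = 1 and A_kj = 0 for
    every other absorbing state k, via (I - Q) x = R on the transient block.
    """
    M = P.shape[0]
    absorbing = [k for k in range(M) if P[k, k] == 1.0]
    transient = [k for k in range(M) if k not in absorbing]
    Q = P[np.ix_(transient, transient)]  # transient-to-transient block
    R = P[np.ix_(transient, [j])]        # one-step transitions into state j
    x = np.linalg.solve(np.eye(len(transient)) - Q, R).ravel()
    A = np.zeros(M)
    A[transient] = x
    A[j] = 1.0  # boundary conditions: A_jj = 1, A_kj = 0 for other absorbing k
    return A

# Assumed chain modeled on the exercise diagram: states 0 and 3 absorbing,
# interior states move up with probability 0.7 and down with probability 0.3.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],  # state 0 (absorbing)
    [0.3, 0.0, 0.7, 0.0],  # state 1
    [0.0, 0.3, 0.0, 0.7],  # state 2
    [0.0, 0.0, 0.0, 1.0],  # state 3 (absorbing)
])
A = absorption_probabilities(P, 3)
print(A[1])  # A_13 under the assumed matrix
```

Grouping the transient states isolates the unknowns, so the conditioning identity becomes an ordinary square linear system that `np.linalg.solve` handles directly.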
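The detailed-balance characterization π_i p_ij = π_j p_ji can be checked numerically once the stationary distribution is known. A minimal sketch follows; the example chains are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary probabilities: solve pi P = pi together with sum(pi) = 1."""
    M = P.shape[0]
    # Drop one (redundant) balance equation and append the normalization row.
    A = np.vstack([(P.T - np.eye(M))[:-1], np.ones(M)])
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def is_time_reversible(P, tol=1e-10):
    """Detailed balance: pi_i * p_ij == pi_j * p_ji for all i, j."""
    pi = stationary_distribution(P)
    flows = pi[:, None] * P  # flows[i, j] = pi_i * p_ij
    return np.allclose(flows, flows.T, atol=tol)

# Illustrative (assumed) example: any two-state ergodic chain is reversible.
P2 = np.array([[0.4, 0.6],
               [0.2, 0.8]])
pi = stationary_distribution(P2)
# Reversed-chain transition probabilities q_ij = pi_j * p_ji / pi_i
Q = (pi[None, :] * P2.T) / pi[:, None]
print(is_time_reversible(P2), np.allclose(Q, P2))
```

When the chain is reversible, the reversed-chain matrix Q coincides with P itself, which is exactly the condition q_ij = p_ij stated above.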