Continuous Time Markov Chains (CTMCs)
Stella Kapodistria and Jacques Resing
September 10th, 2012
ISP
In analogy with the definition of a discrete-time Markov chain, given in
Chapter 4, we say that the process {X (t) : t ≥ 0}, with state space S, is
a continuous-time Markov chain if for all s, t ≥ 0 and nonnegative
integers i, j, x(u), 0 ≤ u < s
P[X (t + s) = j|X (s) = i, X (u) = x(u), 0 ≤ u < s]
= P[X (t + s) = j|X (s) = i]
In other words, a continuous-time Markov chain is a stochastic process
having the Markovian property that the conditional distribution of the
future X (t + s) given the present X (s) and the past X (u), 0 ≤ u < s,
depends only on the present and is independent of the past.
If, in addition,
P[X (t + s) = j|X (s) = i]
is independent of s, then the continuous-time Markov chain is said to
have stationary or homogeneous transition probabilities.
In what follows, we will restrict our attention to time-homogeneous
Markov processes, i.e., continuous-time Markov chains with the property
that, for all s, t ≥ 0,
P[X (s + t) = j | X (s) = i] = P[X (t) = j | X (0) = i] = Pij (t).
The probabilities Pij (t) are called transition probabilities, and the |S| × |S|
matrix

         P00 (t)  P01 (t)  · · ·
P(t) =   P10 (t)  P11 (t)  · · ·
            ⋮        ⋮       ⋱
is called the transition probability matrix.
For every t, the matrix P(t) is a stochastic matrix (nonnegative entries, each row summing to 1).
Memoryless property
Suppose that a continuous-time Markov chain enters state i at some
time, say, time 0, and suppose that the process does not leave state i
(that is, a transition does not occur) during the next 10 minutes.
What is the probability that the process will not leave state i during the
following 5 minutes?
Now since the process is in state i at time 10 it follows, by the Markovian
property, that the probability that it remains in that state during the
interval [10, 15] is just the (unconditional) probability that it stays in
state i for at least 5 minutes. That is, if we let
Ti : the amount of time that the process stays in state i before making a
transition into a different state,
then
P[Ti > 15|Ti > 10] = P[Ti > 5].
Suppose that a continuous-time Markov chain enters state i at some
time, say, time 0, and suppose that the process does not leave state i
(that is, a transition does not occur) during the next s minutes.
What is the probability that the process will not leave state i during the
following t minutes?
With the same reasoning as before, if we let
Ti : the amount of time that the process stays in state i before making a
transition into a different state,
then
P[Ti > s + t|Ti > s] = P[Ti > t]
for all s, t ≥ 0. Hence, the random variable Ti is memoryless and must
thus (see Section 5.2.2) be exponentially distributed!
In fact, the preceding gives us another way of defining a continuous-time
Markov chain. Namely, it is a stochastic process having the properties
that each time it enters state i
(i) the amount of time it spends in that state before making a
transition into a different state is exponentially distributed with
mean, say, E [Ti ] = 1/vi , and
(ii) when the process leaves state i, it next enters state j with some
probability, say, Pij . Of course, the Pij must satisfy
Pii = 0, for all i,
Σj Pij = 1, for all i.
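This two-ingredient description — exponential holding rates vi and a jump matrix Pij — translates directly into a simulation. A minimal Python sketch (the two-state chain and its rates below are illustrative, not from the slides):

```python
import random

def simulate_ctmc(v, P, x0, t_end, rng):
    """Simulate a CTMC from holding rates v[i] and jump matrix P[i][j].

    Returns the total time spent in each state up to t_end.
    """
    time_in = [0.0] * len(v)
    t, x = 0.0, x0
    while t < t_end:
        hold = rng.expovariate(v[x])          # Exp(v_i) holding time
        time_in[x] += min(hold, t_end - t)
        t += hold
        if t < t_end:
            # choose the next state j with probability P[x][j]
            u, acc = rng.random(), 0.0
            for j, p in enumerate(P[x]):
                acc += p
                if u <= acc:
                    x = j
                    break
    return time_in

# Two-state example: 0 -> 1 at rate lam, 1 -> 0 at rate mu.
lam, mu = 2.0, 3.0
rng = random.Random(1)
tot = simulate_ctmc([lam, mu], [[0.0, 1.0], [1.0, 0.0]], 0, 50_000.0, rng)
frac0 = tot[0] / sum(tot)
# Long-run fraction of time in state 0 should approach mu/(lam + mu) = 0.6.
```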
Poisson process (i → i + 1)
X (t) = # of arrivals in (0, t]
State space {0, 1, 2, . . .}
0 −λ→ 1 −λ→ 2 −λ→ · · ·
Figure: Transition rate diagram of the Poisson Process
Ti ∼ Exp(λ), so vi = λ, i ≥ 0
Pij = 1, j = i + 1
Pij = 0, j ≠ i + 1.
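For the Poisson process, the number of arrivals in (0, t] is Poisson(λt), and this can be checked by simulating the exponential interarrival times directly; the rate, horizon, and number of runs below are illustrative:

```python
import math
import random

lam, t, n_runs = 1.5, 2.0, 200_000
rng = random.Random(7)

def n_arrivals(lam, t, rng):
    """Count how many Exp(lam) interarrival times land in (0, t]."""
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return n

counts = [0] * 20
for _ in range(n_runs):
    k = n_arrivals(lam, t, rng)
    if k < 20:
        counts[k] += 1

# Empirical P[X(t) = 3] vs the Poisson pmf e^{-lam t} (lam t)^3 / 3!.
emp = counts[3] / n_runs
theory = math.exp(-lam * t) * (lam * t) ** 3 / math.factorial(3)
```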
Birth-Death Process (i → i + 1 and i → i − 1)
X (t) = population size at time t
State space {0, 1, 2, . . .}
0 ⇄ 1 ⇄ 2 ⇄ · · ·  with birth rates λ0 , λ1 , λ2 , . . . and death rates µ1 , µ2 , µ3 , . . .
Figure: Transition rate diagram of the Birth-Death Process
Time till next ‘birth’ : Bi ∼ Exp(λi ), i ≥ 0
Time till next ‘death’ : Di ∼ Exp(µi ), i ≥ 1
Time till next transition : Ti = min{Bi , Di } ∼ Exp(λi + µi ), so vi = λi + µi , i ≥ 1,
and T0 ∼ Exp(λ0 ), so v0 = λ0
Pi,i+1 = λi /(λi + µi ),  Pi,i−1 = µi /(λi + µi ),  Pij = 0, j ≠ i ± 1, i ≥ 1
P01 = 1
Birth-Death (B-D) Process: Determine M(t)
X (t) = population size at time t, state space {0, 1, 2, . . .}
0 ⇄ 1 ⇄ 2 ⇄ · · ·  with birth rates α, λ + α, 2λ + α, . . . and death rates µ, 2µ, 3µ, . . .
Figure: Transition rate diagram of the Birth-Death Process
Suppose that X (0) = i and let M(t) = E [X (t)]
M(t + h) = E [X (t + h)] = E [E [X (t + h)|X (t)]]

X (t + h) =  X (t) + 1, with probability [α + X (t)λ]h + o(h)
             X (t) − 1, with probability X (t)µh + o(h)
             X (t),     otherwise
E [X (t + h)|X (t)] = (X (t) + 1)[αh + X (t)λh] + (X (t) − 1)[X (t)µh]
+ X (t)[1 − αh − X (t)λh − X (t)µh] + o(h)
= X (t) + αh + X (t)λh − X (t)µh + o(h)
Taking expectations,
M(t + h) = E [E [X (t + h)|X (t)]] = M(t) + αh + M(t)λh − M(t)µh + o(h)
Taking the limit as h → 0 yields a differential equation:
M′(t) = (λ − µ)M(t) + α  =⇒  M(t) = (α/(λ − µ) + i) e^{(λ−µ)t} − α/(λ − µ)
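The closed form for M(t) can be sanity-checked against a forward-Euler integration of the differential equation M′(t) = (λ − µ)M(t) + α; the parameter values and step count below are illustrative:

```python
import math

lam, mu, alpha, i0 = 1.2, 0.8, 0.5, 3.0   # X(0) = i0
t_end, n = 2.0, 200_000
h = t_end / n

# Euler integration of M'(t) = (lam - mu) M(t) + alpha, M(0) = i0.
M = i0
for _ in range(n):
    M += h * ((lam - mu) * M + alpha)

# Closed form from the derivation above.
c = alpha / (lam - mu)
M_closed = (c + i0) * math.exp((lam - mu) * t_end) - c
```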
Birth-Death (B-D) Process: First step analysis
Let Tij be the time to reach j for the first time starting from i. Then for
the B-D process
E [Ti,j ] = 1/(λi + µi ) + Pi,i+1 × E [Ti+1,j ] + Pi,i−1 × E [Ti−1,j ], i, j ≥ 1
This can be solved, starting from E [T0,1 ] = 1/λ0 .
Example
For the B-D process with λi = λ and µi = µ we obtain recursively
E [T0,1 ] = 1/λ
E [T1,2 ] = 1/(λ + µ) + P1,2 × 0 + P1,0 × E [T0,2 ]
          = 1/(λ + µ) + µ/(λ + µ) × (E [T0,1 ] + E [T1,2 ]) = (1/λ)[1 + µ/λ]
E [Ti,i+1 ] = (1/λ)[1 + µ/λ + µ²/λ² + · · · + µ^i /λ^i ]
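Rearranging the first-step equation gives the one-line recursion E[Ti,i+1] = (1 + µ E[Ti−1,i])/λ, so the geometric-sum closed form is easy to verify numerically (rates below are illustrative):

```python
lam, mu = 2.0, 1.5

# Recursion e_i = E[T_{i,i+1}] = (1 + mu * e_{i-1}) / lam, with e_0 = 1/lam,
# obtained by solving the first-step equation for e_i.
e = [1.0 / lam]
for i in range(1, 8):
    e.append((1.0 + mu * e[i - 1]) / lam)

# Closed form (1/lam) * sum_{k=0}^{i} (mu/lam)^k from the example.
r = mu / lam
closed = [(1.0 / lam) * sum(r ** k for k in range(i + 1)) for i in range(8)]
```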
Similarly, for the Laplace-Stieltjes transforms of the first passage times,
E [e^{−sTi,j} ] = (λi + µi )/(λi + µi + s) × [Pi,i+1 × E [e^{−sTi+1,j} ] + Pi,i−1 × E [e^{−sTi−1,j} ]], i, j ≥ 1
This can be solved, starting from E [e^{−sT0,1} ] = λ0 /(λ0 + s).
Example
For the B-D process with λi = λ and µi = µ we are interested in T1,0
(time to extinction):
E [e^{−sTi,0} ] = 1/(λ + µ + s) × [λE [e^{−sTi+1,0} ] + µE [e^{−sTi−1,0} ]], i ≥ 1
and E [e^{−sT0,0} ] = 1. For a fixed s this equation can be seen as a second
order difference equation yielding
E [e^{−sTi,0} ] = (1/(2λ)^i ) [λ + µ + s − √((λ + µ + s)² − 4λµ)]^i , i ≥ 1.
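Writing f = E[e^{−sT1,0}], the recursion together with E[e^{−sTi,0}] = f^i forces the quadratic λf² − (λ + µ + s)f + µ = 0, and the stated expression is its root in (0, 1]; this is easy to check numerically (parameter values illustrative):

```python
import math

lam, mu, s = 1.0, 2.0, 0.7
a = lam + mu + s

# LST of T_{1,0} from the slide (the i = 1 case).
f1 = (a - math.sqrt(a * a - 4 * lam * mu)) / (2 * lam)

# It must satisfy the fixed-point equation f1 = (lam*f1^2 + mu)/(lam+mu+s),
# i.e. the quadratic lam*f1^2 - (lam+mu+s)*f1 + mu = 0.
resid = lam * f1 * f1 - a * f1 + mu
```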
The Transition Probability Function Pij (t)
Let
Pij (t) = P[X (t + s) = j|X (s) = i]
denote the transition probabilities of the continuous-time Markov chain.
How do we calculate them?
Example
For the Poisson process with rate λ
Pij (t) = P[j − i jumps in (0, t]|X (0) = i]
        = P[j − i jumps in (0, t]]
        = e^{−λt} (λt)^{j−i} /(j − i)!
Example
For the Birth process with rates λi , i ≥ 0
Pij (t) = P[X (t) = j|X (0) = i]
        = P[X (t) ≤ j|X (0) = i] − P[X (t) ≤ j − 1|X (0) = i]
        = P[Xi + · · · + Xj > t] − P[Xi + · · · + Xj−1 > t],
with Xk ∼ Exp(λk ), k = i, . . . , j. Hence, for λk all different,
P[Xi + · · · + Xj > t] = Σ_{k=i}^{j} e^{−λk t} Π_{r≠k} λr /(λr − λk ).
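The hypoexponential tail formula above can be checked against Monte Carlo simulation of a sum of independent exponentials; the (distinct, as the formula requires) rates below are illustrative:

```python
import math
import random

rates = [1.0, 2.0, 3.5]   # lambda_i, ..., lambda_j, all distinct
t = 1.0

def tail(rates, t):
    """P[X_i + ... + X_j > t] for independent Exp(rate) summands, distinct rates."""
    total = 0.0
    for k, lk in enumerate(rates):
        prod = 1.0
        for r, lr in enumerate(rates):
            if r != k:
                prod *= lr / (lr - lk)   # prod_{r != k} lambda_r / (lambda_r - lambda_k)
        total += math.exp(-lk * t) * prod
    return total

rng = random.Random(3)
n = 200_000
hits = sum(sum(rng.expovariate(l) for l in rates) > t for _ in range(n))
emp = hits / n
theory = tail(rates, t)
```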
Instantaneous Transition Rates
We shall derive a set of differential equations that the transition
probabilities Pij (t) satisfy in a general continuous-time Markov chain.
First we need a definition and a pair of lemmas.
Definition
For any pair of states i and j, let
qij = vi Pij
Since vi is the rate at which the process makes a transition when in state
i and Pij is the probability that this transition is into state j, it follows
that qij is the rate, when in state i, at which the process makes a
transition into state j. The quantities qij are called the instantaneous
transition rates.
vi = Σj vi Pij = Σj qij   and   Pij = qij /vi = qij / Σj qij
Lemma (6.2)
(a) limh→0 (1 − Pii (h))/h = vi
(b) limh→0 Pij (h)/h = qij , when i ≠ j.
Lemma (6.3 – Chapman-Kolmogorov equations)
For all s, t ≥ 0
Pij (t + s) = Σ_{k∈S} Pik (t)Pkj (s),   i.e.,   P(t + s) = P(t)P(s)
Theorem (6.1 – Kolmogorov’s Backward equations)
For all states i, j and times t ≥ 0
P′ij (t) = Σ_{k≠i} qik Pkj (t) − vi Pij (t)
with initial conditions Pii (0) = 1 and Pij (0) = 0 for all j ≠ i.
Theorem (6.2 – Kolmogorov’s Forward equations)
Under suitable regularity conditions
P′ij (t) = Σ_{k≠j} Pik (t)qkj − vj Pij (t)
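In matrix form the forward equations read P′(t) = P(t)Q. For the two-state chain (0 → 1 at rate λ, 1 → 0 at rate µ) they can be integrated numerically and compared with the well-known closed form P00(t) = µ/(λ+µ) + λ/(λ+µ) e^{−(λ+µ)t}; a forward-Euler sketch with illustrative rates and step size:

```python
import math

lam, mu = 1.0, 2.0
Q = [[-lam, lam], [mu, -mu]]        # generator of the two-state chain
t_end, n = 1.5, 100_000
h = t_end / n

# Euler steps for the forward equations P'(t) = P(t) Q, P(0) = I.
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    P = [[P[i][0] + h * (P[i][0] * Q[0][0] + P[i][1] * Q[1][0]),
          P[i][1] + h * (P[i][0] * Q[0][1] + P[i][1] * Q[1][1])]
         for i in range(2)]

# Known closed form for the two-state chain.
p00 = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t_end)
```

Since each row of Q sums to 0, the Euler steps preserve the row sums of P(t), so P(t) stays stochastic up to rounding.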
CTMC and Limiting probabilities
Assuming that limt→∞ Pij (t) exists, i.e.,
(a) all states communicate
(b) the MC is positive recurrent (finite mean return time)
we can define limt→∞ Pij (t) = Pj . Taking the limit as t → ∞ in the
Kolmogorov forward equations then yields
vj Pj = Σ_{k≠j} qkj Pk
This set of equations, together with the normalization equation Σj Pj = 1,
can be solved to obtain the limiting probabilities.
Interpretations of Pj :
Limiting distribution
Long run fraction of time spent in j
Stationary distribution
When the limiting probabilities exist we say that the MC is ergodic.
The Pj are sometimes called stationary probabilities since it can be shown
that if the initial state is chosen according to the distribution {Pj }, then
the probability of being in state j at time t is Pj , for all t.
In matrix notation, defining the row vector P = (Pj ), the limit as t → ∞ of
the Kolmogorov forward equations yields
PQ = 0
This system, together with the normalization equation Σj Pj = 1, can be
solved to obtain the limiting probabilities.
The matrix Q is called the generator matrix and contains all transition rates:

        −v0   q01   · · ·
Q =     q10   −v1   · · ·
         ⋮     ⋮     ⋱

The sum of all elements in each row is 0.
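One simple numerical route to solving PQ = 0 with Σj Pj = 1 is power iteration on the stochastic matrix I + Q/v for some v larger than every vi : its stationary vector is the same as the CTMC's. A sketch on a three-state birth-death generator (rates illustrative):

```python
lam, mu = 1.0, 2.0
Q = [[-lam, lam, 0.0],
     [mu, -(lam + mu), lam],
     [0.0, mu, -mu]]

# Stochastic matrix S = I + Q/v with v > max v_i (diagonal stays positive,
# so S is aperiodic and power iteration converges).
v = 2 * max(-Q[i][i] for i in range(3))
S = [[(1.0 if i == j else 0.0) + Q[i][j] / v for j in range(3)] for i in range(3)]

pi = [1 / 3] * 3
for _ in range(5000):
    pi = [sum(pi[i] * S[i][j] for i in range(3)) for j in range(3)]

# Birth-death check: pi_i should be proportional to (lam/mu)^i.
rho = lam / mu
norm = 1 + rho + rho * rho
expected = [1 / norm, rho / norm, rho * rho / norm]
```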
Birth-Death (B-D) Process and Limiting probabilities
X (t) = population size at time t, state space {0, 1, 2, . . .}
0 ⇄ 1 ⇄ 2 ⇄ · · ·  with birth rates λ0 , λ1 , λ2 , . . . and death rates µ1 , µ2 , µ3 , . . .
Figure: Transition rate diagram of the Birth-Death Process
From the balance argument “Outflow of state j = Inflow of state j”, we
obtain the following system of equations for the limiting probabilities:
P0 λ0 = P1 µ1 ,
P1 (λ1 + µ1 ) = P0 λ0 + P2 µ2 ,
  ⋮
Pi (λi + µi ) = Pi−1 λi−1 + Pi+1 µi+1 , i = 2, 3, . . .
By adding to each equation the equation preceding it, we obtain
P0 λ0 = P1 µ1 ,
P1 λ1 = P2 µ2 ,
  ⋮
Pi λi = Pi+1 µi+1 , i = 2, 3, . . .
Solving yields
Pi = (λi−1 λi−2 · · · λ0 )/(µi µi−1 · · · µ1 ) P0 , i = 1, 2, . . .
Pi = (λi−1 λi−2 · · · λ0 )/(µi µi−1 · · · µ1 ) P0 , i = 1, 2, . . .
And by using the fact that Σ_{j=0}^{∞} Pj = 1 we can solve for P0 :
P0 = (1 + Σ_{i=1}^{∞} (λi−1 λi−2 · · · λ0 )/(µi µi−1 · · · µ1 ))^{−1}
Pi = (λi−1 λi−2 · · · λ0 )/(µi µi−1 · · · µ1 ) P0 , i = 1, 2, . . .
P0 = (1 + Σ_{i=1}^{∞} (λi−1 λi−2 · · · λ0 )/(µi µi−1 · · · µ1 ))^{−1}
It is necessary and sufficient for the limiting probabilities to exist that
Σ_{i=1}^{∞} (λi−1 λi−2 · · · λ0 )/(µi µi−1 · · · µ1 ) < ∞
Otherwise, the MC is transient or null-recurrent and has no limiting distribution.
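For constant rates λi = λ and µi = µ with ρ = λ/µ < 1, the product formula reduces to the geometric distribution Pi = (1 − ρ)ρ^i, which a truncated numerical evaluation confirms (rates and truncation level illustrative):

```python
lam, mu = 1.0, 2.5
rho = lam / mu                 # rho < 1, so the series converges

# Product terms lam_{i-1}...lam_0 / (mu_i...mu_1), truncated at 200 states
# (the neglected tail is of order rho^200, which is negligible here).
terms = [1.0]
for i in range(1, 200):
    terms.append(terms[-1] * lam / mu)
P0 = 1.0 / sum(terms)
P = [P0 * t for t in terms]

geometric = [(1 - rho) * rho ** i for i in range(200)]
```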
CTMC and Limiting probabilities: Balance equations
From the balance argument “Outflow of state j = Inflow of state j”, we
obtain the following system of equations for the limiting probabilities,
called balance equations:
vj Pj = Σ_{k≠j} qkj Pk
In many cases we can reduce the original system of balance equations by
selecting an appropriate set A of states and balancing the flow across its
boundary, summing over all states j ∈ A:
Σ_{j∈A} Pj Σ_{i∉A} qji = Σ_{i∉A} Pi Σ_{j∈A} qij
Embedded Markov chain
Let {X (t), t ≥ 0} be an irreducible and positive recurrent CTMC with
parameters vi and Pij = qij /vi . Let Yn be the state of X (t) after n
transitions. Then the stochastic process {Yn , n = 0, 1, . . .} is a DTMC with
transition probabilities Pij (Pij = qij /vi for i ≠ j, and Pjj = 0). This process
is called the embedded chain.
The equilibrium probabilities πi of this embedded Markov chain satisfy
πi = Σ_{j∈S} πj Pji .
Then the equilibrium probabilities of the Markov process can be
computed by multiplying the equilibrium probabilities of the embedded
chain by the mean times spent in the various states. This leads to
Pi = C πi /vi
where the constant C is determined by the normalization condition.
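The embedded-chain route can be sketched on a three-state birth-death chain; the lazy iteration below (averaging π with πP, which leaves the equilibrium unchanged) is used only because this small embedded chain is periodic. Rates are illustrative:

```python
lam, mu = 1.0, 2.0
v = [lam, lam + mu, mu]                       # holding rates v_i
P = [[0.0, 1.0, 0.0],                         # embedded jump chain P_ij = q_ij / v_i
     [mu / (lam + mu), 0.0, lam / (lam + mu)],
     [0.0, 1.0, 0.0]]

# The embedded chain alternates between {0, 2} and {1} (periodic), so
# iterate the lazy chain (I + P)/2, which has the same equilibrium pi.
pi = [1 / 3] * 3
for _ in range(2000):
    step = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    pi = [0.5 * pi[j] + 0.5 * step[j] for j in range(3)]

# Weight by the mean holding times: P_i = C * pi_i / v_i.
w = [pi[i] / v[i] for i in range(3)]
C = 1.0 / sum(w)
P_eq = [C * wi for wi in w]
# For this birth-death chain, P_eq should match the product form
# proportional to (lam/mu)^i, i.e. [4/7, 2/7, 1/7].
```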
Uniformization
Consider a continuous-time Markov chain in which the mean time spent
in a state is the same for all states.
That is, suppose that vi = v , for all states i. In this case since the
amount of time spent in each state during a visit is exponentially
distributed with rate v , it follows that if we let
N(t) denote the number of state transitions by time t,
then {N(t), t ≥ 0} will be a Poisson process with rate v .
To compute the transition probabilities Pij (t), we can condition on N(t):
Pij (t) = P[X (t) = j|X (0) = i]
        = Σ_{n=0}^{∞} P[X (t) = j|X (0) = i, N(t) = n] P[N(t) = n|X (0) = i]
        = Σ_{n=0}^{∞} P[X (t) = j|X (0) = i, N(t) = n] e^{−vt} (vt)^n /n!
        = Σ_{n=0}^{∞} P^n_ij e^{−vt} (vt)^n /n!,
where P^n_ij is the n-step transition probability of the embedded jump chain.
Given that the vi are bounded, set v ≥ maxi {vi }.
Now when in state i, the process actually leaves at rate vi , but this is
equivalent to supposing that transitions occur at rate v , but only the
fraction vi /v of transitions are real ones and the remaining fraction
1 − vi /v are fictitious transitions which leave the process in state i.
Any Markov chain with vi bounded can be thought of as a process
that spends an exponential amount of time with rate v in state i and
then makes a transition to j with probability
P*ij = 1 − vi /v ,      j = i
P*ij = (vi /v )Pij ,    j ≠ i
Hence, the transition probabilities can be computed by
Pij (t) = Σ_{n=0}^{∞} P*^n_ij e^{−vt} (vt)^n /n!
This technique of uniformizing the rate at which a transition occurs from
each state by introducing transitions from a state to itself is known as
uniformization.
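A sketch of the uniformization series for the two-state chain (0 → 1 at rate λ, 1 → 0 at rate µ, so v = λ + µ works), compared against its known closed form P00(t) = µ/v + (λ/v) e^{−vt}; rates and truncation level are illustrative:

```python
import math

lam, mu = 1.0, 2.0
v = lam + mu
# Uniformized jump matrix P*_ij (self-loops allowed): here v_0 = lam, v_1 = mu.
Pstar = [[1 - lam / v, lam / v],
         [mu / v, 1 - mu / v]]

t = 0.8

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Truncated series  P(t) = sum_n (P*)^n e^{-vt} (vt)^n / n!.
Pn = [[1.0, 0.0], [0.0, 1.0]]    # (P*)^0 = I
Pt = [[0.0, 0.0], [0.0, 0.0]]
weight = math.exp(-v * t)        # Poisson weight e^{-vt} (vt)^n / n!, n = 0
for n in range(60):
    for i in range(2):
        for j in range(2):
            Pt[i][j] += weight * Pn[i][j]
    Pn = mat_mult(Pn, Pstar)
    weight *= v * t / (n + 1)

# Closed form for the two-state chain.
p00 = mu / v + lam / v * math.exp(-v * t)
```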
Summary CTMC
1 Memoryless property
2 Poisson process
3 Birth-Death Process
Determine M(t) = E [X (t)]
First step analysis
4 The Transition Probability Function Pij (t)
Instantaneous Transition Rates
5 Limiting probabilities
Balance equations
6 Embedded Markov chain
7 Uniformization
Exercises
Sheldon M. Ross, Introduction to Probability Models,
Harcourt/Academic Press, San Diego, 9th ed., 2007
Chapter 6, Sections 6.1, 6.2, 6.3, 6.4, 6.5 and 6.7
Exercises: 1, 3, 5, 6(a)-(b), 8, 10 (but you do not have to verify that the
transition probabilities satisfy the forward and backward equations),
12, 13, 15, 17, 18, 22, 23+
1. Consider a Markov process with states 0, 1 and 2 and with the
   following transition rate matrix Q:

        −λ      λ       0
   Q =   µ   −(λ + µ)   λ
         0      µ      −µ

   where λ, µ > 0.
   a. Derive the parameters vi and Pij for this Markov process.
   b. Determine the expected time to go from state 1 to state 0.
2. Consider the following queueing model: customers arrive at a service
   station according to a Poisson process with rate λ. There are c
   servers; the service times are exponential with rate µ. If an arriving
   customer finds all c servers busy, then he leaves the system immediately.
   a. Model this system as a birth and death process.
   b. Suppose now that there are infinitely many servers (c = ∞).
      Again model this system as a birth and death process.