Applied Mathematics 2016, 6(2): 36-39
DOI: 10.5923/j.am.20160602.03
Applied Problems of Markov Processes
Roza Shakenova*, Alua Shakenova
Kazakh National Research Technical University Named after K. I. Satpaev, Almaty, Kazakhstan
Abstract In this paper the authors propose that a human being's state of health can be described as a function of three
main factors: economics, ecology, and government policy. We obtain three models of the state of health, from the worst
to the best, using Markov processes. We hope that our theoretical models can be applied in practice.
Keywords Limiting probability, Probability of state, Markov Processes
1. Introduction
Markov chains can be used to describe various economic and ecological problems. The main problem is to
find the final (limit) probabilities of the different states of a certain system. For a system with discrete
states we may use a graph: the vertices of the graph correspond to the states of the
system (Picture #1), and the arrows between vertices show
the possible transfers of the system from one state to another.
The final states of a human being's health are very important
in medicine. Therefore, this paper considers the problem of
determining the final probabilities of a human being's state of
health. Let us recall the basic definitions.
The law of distribution of a random variable is any relationship between the possible values of the random
variable and their probabilities. The table below shows the
law of distribution of a discrete random variable:

    x_i | x_1  x_2  ...
    p_i | p_1  p_2  ...

where the probabilities sum to one: Σ_{i=1}^n p_i = 1.

The set of different states of one physical system with discrete states, where the random process occurs, is finite or
countable:

    s_1, s_2, ..., s_i, ...        (1)

A random process is a sequence of events of type (1). We will consider this sequence («chain») of events and
confine ourselves to a finite number of states. The system is able to transfer from one state s_i to another state s_j
directly or through other states [2], [3]. Usually the graph of states describes the states visually, where the vertices
of the graph correspond to the states. If the probability of each state of the system S with discrete states
s_1, s_2, ..., s_i, ... in the future, for every time t (t > t_0), depends only on the state in
the present (t = t_0) and does not depend on the system's behavior in
the past (t < t_0), then the random process is a Markov process. Let us consider the conditional probability that
the system S transfers on the kth step into the state s_j, given that it was in the state s_i on the (k-1)th step.
Denote this probability by:

    p_ij(k) = P[S(k) = s_j | S(k-1) = s_i],  (i, j = 1, 2, ..., n)        (2)

The probability p_ii(k) is the probability that the system remains in the state s_i on the kth step. The probabilities
p_ij(k) are the transition probabilities of the Markov chain on the kth step.
Transition probabilities can be written in the form of a square
table (matrix) of size n. This is a stochastic matrix; the sum
of all probabilities of one row is equal to 1, Σ_{j=1}^n p_ij = 1,
because the system can be in only one of the mutually exclusive
states.

          | p_11  p_12  ...  p_1n |
      P = | p_21  p_22  ...  p_2n |        (3)
          | ...   ...   ...  ...  |
          | p_n1  p_n2  ...  p_nn |

* Corresponding author:
[email protected] (Roza Shakenova)
Published online at http://journal.sapub.org/am
Copyright © 2016 Scientific & Academic Publishing. All Rights Reserved
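As an aside, the defining property of the stochastic matrix (3) is easy to check mechanically. The sketch below is not from the paper; the helper `is_stochastic` and the 2-state matrix are hypothetical illustrations:

```python
# Sketch: check the defining property of a stochastic matrix (3):
# every entry is a probability in [0, 1] and each row sums to 1.
def is_stochastic(P, tol=1e-9):
    return all(
        all(0.0 <= x <= 1.0 for x in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )

# A hypothetical 2-state transition matrix (not from the paper):
P = [[0.9, 0.1],
     [0.4, 0.6]]
print(is_stochastic(P))             # True: both rows sum to 1
print(is_stochastic([[0.5, 0.6]]))  # False: the row sums to 1.1
```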
In order to find the unconditional probabilities we need to
know the initial probability distribution, i.e. the probabilities at the
beginning of the process at time t_0 = 0: p_1(0), p_2(0), ...,
p_n(0), with their sum equal to one.
A Markov chain is called uniform if the transition probabilities
do not depend on the step number, so that the matrix (3) is the same on every step.
To simplify matters, we consider only uniform Markov chains.
2. The Formula for Total Probability
Let the event A occur together with only one of the
events H_1, H_2, ..., H_n, which form a full
group of pairwise mutually exclusive events, i.e.
H_i · H_j = 0 for i ≠ j and Σ_{i=1}^n H_i = Ω. Then the probability
of A can be calculated using the formula of total
probability:

    P(A) = Σ_{i=1}^n P(H_i) · P(A | H_i),        (4)

where H_1, H_2, ..., H_n are called hypotheses, and the
values P(H_i) are the probabilities of the hypotheses.

Make the hypothesis that the system was in the state s_i
at the initial time (k = 0). The probability of this hypothesis
is known and equal to p_i(0) = P{S(0) = s_i}. Assuming
that this hypothesis takes place, the conditional probability
of the system S being in the state s_j on the first step is equal
to the transition probability

    p_ij = P[S(1) = s_j | S(0) = s_i].

Applying the formula for total probability we obtain the
following:

    p_j(1) = Σ_{i=1}^n P[S(1) = s_j | S(0) = s_i] · P{S(0) = s_i} =
           = Σ_{i=1}^n p_i(0) · p_ij,  (j = 1, 2, ..., n).        (5)

Now, find the distribution of probabilities on the second
step, which depends on the distribution of probabilities on
the first step and on the matrix of transition probabilities of the
Markov chain. Again make the hypothesis that the system
was in the state s_i on the first step. The probability of this
hypothesis is known and equal to p_i(1) = P{S(1) = s_i},
(i = 1, 2, ..., n). Given this hypothesis, the conditional
probability of the system being in the state s_j on the
second step is equal to p_ij = P[S(2) = s_j | S(1) = s_i].
Using the formula for total probability we obtain:

    p_j(2) = Σ_{i=1}^n p_i(1) · p_ij,  (j = 1, 2, ..., n).

Applying this method several times we obtain the recurrent
formula:

    p_j(k) = Σ_{i=1}^n p_i(k-1) · p_ij,  (k = 1, 2, ...; j = 1, 2, ..., n).        (6)

Under some conditions, as the step number k increases, the Markov chain
settles into a stationary mode: the system
continues to wander over the states, but the probabilities of these
states no longer depend on the step number. These
probabilities are called the final (limit) probabilities of the
Markov chain. The equations for such probabilities can be
written using a mnemonic rule: in the stationary mode the total
probability flux of the system remains constant, i.e. the flow into
the state s_j is equal to the flow out of the state s_j:

    Σ_{i=1}^n p_i · p_ij = Σ_{i=1}^n p_j · p_ji = p_j · Σ_{i=1}^n p_ji,  (i ≠ j),  (j = 1, 2, ..., n).        (7)

This condition is the balance condition for the state s_j. Add the
normalization condition Σ_{i=1}^n p_i = 1 to these n equations.
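The recurrent formula (6) can be iterated numerically to watch the stationary mode set in. The sketch below uses the off-diagonal transition probabilities of the health chain from Section 3; the diagonal entries p_ii are our assumption here, fixed only by the requirement that each row of the stochastic matrix sums to 1 (Section 3 notes that the final probabilities do not depend on them):

```python
# Sketch: iterate formula (6), p_j(k) = sum_i p_i(k-1) * p_ij, for the
# 5-state health chain of Section 3. Off-diagonal transition
# probabilities are the values given in the paper; the diagonal entries
# p_ii are an assumption, chosen so that each row sums to 1.
n = 5
P = [[0.0] * n for _ in range(n)]
given = {(1, 2): 0.2, (1, 4): 0.1,               # out of c1
         (2, 3): 0.6, (2, 5): 0.2,               # out of c2
         (3, 1): 0.5,                            # out of c3
         (4, 1): 0.6, (4, 3): 0.2, (4, 5): 0.2,  # out of c4
         (5, 1): 0.1}                            # out of c5
for (i, j), v in given.items():
    P[i - 1][j - 1] = v
for i in range(n):
    P[i][i] = 1.0 - sum(P[i])        # complete each row to sum 1

p = [1.0, 0.0, 0.0, 0.0, 0.0]        # start in c1 (healthy and works)
for _ in range(500):                 # apply formula (6) repeatedly
    p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

# Limit probabilities found analytically in Section 3:
limit = [100 / 239, 25 / 239, 34 / 239, 10 / 239, 70 / 239]
print(max(abs(a - b) for a, b in zip(p, limit)))  # ~0: stationary mode
```

The chain is irreducible and aperiodic, so the iteration converges to the same limit from any initial distribution.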
3. Markov Chains. The State of Health
Let the system C be a human being from a certain
ecological and economic sphere. It can be in the following
states:
c1 - healthy and works;
c2 - ill, but the illness is not discovered;
c3 - ill, under light treatment;
c4 - does not work, under observation;
c5 - ill, under serious medical treatment.
The marked graph for the states of the human being is shown in Picture #1.
The problem: to construct the equations and find the final
probabilities of the human being's state of health.
Solution: Let us consider the state c5 on the graph. Two arrows
are directed into this state; consequently, there are two terms
on the left side of (7) for j = 5 (state c5). One
arrow is directed out of this state; consequently, there is only
one term on the right side of (7) for j = 5 (state c5). Hence,
using the balance condition (7), we obtain the first equation:

    p2 · p25 + p4 · p45 = p5 · p51.        (8)

Similarly, we write three more equations:

    p1 · p12 = p2 · (p23 + p25),
    p2 · p23 + p4 · p43 = p3 · p31,
    p1 · p14 = p4 · (p41 + p43 + p45).

The fifth equation is the normalization condition:
p1 + p2 + p3 + p4 + p5 = 1. We rewrite the system of
equations in the following way:

    1) p5 = (p2 · p25 + p4 · p45) / p51,
    2) p2 = p1 · p12 / (p23 + p25),
    3) p3 = (p2 · p23 + p4 · p43) / p31,
    4) p4 = p1 · p14 / (p41 + p43 + p45),
    5) p1 + p2 + p3 + p4 + p5 = 1.

Let us solve the system of equations. From 2) we find:
p2 = d2 · p1, where d2 = p12 / (p23 + p25). From 4) we find:
p4 = d4 · p1, where d4 = p14 / (p41 + p43 + p45). From 3) we find:

    p3 = (d2 · p23 + d4 · p43) · p1 / p31 = d3 · p1,
    where d3 = (d2 · p23 + d4 · p43) / p31.

From 1) we find:

    p5 = (d2 · p25 + d4 · p45) · p1 / p51 = d5 · p1,
    where d5 = (d2 · p25 + d4 · p45) / p51.

Giving the corresponding values of the probabilities:

    p41 = 0.6, p45 = 0.2, p31 = 0.5, p51 = 0.1,
    p23 = 0.6, p25 = 0.2, p12 = 0.2, p43 = 0.2, p14 = 0.1,

we calculate the following values:

    d2 = 0.2 / (0.6 + 0.2) = 0.25,
    d4 = 0.1 / (0.6 + 0.2 + 0.2) = 0.1,
    d3 = (0.25 · 0.6 + 0.1 · 0.2) / 0.5 = 0.34,
    d5 = (0.25 · 0.2 + 0.1 · 0.2) / 0.1 = 0.07 / 0.1 = 0.7.

According to equality 5) we have:

    p1 + 0.25 · p1 + 0.34 · p1 + 0.1 · p1 + 0.7 · p1 = 1,
    p1 · (1 + 0.25 + 0.34 + 0.1 + 0.7) = 1,

so p1 = 1 / 2.39 = 100/239. Then:

    p2 = d2 · p1 = (1/4) · (100/239) = 25/239,
    p3 = d3 · p1 = (34/100) · (100/239) = 34/239,
    p4 = d4 · p1 = (1/10) · (100/239) = 10/239,
    p5 = d5 · p1 = (7/10) · (100/239) = 70/239.

The normalization condition p1 + p2 + p3 + p4 + p5 = 1 works:

    100/239 + 25/239 + 34/239 + 10/239 + 70/239 = 239/239 = 1.

We did not need the probabilities p11, p22, p33, p44, p55.
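The hand calculation above can be double-checked with exact rational arithmetic. A minimal sketch (the variable names are ours; all probability values are the ones given in the paper):

```python
# Sketch: reproduce the solution of the balance equations with exact
# rational arithmetic. The expected results are d2 = 0.25, d4 = 0.1,
# d3 = 0.34, d5 = 0.7 and final probabilities (100, 25, 34, 10, 70)/239.
from fractions import Fraction as F

p12, p14 = F('0.2'), F('0.1')
p23, p25 = F('0.6'), F('0.2')
p31 = F('0.5')
p41, p43, p45 = F('0.6'), F('0.2'), F('0.2')
p51 = F('0.1')

d2 = p12 / (p23 + p25)               # from equation 2)
d4 = p14 / (p41 + p43 + p45)         # from equation 4)
d3 = (d2 * p23 + d4 * p43) / p31     # from equation 3)
d5 = (d2 * p25 + d4 * p45) / p51     # from equation 1)

p1 = 1 / (1 + d2 + d3 + d4 + d5)     # normalization condition 5)
probs = [p1, d2 * p1, d3 * p1, d4 * p1, d5 * p1]
print(probs)
# [Fraction(100, 239), Fraction(25, 239), Fraction(34, 239),
#  Fraction(10, 239), Fraction(70, 239)]
```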
4. Conclusions
In summary, we have found the final probabilities of a
human being's state of health, considering the system C: a
human being from a certain ecological and economic sphere.
By looking at the initial stages of a disease, it is possible
to make a prognosis about the final probabilities of a sick
person's state of health. Doctors together with
researchers could develop such a project for human
health recovery.
Appendix
Picture #1. The marked graph of the states c1, ..., c5. The arrows carry the transition probabilities:
c1 → c2 (p12), c1 → c4 (p14), c2 → c3 (p23), c2 → c5 (p25), c3 → c1 (p31),
c4 → c1 (p41), c4 → c3 (p43), c4 → c5 (p45), c5 → c1 (p51).
REFERENCES
[1] R.A. Howard, Dynamic Programming and Markov Processes, Soviet Radio, Russia, 1964.
[2] Roza Shakenova, "The Limiting Probabilities in the Process of Servicing", Applied Mathematics, 2012, 2(6): 184-186. DOI: 10.5923/j.am.20120206.01.
[3] R.K. Shakenova, "Markov processes of making decisions with an overestimation and some economic problems", Materials of the international scientific-practical conference "Problems of Applied Physics and Mathematics", pp. 9-14, 2003.
[4] R. Bellman, Introduction to Matrix Analysis, Science, Moscow, 1969.
[5] A.N. Kolmogorov, S.V. Fomin, Elements of the Theory of Functions and Functional Analysis, Science, Moscow, 1972.
[6] C.R. McConnell, S.L. Brue, Economics: Principles, Problems and Policies, Tallinn, 1993.
[7] H. Mine, S. Osaki, Markov Decision Processes, Nauka, Moscow, 1977.
[8] E.S. Ventsel, L.A. Ovcharov, The Theory of Random Processes and Its Engineering Applications, High School, Moscow, 2000.
[9] V.P. Chistyakov, Probability Theory Course, Science, Moscow, 1982.
[10] Kai-Lai Chung, Uniform Markov Chains, Mir, Moscow, 1964.