Theory of Computational Complexity
Probability and Computing 7.3-7.5
2012. 1. 23
Lee Minseon
Iwama and Ito lab M1
1
Chapter 7 :
Markov Chains and Random Walks
• 7.3 Stationary Distributions
  7.3.1 Example: A Simple Queue
• 7.4 Random Walks on Undirected Graphs
  7.4.1 Application: An s-t Connectivity Algorithm
• 7.5 Parrondo's Paradox
2
7.3 Stationary Distributions
Def 7.8 : A stationary distribution of a Markov chain is a probability distribution $\bar{\pi}$ such that
$\bar{\pi} = \bar{\pi} \mathbf{P}$
$\mathbf{P}$ : the one-step transition probability matrix of the Markov chain
3
7.3 Stationary Distributions
Def 7.8 (restated) : a stationary distribution $\bar{\pi}$ satisfies $\bar{\pi} = \bar{\pi} \mathbf{P}$.
If a chain ever reaches a stationary distribution, then it maintains that distribution for all future time.
→ Stationary distributions play a key role in analyzing Markov chains!
5
7.3 Stationary Distributions
Theorem 7.7 : Any finite, irreducible, and ergodic Markov chain has the following properties:
1. The chain has a unique stationary distribution $\bar{\pi} = (\pi_0, \pi_1, \ldots, \pi_n)$.
2. For all j and i, the limit $\lim_{t\to\infty} P^t_{j,i}$ exists and is independent of j.
3. $\pi_i = \lim_{t\to\infty} P^t_{j,i} = \dfrac{1}{h_{i,i}}$.
6
7.3 Stationary Distributions
Theorem 7.7 :
2. For all j and i, the limit $\lim_{t\to\infty} P^t_{j,i}$ exists and is independent of j.
Proof :
• Using Lemma 7.8,
$\lim_{t\to\infty} P^t_{i,i} = \frac{1}{h_{i,i}}$
• Using the fact that $\lim_{t\to\infty} P^t_{i,i}$ exists, for any j and i,
$\lim_{t\to\infty} P^t_{j,i} = \lim_{t\to\infty} P^t_{i,i} = \frac{1}{h_{i,i}}$
→ these limits exist and are independent of the starting state j.
7
7.3 Stationary Distributions
Theorem 7.7 :
3. $\pi_i = \lim_{t\to\infty} P^t_{j,i} = \dfrac{1}{h_{i,i}}$
Proof :
• Recall $r^t_{j,i}$ : the probability that, starting at j, the chain first visits i at time t.
• $\sum_{t \ge 1} r^t_{j,i} = 1$, because the chain is irreducible.
→ For any $\epsilon > 0$ there exists $t_1$ such that $\sum_{t=1}^{t_1} r^t_{j,i} \ge 1 - \epsilon$.
8
7.3 Stationary Distributions
Proof :
• $P^t_{j,i} = \sum_{k=1}^{t} r^k_{j,i} P^{t-k}_{i,i}$, for $j \ne i$
• $P^t_{j,i} = \sum_{k=1}^{t} r^k_{j,i} P^{t-k}_{i,i} \ge \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i}$, for $t \ge t_1$
• Taking limits (each limit exists and $t_1$ is finite):
$\lim_{t\to\infty} P^t_{j,i} \ge \lim_{t\to\infty} \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i} = \sum_{k=1}^{t_1} r^k_{j,i} \lim_{t\to\infty} P^{t-k}_{i,i} = \left( \sum_{k=1}^{t_1} r^k_{j,i} \right) \lim_{t\to\infty} P^t_{i,i} \ge (1 - \epsilon) \lim_{t\to\infty} P^t_{i,i}$
• In the other direction,
$P^t_{j,i} = \sum_{k=1}^{t} r^k_{j,i} P^{t-k}_{i,i} \le \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i} + \epsilon$, for $t \ge t_1$
9
7.3 Stationary Distributions
Proof :
• We can deduce
$P^t_{j,i} \le \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i} + \epsilon$
$\lim_{t\to\infty} P^t_{j,i} \le \lim_{t\to\infty} \left( \sum_{k=1}^{t_1} r^k_{j,i} P^{t-k}_{i,i} + \epsilon \right) = \sum_{k=1}^{t_1} r^k_{j,i} \lim_{t\to\infty} P^{t-k}_{i,i} + \epsilon = \left( \sum_{k=1}^{t_1} r^k_{j,i} \right) \lim_{t\to\infty} P^t_{i,i} + \epsilon \le \lim_{t\to\infty} P^t_{i,i} + \epsilon$
(using $\sum_{k=1}^{t_1} r^k_{j,i} \le 1$)
$\Rightarrow \lim_{t\to\infty} P^t_{j,i} \le \lim_{t\to\infty} P^t_{i,i} + \epsilon$
10
7.3 Stationary Distributions
Proof :
• Letting $\epsilon$ approach 0, for any pair i and j,
$\lim_{t\to\infty} P^t_{j,i} = \lim_{t\to\infty} P^t_{i,i} = \frac{1}{h_{i,i}}$
Now let
$\pi_i = \lim_{t\to\infty} P^t_{j,i} = \lim_{t\to\infty} P^t_{i,i} = \frac{1}{h_{i,i}}$
• Does $\bar{\pi} = (\pi_0, \pi_1, \ldots, \pi_n)$ form a stationary distribution?
Check whether $\bar{\pi}$ is a proper distribution or not
Check whether $\bar{\pi}$ is a stationary distribution or not
11
7.3 Stationary Distributions
Proof :
• Check whether $\bar{\pi}$ is a proper distribution or not
↔ Check whether $\sum_{i=0}^{n} \pi_i = 1$ or not
Using $\sum_{i=0}^{n} P^t_{j,i} = 1$ for any $t \ge 0$,
$1 = \lim_{t\to\infty} \sum_{i=0}^{n} P^t_{j,i} = \sum_{i=0}^{n} \lim_{t\to\infty} P^t_{j,i} = \sum_{i=0}^{n} \pi_i$
→ $\bar{\pi}$ is a proper distribution
12
7.3 Stationary Distributions
Proof :
• Check whether $\bar{\pi}$ is a stationary distribution or not
↔ Check whether $\bar{\pi} = \bar{\pi} \mathbf{P}$ or not
Using $P^{t+1}_{j,i} = \sum_{k=0}^{n} P^t_{j,k} P_{k,i}$,
$\pi_i = \lim_{t\to\infty} P^{t+1}_{j,i} = \lim_{t\to\infty} \sum_{k=0}^{n} P^t_{j,k} P_{k,i} = \sum_{k=0}^{n} \lim_{t\to\infty} P^t_{j,k} P_{k,i} = \sum_{k=0}^{n} \pi_k P_{k,i}$
$\Rightarrow \bar{\pi} = \bar{\pi} \mathbf{P}$
→ $\bar{\pi}$ is a stationary distribution
13
7.3 Stationary Distributions
Theorem 7.7 :
1. The chain has a unique stationary distribution $\bar{\pi} = (\pi_0, \pi_1, \ldots, \pi_n)$.
Proof :
• Suppose there were another stationary distribution $\bar{\phi}$; then
$\phi_i = \sum_{k=0}^{n} \phi_k P^t_{k,i}$
Taking the limit as $t \to \infty$ yields
$\phi_i = \sum_{k=0}^{n} \phi_k \pi_i = \pi_i \sum_{k=0}^{n} \phi_k = \pi_i$
$\phi_i = \pi_i$ for all i
$\Rightarrow \bar{\phi} = \bar{\pi}$
14
7.3 Stationary Distributions
Remarks about Theorem 7.7:
•
The requirement that the Markov chain should be
aperiodic is not necessary for the existence of a
stationary distribution.
•
Any finite chain has at least one component that is
recurrent.
15
7.3 Stationary Distributions
Ways to compute the stationary distribution of a finite
Markov chain :
1. Solve the system of linear equations
2. Use the cut-sets of the Markov chain ← Smart!
16
7.3 Stationary Distributions
Solve the system of linear equations:
• Ex) $\bar{\pi} = \bar{\pi} \mathbf{P}$ :
$(\pi_0, \pi_1, \pi_2, \pi_3) = (\pi_0, \pi_1, \pi_2, \pi_3) \begin{pmatrix} 0 & 1/4 & 0 & 3/4 \\ 1/2 & 0 & 1/3 & 1/6 \\ 1/4 & 1/4 & 1/2 & 0 \\ 0 & 1/2 & 1/4 & 1/4 \end{pmatrix}$
We have five equations for the four unknowns, including $\sum_{i=0}^{3} \pi_i = 1$. The equations have a unique solution.
17
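The linear-system approach can be checked numerically. Below is a minimal sketch, not part of the original slides: it recovers the stationary distribution of the example matrix by power iteration (the `step` helper and the iteration count are my own choices).

```python
# Sketch: computing the stationary distribution of the 4-state example chain
# by repeatedly applying pi -> pi P, which converges for an ergodic chain.
P = [
    [0,   1/4, 0,   3/4],
    [1/2, 0,   1/3, 1/6],
    [1/4, 1/4, 1/2, 0  ],
    [0,   1/2, 1/4, 1/4],
]

def step(pi, P):
    """One application of the row vector pi to the transition matrix P."""
    n = len(P)
    return [sum(pi[j] * P[j][i] for j in range(n)) for i in range(n)]

pi = [1.0, 0.0, 0.0, 0.0]      # any starting distribution works
for _ in range(1000):
    pi = step(pi, P)

# pi is now (numerically) stationary: pi = pi P, and it sums to 1.
print([round(x, 4) for x in pi])
```

Since this chain is finite, irreducible, and aperiodic, Theorem 7.7 guarantees that the iteration converges to the unique $\bar{\pi}$ regardless of the starting distribution.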
7.3 Stationary Distributions
Use the cut-sets of the Markov chain :
• Theorem 7.9 :
Let S be a set of states of a finite, irreducible, aperiodic Markov chain. In the stationary distribution, the probability that the chain leaves the set S equals the probability that it enters S.
For a single state i,
$\sum_{j=0}^{n} \pi_j P_{j,i} = \pi_i = \pi_i \sum_{j=0}^{n} P_{i,j}$
or
$\sum_{j \ne i} \pi_j P_{j,i} = \sum_{j \ne i} \pi_i P_{i,j}$
In the stationary distribution, the probability of crossing the cut-set in one direction is equal to the probability of crossing the cut-set in the other direction.
18
7.3 Stationary Distributions
Use the cut-sets of the Markov chain :
• Ex) A simple Markov chain used to represent bursty behavior: two states 0 and 1, with $P_{0,1} = p$, $P_{0,0} = 1-p$, $P_{1,0} = q$, $P_{1,1} = 1-q$.
$(\pi_0, \pi_1) = (\pi_0, \pi_1) \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$
Three equations for the two unknowns, including $\pi_0 + \pi_1 = 1$:
$\pi_0 = \frac{q}{p+q}, \qquad \pi_1 = \frac{p}{p+q}$
Is it the same when using the cut-set formulation?
19
7.3 Stationary Distributions
Use the cut-sets of the Markov chain :
• Ex) Using the cut-set formulation, in the stationary distribution the probability of leaving state 0 must equal the probability of entering state 0:
$\pi_0 p = \pi_1 q, \quad \pi_0 + \pi_1 = 1 \;\Rightarrow\; \pi_0 = \frac{q}{p+q}, \quad \pi_1 = \frac{p}{p+q}$
Same!
20
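The bursty-chain formulas can be sanity-checked the same way. This sketch uses illustrative values of p and q (my own choices, not from the slides) and compares power iteration against the closed form:

```python
# Sketch: the two-state "bursty" chain, checked against pi_0 = q/(p+q),
# pi_1 = p/(p+q). The values of p and q here are illustrative.
p, q = 0.2, 0.5

pi = [1.0, 0.0]
for _ in range(1000):
    # One step of pi -> pi P for P = [[1-p, p], [q, 1-q]].
    pi = [pi[0] * (1 - p) + pi[1] * q, pi[0] * p + pi[1] * (1 - q)]

closed_form = [q / (p + q), p / (p + q)]
print(pi, closed_form)
```

The cut-set identity $\pi_0 p = \pi_1 q$ also holds for the computed vector, which is exactly the balance condition used on this slide.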
7.3 Stationary Distributions
Theorem 7.10 :
Consider a finite, irreducible, and ergodic Markov chain with transition matrix $\mathbf{P}$. If there are nonnegative numbers $\bar{\pi} = (\pi_0, \pi_1, \ldots, \pi_n)$ such that $\sum_{i=0}^{n} \pi_i = 1$ and if, for any pair of states i, j,
$\pi_i P_{i,j} = \pi_j P_{j,i}$,
then $\bar{\pi}$ is the stationary distribution corresponding to $\mathbf{P}$.
21
7.3 Stationary Distributions
Proof :
• Check whether $\bar{\pi}$ is a stationary distribution or not
↔ Check whether $\bar{\pi} = \bar{\pi} \mathbf{P}$ or not
The jth entry of $\bar{\pi} \mathbf{P}$ is
$\sum_{i=0}^{n} \pi_i P_{i,j} = \sum_{i=0}^{n} \pi_j P_{j,i} = \pi_j$ ← using the assumption of Theorem 7.10
$\Rightarrow \bar{\pi} = \bar{\pi} \mathbf{P}$
→ $\bar{\pi}$ is a stationary distribution
22
7.3 Stationary Distributions
Theorem 7.11 : Any irreducible aperiodic Markov chain belongs to one of the following two categories:
1. the chain is ergodic – for any pair of states i and j, the limit $\lim_{t\to\infty} P^t_{j,i}$ exists and is independent of j, and the chain has a unique stationary distribution $\pi_i = \lim_{t\to\infty} P^t_{j,i} > 0$; or
2. no state is positive recurrent – for all i and j, $\lim_{t\to\infty} P^t_{j,i} = 0$, and the chain has no stationary distribution.
23
7.3.1 Example: A Simple Queue
A queue is a line where customers wait for service.
Can we examine a queue using a Markov chain?
We examine a model for a bounded queue where time is divided into steps of equal length. At each time step, exactly one of the following occurs.
25
7.3.1 Example: A Simple Queue
[Chain diagram: states 0, 1, ..., n-1, n in a line; each state i < n has an edge to i+1 with probability $\lambda$, each state i > 0 has an edge to i-1 with probability $\mu$, and self-loops carry the remaining probability.]
• If the queue has fewer than n customers, then with probability $\lambda$ a new customer joins the queue.
• If the queue is not empty, then with probability $\mu$ the head of the line is served and leaves the queue.
• With the remaining probability, the queue is unchanged.
$X_t$ : the number of customers in the queue at time t.
→ $X_t$ yields a finite-state Markov chain!
29
7.3.1 Example: A Simple Queue
Nonzero entries of the transition matrix:
$P_{i,i+1} = \lambda$, if $i < n$;
$P_{i,i-1} = \mu$, if $i > 0$;
$P_{i,i} = \begin{cases} 1 - \lambda & \text{if } i = 0, \\ 1 - \lambda - \mu & \text{if } 1 \le i \le n-1, \\ 1 - \mu & \text{if } i = n. \end{cases}$
30
7.3.1 Example: A Simple Queue
Transition matrix:
$\mathbf{P} = \begin{pmatrix} 1-\lambda & \lambda & 0 & \cdots & 0 & 0 \\ \mu & 1-\lambda-\mu & \lambda & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \mu & 1-\lambda-\mu & \lambda \\ 0 & 0 & \cdots & 0 & \mu & 1-\mu \end{pmatrix}$
31
7.3.1 Example: A Simple Queue
• A unique stationary distribution $\bar{\pi}$ exists, satisfying
$(\pi_0, \pi_1, \ldots, \pi_{n-1}, \pi_n) = (\pi_0, \pi_1, \ldots, \pi_{n-1}, \pi_n)\, \mathbf{P}$
• We can write
$\pi_0 = (1 - \lambda)\pi_0 + \mu \pi_1$
$\pi_i = \lambda \pi_{i-1} + (1 - \lambda - \mu)\pi_i + \mu \pi_{i+1}$, for $1 \le i \le n-1$
$\pi_n = \lambda \pi_{n-1} + (1 - \mu)\pi_n$
32
7.3.1 Example: A Simple Queue
• A solution to the preceding system of equations:
$\pi_i = \pi_0 \left( \frac{\lambda}{\mu} \right)^i$
• Another way to compute the stationary probability in this case is to use cut-sets. In the stationary distribution, the probability of moving from state i to state i+1 must be equal to the probability of moving from state i+1 to i:
$\pi_i \lambda = \pi_{i+1} \mu$
We can get a solution using a simple induction:
$\pi_i = \pi_0 \left( \frac{\lambda}{\mu} \right)^i$
33
7.3.1 Example: A Simple Queue
• Adding the requirement $\sum_{i=0}^{n} \pi_i = 1$, we have
$\sum_{i=0}^{n} \pi_i = \sum_{i=0}^{n} \pi_0 \left( \frac{\lambda}{\mu} \right)^i = 1 \;\Rightarrow\; \pi_0 = \frac{1}{\sum_{i=0}^{n} (\lambda/\mu)^i}$
$\Rightarrow\; \pi_i = \frac{(\lambda/\mu)^i}{\sum_{j=0}^{n} (\lambda/\mu)^j}$
34
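The closed form for the bounded queue can be checked against the cut-set balance condition. A small sketch (the values of $\lambda$, $\mu$, and n below are illustrative, not from the slides):

```python
# Sketch: the stationary distribution of the bounded queue from the closed
# form pi_i = (lam/mu)^i / sum_j (lam/mu)^j, checked against the cut-set
# balance pi_i * lam = pi_{i+1} * mu. lam, mu, n are illustrative choices.
lam, mu, n = 0.3, 0.5, 10

ratios = [(lam / mu) ** i for i in range(n + 1)]
Z = sum(ratios)                  # normalizing constant
pi = [r / Z for r in ratios]

# Flow i -> i+1 equals flow i+1 -> i in the stationary distribution.
for i in range(n):
    assert abs(pi[i] * lam - pi[i + 1] * mu) < 1e-12
print([round(x, 4) for x in pi])
```

The same vector also satisfies the full balance equations written on the previous slide, e.g. $\pi_0 = (1-\lambda)\pi_0 + \mu\pi_1$.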
7.3.1 Example: A Simple Queue
• We will examine the case where there is no upper limit n on the number of customers in the queue.
• In this case, the Markov chain is no longer finite and has a countably infinite state space.
• Applying Theorem 7.11, the Markov chain has a stationary distribution if and only if the following values are all greater than 0:
$\pi_i = \frac{(\lambda/\mu)^i}{\sum_{i=0}^{\infty} (\lambda/\mu)^i} > 0$
35
7.3.1 Example: A Simple Queue
• $\pi_i = \frac{(\lambda/\mu)^i}{\sum_{i=0}^{\infty} (\lambda/\mu)^i} > 0$
requires that $\sum_{i=0}^{\infty} (\lambda/\mu)^i$ converge, and we can verify that
$\sum_{i=0}^{\infty} (\lambda/\mu)^i$ converges ↔ $\lambda/\mu < 1$ ↔ $\lambda < \mu$
36
7.3.1 Example: A Simple Queue
• A solution of the system of equations:
$\pi_i = \frac{(\lambda/\mu)^i}{\sum_{i=0}^{\infty} (\lambda/\mu)^i} = \left( \frac{\lambda}{\mu} \right)^i \left( 1 - \frac{\lambda}{\mu} \right)$
$\left( \text{since } \sum_{i=0}^{\infty} \left( \frac{\lambda}{\mu} \right)^i = \frac{1}{1 - \lambda/\mu} \text{ when } \frac{\lambda}{\mu} < 1 \right)$
37
7.3.1 Example: A Simple Queue
• All of the $\pi_i$ are greater than 0 if and only if $\lambda < \mu$ :
the rate at which customers arrive is lower than the rate at which they are served.
• $\lambda > \mu$ : The rate at which customers arrive is higher than the rate at which they are served. → There is no stationary distribution, and the queue length will become arbitrarily long.
• $\lambda = \mu$ : The rate at which customers arrive is equal to the rate at which they are served. → There is no stationary distribution, and the queue length will become arbitrarily long.
38
7.4 Random Walks on Undirected Graphs
•
A random walk on an undirected graph is a special
type of Markov chain that is often used in analyzing
algorithms.
•
Let G = (V,E) be a finite, undirected, and connected
graph.
40
7.4 Random Walks on Undirected Graphs
Def 7.9 :
•
A random walk on G is a Markov chain defined by
the sequence of moves of a particle between
vertices of G.
•
In this process, the place of the particle at a given
time step is the state of the system.
•
If the particle is at vertex i and i has $d(i) \ge 1$ incident edges, then the probability that the particle follows the edge (i, j) and moves to a neighbor j is 1/d(i).
41
7.4 Random Walks on Undirected Graphs
Lemma 7.12 : A random walk on an undirected graph G
is aperiodic if and only if G is not bipartite.
Proof: A graph is bipartite if and only if it does not have cycles with an odd number of edges. In an undirected graph, there is always a path of length 2 from a vertex to itself. If the graph is bipartite then the random walk is periodic with period d = 2. If the graph is not bipartite then it has an odd cycle, and by traversing that cycle we have an odd-length path from any vertex to itself. It follows that the Markov chain is aperiodic.
42
7.4 Random Walks on Undirected Graphs
•
A random walk on a finite, undirected, connected, and non-bipartite graph G satisfies the conditions of Theorem 7.7.
•
The random walk converges to a stationary
distribution.
•
We will show that this distribution depends only on
the degree sequence of the graph.
43
7.4 Random Walks on Undirected Graphs
Theorem 7.13 : A random walk on G converges to a stationary distribution $\bar{\pi}$, where
$\pi_v = \frac{d(v)}{2|E|}$
Proof:
$\sum_{v \in V} d(v) = 2|E|$, so
$\sum_{v \in V} \pi_v = \sum_{v \in V} \frac{d(v)}{2|E|} = 1$
and $\bar{\pi}$ is a proper distribution over $v \in V$.
44
7.4 Random Walks on Undirected Graphs
Proof:
$\mathbf{P}$ : the transition probability matrix of the Markov chain.
N(v) : the neighbors of v.
The relation $\bar{\pi} = \bar{\pi} \mathbf{P}$ is equivalent to
$\pi_v = \sum_{u \in N(v)} \frac{d(u)}{2|E|} \cdot \frac{1}{d(u)} = \sum_{u \in N(v)} \frac{1}{2|E|} = \frac{d(v)}{2|E|}$
and the theorem follows.
45
7.4 Random Walks on Undirected Graphs
Corollary 7.14 : For any vertex u in G,
$h_{u,u} = \frac{2|E|}{d(u)}$
Proof:
$h_{v,u}$ : the expected number of steps to reach u from v.
$\pi_u = \frac{d(u)}{2|E|}$ and $\pi_u = \frac{1}{h_{u,u}} \;\Rightarrow\; h_{u,u} = \frac{2|E|}{d(u)}$
46
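Theorem 7.13 can be checked on a small graph. The example graph below (a triangle with a pendant vertex, chosen by me for illustration; it is connected and non-bipartite) lets power iteration converge, and the limit matches $d(v)/2|E|$:

```python
# Sketch: verifying pi_v = d(v) / (2|E|) on a small connected, non-bipartite
# graph (a triangle 0-1-2 plus pendant vertex 3). Illustrative example only.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

pi = [1.0] + [0.0] * (n - 1)
for _ in range(2000):
    nxt = [0.0] * n
    for v in range(n):
        for u in adj[v]:
            nxt[u] += pi[v] / len(adj[v])   # walk moves to a uniform neighbor
    pi = nxt

expected = [len(adj[v]) / (2 * len(edges)) for v in range(n)]
print([round(x, 4) for x in pi], expected)
```

The odd cycle makes the walk aperiodic (Lemma 7.12), which is what justifies the convergence used here.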
7.4 Random Walks on Undirected Graphs
Lemma 7.15: For any pair of vertices u and v,
if $(u, v) \in E$, then $h_{v,u} < 2|E|$.
Proof: N(u) : the set of neighbors of vertex u in G.
We compute $h_{u,u}$ in two different ways:
$\frac{2|E|}{d(u)} = h_{u,u} = \frac{1}{d(u)} \sum_{w \in N(u)} (1 + h_{w,u})$
Therefore,
$2|E| = \sum_{w \in N(u)} (1 + h_{w,u})$
and we conclude that $h_{v,u} < 2|E|$.
47
7.4 Random Walks on Undirected Graphs
Definition 7.10: The cover time of a graph G = (V,E) is
the maximum over all vertices v V of the expected
time to visit all of the nodes in the graph by a
random walk starting from v.
48
7.4 Random Walks on Undirected Graphs
Lemma 7.16 : The cover time of G = (V,E) is bounded above by $4|V| \cdot |E|$.
Proof: Choose a spanning tree of G; that is, choose any
subset of the edges that gives an acyclic graph
connecting all of the vertices of G. There exists a
cyclic tour on this spanning tree in which every edge
is traversed once in each direction;
For example, such a tour can be found by
considering the sequence of vertices passed
through when doing a depth-first search.
49
7.4 Random Walks on Undirected Graphs
Proof: Let $v_0, v_1, \ldots, v_{2|V|-2} = v_0$ be the sequence of vertices in the tour, starting from vertex $v_0$. Clearly the expected time to go through the vertices in the tour is an upper bound on the cover time. Hence the cover time is bounded above by
$\sum_{i=0}^{2|V|-3} h_{v_i, v_{i+1}} < (2|V| - 2)(2|E|) < 4|V| \cdot |E|$
where the first inequality comes from Lemma 7.15.
50
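The $4|V|\,|E|$ bound is quite loose in practice, which a quick simulation illustrates. The graph, trial count, and seed below are my own illustrative choices:

```python
# Sketch: estimating cover time by simulation and comparing with the
# 4|V||E| bound of Lemma 7.16, on a small illustrative graph.
import random

random.seed(0)
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
n = 5
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def cover_steps(start):
    """Steps until a random walk from `start` has visited every vertex."""
    seen, cur, steps = {start}, start, 0
    while len(seen) < n:
        cur = random.choice(adj[cur])
        seen.add(cur)
        steps += 1
    return steps

trials = 2000
worst_avg = max(sum(cover_steps(s) for _ in range(trials)) / trials
                for s in range(n))
bound = 4 * n * len(edges)
print(worst_avg, bound)
```

The empirical worst-case average sits well below the bound, as expected for such a small, well-connected graph.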
7.4.1 Application: An s-t Connectivity Algorithm
• Suppose we are given an undirected graph G = (V,E) and two vertices s and t in G. Let $n = |V|$, $m = |E|$.
• Is there a path connecting s and t?
• This is easily done in linear time using a standard breadth-first search or depth-first search. However, these require $\Omega(n)$ space.
• Here we develop a randomized algorithm that works with only O(log n) space.
52
7.4.1 Application: An s-t Connectivity Algorithm
s-t Connectivity Algorithm:
1. Start a random walk from s.
2. If the walk reaches t within $4n^3$ steps, return that there is a path. Otherwise, return that there is no path.
We use the cover time result (Lemma 7.16) to bound the number of steps that the random walk has to run:
$4|V| \cdot |E| = 4nm \le 4n \cdot n^2 = 4n^3 \quad \left( m \le \frac{n(n-1)}{2} < n^2 \right)$
53
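The two-step algorithm above can be sketched directly. The graph, seed, and the guard for isolated vertices are my own additions for a runnable example:

```python
# Sketch of the randomized s-t connectivity algorithm: walk from s for at
# most 4*n^3 steps, declaring "connected" if t is hit along the way.
import random

random.seed(1)

def st_connected(adj, s, t):
    """Random-walk s-t connectivity test; adj maps vertex -> neighbor list."""
    n = len(adj)
    if s == t:
        return True
    cur = s
    for _ in range(4 * n ** 3):
        if not adj[cur]:          # isolated vertex: the walk is stuck
            return False
        cur = random.choice(adj[cur])
        if cur == t:
            return True
    return False  # one-sided error: may be wrong with probability <= 1/2

# Example graph with two components: {0, 1, 2} and {3, 4}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
print(st_connected(adj, 0, 2), st_connected(adj, 0, 3))
```

Only the current vertex and a step counter are stored, matching the O(log n)-space claim.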
7.4.1 Application: An s-t Connectivity Algorithm
Theorem 7.17 : The s-t connectivity algorithm returns the correct answer with probability at least 1/2, and it only errs by returning that there is no path from s to t when there is such a path.
Proof:
• If there is no path then the algorithm returns the correct answer.
• If there is a path, the algorithm errs if it does not find the path within $4n^3$ steps of the walk.
54
7.4.1 Application: An s-t Connectivity Algorithm
Proof:
• The expected time to reach t from s (if there is a path) is bounded from above by the cover time of their shared component, which by Lemma 7.16 is at most
$4|V| \cdot |E| = 4nm \le 4n \cdot \frac{n(n-1)}{2} = 2n^2(n-1) \le 2n^3$
• By Markov's inequality, the probability that a walk takes more than $4n^3$ steps to reach t from s is at most 1/2.
55
7.4.1 Application: An s-t Connectivity Algorithm
• The algorithm must keep track of its current position, which takes O(log n) bits, as well as the number of steps taken in the random walk, which also takes only O(log n) bits (since we count up to only $4n^3$).
56
7.5 Parrondo's Paradox
• The paradox appears to contradict the old saying that two wrongs don't make a right, showing that two losing games can be combined to make a winning game.
58
7.5 Parrondo's Paradox
Game A :
• We repeatedly flip a biased coin (call it coin a).
• Coin a comes up heads with probability $p_a < 1/2$ and tails with probability $1 - p_a$.
• You win a dollar if the coin comes up heads and lose a dollar if it comes up tails.
• Clearly, this is a losing game for you.
Ex) if $p_a = 0.49$ then your expected loss is 2 cents per game ($0.49 - 0.51 = -0.02$).
59
7.5 Parrondo's Paradox
Game B :
• We repeatedly flip a biased coin.
• The coin that is flipped depends on how you have been doing so far in the game.
• w : the number of your wins so far
l : the number of your losses so far
w - l : your winnings.
If it is negative, you have lost money.
60
7.5 Parrondo's Paradox
Game B :
• Game B uses two biased coins, coin b and coin c.
• Coin b : if your winnings in dollars are a multiple of 3, then you flip coin b. Coin b comes up heads with probability $p_b$ and tails with probability $1 - p_b$.
• Coin c : otherwise you flip coin c. Coin c comes up heads with probability $p_c$ and tails with probability $1 - p_c$.
• You win a dollar if the coin comes up heads and lose a dollar if it comes up tails.
61
7.5 Parrondo's Paradox
Example of Game B :
Coin b : comes up heads with probability $p_b = 0.09$ and tails with probability $1 - p_b = 0.91$.
Coin c : comes up heads with probability $p_c = 0.74$ and tails with probability $1 - p_c = 0.26$.
• If we used coin b for the 1/3 of the time that your winnings are a multiple of 3 and coin c for the other 2/3 of the time, then the probability w of winning would be
$w = \frac{1}{3} \cdot \frac{9}{100} + \frac{2}{3} \cdot \frac{74}{100} = \frac{157}{300} > \frac{1}{2}$ : Game B is in your favor?
• But coin b is not necessarily used 1/3 of the time!
62
7.5 Parrondo's Paradox
• Consider what happens when you first start the game, when your winnings are 0.
• You use coin b and most likely lose ($1 - p_b = 0.91$), moving to -1.
• At -1 you use coin c and most likely win ($p_c = 0.74$), moving back to 0.
• You may spend a great deal of time going back and forth between having lost one dollar and breaking even, before either winning one dollar or losing two dollars.
→ You may use coin b more than 1/3 of the time.
63
7.5 Parrondo's Paradox
How to determine if you are more likely to lose than win:
• Using the absorbing states
ⅰ By solving equations directly
ⅱ By considering sequences of moves
• Using the stationary distribution
64
7.5 Parrondo's Paradox
Analyzing the absorbing states:
•
Suppose that we start playing game B when your
winnings are 0, continuing until you either lose 3 dollars
or win 3 dollars.
•
Consider the Markov chain on the state space
consisting of the integers {-3,-2,-1,0,1,2,3}, where the
states represent your winnings.
•
We want to know, when you start at 0, whether or not
you are more likely to reach -3 before reaching 3.
65
7.5 Parrondo's Paradox
Analyzing the absorbing states:
• $z_i$ : the probability that you will end up having lost 3 dollars before having won 3 dollars when your current winnings are i dollars.
• $\{ z_{-3}, z_{-2}, z_{-1}, z_0, z_1, z_2, z_3 \}$
• If $z_0 > 1/2$ : we are more likely to lose 3 dollars than win 3 dollars starting from 0.
• $z_{-3} = 1$, $z_3 = 0$ : boundary conditions.
66
7.5 Parrondo's Paradox
Analyzing the absorbing states:
• A system of five equations with five unknowns:
$z_{-2} = (1 - p_c) z_{-3} + p_c z_{-1}$
$z_{-1} = (1 - p_c) z_{-2} + p_c z_0$
$z_0 = (1 - p_b) z_{-1} + p_b z_1$
$z_1 = (1 - p_c) z_0 + p_c z_2$
$z_2 = (1 - p_c) z_1 + p_c z_3$
• Hence it can be solved easily:
$z_0 = \frac{(1 - p_b)(1 - p_c)^2}{(1 - p_b)(1 - p_c)^2 + p_b p_c^2}$
67
7.5 Parrondo's Paradox
Analyzing the absorbing states:
• On the example of game B, the solution is
$z_0 = 15{,}379 / 27{,}700 \approx 0.555$
It shows one is much more likely to lose than win playing this game over the long run.
68
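The five-equation system can be solved mechanically. The sketch below (the small Gaussian-elimination `solve` helper is my own, not from the text) recovers $z_0$ and checks it against the closed form and the value 15,379/27,700:

```python
# Sketch: solving the absorbing-state system for game B with a tiny Gaussian
# elimination, for the example values p_b = 0.09, p_c = 0.74.
pb, pc = 0.09, 0.74

# Unknowns ordered (z_-2, z_-1, z_0, z_1, z_2); boundaries z_-3 = 1, z_3 = 0.
# Each row encodes one balance equation rewritten as A x = b.
A = [
    [1, -pc, 0, 0, 0],          # z_-2 = (1-pc)*1 + pc*z_-1
    [-(1 - pc), 1, -pc, 0, 0],  # z_-1 = (1-pc)*z_-2 + pc*z_0
    [0, -(1 - pb), 1, -pb, 0],  # z_0  = (1-pb)*z_-1 + pb*z_1
    [0, 0, -(1 - pc), 1, -pc],  # z_1  = (1-pc)*z_0 + pc*z_2
    [0, 0, 0, -(1 - pc), 1],    # z_2  = (1-pc)*z_1 + pc*0
]
b = [1 - pc, 0, 0, 0, 0]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

z = solve(A, b)
z0 = z[2]
closed = (1 - pb) * (1 - pc) ** 2 / ((1 - pb) * (1 - pc) ** 2 + pb * pc ** 2)
print(z0, closed)
```

Both routes give $z_0 \approx 0.555 > 1/2$, confirming the slide's conclusion.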
7.5 Parrondo's Paradox
Considering sequences of moves
• Consider any sequence of moves that starts at 0 and ends at 3 before reaching -3.
• Ex) s = 0,1,2,1,2,1,0,-1,-2,-1,0,1,2,1,2,3
• We create a one-to-one and onto mapping of such sequences onto the sequences that start at 0 and end at -3 before reaching 3, by negating every number starting from the last 0 in the sequence.
• Ex) f(s) = 0,1,2,1,2,1,0,-1,-2,-1,0,-1,-2,-1,-2,-3
70
7.5 Parrondo's Paradox
Lemma 7.18 : For any sequence s of moves that starts at 0 and ends at 3 before reaching -3, we have
$\frac{\Pr(s \text{ occurs})}{\Pr(f(s) \text{ occurs})} = \frac{p_b\, p_c^2}{(1 - p_b)(1 - p_c)^2}$
Proof:
• $t_1$ : the number of transitions from 0 to 1
• $t_2$ : the number of transitions from 0 to -1
• $t_3$ : the sum of the numbers of transitions from -2 to -1, -1 to 0, 1 to 2, and 2 to 3
• $t_4$ : the sum of the numbers of transitions from 2 to 1, 1 to 0, -1 to -2, and -2 to -3
71
7.5 Parrondo's Paradox
Proof:
• The probability that the sequence s occurs is
$p_b^{t_1} (1 - p_b)^{t_2}\, p_c^{t_3} (1 - p_c)^{t_4}$
72
7.5 Parrondo's Paradox
Proof:
• What happens when we transform s to f(s)?
• We change one transition from 0 to 1 into a transition from 0 to -1.
• After that point the suffix is negated: since s ends at 3 while f(s) ends at -3, f(s) has two fewer transitions of the types counted by $t_3$ and two more of the types counted by $t_4$.
• The probability that the sequence f(s) occurs is
$p_b^{t_1 - 1} (1 - p_b)^{t_2 + 1}\, p_c^{t_3 - 2} (1 - p_c)^{t_4 + 2}$
73
7.5 Parrondo's Paradox
Proof:
• Dividing the probability that s occurs by the probability that f(s) occurs,
$\frac{\Pr(s \text{ occurs})}{\Pr(f(s) \text{ occurs})} = \frac{p_b^{t_1} (1 - p_b)^{t_2} p_c^{t_3} (1 - p_c)^{t_4}}{p_b^{t_1 - 1} (1 - p_b)^{t_2 + 1} p_c^{t_3 - 2} (1 - p_c)^{t_4 + 2}} = \frac{p_b\, p_c^2}{(1 - p_b)(1 - p_c)^2}$
which proves the lemma.
74
7.5 Parrondo's Paradox
Considering sequences of moves
• Since f is a bijection and, by Lemma 7.18, every sequence has the same ratio,
$\frac{\Pr(3 \text{ is reached before } -3)}{\Pr(-3 \text{ is reached before } 3)} = \frac{\sum_{s \in S} \Pr(s \text{ occurs})}{\sum_{s \in S} \Pr(f(s) \text{ occurs})} = \frac{p_b\, p_c^2}{(1 - p_b)(1 - p_c)^2}$
S : the set of all sequences of moves that start at 0 and end at 3 before reaching -3.
• If this ratio is less than 1, then you are more likely to lose than win.
• On the example, $p_b p_c^2 = 0.09 \cdot 0.74^2 \approx 0.049$ while $(1 - p_b)(1 - p_c)^2 = 0.91 \cdot 0.26^2 \approx 0.062$, so the ratio is less than 1: one is much more likely to lose than win playing this game.
75
7.5 Parrondo's Paradox
Using the stationary distribution
• Markov chain on the states { 0, 1, 2 }
• The states represent the remainder when our winnings are divided by 3 ((w - l) mod 3).
• The probability that we win a dollar in the stationary distribution:
$p_b \pi_0 + p_c \pi_1 + p_c \pi_2 = p_b \pi_0 + p_c (1 - \pi_0) = p_c - (p_c - p_b) \pi_0$
• Check whether this is greater than or less than 1/2.
77
7.5 Parrondo's Paradox
Using the stationary distribution
• The equations for the stationary distribution:
$\pi_0 + \pi_1 + \pi_2 = 1$
$p_b \pi_0 + (1 - p_c) \pi_2 = \pi_1$
$p_c \pi_1 + (1 - p_b) \pi_0 = \pi_2$
$p_c \pi_2 + (1 - p_c) \pi_1 = \pi_0$
78
7.5 Parrondo's Paradox
Using the stationary distribution
• Since there are four equations and only three unknowns, it can be solved easily:
$\pi_0 = \frac{1 - p_c + p_c^2}{3 - 2p_c - p_b + 2 p_b p_c + p_c^2},$
$\pi_1 = \frac{1 - p_c + p_b p_c}{3 - 2p_c - p_b + 2 p_b p_c + p_c^2},$
$\pi_2 = \frac{1 - p_b + p_b p_c}{3 - 2p_c - p_b + 2 p_b p_c + p_c^2}$
79
7.5 Parrondo's Paradox
Using the stationary distribution
• If the probability of winning in the stationary distribution is less than 1/2, you lose in the long run.
• Check: is $p_c - (p_c - p_b) \pi_0 < 1/2$?
• In the example,
$p_c - (p_c - p_b) \pi_0 = \frac{86{,}421}{175{,}900} < \frac{1}{2}$
Therefore game B is a losing game in the long run.
80
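The mod-3 stationary distribution and the resulting win probability can be verified with exact rational arithmetic. A minimal sketch using the closed forms above and the example values $p_b = 0.09$, $p_c = 0.74$:

```python
# Sketch: the winnings-mod-3 chain for game B, its stationary distribution
# from the closed form above, and the stationary win probability, computed
# exactly with fractions for p_b = 0.09, p_c = 0.74.
from fractions import Fraction

pb, pc = Fraction(9, 100), Fraction(74, 100)

D = 3 - 2 * pc - pb + 2 * pb * pc + pc ** 2
pi0 = (1 - pc + pc ** 2) / D
pi1 = (1 - pc + pb * pc) / D
pi2 = (1 - pb + pb * pc) / D

# The closed form really is stationary for the mod-3 chain:
assert pi0 + pi1 + pi2 == 1
assert pb * pi0 + (1 - pc) * pi2 == pi1
assert pc * pi1 + (1 - pb) * pi0 == pi2
assert pc * pi2 + (1 - pc) * pi1 == pi0

win = pc - (pc - pb) * pi0
print(win)   # 86421/175900, just below 1/2
```

Exact fractions avoid any doubt about floating-point rounding when comparing against 1/2.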