Homework 1 - Math 468/568 solutions, Spring 15
1. (from Lawler) The Smiths receive the newspaper every morning and put
it in a pile after reading it. In the afternoon someone moves all the papers
in the pile to the recycling with probability 1/3. Also in the afternoon, if the
pile has five papers in it, then Mr. Smith moves all the papers to the recycling
(with probability 1). The number of papers in the pile in the evening can be
modelled as a Markov chain.
(a) Give the state space and the transition matrix.
Solution: There are five states, corresponding to the number of papers in the
pile in the evening, which can be 0, 1, 2, 3, or 4.


P =
  [ 1/3  2/3   0    0    0  ]
  [ 1/3   0   2/3   0    0  ]
  [ 1/3   0    0   2/3   0  ]
  [ 1/3   0    0    0   2/3 ]
  [  1    0    0    0    0  ]
(b) Suppose that in the first evening there is one paper in the pile (so X0 = 1).
Find the probabilities that 5 evenings later there are 0, 1, 2, 3, 4 papers in the
pile.
Solution: Raise P to the fifth power and extract the second row (corresponding to X0 = 1). Result:
P(X5 = 0 | X0 = 1) = 0.3333
P(X5 = 1 | X0 = 1) = 0.3539
P(X5 = 2 | X0 = 1) = 0.1481
P(X5 = 3 | X0 = 1) = 0.0988
P(X5 = 4 | X0 = 1) = 0.0658
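For reference, here is one way to carry out this computation numerically. This is only a sketch using NumPy, with P entered as in part (a).

```python
import numpy as np

# Transition matrix from part (a); states are 0, 1, 2, 3, 4 papers in the pile.
P = np.array([
    [1/3, 2/3, 0,   0,   0  ],
    [1/3, 0,   2/3, 0,   0  ],
    [1/3, 0,   0,   2/3, 0  ],
    [1/3, 0,   0,   0,   2/3],
    [1,   0,   0,   0,   0  ],
])

# Row 1 of P^5 gives P(X5 = j | X0 = 1) for j = 0, ..., 4.
P5 = np.linalg.matrix_power(P, 5)
print(P5[1])   # approximately [0.3333, 0.3539, 0.1481, 0.0988, 0.0658]
```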
(c) Again start with X0 = 1. After a long time, find the probabilities of there
being 0, 1, 2, 3, 4 papers in the pile.
Solution: Raise P to a large power (like 1000) to get the stationary distribution; all rows of the result are the same and equal the stationary distribution.
π(0) = 0.3839
π(1) = 0.2559
π(2) = 0.1706
π(3) = 0.1137
π(4) = 0.0758
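As a sketch of how this can be computed (again with P from part (a)), one can either raise P to a large power or solve πP = π directly via the left eigenvector of P for eigenvalue 1:

```python
import numpy as np

# Transition matrix from part (a).
P = np.array([
    [1/3, 2/3, 0,   0,   0  ],
    [1/3, 0,   2/3, 0,   0  ],
    [1/3, 0,   0,   2/3, 0  ],
    [1/3, 0,   0,   0,   2/3],
    [1,   0,   0,   0,   0  ],
])

# Brute force: every row of a high power of P approximates the stationary distribution.
print(np.linalg.matrix_power(P, 1000)[0])

# Alternatively, take the left eigenvector of P for eigenvalue 1 and normalize it.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print(pi)   # approximately [0.3839, 0.2559, 0.1706, 0.1137, 0.0758]
```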
2. (468) Let ξn , n = 0, 1, 2, · · · be a sequence of i.i.d. (independent, identically
distributed) random variables taking values in {0, 1, 2, · · · , N − 1}. Let
Xn = (ξn + ξn−1) mod N, for n ≥ 1,   X0 = ξ0,
Yn = max{ξ0, ξ1, · · · , ξn},
Zn = (ξ0 + ξ1 + · · · + ξn−1 + ξn) mod N.
For each process Xn , Yn , Zn , determine if it is a Markov process. Explain
your reasoning.
Solution: Discussed in class
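(Not part of the solution, but a small simulation may help make the three definitions concrete. This is only an illustrative sketch; the value of N, the number of steps, and the uniform distribution chosen for the ξi are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 5, 10
# i.i.d. xi_0, ..., xi_steps with values in {0, ..., N-1}; uniform is an arbitrary choice here.
xi = rng.integers(0, N, size=steps + 1)

X = np.empty(steps + 1, dtype=int)
X[0] = xi[0]
X[1:] = (xi[1:] + xi[:-1]) % N       # X_n = (xi_n + xi_{n-1}) mod N for n >= 1
Y = np.maximum.accumulate(xi)        # Y_n = max(xi_0, ..., xi_n)
Z = np.cumsum(xi) % N                # Z_n = (xi_0 + ... + xi_n) mod N

print(xi, X, Y, Z, sep="\n")
```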
3. (468) Suppose you have a system that is not Markovian because the
probability of jumping to state j at time n + 1 depends not just on the state
at time n but also on the state at time n − 1. This still can be described by
a Markov process by using the following idea. We use a larger state space.
In the new state space a state will consist not just of the state of the system
at time n, but also its state at time n − 1. This problem uses this idea.
Suppose that the probability it rains today depends on whether or not it
rained yesterday and the day before. If it rained both yesterday and the day
before, then it will rain today with probability 0.8. If it did not rain
either yesterday or the day before, then it will rain today with probability
0.2. In all the other cases the weather today will be the same as it was yesterday
with probability 0.6. Set up a Markov chain to describe this and find its
transition matrix P .
Solution: State space is {N N, N R, RN, RR}. Here N stands for “no rain”
and R for “rain”. We use the convention that if the state at time n is AB,
that means the weather on day n is B and the weather on day n − 1 is A.
The other convention is just as good, but you should have made it clear in
your solution what convention you follow. With the above ordering of the
states and convention,


P =
  [ 0.8  0.2   0    0  ]
  [  0    0   0.4  0.6 ]
  [ 0.6  0.4   0    0  ]
  [  0    0   0.2  0.8 ]
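The same matrix written as code, with a quick row-sum check, may serve as a sanity check. This is a sketch; the states are listed in the order NN, NR, RN, RR as above.

```python
import numpy as np

# Rows/columns in the order NN, NR, RN, RR; state AB means yesterday was A, today is B.
P = np.array([
    [0.8, 0.2, 0.0, 0.0],   # NN: no rain both days -> rain tomorrow with prob 0.2
    [0.0, 0.0, 0.4, 0.6],   # NR: tomorrow matches today (rain) with prob 0.6
    [0.6, 0.4, 0.0, 0.0],   # RN: tomorrow matches today (no rain) with prob 0.6
    [0.0, 0.0, 0.2, 0.8],   # RR: rain both days -> rain tomorrow with prob 0.8
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution
```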
3. (568) Suppose you have a system that is not Markovian because the
probability of jumping to state j at time n + 1 depends not just on the state
at time n but also on the state at time n − 1. This still can be described by
a Markov process by using the following idea. We use a larger state space.
In the new state space a state will consist not just of the state of the system
at time n, but also its state at time n − 1. Here is a problem that uses this
idea. Example 4.4. in section 4.1 of the 468 book also illustrates the idea.
The two dimensional random walk is a Markov chain defined as follows.
The state space is the set of points in the plane with integer coordinates,
i.e., all (n, m) with n, m integers. At each time step you either go up, down,
left or right with probability 1/4. Now consider a random walk which is not
allowed to revisit the site it just came from. At each step the walk can go
up, down, left or right except for the direction that would take it back to
the site it came from. Each allowable direction has probability 1/3. Explain
what state space you should use to make this a Markov chain and give the
transition probabilities. Note that the walk is allowed to revisit sites it has
been to before provided the previous visit was more than 2 time units ago.
Solution: The state needs to encode the position of the walk at the current
time and the previous time. We can determine where it was at the previous
time by just knowing its present position and the direction of the step it took
to get there. So we take the state space to be all (i, j, A) where (i, j) is the
current position and A = U, D, R, L is the direction of the step that took us
to (i, j). The transition matrix is p((i, j, A), (k, l, B)) = 1/3 if (k, l) is the
site you get by going one step in the direction B from (i, j) and B is not the
opposite direction of A. For example, suppose the current state is (0, 0, U ).
This means the walk was at (0, −1) at the previous time. So the next step
is not allowed to be D and so we have
p((0, 0, U ), (0, 1, U )) = 1/3
p((0, 0, U ), (1, 0, R)) = 1/3
p((0, 0, U ), (−1, 0, L)) = 1/3
p((0, 0, U ), (0, −1, D)) = 0
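A short sketch of these transition probabilities in code; the function name step and the direction encoding are illustrative choices, with the state stored as (i, j, A) exactly as above.

```python
import random

# Displacements for each direction; OPPOSITE[d] is the move that would undo d.
STEPS = {"U": (0, 1), "D": (0, -1), "R": (1, 0), "L": (-1, 0)}
OPPOSITE = {"U": "D", "D": "U", "R": "L", "L": "R"}

def step(state):
    """One step of the non-backtracking walk from state (i, j, last direction)."""
    i, j, last = state
    allowed = [d for d in STEPS if d != OPPOSITE[last]]   # cannot reverse the previous step
    d = random.choice(allowed)                            # each allowed direction has prob 1/3
    di, dj = STEPS[d]
    return (i + di, j + dj, d)

# Example: from (0, 0, "U") the walk moves to (0, 1, "U"), (1, 0, "R") or (-1, 0, "L").
print(step((0, 0, "U")))
```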
4. Find the communication classes of the following chains and determine if
they are transient or recurrent.

P1 =
  [  0   0.5  0.5 ]
  [ 0.5   0   0.5 ]
  [ 0.5  0.5   0  ]

P2 =
  [  0    0    0    1  ]
  [  0    0    0    1  ]
  [ 0.5  0.5   0    0  ]
  [  0    0    1    0  ]

P3 =
  [ 0.5   0   0.5   0    0  ]
  [ 0.2  0.5  0.3   0    0  ]
  [ 0.5   0   0.5   0    0  ]
  [  0    0    0   0.5  0.5 ]
  [  0    0    0   0.5  0.5 ]

P4 =
  [ 0.2  0.8   0    0    0  ]
  [ 0.5  0.5   0    0    0  ]
  [  0    0    1    0    0  ]
  [  0    0    0   0.3  0.7 ]
  [  1    0    0    0    0  ]
Solution: P1: one class and it is recurrent.
P2: one class and it is recurrent.
P3: three classes. {0, 2} is a recurrent class. {3, 4} is a recurrent class.
{1} is a transient class.
P4: four classes. {0, 1} is a recurrent class. {2} is a recurrent class. {3}
is a transient class. {4} is a transient class.
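These classifications can also be checked numerically. The sketch below (assuming NumPy and SciPy are available; the function name classify is an illustrative choice) uses the fact that the communication classes are the strongly connected components of the directed graph with an edge i → j whenever P(i, j) > 0, and that a class of a finite chain is recurrent exactly when it is closed, i.e. no transition leaves the class.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def classify(P):
    """Return each communication class of P together with 'recurrent' or 'transient'."""
    # Communication classes = strongly connected components of the graph where P[i, j] > 0.
    _, labels = connected_components(P > 0, directed=True, connection="strong")
    classes = [np.flatnonzero(labels == c) for c in np.unique(labels)]
    result = []
    for cls in classes:
        # A class of a finite chain is recurrent iff it is closed: no probability leaves it.
        closed = np.isclose(P[np.ix_(cls, cls)].sum(axis=1), 1.0).all()
        result.append((cls.tolist(), "recurrent" if closed else "transient"))
    return result

P1 = np.array([[0,   0.5, 0.5],
               [0.5, 0,   0.5],
               [0.5, 0.5, 0  ]])
print(classify(P1))   # [([0, 1, 2], 'recurrent')] -- one class, recurrent
```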