Lecture 1: simple random walk in 1-d
Today let’s talk about ordinary simple random walk to introduce ourselves to some of
the questions. Starting in d = 1:
Definition 0.1. Let Y_1, Y_2, . . . be a sequence of i.i.d. random variables defined on some probability space (Ω, F, P) with
P(Y_1 = 1) = 1/2 = P(Y_1 = −1) .
The sequence (X_n) defined by X_0 = 0 and X_n = Y_1 + · · · + Y_n for n ≥ 1 is called simple symmetric random walk (starting at 0) in one dimension.
How do we know that a space with such variables exists? Well, if we drop the independence requirement we can simply take Ω = {−1, 1} with Y_i(ω) = ω for all i and F = {∅, {−1}, {1}, Ω}, P({−1}) = P({1}) = 1/2. But of course these variables are not independent. The standard construction is to take Ω to be the space of real sequences with the sigma-algebra generated by cylinders; the variables are then constructed using Kolmogorov's extension theorem.
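None of this abstraction is needed to sample the walk, of course: finitely many steps only require finitely many fair coin flips. Here is a minimal simulation sketch in Python (our own illustration; the helper name ssrw, the seed, and the use of numpy are choices of ours, not anything from the lecture):

```python
# A minimal sketch, assuming numpy is available; the name `ssrw` is ours.
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def ssrw(n, rng=rng):
    """Return (X_0, X_1, ..., X_n) for simple symmetric random walk from 0."""
    steps = rng.choice([-1, 1], size=n)             # i.i.d. Y_1, ..., Y_n
    return np.concatenate(([0], np.cumsum(steps)))  # partial sums X_k

print(ssrw(10))
```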
Recurrence
Our first question is: is the random walk recurrent? For this, we can define the random walk started at another site. For z ∈ Z define
X_n^z = z + X_n .
Theorem 0.2. SSRW in 1-d is recurrent. That is, for each z ∈ Z,
P(X_n^z = z for some n ≥ 1) = 1 .
Proof. For z ∈ [0, n] and n ≥ 2 let
f(z) = P(X^z equals 0 before n) .
Clearly f(0) = 1 and f(n) = 0. We claim that for z ∈ [1, n − 1],
f(z) = (1/2)(f(z − 1) + f(z + 1)) .
Indeed, we just condition on the first step:
f(z) = P(X^z equals 0 before n, X_1 = 1) + P(X^z equals 0 before n, X_1 = −1) .
To estimate the first probability, we write the event as
{X equals −z before n − z} ∩ {X_1 = 1} = {X̃ equals −z − 1 before n − z − 1} ∩ {X_1 = 1} ,
where X̃ is the walk whose steps are Y_2, Y_3, . . .. By independence we can then factorize and get
(1/2) P(X̃ equals −z − 1 before n − z − 1) = (1/2) P(X^{z+1} equals 0 before n) ,
which is (1/2)f(z + 1). Similarly the other probability equals (1/2)f(z − 1), and we are done proving the claim.
The only function on [0, n] satisfying these boundary values and the averaging relation is the linear one, f(k) = 1 − k/n, so
P(X^1 equals 0 before n) = 1 − 1/n .
Taking n → ∞,
P(X^1 equals 0 eventually) ≥ 1 − 1/n → 1 ,
giving that X^1 eventually hits 0. Shifting to 0, X eventually hits −1. Now by symmetry, X eventually hits 1 as well. This can only happen if it eventually comes back to 0 (to go from −1 to 1 the ±1-step walk must pass through 0).
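As a quick sanity check of the gambler's-ruin formula f(z) = 1 − z/n (our own Monte Carlo experiment; the sample size and seed are arbitrary):

```python
# Monte Carlo estimate of f(z) = P(X^z hits 0 before n); compare to 1 - z/n.
import random

rng = random.Random(0)

def hits_zero_first(z, n):
    x = z
    while 0 < x < n:           # run until the walk exits (0, n)
        x += rng.choice((-1, 1))
    return x == 0              # True if 0 was reached before n

n, trials = 10, 20000
for z in range(1, n):
    est = sum(hits_zero_first(z, n) for _ in range(trials)) / trials
    print(z, round(est, 3), 1 - z / n)
```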
Decay of hitting probabilities
Note that for each n, every realization of the first n steps, for instance (−1, 1, 1, 1, . . . , −1), is equally likely, each having probability 2^{−n}.
Theorem 0.3. The return probability p_{2n} = P(X_{2n} = 0) satisfies
p_{2n} ∼ (πn)^{−1/2} ,
in the sense that the ratio of the two sides converges to 1. Furthermore, for all even x ∈ Z and n ≥ 1,
p_{2n}(0, x) ≤ p_{2n}(0, 0) .
A similar statement holds at odd times: p_{2n+1}(0, x) ≤ p_{2n+1}(0, 1) for odd x.
Proof. For the first, we just count and use Stirling's formula:
n! ∼ \sqrt{2\pi n}\,(n/e)^n .
Out of all sequences of length 2n there are exactly \binom{2n}{n} which add up to 0 (choose n spots for the +1's). As all sequences are equally likely and there are 2^{2n} of them, we find
p_{2n} = \binom{2n}{n} 4^{-n} = \frac{(2n)!}{n!\,n!\,4^n} ∼ \frac{\sqrt{4\pi n}\,(2n/e)^{2n}}{2\pi n\,(n/e)^{2n}\,4^n} = (\pi n)^{-1/2} .
For the second statement, note that for even x, p_{2n}(0, x) = \binom{2n}{n + x/2} 4^{-n}. For k ∈ [0, 2n] we estimate
k!\,(2n − k)! ≥ n!\,n! ,
so \binom{2n}{k} ≤ \binom{2n}{n}. This implies the result. For the last, we use \binom{2n+1}{k} ≤ \binom{2n+1}{n}.
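Both statements are easy to probe numerically. Here is a small check of the asymptotics (our own check, using Python's exact integer binomials; the values of n are arbitrary):

```python
# Exact p_{2n} = binom(2n, n) / 4^n versus the asymptotic (pi n)^{-1/2}.
from math import comb, pi, sqrt

for n in (10, 100, 1000):
    exact = comb(2 * n, n) / 4 ** n
    asym = 1 / sqrt(pi * n)
    print(n, exact, asym, exact / asym)   # the ratio tends to 1
```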
Exit times
We first solve explicitly for the distribution of the hitting time of the level 1. This will use the famous reflection principle.
Theorem 0.4. Let σ(1) be the hitting time of 1; that is,
σ(1) = min{k ≥ 0 : X_k = 1} .
Then P(σ(1) > n) = P(X_n = 0 or 1), so P(σ(1) > 2n) ∼ (πn)^{−1/2} and the expected hitting time is infinite.
Proof. We compute the probability that σ(1) ≤ n. We can partition this event according to the first time the walk hits 1 (since there must be one). We have
P(σ(1) ≤ n) = \sum_{k=1}^{n} P(σ(1) = k) .
Now we pair different walk trajectories that have the same σ(1) value. For each such trajectory, reflect the portion beyond time k about the line x = 1 (in a graph of the walk using the x axis for the position and the y axis for time). Formally, for such a walk X define X̃ by
X̃_j = \begin{cases} X_j & \text{if } j ≤ k \\ 2 − X_j & \text{if } j > k \end{cases} .
We thus see that exactly half of the walks that hit 1 at time k and end up at time n with X_n ≠ 1 lie to the right of 1 at time n and half lie to the left. Therefore
P(σ(1) = k) = P(σ(1) = k, X_n > 1) + P(σ(1) = k, X_n = 1) + P(σ(1) = k, X_n < 1)
= 2P(σ(1) = k, X_n > 1) + P(σ(1) = k, X_n = 1) .
Now we plug this back in:
P(σ(1) ≤ n) = \sum_{k=1}^{n} [2P(σ(1) = k, X_n > 1) + P(σ(1) = k, X_n = 1)]
= 2P(σ(1) ≤ n, X_n > 1) + P(σ(1) ≤ n, X_n = 1)
= 2P(X_n > 1) + P(X_n = 1)
= P(X_n < −1) + P(X_n = 1) + P(X_n > 1)
= 1 − P(X_n = 0 or 1) .
(The third equality holds because the events {X_n > 1} and {X_n = 1} force σ(1) ≤ n; the fourth uses the symmetry P(X_n > 1) = P(X_n < −1).)
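For small n the identity P(σ(1) > n) = P(X_n = 0 or 1) can be verified exactly by brute force over all 2^n step sequences; a sketch of ours:

```python
# Exhaustive check of P(sigma(1) > n) = P(X_n in {0, 1}) for small n,
# counting paths instead of probabilities (each path has weight 2^{-n}).
from itertools import product

def check(n):
    lhs = rhs = 0
    for steps in product((-1, 1), repeat=n):
        x, hit = 0, False
        for s in steps:
            x += s
            hit = hit or (x == 1)
        lhs += not hit             # paths with sigma(1) > n
        rhs += x in (0, 1)         # paths ending at 0 or 1
    return lhs, rhs

for n in range(1, 12):
    print(n, check(n))  # the two counts agree
```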
Of course we cannot get a lower bound for p_n(0, x) independent of n. This is because the walk "spreads out" as time grows. But we can give bounds that depend on time and show that X_n is typically at distance √n from the origin. In fact, one way to think about the distribution of the position of the walk at time n is that it is roughly uniformly distributed on a ball centered at the origin of radius of order √n. Outside this ball the distribution decays rapidly. Of course √n is not an exact cutoff, as P(X_n ≥ a√n) is uniformly bounded below in n for each fixed a > 0 (by, for instance, the central limit theorem), but it provides a useful heuristic.
Theorem 0.5. We have Var(X_n) = n and, since EX_n = 0, EX_n^2 = n. Furthermore, for x ≥ 0, P(X_n ≥ x) ≤ e^{−x^2/(2n)}, implying by symmetry that p_n(0, x) = P(X_n = x) satisfies
p_n(0, x) ≤ e^{−x^2/(2n)} for all x ∈ Z .
Proof. Since EY_1 = 0, we have EX_n = nEY_1 = 0. The first statement is then clear, using independence:
Var(X_n) = EX_n^2 = E(Y_1 + · · · + Y_n)^2 = \sum_{i=1}^{n} EY_i^2 + \sum_{i ≠ j} EY_i Y_j = n ,
since EY_i^2 = 1 and EY_i Y_j = EY_i EY_j = 0 for i ≠ j.
Moving on to the second result, we use the exponential Markov inequality. For any λ, s > 0 we have
P(X_n ≥ s) = P(e^{λX_n} ≥ e^{λs}) ≤ e^{−λs} Ee^{λX_n} = e^{−λs} E \prod_{i=1}^{n} e^{λY_i} = e^{−λs} (Ee^{λY_1})^n .   (1)
We can directly compute the final expectation and give a simple bound:
Ee^{λY_1} = (1/2)(e^{−λ} + e^{λ}) ≤ e^{λ^2/2} .
This last bound follows from the lemma:
Lemma 0.6. For t > 0,
(1/2)(e^t + e^{−t}) ≤ e^{t^2/2} .
Proof. Using power series,
(1/2)(e^t + e^{−t}) = (1/2)\left[\sum_{n=0}^{\infty} \frac{t^n}{n!} + \sum_{n=0}^{\infty} \frac{(−t)^n}{n!}\right] = \sum_{n\ \mathrm{even}} \frac{t^n}{n!} = \sum_{n=0}^{\infty} \frac{t^{2n}}{(2n)!} = \sum_{n=0}^{\infty} \frac{(t^2/2)^n}{(2n)!/2^n} .
But (2n)!/2^n ≥ n!, so we get an upper bound of \sum_{n=0}^{\infty} \frac{(t^2/2)^n}{n!} = e^{t^2/2}.

We now plug this back into (1) to get
P(X_n ≥ s) ≤ e^{−λs + nλ^2/2} .
Minimize this by setting λ = s/n to get
P(X_n ≥ s) ≤ e^{−s^2/(2n)} .
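To see how sharp the bound is, one can compare it with the exact binomial tail; this check is ours, with arbitrary choices of n and s:

```python
# Exact tail P(X_n >= s) versus the Chernoff bound exp(-s^2 / (2n)).
from math import comb, exp

def tail(n, s):
    # X_n = 2k - n when exactly k of the n steps are +1
    return sum(comb(n, k) for k in range(n + 1) if 2 * k - n >= s) / 2 ** n

n = 100
for s in (10, 20, 30):
    print(s, tail(n, s), exp(-s * s / (2 * n)))  # the bound dominates
```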
The above gives us a tool to find the position of the walk at time n: define the exit time
τ(n) = min{k ≥ 0 : |X_k| ≥ n} .
Corollary 0.7. With probability one, |X_n| ≤ 2\sqrt{n \log n} for all large n. Therefore τ(n) ≥ n^2/(8 \log n) for all large n.
Proof. Here we just plug into the exponential bound and use the Borel–Cantelli lemma: if (A_n) is a sequence of events such that \sum_n P(A_n) < ∞, then
P(A_n occurs for only finitely many n) = 1 .
Taking A_n = {|X_n| > 2\sqrt{n \log n}}, Theorem 0.5 gives P(A_n) ≤ 2e^{−2\log n} = 2n^{−2}, which is summable.
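As an illustration (our experiment, not part of the notes; numpy and the seed are assumptions), a single long sample path can be tested against the envelope 2\sqrt{k \log k}:

```python
# One long sample path versus the envelope 2*sqrt(k log k).
import numpy as np

rng = np.random.default_rng(1)
n = 10 ** 6
x = np.cumsum(rng.choice([-1, 1], size=n))            # X_1, ..., X_n
k = np.arange(1, n + 1)
envelope = 2 * np.sqrt(k * np.log(np.maximum(k, 2)))  # avoid log(1) = 0
violations = k[np.abs(x) > envelope]
print(violations[-5:] if violations.size else "no violations")
```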
For the other bound we need a lemma.
Lemma 0.8. For all n,
P(|X_n| > √n/2) ≥ 3/16 .
Proof. For the proof, we compute the fourth moment. A term EY_{i_1} Y_{i_2} Y_{i_3} Y_{i_4} vanishes unless its indices coincide in pairs, and there are three ways to pair four indices, so
EX_n^4 = \sum_{i_1, . . . , i_4 = 1}^{n} EY_{i_1} Y_{i_2} Y_{i_3} Y_{i_4} ≤ 3 \sum_{i_1, i_2 = 1}^{n} EY_{i_1}^2 Y_{i_2}^2 = 3n^2 .
Therefore, by Cauchy–Schwarz,
EX_n^2 = E(X_n^2 1_{X_n^2 > n/4}) + E(X_n^2 1_{X_n^2 ≤ n/4}) ≤ (EX_n^4)^{1/2} P(X_n^2 > n/4)^{1/2} + n/4 .
This gives 3/16 = (3n/4)^2/(3n^2) ≤ P(X_n^2 > n/4).
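Both the fourth-moment bound and the constant 3/16 can be checked exactly from the binomial law for moderate n; this verification is ours:

```python
# Exact check from the binomial law: EX_n^4 = 3n^2 - 2n <= 3n^2, and
# P(X_n^2 > n/4) stays above 3/16 (it tends to P(|Z| > 1/2) ~ 0.617).
from math import comb

def fourth_moment_and_tail(n):
    law = [(2 * k - n, comb(n, k) / 2 ** n) for k in range(n + 1)]
    m4 = sum(p * x ** 4 for x, p in law)
    tail = sum(p for x, p in law if x * x > n / 4)
    return m4, tail

for n in (4, 16, 64, 256):
    m4, tail = fourth_moment_and_tail(n)
    print(n, m4, 3 * n * n, round(tail, 3))
```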
Theorem 0.9. There exists c > 0 such that with probability one, τ(n) ≤ cn^2 \log n for all large n.
Proof. We split the interval [0, cn^2 \log n] into \lfloor (c/16) \log n \rfloor intervals of length 16n^2. Let
R_i = |Y_{16in^2 + 1} + · · · + Y_{16(i+1)n^2}| .
If τ(n) ≥ cn^2 \log n, the walk stays inside (−n, n) up to that time, so every block increment satisfies R_i ≤ 2n. The R_i are independent, and Lemma 0.8 applied to the 16n^2 steps of a block (note \sqrt{16n^2}/2 = 2n) gives P(R_i ≤ 2n) ≤ 13/16. Therefore
P(τ(n) ≥ cn^2 \log n) ≤ P(R_i ≤ 2n for all i = 0, . . . , \lfloor (c/16) \log n \rfloor − 1)
= P(R_0 ≤ 2n)^{\lfloor (c/16) \log n \rfloor}
≤ (16/13) \exp((c/16) \log(13/16) \log n) ≤ n^{−2}
if c > 32/\log(16/13) and n is large. This is summable, so Borel–Cantelli finishes the proof.
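A small simulation (ours; seed and trial counts arbitrary) shows the n^2 scaling of τ(n). In fact Eτ(n) = n^2 exactly, a standard optional-stopping fact, though the theorem above only needs the cruder almost-sure bounds:

```python
# Mean exit time of [-n, n] over a few trials; the scaling is n^2.
import random

rng = random.Random(2)

def exit_time(n):
    x, t = 0, 0
    while abs(x) < n:            # run until the walk leaves (-n, n)
        x += rng.choice((-1, 1))
        t += 1
    return t

for n in (10, 20, 40):
    trials = 200
    mean = sum(exit_time(n) for _ in range(trials)) / trials
    print(n, mean, mean / n ** 2)   # ratio hovers near 1
```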
Compare this last result to the earlier result on hitting a single level. The probability that the walk does not leave the strip [−n, n] by time cn^2 \log n is much smaller: the corresponding probability for the half-line (−∞, n] is actually of order no smaller than 1/\sqrt{n} (with logarithms). Obviously it is much harder to trap the walk in a bounded set.