STRONG APPROXIMATION OF BROWNIAN MOTION
LIYU XIA
Abstract. The simple random walk and Brownian motion are two strongly interconnected mathematical concepts. They appear widely not only in pure mathematics but also in many other scientific fields. In this paper I first introduce and define some basic concepts of the discrete-time random walk. I then construct Brownian motion and establish some of its basic properties, and use a method called the strong approximation of Brownian motion to show that the simple random walk indeed converges to standard Brownian motion. Finally, I give an application of this fact and demonstrate how viewing Brownian motion as a limit of the simple random walk can be convenient, since it is often easier to work with Brownian motion than with simple random walks.
Contents
1. Introduction and Definitions
2. Construction of Brownian Motion
3. Skorokhod embedding
4. Application: long range estimate of intersection probabilities
Acknowledgments
References
1. Introduction and Definitions
The random walk is a widely used mathematical object. To imagine its simplest form, the simple random walk in one dimension, one can think of a random walker starting from the origin and flipping a coin at each step. The walker moves one step in the +1 direction if the coin lands heads; otherwise it moves one step in the −1 direction.
Now that we have some basic intuition for simple random walks, we need to introduce some definitions and notation in order to formalize the topic. In this paper, we use lower case letters x, y, z to denote points of the integer lattice Z^d = {(x^1, . . . , x^d) : x^j ∈ Z}. We use superscripts to indicate components, and we use subscripts to enumerate elements. For example, x_j = (x^1_j, . . . , x^d_j) is a point in Z^d decomposed into its individual components, and x_1, x_2, . . . denotes a sequence of points in Z^d. In addition, we write e_1 = (1, 0, . . . , 0), . . . , e_d = (0, . . . , 0, 1) for the standard basis of unit vectors in Z^d.
The simple random walk starting at x ∈ Z^d can be regarded as the sequence of partial sums of independent, identically distributed random variables,

S_n = x + X_1 + · · · + X_n,

with P{X_i = e_k} = P{X_i = −e_k} = 1/(2d), k = 1, . . . , d. With this basic idea in mind, we can extend the simple random walk to a more general class of random walks, formulating the definition of a random walk in more than one dimension and with various probability measures.
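As a concrete illustration (a minimal sketch of my own, not part of the formal development; the function name and parameters are illustrative), the following Python code samples such a simple random walk in Z^d:

import random

def simple_random_walk(d, n, x=None):
    """Sample n steps of the simple random walk on Z^d started at x.

    Each step is +e_k or -e_k with probability 1/(2d) each.
    Returns the list [S_0, S_1, ..., S_n].
    """
    pos = list(x) if x is not None else [0] * d
    path = [tuple(pos)]
    for _ in range(n):
        k = random.randrange(d)           # pick a coordinate direction
        pos[k] += random.choice((1, -1))  # step +e_k or -e_k
        path.append(tuple(pos))
    return path

# Example: a 1000-step walk in Z^2 started at the origin.
print(simple_random_walk(2, 1000)[-1])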
We let V = {x_1, . . . , x_n} ⊂ Z^d \ {0} be a spanning set of Z^d, and we let G denote the collection of such spanning sets V with the property that the first nonzero entry of each x ∈ V is positive.
Definition 1.1. A finite range, symmetric, d-dimensional random walk is defined by specifying a V ∈ G and a function f : V → (0, 1] with f(x_1) + · · · + f(x_n) ≤ 1. The corresponding probability distribution on Z^d is

p(x_k) = p(−x_k) = (1/2) f(x_k),    p(0) = 1 − Σ_{x∈V} f(x).
We use P_d to denote the set of such distributions p on Z^d, and let P = ∪_{d≥1} P_d. Additionally, we will write P-walk or P_d-walk for such a random walk. In particular, for the simple random walk the probability distribution is

p(e_j) = p(−e_j) = 1/(2d),    j = 1, . . . , d.

In addition, given p ∈ P_d, we use p_k to denote the k-step distribution,

p_k(x, y) = P{S_k = y | S_0 = x}.
Before moving on with the paper, it is useful to introduce some further concepts concerning random walks, which will make the later discussion more convenient.
Definition 1.2. Suppose S_n is a P_d-walk with S_0 = 0. S_n is called bipartite if p_n(0, 0) = 0 for all odd n, that is, if the random walker always needs an even number of steps to return to the origin. On the other hand, S_n is called aperiodic if p_n(0, 0) > 0 for all sufficiently large n.
For future convenience, it is also useful to introduce the discrete-time version of the Reflection Principle (there is also a continuous-time version, but we will not need it).
Proposition 1.3 (Reflection Principle). Suppose S_n is a random walk with increment distribution p ∈ P_d, starting at the origin. If u ∈ R^d is a unit vector and c > 0, then

P{max_{0≤j≤n} S_j · u ≥ c} ≤ 2 P{S_n · u ≥ c}.
Proof. We first fix n ∈ N and a unit vector u. Define γ = γ_{n,c} to be the smallest j such that S_j · u ≥ c. An important observation is that

∪_{j=1}^{n} {γ = j; (S_n − S_j) · u ≥ 0} ⊂ {S_n · u ≥ c},

because on the event {γ = j} we have S_j · u ≥ c, so if additionally (S_n − S_j) · u ≥ 0, then S_n · u ≥ c. Moreover, since p ∈ P_d is symmetric, for any 0 ≤ j ≤ n we have P{(S_n − S_j) · u ≥ 0} ≥ 1/2. Therefore, since the event {γ = j} is independent of S_n − S_j,

P{γ = j; (S_n − S_j) · u ≥ 0} ≥ (1/2) P{γ = j}.
Thus, we can conclude

P{S_n · u ≥ c} ≥ Σ_{j=1}^{n} P{γ = j; (S_n − S_j) · u ≥ 0} ≥ (1/2) Σ_{j=1}^{n} P{γ = j} = (1/2) P{max_{0≤j≤n} S_j · u ≥ c}.
The Reflection Principle can often help us reduce the complexity of the problem.
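To make the inequality concrete, here is a quick Monte Carlo sanity check of Proposition 1.3 for the one-dimensional simple random walk (taking u = 1); this is a sketch of my own, and all names and parameters are illustrative:

import random

def check_reflection(n=100, c=5, trials=20000):
    """Empirically compare P{max_j S_j >= c} with 2 P{S_n >= c}
    for a one-dimensional simple random walk (Proposition 1.3 with u = 1)."""
    hit_max = hit_end = 0
    for _ in range(trials):
        s, m = 0, 0
        for _ in range(n):
            s += random.choice((1, -1))
            m = max(m, s)
        hit_max += (m >= c)
        hit_end += (s >= c)
    lhs = hit_max / trials
    rhs = 2 * hit_end / trials
    print(f"P(max >= {c}) ~ {lhs:.4f}  <=  2 P(S_n >= {c}) ~ {rhs:.4f}")

check_reflection()

Up to sampling noise, the empirical left-hand side should not exceed the right-hand side.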
2. Construction of Brownian Motion
Now that we have some elementary knowledge of random walk, we introduce the
concept of Brownian motion.
Let Sn = X1 + · · · + Xn be a one-dimensional simple random walk. We first
transform this process into a continuous function by linear interpolation, i.e.
St = Sn + (t − n)[Sn+1 − Sn ], n ≤ t ≤ n + 1.
According to the Central Limit Theorem, n^{−1/2} S_n converges in distribution to a standard normal. In fact, a simple extension of the central limit theorem tells us that if we let 0 < t_1 < t_2 < · · · < t_k = 1, then as n → ∞,

n^{−1/2} S_{t_i n} ⇒ N(0, t_i),    i = 1, . . . , k.
Now define the random function

(2.1) W^{(n)}_t := n^{−1/2} S_{tn}.

According to the functional central limit theorem, as n → ∞, the distribution of W^{(n)}_t converges to the distribution of a random function t → B_t. Based on our knowledge of the simple random walk, we can predict some of the properties of this limiting random function B_t:
• If s < t, the distribution of Bt − Bs is N (0, t − s).
• If 0 ≤ t0 < t1 < · · · < tk , then Bt1 − Bt0 , . . . , Btk − Btk−1 are independent
random variables (the increments are independent).
A somewhat less obvious property is:
• The function t → Bt is continuous.
In order to show that Brownian motion is indeed the limit of this random function W^{(n)}_t, which is defined from the simple random walk, we use a method called the strong approximation of Brownian motion. First we need to construct Brownian motion, B_t, with the properties listed above.
With these properties, we can start constructing Brownian motion. Although it can be difficult to define random variables B_t indexed over an uncountable set, that is, a real interval, we can first define them on a countable set and then extend the definition to real intervals by continuity.
Suppose (Ω, F, P) is a probability space that carries a countable collection of independent standard normal random variables, denoted by

N_{k,n},    n = 0, 1, . . . ; k = 0, 1, . . . .

We will use these normal random variables when defining a Brownian motion on the probability space (Ω, F, P).
Now we define a countable set of dyadic numbers:

D_n := {k/2^n : k = 0, 1, . . . },    D = ∪_{n=0}^{∞} D_n.
Then we follow our strategy: we first define B_t for t ∈ D so that the first two properties are satisfied; we then prove uniform continuity of B_t by quantifying the oscillation of B_t on compact intervals; and finally we extend the definition of B_t from D to real intervals.
When defining B_t over t ∈ D, we need several properties of normal random variables as well. Suppose M, N are independent N(0, 1) random variables. Then

(2.2) X := M/2 + N/2,    Y := M/2 − N/2

are independent N(0, 1/2) random variables, since

Var(X) = E[X²] − E[X]² = E[M²/4] + E[MN/2] + E[N²/4] = (1/4)Var(M) + (1/4)Var(N) = 1/2,

where E[MN] = 0 by independence. Calculating similarly, we have Var(Y) = 1/2; and since X, Y are jointly normal with Cov(X, Y) = (1/4)Var(M) − (1/4)Var(N) = 0, they are indeed independent.
Then we set out defining B_t for t ∈ D_0:

B_0 = 0;    B_j = N_{0,1} + · · · + N_{0,j},    j = 1, 2, . . . .

We then define B_t recursively: for t = (2k + 1)/2^{n+1} ∈ D_{n+1} \ D_n,

B_{(2k+1)/2^{n+1}} = B_{k/2^n} + (1/2)[B_{(k+1)/2^n} − B_{k/2^n}] + 2^{−(n+2)/2} N_{2k+1,n+1},

which is the same as

B_{(2k+1)/2^{n+1}} − B_{2k/2^{n+1}} = (1/2)[B_{(k+1)/2^n} − B_{k/2^n}] + 2^{−(n+2)/2} N_{2k+1,n+1}.

By induction, we can see that the random variables ∆_{k,n} := B_{k/2^n} − B_{(k−1)/2^n} are independent, with distribution N(0, 2^{−n}). Therefore, we have successfully constructed B_t satisfying the first two properties for t ∈ D.
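The recursion above is easy to implement. The following Python sketch (my own illustration, assuming numpy; the variable names are not from the text) generates B_t at the dyadic points of D_levels ∩ [0, 1]:

import numpy as np

def dyadic_brownian(levels, rng=None):
    """Construct B_t at the dyadic points k/2^levels in [0, 1], following
    B_{(2k+1)/2^{n+1}} = (B_{k/2^n} + B_{(k+1)/2^n})/2 + 2^{-(n+2)/2} N_{2k+1,n+1}."""
    rng = rng or np.random.default_rng()
    B = np.array([0.0, rng.standard_normal()])   # B_0 = 0, B_1 = N_{0,1}
    for n in range(levels):
        mid = (B[:-1] + B[1:]) / 2.0             # averages of dyadic neighbours
        noise = 2 ** (-(n + 2) / 2) * rng.standard_normal(mid.size)
        new = np.empty(2 * B.size - 1)
        new[0::2] = B                            # old points k/2^n
        new[1::2] = mid + noise                  # new points (2k+1)/2^{n+1}
        B = new
    return B                                     # B[k] = B_{k / 2^levels}

B = dyadic_brownian(10)
print(B.size, B[-1])   # 2^10 + 1 values; B_1 is N(0, 1)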
We define the oscillation of Bt (still restricted to t ∈ D) by:
osc(B; δ, T ) := sup{|Bt − Bs | : s, t ∈ D; 0 ≤ s, t ≤ T ; |s − t| ≤ δ}.
In addition, define
M_n := max_{0≤k<2^n} sup{|B_{t+k2^{−n}} − B_{k2^{−n}}| : t ∈ D; 0 ≤ t ≤ 2^{−n}}.

We introduce M_n because it is easier to deal with. We write osc(B; δ) for osc(B; δ, 1) for convenience of notation. Note that, by the triangle inequality, if r ≤ 2^{−n},

(2.3) osc(B; r) ≤ osc(B; 2^{−n}) ≤ 3M_n.
Lemma 2.4. For n ∈ N and δ > 0,

P{M_n > δ2^{−n/2}} ≤ 4 √(2/π) (2^n/δ) e^{−δ²/2}.
Proof. The probability of a union of events is at most the sum of their probabilities. By this union bound, the stationarity of the increments, and scaling, we have

P{M_n > δ2^{−n/2}} ≤ 2^n P{sup_{0≤t≤2^{−n}} |B_t| > δ2^{−n/2}} = 2^n P{sup_{0≤t≤1} |B_t| > δ}.
Remember that we have only defined B_t over the countable set of dyadic numbers. Therefore we also have

P{sup_{0≤t≤1} |B_t| > δ} = lim_{m→∞} P{max{|B_{k2^{−m}}| : k = 1, . . . , 2^m} > δ}
 ≤ 2 lim_{m→∞} P{max{B_{k2^{−m}} : k = 1, . . . , 2^m} > δ}.
Then we use Proposition 1.3 (Reflection Principle), together with the bound e^{−x²/2} ≤ e^{−xδ/2} for x ≥ δ:

P{max{B_{k2^{−m}} : k = 1, . . . , 2^m} > δ} ≤ 2 P{B_1 > δ}
 = 2 ∫_δ^∞ (1/√(2π)) e^{−x²/2} dx
 ≤ 2 ∫_δ^∞ (1/√(2π)) e^{−xδ/2} dx
 = 2 √(2/π) δ^{−1} e^{−δ²/2}.

Combining the three displays gives the lemma.
Proposition 2.5. There exists c > 0 such that for all 0 < δ ≤ 1, r ≥ 1, and every positive integer T,

P{osc(B; δ, T) > cr √(δ log(1/δ))} ≤ cT δ^{r²}.
Proof. In fact, it suffices to prove the statement for T = 1, since for other T we can cover [0, T] by the 2T − 1 overlapping unit intervals [0, 1], [1/2, 3/2], . . . , [T − 1, T] and estimate the oscillation over each of them. Moreover, we only need to prove the bound for δ ≤ 1/4, since only small values of δ are of interest. Suppose that 2^{−n−1} ≤ δ ≤ 2^{−n}. By (2.3) we have

P{osc(B; δ) > cr √(δ log(1/δ))} ≤ P{M_n > (cr/(3√2)) √(2^{−n} log(1/δ))}.

According to Lemma 2.4, if c is chosen to be sufficiently large, the right-hand side is bounded by a constant times

exp{−(1/2) (c²r²/18) log(1/δ)}

(the prefactor 2^n/δ' from Lemma 2.4 contributes at most another power of 1/δ, which can be absorbed by enlarging c), and this is just a constant times δ^{r²} once c is large enough.
Corollary 2.6. With probability one, for every integer T < ∞, the function t → B_t, t ∈ D, is uniformly continuous on [0, T].

Proof. It is equivalent to prove that osc(B; 2^{−n}, T) → 0 as n → ∞. Proposition 2.5 shows that there exists a constant c_1 such that

P{osc(B; 2^{−n}, T) > c_1 2^{−n/2} √n} ≤ c_1 T 2^{−n}.
Since the infinite series Σ_{n=1}^∞ 2^{−n} converges, we also have

Σ_{n=1}^∞ P{osc(B; 2^{−n}, T) > c_1 2^{−n/2} √n} < ∞.

According to Borel–Cantelli, with probability one, osc(B; 2^{−n}, T) ≤ c_1 2^{−n/2} √n for all n sufficiently large.
Now that we have proved the uniform continuity of the function t → B_t on D, we can extend our definition of B_t from the dyadic numbers to real intervals. For t ∉ D, define

B_t := lim_{t_n→t} B_{t_n},

where {t_n} is a sequence in D that converges to t. One checks that this extension satisfies the three properties listed for Brownian motion. Due to continuity, we can now write

osc(B; δ, T) = sup{|B_t − B_s| : 0 ≤ s, t ≤ T; |s − t| ≤ δ}.
Moreover, an important fact about the scaling of Brownian motion is that if B_t is a standard Brownian motion, then for every a > 0, Y_t := a^{−1/2} B_{at} is also a standard Brownian motion. Another important fact about Brownian motion is the following theorem.
Theorem 2.7 (Modulus of continuity of Brownian motion). Suppose B_t is a standard Brownian motion. There exists a c < ∞ such that for 0 < δ ≤ 1, r ≥ c, and T ≥ 1,

P{osc(B; δ, T) > r √(δ log(1/δ))} ≤ cT δ^{(r/c)²}.

Moreover, if T > 0, then osc(B; δ, T) has the same distribution as √T osc(B; δ/T). In particular, if T ≥ 1,

(2.8) P{osc(B; 1, T) > r √(log T)} = P{osc(B; 1/T) > r √((1/T) log T)} ≤ cT^{−(r/c)²}.
This theorem gives us a strong tool to bound the oscillation of the Brownian motion, which will be very helpful when we bound the difference between the Brownian motion and the simple random walk, as we break this difference into several parts.
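As a numerical illustration of Theorem 2.7 (a sketch of my own; the grid approximation only lower-bounds the true supremum, and all parameters are arbitrary), one can estimate osc(B; δ, 1) on a discretized path and compare it with √(δ log(1/δ)):

import numpy as np

def osc_estimate(B, dt, delta):
    """Approximate osc(B; delta, 1) = sup{|B_t - B_s| : |s - t| <= delta}
    on a grid path B (B[i] ~ B_{i*dt}) via a sliding-window max/min."""
    w = max(1, int(delta / dt))              # window width for |s - t| <= delta
    m = 0.0
    for i in range(len(B) - w):
        block = B[i : i + w + 1]
        m = max(m, block.max() - block.min())
    return m

rng = np.random.default_rng(1)
dt = 1e-4
B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(10_000))])
for delta in (0.1, 0.01, 0.001):
    ratio = osc_estimate(B, dt, delta) / np.sqrt(delta * np.log(1 / delta))
    print(f"delta = {delta}: osc / sqrt(delta log(1/delta)) ~ {ratio:.2f}")

The printed ratios should stay of order one as δ shrinks, which is the content of the modulus-of-continuity bound.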
3. Skorokhod embedding
Now that we have finished the construction of Brownian Motion, we can use a
procedure called Skorokhod embedding to produce a random walk Sn from a Brownian motion Bt . In fact, the idea is very straightforward. We start the Brownian
motion and wait until it reaches +1 or -1. If it hits +1, then we set S1 = 1; if it hits
-1, we set S1 = −1. Then we again wait until the new increment of the Brownian
Motion reaches +1 or -1, and use this value for the new increment of the random
walk.
To make the idea more formal, let B_t be a standard one-dimensional Brownian motion, and define

γ := inf{t ≥ 0 : |B_t| = 1}.

By symmetry, it is obvious that P{B_γ = 1} = P{B_γ = −1} = 1/2.
More generally, let γ_0 = 0, and

γ_n := inf{t ≥ γ_{n−1} : |B_t − B_{γ_{n−1}}| = 1}.

Then S_n := B_{γ_n} has the same distribution as a one-dimensional simple random walk. In order to make the simple random walk continuous over real intervals, we again use linear interpolation for t ∉ N. Now that we have established the connection between these two random processes, we can study the Brownian motion and the simple random walk it induces side by side. This connection greatly eases our study of the convergence of the simple random walk to Brownian motion.
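The following Python sketch (my own illustration; since the Brownian path is simulated on a grid, the exit times are only approximate) implements this embedding and checks that the spacings γ_n − γ_{n−1} average about 1, the expected exit time of [−1, 1]:

import numpy as np

def embed_walk(B, dt):
    """Skorokhod embedding: given a discretized Brownian path B (B[i] ~ B_{i*dt}),
    return embedding times gamma_n and the walk S_n = B_{gamma_n}, where gamma_n
    is the first time after gamma_{n-1} that |B_t - B_{gamma_{n-1}}| reaches 1."""
    gammas, walk, last = [0.0], [0.0], 0.0
    for i, b in enumerate(B):
        if abs(b - last) >= 1.0:
            gammas.append(i * dt)
            last += 1.0 if b > last else -1.0   # the embedded +-1 step
            walk.append(last)
    return np.array(gammas), np.array(walk)

rng = np.random.default_rng(0)
dt = 1e-3
B = np.cumsum(np.sqrt(dt) * rng.standard_normal(500_000))   # B_t on [0, 500]
gammas, S = embed_walk(B, dt)
print(len(S) - 1, "embedded steps; mean spacing ~", np.diff(gammas).mean().round(3))
# For exact Brownian motion, E[gamma_n - gamma_{n-1}] = 1 (exit time of [-1, 1]).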
Define
Θ(B, S; n) := max{|Bt − St | : 0 ≤ t ≤ n}.
Suppose j ≤ t < j + 1 ≤ n. Using the triangle inequality, we have

|B_t − S_t| ≤ |S_t − S_j| + |B_j − B_t| + |B_j − S_j| ≤ 1 + osc(B; 1, n) + |B_j − B_{γ_j}|.

Therefore, if n is an integer,

(3.1) Θ(B, S; n) ≤ 1 + osc(B; 1, n) + max{|B_j − B_{γ_j}| : j = 1, . . . , n}.
We can estimate the value of the second term using the previous theorem on the
Modulus of continuity of Brownian Motion. Thus we will mainly focus on the
estimate of the last term.
Before proving the main theorem, we still need one corollary about martingales, a crucial concept in probability.
Corollary 3.2. Let X_1, X_2, . . . be independent, identically distributed random variables in R with mean zero and variance σ², such that for some δ > 0 the moment generating function γ(t) = E[e^{tX_j}] exists for |t| < δ. Let S_n = X_1 + · · · + X_n. There is an ε_0 > 0 such that for all n ∈ N and 0 ≤ r ≤ ε_0 √n,

(3.3) P{max_{0≤j≤n} S_j ≥ rσ√n} ≤ e^{−r²/2} exp{O(r³/√n)}.
We will not provide the proof of this corollary in this paper; for its proof, see the appendix of [1]. Then comes the main theorem, which realizes the strong approximation of Brownian motion.
Theorem 3.4. There exist c_1 > 0 and a < ∞ such that for all r ≤ n^{1/4} and all integers n ≥ 3,

P{Θ(B, S; n) > r n^{1/4} √(log n)} ≤ c_1 e^{−ar}.
Proof. It is sufficient to prove the theorem for r ≥ 9c_0², where c_0 is the constant c from Theorem 2.7. This is because if we choose c_1 ≥ e^{9ac_0²}, the result holds trivially for r < 9c_0². Suppose 9c_0² ≤ r ≤ n^{1/4}. If |B_n − B_{γ_n}| is large, then either |n − γ_n| is large or the oscillation of B is large. More precisely, by the upper bound (3.1) on Θ(B, S; n), the event {Θ(B, S; n) > r n^{1/4} √(log n)} is contained in the union of the two events

{osc(B; r√n, 2n) ≥ (r/3) n^{1/4} √(log n)},

{max_{1≤j≤n} |γ_j − j| ≥ r√n}.

Indeed, if neither event happens, then each of the three terms on the right-hand side of (3.1) is bounded by (r/3) n^{1/4} √(log n).
According to Theorem 2.7, for 1 ≤ r ≤ n^{1/4} we have

P{osc(B; r√n, 2n) > (r/3) n^{1/4} √(log n)} ≤ 3 P{osc(B; r√n, n) > (r/3) n^{1/4} √(log n)}
 = 3 P{osc(B; rn^{−1/2}) > (r/3) n^{−1/4} √(log n)}
 ≤ 3 P{osc(B; rn^{−1/2}) > (√r/3) √(rn^{−1/2} log(n^{1/2}/r))}.

If √r/3 ≥ c_0 and r ≤ n^{1/4}, then according to Theorem 2.7 there exist c, a such that

P{osc(B; rn^{−1/2}) > (√r/3) √(rn^{−1/2} log(n^{1/2}/r))} ≤ c e^{−ar log n}.
For the second event, consider the martingale

M_j := γ_j − j.

Applying Corollary 3.2 to M_j and −M_j (the increments γ_j − γ_{j−1} − 1 are i.i.d. with mean zero), we see that there exist c, a such that

(3.5) P{max_{1≤j≤n} |γ_j − j| ≥ r√n} ≤ c e^{−ar²}.

Note that this proof actually gives a stronger upper bound of c[e^{−ar²} + e^{−ar log n}] compared to our stated goal of c_1 e^{−ar}. Since these bounds become summable in n once r is allowed to grow slowly with n (for instance r = C√(log n) with C large), Borel–Cantelli yields that with probability one, Θ(B, S; n) ≤ C n^{1/4} log n for all n sufficiently large. Now we take a step further and show that the random function defined in (2.1) indeed converges to Brownian motion; this closes our proof of the statement that the simple random walk converges to standard Brownian motion as the time step goes to zero.
First, let B_t be a standard Brownian motion. For each positive integer n, define B^{(n)}_t := n^{1/2} B_{t/n}; by the scaling property, B^{(n)} is again a standard Brownian motion. Let S^{(n)} be the corresponding simple random walk derived from B^{(n)}_t using the Skorokhod embedding. Then, as we have proved above, for every positive integer T,

P{max_{0≤t≤Tn} |S^{(n)}_t − B^{(n)}_t| ≥ r (Tn)^{1/4} √(log(Tn))} ≤ c e^{−ar}.

Now we let

W^{(n)}_t := n^{−1/2} S^{(n)}_{tn},

just as we defined this random function in (2.1). Since n^{−1/2} B^{(n)}_{tn} = B_t, shifting the common factor n^{−1/2} out of the absolute value gives

P{max_{0≤t≤T} |W^{(n)}_t − B_t| ≥ r T^{1/4} n^{−1/4} √(log(Tn))} ≤ c e^{−ar}.

If we let r = c_1 log n, where c_1 = c_1(T) is chosen sufficiently large,

P{max_{0≤t≤T} |W^{(n)}_t − B_t| ≥ c_1 n^{−1/4} (log n)^{3/2}} ≤ c_1 n^{−2}.

Therefore, by Borel–Cantelli, we know that with probability one, W^{(n)} converges to B in the metric space C[0, T].
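To see the coupling quantitatively, here is a numerical sketch of my own (grid discretization and the truncated simulation horizon introduce small errors) that approximates Θ(B, S; n) at integer times and normalizes by n^{1/4}√(log n), the scale appearing in Theorem 3.4; the normalized values should stay of order one as n grows:

import numpy as np

def theta_statistic(n, dt=1e-2, rng=None):
    """Approximate Theta(B, S; n) = max_{0<=t<=n} |B_t - S_t| at integer times,
    where S is the walk Skorokhod-embedded in B, normalized by n^{1/4} sqrt(log n)."""
    rng = rng or np.random.default_rng()
    steps = int(2.5 * n / dt)                 # simulate B_t on [0, 2.5 n]
    B = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(steps))])
    S, last = [0.0], 0.0                      # embedded walk: S_k = B_{gamma_k}
    for b in B:
        if abs(b - last) >= 1.0:
            last += 1.0 if b > last else -1.0
            S.append(last)
            if len(S) > n:                    # we only need S_0, ..., S_n
                break
    S = np.array(S)
    Bj = B[(np.arange(len(S)) / dt).round().astype(int)]   # B at integer times
    theta = np.abs(Bj - S).max()
    return theta / (n ** 0.25 * np.sqrt(np.log(n)))

for n in (100, 1000, 10000):
    print(n, round(theta_statistic(n), 2))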
Also, we can go a step further and extend the result to higher dimensions. This extension greatly broadens the settings in which we can work.
Definition 3.6. An n-dimensional Brownian motion with covariance matrix Γ and filtration F_t is a collection of random variables B_t, t ≥ 0, such that:
• B_0 = 0.
• If s < t, then B_t − B_s is an F_t-measurable random variable in R^n, independent of F_s, with a joint normal distribution of mean zero and covariance matrix (t − s)Γ.
• With probability one, t → B_t is a continuous function.
In particular, a standard n-dimensional Brownian motion is of the form

B_t = (B^{(1)}_t, . . . , B^{(n)}_t),

where B^{(1)}, . . . , B^{(n)} are independent one-dimensional standard Brownian motions, so that the covariance matrix is the identity.
Lemma 3.7. Suppose B^{(1)}, . . . , B^{(n)} are independent one-dimensional standard Brownian motions, and v_1, . . . , v_n ∈ R^d. Then

B_t := B^{(1)}_t v_1 + · · · + B^{(n)}_t v_n

is a Brownian motion in R^d with covariance matrix Γ = AAᵀ, where A = [v_1 v_2 . . . v_n].
The proof for this lemma is straightforward.
4. Application: long range estimate of intersection probabilities
One of the great consequences of the fact that Brownian motion is a limit of the simple random walk is that whenever we want to study certain properties of Brownian motion, we can first study the simple random walk and then pass to the limit. If we want to explore the intersection properties of Brownian motions, we only need to look at the corresponding problem for simple random walks, which are often easier to work with, and then take the limit. For convenience of notation, we write S[n_1, n_2] to denote {S_n : n_1 ≤ n ≤ n_2} when S_n is a simple random walk. Before we can state the main theorem on intersection probabilities, we need some lemmas and new concepts.
Lemma 4.1. Let X be a nonnegative random variable with E[X²] < ∞, and let 0 < r < 1. Then

P{X ≥ rE(X)} ≥ (1 − r)² E(X)² / E(X²).
Proof. Without loss of generality, assume that E(X) = 1. Clearly E[X; X < r] ≤ r; therefore E[X; X ≥ r] ≥ 1 − r. Also, by Jensen's inequality applied to the conditional distribution,

E(X²) ≥ E[X²; X ≥ r] = P{X ≥ r} E[X² | X ≥ r]
 ≥ P{X ≥ r} (E[X | X ≥ r])²
 = (E[X; X ≥ r])² / P{X ≥ r}
 ≥ (1 − r)² / P{X ≥ r}.
In order to fully address the problem of intersection probabilities of random walks and Brownian motions, we also need to introduce the concept of the Green's function.
Definition 4.2. For p ∈ P_d with d ≥ 3 (that is, we only consider the case of transient random walks), the Green's function G(x, y) = G(y, x) = G(y − x) is given by

G(x) = Σ_{n=0}^∞ p_n(x) = E[Σ_{n=0}^∞ 1{S_n = x}] = E^x[Σ_{n=0}^∞ 1{S_n = 0}].
The following lemma on Green’s function will be useful, but its proof is beyond
the scope of this paper.
Lemma 4.3. For all α > 0, there exist c, r such that for all n sufficiently large, we have

P{Σ_{j=0}^n G(S_j) ≤ r log n} ≤ c n^{−α}.
Now we are fully equipped to explore the intersection probabilities of random walks and Brownian motions.
Proposition 4.4. If p ∈ P_d, there exist c_1, c_2 such that for all n ≥ 2,

c_1 φ(n) ≤ P{S[0, n] ∩ S[2n, 3n] ≠ ∅} ≤ P{S[0, n] ∩ S[2n, ∞) ≠ ∅} ≤ c_2 φ(n),

where

(4.5) φ(n) = 1 for d < 4;    φ(n) = (log n)^{−1} for d = 4;    φ(n) = n^{(4−d)/2} for d > 4.
Proof. The upper and lower bounds for d ≤ 3 are obvious, so we focus on the case d > 3. We assume that the random walk is aperiodic (the bipartite case requires only minor changes in the proof). We let

J_n = Σ_{i=0}^n Σ_{j=2n}^{3n} 1{S_i = S_j},    K_n = Σ_{i=0}^n Σ_{j=2n}^∞ 1{S_i = S_j}.

Therefore, we have

P{S[0, n] ∩ S[2n, 3n] ≠ ∅} = P{J_n ≥ 1},    P{S[0, n] ∩ S[2n, ∞) ≠ ∅} = P{K_n ≥ 1}.
If we use p(m) to denote P{S_m = 0}, then

E(J_n) = Σ_{i=0}^n Σ_{j=2n}^{3n} p(j − i),

and note that everything we apply to J_n can be applied similarly to K_n. From [2], we know that p(m) ∼ m^{−d/2}. Since n ≤ j − i ≤ 3n in the sum above,

E(J_n) ∼ Σ_{i=0}^n Σ_{j=2n}^{3n} (j − i)^{−d/2} ∼ Σ_{i=0}^n Σ_{j=2n}^{3n} n^{−d/2} ∼ Σ_{i=0}^n n^{1−(d/2)} ∼ n^{2−(d/2)}.

After doing the same estimation for E(K_n), we arrive at the conclusion that there exist c_1, c_2 such that for d > 3,

(4.6) c_1 n^{2−(d/2)} ≤ E(J_n) ≤ E(K_n) ≤ c_2 n^{2−(d/2)}.
For d > 4, we also have

P{J_n ≥ 1} = Σ_{t=1}^∞ P{J_n = t} ≤ Σ_{t=1}^∞ t P{J_n = t} = E(J_n).
Therefore, the upper bound for d > 4 is also justified. In order to obtain the lower
bound for the case of d > 4, we need to look at the second moment of Jn .
E(J_n²) = Σ_{0≤i_1,i_2≤n} Σ_{2n≤j_1,j_2≤3n} P{S_{i_1} = S_{j_1}, S_{i_2} = S_{j_2}}
 ≤ 2 Σ_{0≤i_1≤i_2≤n} Σ_{2n≤j_1≤j_2≤3n} (P{S_{i_1} = S_{j_1}, S_{i_2} = S_{j_2}} + P{S_{i_1} = S_{j_2}, S_{i_2} = S_{j_1}}).

If 0 ≤ i_1 ≤ i_2 ≤ n and 2n ≤ j_1 ≤ j_2 ≤ 3n, write k = i_2 − i_1 and m = j_2 − j_1. Then

P{S_{i_1} = S_{j_1}, S_{i_2} = S_{j_2}} ≤ [max_{l≥n, x∈Z^d} P{S_l = x}] [max_{x∈Z^d} P{S_{m+k} = x}] ≤ c / (n^{d/2} (m + k + 1)^{d/2}),

and the term with j_1, j_2 interchanged satisfies the same bound.
Thus, we have

E(J_n²) ≤ c n² n^{−d/2} Σ_{k,m≥0} (m + k + 1)^{−d/2},

since for each value of k there are at most n + 1 choices of i_1, and similarly for m. Since Σ_{k,m≥0} (m + k + 1)^{−d/2} is finite for d > 4, we have

(4.7) E(J_n²) ≤ c n^{(4−d)/2}

for d > 4. Now we apply the second moment lemma (Lemma 4.1) that we proved beforehand, with r = 1/2:

P{J_n > 0} ≥ E(J_n)² / (4 E(J_n²)).

Plugging in (4.6) and (4.7), we obtain the lower bound for d > 4.
Now the only case left is d = 4, which turns out to be the critical dimension for the discussion of intersection probabilities of both random walks and Brownian motions. This part is a bit tricky, and we need help from the Green's function introduced before.
Assume that d = 4. We consider the conditional expectation E[K_n | K_n ≥ 1]. On the event {K_n ≥ 1}, let k be the smallest integer greater than or equal to 2n such that S_k ∈ S[0, n], and let m be the smallest index such that S_k = S_m. Given [S_0, . . . , S_k], by applying the Markov property, the conditional expectation of K_n is at least

Σ_{i=0}^n Σ_{l=k}^∞ P{S_l = S_i | S_k = S_m} = Σ_{i=0}^n G(S_i − S_m).
Define a random variable Y_n as follows:

Y_n := min_{m=0,...,n} Σ_{i=0}^n G(S_i − S_m).
For all r > 0, we have

E[K_n | K_n ≥ 1, Y_n ≥ r log n] ≥ r log n.

Also note that, for each r,

P{Y_n < r log n} ≤ (n + 1) P{Σ_{i≤n/2} G(S_i) < r log n}.

Now applying the lemma on the Green's function (Lemma 4.3), we can find an r such that P{Y_n < r log n} = O(1/log n). On the other hand, by (4.6) with d = 4,

c ≥ E[K_n] ≥ P{K_n ≥ 1; Y_n ≥ r log n} E[K_n | K_n ≥ 1, Y_n ≥ r log n] ≥ P{K_n ≥ 1; Y_n ≥ r log n} (r log n).

Thus,

P{K_n ≥ 1} ≤ P{Y_n < r log n} + P{K_n ≥ 1; Y_n ≥ r log n} ≤ c / log n.

This gives us the bounds for the case d = 4.
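As a sanity check on Proposition 4.4 (a Monte Carlo sketch of my own; the walk length and trial count are arbitrary), one can estimate the intersection probability in a few dimensions and compare its size with φ(n): for d = 3 it should be of order one, for d = 4 of order 1/log n, and for d = 5 of order n^{−1/2}:

import random

def intersect_prob(d, n, trials=2000):
    """Monte Carlo estimate of P{S[0,n] cap S[2n,3n] != empty} for the
    simple random walk in Z^d, to compare with phi(n) in (4.5)."""
    hits = 0
    for _ in range(trials):
        pos = (0,) * d
        early = {pos}                      # the set S[0, n], including S_0
        found = False
        for i in range(1, 3 * n + 1):
            k = random.randrange(d)
            step = random.choice((1, -1))
            pos = pos[:k] + (pos[k] + step,) + pos[k + 1:]
            if i <= n:
                early.add(pos)
            elif i >= 2 * n and pos in early:
                found = True
                break
        hits += found
    return hits / trials

for d in (3, 4, 5):
    print(f"d = {d}: P ~ {intersect_prob(d, 200):.3f}")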
Since Brownian motion can be considered as the limit of the random walk, after taking the limit as n → ∞, we obtain the long range estimate of intersection probabilities for Brownian motion: if B is a standard Brownian motion in R^d, then

(4.8) P{B[0, 1] ∩ B[2, 3] ≠ ∅} > 0 for d < 4, and = 0 for d ≥ 4.

Thus, two separated stretches of a Brownian motion path have a positive chance of intersecting when d < 4, but never intersect when d ≥ 4.
Acknowledgments. It is a pleasure to thank my mentors, Mohammad Rezaei and Antonio Auffinger, for their generous help and support in clarifying the confusions I met during the research, as well as in guiding the writing of this entire paper.
References
[1] Gregory F. Lawler and Vlada Limic. Random Walk: A Modern Introduction. Cambridge University Press, 2010.
[2] Gregory F. Lawler. Random Walk and the Heat Equation. http://www.math.uchicago.edu/~lawler/reu.pdf