TOPICS IN PERCOLATION
ALEXANDER DUNLAP
Abstract. Percolation is the study of connectedness in a randomly-chosen
subset of an infinite graph. Following Grimmett[4], we set up percolation on
a square lattice. We establish the existence of a critical edge-density, prove
several results about the behavior of percolation systems above and below this
critical density, and use these results to find the critical density of percolation
on the two-dimensional square lattice.
Contents
1. Introduction
2. Measure and probability
3. Graphs
3.1. Definitions and basic theory
3.2. Properties of probability measure on graphs
3.3. Lattices and translation
3.4. Planar Duality
4. Percolation
4.1. Basic theory and the critical probability
4.2. Subcritical percolation: exponential decay
4.3. Supercritical percolation: uniqueness of the infinite cluster
4.4. The critical value in two dimensions
Acknowledgments
References
1. Introduction
In nature, fluids (such as water) percolate through porous substances (such as
earth). The key feature of such a system is the porosity of the medium, which means
that the fluid can travel along some (“open”) paths but not along other (“closed”)
paths. The exact configuration of the open and closed paths in a given system
is of course very complicated, and a specific configuration is not of independent
interest. We instead want to study random percolation systems, in which each path
is declared open or closed with a given probability. This is analogous to a complex
physical system in which it is impossible to predict, say, the permeability of a
given particle in the earth, but in which we know the general density of permeable
particles in the entire section of earth under consideration.
While natural percolation is the inspiration for the subject, the mathematical
study of percolation that we describe in this paper does not attempt to model any
Date: 20 August 2012.
particular physical system. Rather, we think of a percolation system as a mathematical graph and establish results about graphs constructed randomly according
to certain parameters. In this paper, we restrict our attention to configurations of
the d-dimensional square lattice Ld . We generate such configurations by choosing
(as “open”) a random subset of the edges of Ld , which creates a system known as
bond percolation. (In the more general site percolation, subgraphs are created by choosing a subset of the vertices.) Percolation is governed by the density parameter p, which gives the probability that any given edge is open.
Given this setup, percolation theory seeks to understand the sizes of “open clusters” in configurations. In particular, we want to know whether these open clusters
are of finite or infinite size. Curiously, as the density parameter varies, the probability of the existence of an infinite cluster changes sharply at the critical point
pc , which depends on the density of the lattice. The percolation system behaves in
qualitatively different ways in the “subcritical” and “supercritical” phases. When
p < pc , open clusters are almost surely finite, and we can ask questions about their
size (Section 4.2). When p > pc , there is almost surely an infinite open cluster, and
we can ask questions about how many such infinite open clusters there are (Section 4.3). Combining results from both phases, we can actually derive the value of
pc for d = 2 (Section 4.4), which for general d is very difficult to compute.
Section 2 sets down some basic concepts of measure theory and probability, and Section 3 establishes various definitions and theorems that will be useful for working with graphs and lattices. We apply this built-up theory to percolation systems in Section 4.
2. Measure and probability
We briefly formulate basic probability concepts using measure theory.
Notation 2.1 (Set-theoretic notations). We write |A| to denote the cardinality of a set A and P(A) to denote the power set of A. Given sets A and B, we define the symmetric difference A △ B as the set of elements that are members of either A or B but not both. In symbols, we have A △ B = (A \ B) ∪ (B \ A).
Definition 2.2. Given a set Ω, an algebra on Ω is a collection F ⊆ P(Ω) such that
1. ∅, Ω ∈ F.
2. A ∈ F =⇒ Ac ∈ F.
3. If (Ai)i is a finite collection of elements of F, then ∪i Ai ∈ F.
Definition 2.3. An algebra F on Ω is a σ-algebra if, given a countable collection (Ai)i of elements of F, we have ∪i Ai ∈ F.
Fact 2.4. By de Morgan’s Laws, an algebra is closed under finite intersections and
a σ-algebra is closed under countable intersections.
Proposition 2.5. If F1 ⊆ F2 ⊆ F3 ⊆ · · · is an increasing sequence of σ-algebras, then F = ∪i Fi is an algebra.

Proof. Clearly, ∅, Ω ∈ F. If A ∈ F, then A ∈ Fi for some i, so Ac ∈ Fi ⊆ F. If A1, . . . , An ∈ F, then A1, . . . , An ∈ Fi for some i, so A1 ∪ · · · ∪ An ∈ Fi ⊆ F. □
Definition 2.6. Given a set Ω and a σ-algebra F on Ω, a function P : F → [0, 1] is a probability measure on F if the following conditions hold:
1. P(∅) = 0.
2. P(Ω) = 1.
3. If (Ai)i is a countable collection of disjoint elements of F, then P(∪i Ai) = Σi P(Ai).
Definition 2.7. A probability space is a triple (Ω, F, P) of a set Ω (called the sample
space), a σ-algebra F on Ω (whose members are called events), and a probability
measure P on F. If F and P are clear from the context, we will often speak of the
probability space Ω to refer to the probability space (Ω, F, P).
Definition 2.8. We say that an event A is almost sure, or that A happens almost
always, if P(A) = 1, and that it happens almost never if P(A) = 0.
Facts 2.9. The following are basic properties of a probability measure P:
1. (Monotonicity.) If A ⊆ B, then P(A) ≤ P(B).
2. (Countable subadditivity.) If (Ai)i is a countable collection of elements of F, not necessarily disjoint, then P(∪i Ai) ≤ Σi P(Ai).
3. (Continuity from below.) If A1 ⊆ A2 ⊆ · · · and A = ∪i Ai, then P(A) = limn→∞ P(An).
4. (Continuity from above.) If A1 ⊇ A2 ⊇ · · · and A = ∩i Ai, then P(A) = limn→∞ P(An).
Definition 2.10. Two events A and B are independent with respect to a measure
P if P(A ∩ B) = P(A)P(B).
Definition 2.11. Given events A, B ∈ F such that P(B) > 0, we define the
conditional probability P(A | B) = P(A ∩ B)/P(B).
Fact 2.12. If A and B are independent, then P(A | B) = P(A).
Definition 2.13.
1. Suppose that (Ω, F, P) is a probability space and X is a real-valued function on Ω such that X(Ω) is finite or countable. We say that X is F-measurable, and that X is a discrete random variable, if {ω | X(ω) = x} ∈ F for each x ∈ X(Ω).
2. If A ∈ F, we define the indicator function of A to be the discrete random variable given by

1A(ω) = 1 if ω ∈ A, and 1A(ω) = 0 otherwise.
Notation 2.14. We will write P(X = x) to denote P({ω ∈ Ω | X(ω) = x}).
Definition 2.15. Let Ω be a sample space.
1. If A is a collection of subsets of Ω, then the σ-algebra generated by A, denoted σ(A), is the intersection of all σ-algebras F on Ω such that A ⊆ F.
2. If X is a collection of discrete random variables X : Ω → R, then the σ-algebra generated by X, denoted σ(X), is the intersection of all σ-algebras F on Ω such that X is F-measurable for all X ∈ X.

Fact 2.16. If X is as in Definition 2.15.2, then

σ(X) = σ({X⁻¹(x) | X ∈ X, x ∈ X(Ω)}).
It will be useful to approximate events in a σ-algebra generated by an algebra
by events in the original algebra.
Theorem 2.17. Suppose that P is a probability measure, F̃ is an algebra on Ω, and F = σ(F̃). For every A ∈ F and ε > 0, there is an Ã ∈ F̃ so that P(A △ Ã) < ε.

Proof. Let G = {A ∈ F | ∀ε > 0, ∃Ã ∈ F̃ so that P(A △ Ã) < ε}. It is clear that F̃ ⊆ G. We claim that G is a σ-algebra, which will imply that F = σ(F̃) ⊆ G and thus prove the theorem.

We clearly have ∅, Ω ∈ G. Suppose A ∈ G and ε > 0. Then there is an Ã ∈ F̃ so that P(A △ Ã) < ε. But since A △ Ã = Ac △ (Ã)c, and (Ã)c ∈ F̃, this means that Ac ∈ G.

Suppose that (Ai)i∈N is a countable collection of elements of G, let A = ∪i∈N Ai, and let Bn = A1 ∪ · · · ∪ An. Let ε > 0. By continuity of the measure P, there is an n ∈ N so that |P(Bn) − P(A)| < ε, which implies that P(Bn △ A) = P(A \ Bn) < ε. For each i = 1, . . . , n, there is an Âi ∈ F̃ such that P(Âi △ Ai) < ε/n, which implies that P((Â1 ∪ · · · ∪ Ân) △ Bn) < ε. Therefore, P((Â1 ∪ · · · ∪ Ân) △ A) < 2ε, so A ∈ G since Â1 ∪ · · · ∪ Ân ∈ F̃ by the definition of algebra. □
Definition 2.18. Given a countable collection of probability spaces {(Ωi, Fi, Pi)}, we say that A ⊆ Ω = ∏i Ωi is a cylinder set if A = ∏i Ai, where Ai ∈ Fi for all i and Ai = Ωi for all but finitely many i. We can then define the product σ-algebra F on Ω as the σ-algebra generated by the set of all cylinder sets. A product (probability) measure P = ∏i Pi is a (probability) measure on F such that P(∏i Ai) = ∏i Pi(Ai) for every cylinder set ∏i Ai.
The actual construction of product measure, and thus the proof of the following
theorem, is omitted.
Theorem 2.19. For a countable product of probability spaces, a unique product
measure exists.
Notation 2.20. If Ω = ∏i Ωi, we have the usual projection functions pi : Ω → Ωi.

Proposition 2.21. If {Ωi} is a countable collection of probability spaces and Ω = ∏i Ωi is endowed with the product σ-algebra F, then F = σ({pi}).
Notation 2.22. Let p ∈ [0, 1]. We can define a probability measure µp on P({0, 1})
by µp ({1}) = p. Given a finite or countable set E, we will write Pp for the measure
on {0, 1}E given by Pp = (µp )E .
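As a concrete, entirely informal illustration of Notation 2.22, the measure Pp on {0, 1}E for a finite E can be sampled directly. The sketch below is our own; the edge set and function names are invented for illustration and are not part of the formal development.

```python
import random

def sample_configuration(edges, p, seed=None):
    """Draw omega in {0,1}^E: each edge is open (1) independently with probability p."""
    rng = random.Random(seed)
    return {e: 1 if rng.random() < p else 0 for e in edges}

# Edges of a 2x2 block of the square lattice (an arbitrary finite edge set).
edges = [((0, 0), (0, 1)), ((0, 0), (1, 0)), ((0, 1), (1, 1)), ((1, 0), (1, 1))]
omega = sample_configuration(edges, p=0.5, seed=1)
```

Each edge state is an independent Bernoulli(p) variable, so the dictionary `omega` is one draw from the product measure (µp)E.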
Definition 2.23. Given a probability space (Ω, F, P) and a discrete random variable X, we define the expected value of X by

E[X] = ∫Ω X dP = Σx∈X(Ω) x · P(X = x).
Facts 2.24. The following are basic properties of expected value:
1. (Linearity.) Given two random variables X and Y and a real number a, we
have E[aX + Y ] = aE[X] + E[Y ].
2. (Monotonicity.) Given two random variables X and Y such that X(ω) ≤
Y (ω) for almost all ω ∈ Ω, we have E[X] ≤ E[Y ].
3. The expected value of an indicator function of an event is the probability of
the event. Symbolically, E[1A ] = P(A).
Notation 2.25. With setup as in Notation 2.22, we will write Ep to denote the
expected value operator with respect to the product measure Pp .
3. Graphs
3.1. Definitions and basic theory.
Definition 3.1. An undirected graph G is a pair (V, E) consisting of a set V and a set E of unordered pairs of elements in V. The elements of V are called vertices and the elements of E are called edges. If e is an edge between vertices v1, v2, we will write e = ⟨v1, v2⟩ = ⟨v2, v1⟩. In this paper, the term graph will always refer to an undirected graph.
Definition 3.2. A configuration of a graph G = (V, E) is a function ω : E → {0, 1}, or equivalently an element ω ∈ {0, 1}E. An edge e is called open in ω if ω(e) = 1 and closed if ω(e) = 0.
We will often wish to speak of configurations derived from other configurations
by opening or closing certain subsets of edges, and will use the following notation
frequently.
Notation 3.3. Let G = (V, E) be a graph and let ω be a configuration of G. Suppose that F ⊆ E is a subset of the edges of G. Define the configurations ω^F and ω_F by

ω^F(e) = 1 if e ∈ F, and ω^F(e) = ω(e) otherwise;
ω_F(e) = 0 if e ∈ F, and ω_F(e) = ω(e) otherwise.

In other words, ω^F and ω_F are the configurations obtained from ω by taking all edges of F to be open and closed, respectively.
Definition 3.4. Let G = (V, E) be a graph, ω be a configuration on G, and x, y ∈ V. We say that x and y are connected in ω, and write x ↔ y, if there is a finite sequence x = z1, z2, . . . , zn−1, zn = y such that ⟨zi, zi+1⟩ is an open edge in ω for all i = 1, . . . , n − 1.
Fact 3.5. The connectedness relation ↔ is an equivalence relation.
Definition 3.6. A cluster in a configuration ω is an equivalence class under ↔.
We will write Cx (ω) for the cluster containing the node x; in particular, C0 (ω) will
denote the cluster containing the origin.
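For a finite configuration, the clusters of Definition 3.6 can be computed by breadth-first search over the open edges. The sketch below is our own illustration; representing each open edge as a frozenset is an assumption of this sketch, not the paper's notation.

```python
from collections import deque

def cluster(x, open_edges):
    """Return C_x: the set of all vertices connected to x through open edges.

    open_edges is a set of frozensets {u, v}, one per open edge."""
    adj = {}
    for e in open_edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, queue = {x}, deque([x])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Open path 0 - 1 - 2 plus an isolated open edge {4, 5}: two distinct clusters.
open_edges = {frozenset({0, 1}), frozenset({1, 2}), frozenset({4, 5})}
```

Since ↔ is an equivalence relation (Fact 3.5), `cluster(x, open_edges)` and `cluster(y, open_edges)` are either equal or disjoint; a vertex touching no open edge forms the singleton cluster {x}.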
3.2. Properties of probability measure on graphs. We review several important theorems for considering probability measure on graphs. While these theorems
are not specific to probability on graphs, graphs provide both an application for
the theorems and a set of examples for understanding them. In this section, we will
work with a discrete set E, the sample space Ω = {0, 1}E , the product σ-algebra
F, and the probability measure and expectation operators Pp and Ep defined in
Notations 2.22 and 2.25, respectively. For example, E might be the set of edges on
a graph G = (V, E) and Ω the set of configurations of G; in this case, Pp would
be the probability measure that independently takes each edge to be open with
probability p.
It is clear that Ω is partially ordered under the usual partial order for functions:
ω ≤ ω 0 if ω(e) ≤ ω 0 (e) for all e ∈ E. In our example of graphs and edges, we have
ω ≤ ω 0 if all of the edges that are open in ω are also open in ω 0 . Since Ω is partially
ordered, we thus also have a notion of an increasing random variable on Ω.
Theorem 3.7. If f : Ω → R is a nondecreasing function, and p ≤ r, then Ep [f ] ≤
Er [f ].
Proof. Consider the sample space Ξ = [0, 1]E with probability measure given by
the product of uniform measures on each copy of the unit interval. For p ∈ [0, 1]
and ξ ∈ Ξ, define a random configuration ωp (ξ) of Ω by
(ωp (ξ))(e) = 1[0,p] (ξ(e)).
Note that ωp is distributed according to the product measure Pp . If p ≤ r, it is
clear that ωp ≤ ωr . Therefore, we have f (ωp ) ≤ f (ωr ). But this means, since the
expected value operator is order-preserving, that Ep [f ] = E(f (ωp )) ≤ E(f (ωr )) =
Er [f ].
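The coupling used in this proof is easy to imitate numerically: draw a single family of uniform random variables ξ and threshold it at two densities p ≤ r. The sketch below is our own illustration of this device; the function and variable names are invented.

```python
import random

def coupled_configurations(edges, p, r, seed=0):
    """Threshold one family of uniforms xi at p and at r (p <= r), as in the proof."""
    rng = random.Random(seed)
    xi = {e: rng.random() for e in edges}                  # one xi(e) ~ Uniform[0, 1) per edge
    omega_p = {e: 1 if xi[e] <= p else 0 for e in edges}   # distributed according to P_p
    omega_r = {e: 1 if xi[e] <= r else 0 for e in edges}   # distributed according to P_r
    return omega_p, omega_r

edges = list(range(100))   # abstract edge labels; the graph structure is irrelevant here
omega_p, omega_r = coupled_configurations(edges, p=0.3, r=0.7)
# The pointwise ordering omega_p <= omega_r holds for every edge, by construction.
assert all(omega_p[e] <= omega_r[e] for e in edges)
```

Because both configurations come from the same ξ, the ordering ωp ≤ ωr is deterministic, even though each configuration separately has the correct product-measure distribution.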
Definition 3.8. An event A ∈ F is increasing if ω ∈ A, ω 0 ∈ Ω, ω 0 ≥ ω implies
ω 0 ∈ A. An event B ∈ F is decreasing if B c is increasing.
Examples 3.9. Given a graph G = (V, E), we have the following examples of
increasing events.
1. Let E 0 ⊆ E. Then the event A = {ω ∈ Ω | ω(E 0 ) = 1} is increasing.
2. Let v1 , v2 ∈ V . Then the event
A = {ω ∈ Ω | v1 and v2 are connected in ω}
is increasing.
We state the following theorem, a special case of what is called the FKG inequality, with the proof [4] omitted.
Theorem 3.10. Suppose that Ω is as above, with E finite, and suppose that P is
a product measure on Ω. If A and B are increasing events on Ω, then P(A ∩ B) ≥
P(A)P(B).
Corollary 3.11. By symmetry, the (unmodified) conclusion of Theorem 3.10 also
holds under the hypothesis that A and B are decreasing: if A and B are decreasing
events on Ω, then P(A ∩ B) ≥ P(A)P(B).
Remark 3.12. The intuition behind Theorem 3.10 (in the setup of Examples 3.9)
is that an increasing event is one that requires edges to be open and that only
becomes more likely when it is given that more edges are open. Heuristically, then,
the occurrence of one increasing event increases the likelihood of another increasing
event.
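For a small finite edge set, Theorem 3.10 can be checked exhaustively by enumerating all of {0, 1}^E. The following brute-force sketch is our own illustration; the two increasing events chosen are arbitrary.

```python
from itertools import product

def prob(event, p, n):
    """P_p(event) on {0,1}^n: sum p^{#open} (1-p)^{#closed} over all configurations."""
    return sum(
        p ** sum(w) * (1 - p) ** (n - sum(w))
        for w in product((0, 1), repeat=n)
        if event(w)
    )

n, p = 4, 0.3
A = lambda w: w[0] == 1 and w[1] == 1   # increasing: edges 0 and 1 both open
B = lambda w: w[1] == 1 and w[2] == 1   # increasing: edges 1 and 2 both open
# FKG inequality: increasing events are positively correlated.
assert prob(lambda w: A(w) and B(w), p, n) >= prob(A, p, n) * prob(B, p, n)
```

Here P(A ∩ B) = p³ while P(A)P(B) = p⁴, so the shared edge makes the correlation strict, matching the intuition of Remark 3.12.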
Definition 3.13. Suppose that A and B are increasing subsets of Ω. Define

A ◦ B = {ω ∈ Ω | there is an F ⊆ E such that ω_{E\F} ∈ A and ω_F ∈ B},

using the notation of Notation 3.3.
Remark 3.14. Intuitively, ω ∈ A ◦ B if there is an F ⊆ E so that we only need
to examine the edges of F to know that ω ∈ A, and we only need to examine the
edges outside of F to know that ω ∈ B.
Proposition 3.15. If A and B are increasing subsets of Ω, then A ◦ B is increasing.

Proof. If ω ∈ A ◦ B, then there is an F ⊆ E so that ω_{E\F} ∈ A and ω_F ∈ B. If ω′ ≥ ω, then ω′_{E\F} ≥ ω_{E\F} and thus ω′_{E\F} ∈ A since A is increasing. Similarly, ω′_F ∈ B. Therefore, ω′ ∈ A ◦ B, so A ◦ B is increasing. □
Theorem 3.16 (BK inequality). Let Ω be defined as above and endowed with a
product measure P. Suppose that the edge-set E is finite, and suppose moreover
that A and B are increasing subsets of Ω. Then P(A ◦ B) ≤ P(A)P(B).
Proof. Let 1, . . . , N be an ordering of E. Let Ω × Ω′ be the product of two copies of Ω and let P̂ = P × P be the product measure on Ω × Ω′. If ω ∈ Ω and ω′ ∈ Ω′, let Hj(ω, ω′) = (ω′(1), . . . , ω′(j − 1), ω(j), . . . , ω(N)). Clearly, H1(ω, ω′) = ω and HN+1(ω, ω′) = ω′.
For each j = 1, . . . , N + 1, define

Âj = {(ω, ω′) | Hj(ω, ω′) ∈ A},
B̂ = {(ω, ω′) | ω ∈ B} = B × Ω′.

It is clear that B̂ and all of the Âj are increasing since A and B are increasing. Moreover, Â1 = A × Ω′, so

P̂(Â1 ◦ B̂) = P̂((A × Ω′) ◦ (B × Ω′)) = P̂(A ◦ B).

Also, ÂN+1 = Ω × A, so

P̂(ÂN+1 ◦ B̂) = P̂((Ω × A) ◦ (B × Ω′)) = P̂({(ω, ω′) | ω ∈ B and ω′ ∈ A}) = P(A)P(B)

by the definition of the product measure P̂. The rest of the proof will establish the chain of inequalities

P̂(A ◦ B) = P̂(Â1 ◦ B̂) ≤ P̂(Â2 ◦ B̂) ≤ · · · ≤ P̂(ÂN+1 ◦ B̂) = P(A)P(B),

which will imply the desired result.
Fix 1 ≤ j ≤ N. We want to show that

(3.17)    P̂(Âj ◦ B̂) ≤ P̂(Âj+1 ◦ B̂).

We do this by conditioning on the values of ω(i), ω′(i) for all i ≠ j. Let Ē = E \ {j} and let Ḡ = (V, Ē). We will use the notation ω|Ē to mean the restriction of ω (as a function E → {0, 1}) to Ē. Let ω̄ and ω̄′ be configurations of Ḡ. Define the event C ⊆ Ω × Ω′ by

C = {(ω, ω′) | ω|Ē = ω̄ and ω′|Ē = ω̄′}.

The goal is then to show that

(3.18)    P̂((ω, ω′) ∈ Âj ◦ B̂ | C) ≤ P̂((ω, ω′) ∈ Âj+1 ◦ B̂ | C);

if (3.18) holds for all C, then (3.17) clearly follows. Let ω̄^j and ω̄_j (respectively ω̄′^j and ω̄′_j) be the configurations that agree with ω̄ (respectively ω̄′) on E \ {j} and have edge j open and closed, respectively, as in Notation 3.3 with F = {j}. Consider three cases:
1. (ω̄^j, ω̄′^j) ∉ Âj ◦ B̂. Then

P̂((ω, ω′) ∈ Âj ◦ B̂ | C) = 0 ≤ P̂((ω, ω′) ∈ Âj+1 ◦ B̂ | C),

with the inequality following trivially from the definition of measure.

2. (ω̄_j, ω̄′_j) ∈ Âj ◦ B̂, so (ω̄_j, ω̄′_j) ∈ Âj+1 ◦ B̂. Then

P̂((ω, ω′) ∈ Âj+1 ◦ B̂ | C) = 1 ≥ P̂((ω, ω′) ∈ Âj ◦ B̂ | C).

3. Neither of the above, so (ω̄^j, ω̄′^j) ∈ Âj ◦ B̂ and (ω̄_j, ω̄′_j) ∉ Âj ◦ B̂. Membership of a pair of configurations (ω, ω′) in Âj ◦ B̂ does not depend on the value of ω′(j), so this implies

(ω̄^j, ω̄′_j) ∈ Âj ◦ B̂ and (ω̄_j, ω̄′^j) ∉ Âj ◦ B̂.

Since Âj ◦ B̂ is increasing, this means that the conditional probability that (ω, ω′) is in Âj ◦ B̂ is exactly the probability that ω(j) = 1. More formally, we have

P̂((ω, ω′) ∈ Âj ◦ B̂ | C) = P(ω(j) = 1).

Since (ω̄^j, ω̄′^j) ∈ Âj ◦ B̂, there are sets F, F′ ⊆ E so that ((ω̄^j)_{E\F}, (ω̄′^j)_{E\F′}) ∈ Âj and ((ω̄^j)_F, (ω̄′^j)_{F′}) ∈ B̂. The latter membership is equivalent to (ω̄^j)_F ∈ B.
Consider two cases.

(a) j ∈ F, so ω̄^j(j) “contributes” to Âj. The value of ω′(j) does not affect whether or not B̂ occurs, and the value of ω(j) does not affect whether or not Âj+1 occurs, so we can certainly remove j from F and add j to F′ when considering membership in Âj+1 ◦ B̂. With this arrangement of F and F′ (and conditioning on C), we have (ω_{E\F}, ω′_{E\F′}) ∈ Âj+1 if and only if ω′(j) = 1. There may be other arrangements of F and F′ that increase the probability of membership in Âj+1 ◦ B̂, so, in general,

P̂((ω, ω′) ∈ Âj+1 ◦ B̂ | C) ≥ P(ω′(j) = 1) = P(ω(j) = 1) = P̂((ω, ω′) ∈ Âj ◦ B̂ | C).

(b) j ∈ E \ F. Now ω̄^j(j) “contributes” to B̂. The value of ω′(j) has no effect on membership in B̂, so we can add j to F′ when considering membership of a given configuration in Âj+1 ◦ B̂. This may make the probability of membership in Âj+1 ◦ B̂ higher than that in Âj ◦ B̂, but certainly cannot make it lower. Therefore,

P̂((ω, ω′) ∈ Âj+1 ◦ B̂ | C) ≥ P̂((ω, ω′) ∈ Âj ◦ B̂ | C).
In all cases, (3.18) holds. This applies for all fixed configurations ω̄ and ω̄′, so (3.17) holds, which implies the desired result in the manner indicated above. □
3.3. Lattices and translation. We single out a special class of graphs, called
lattices, for the study of percolation. We will write Zd for the Cartesian product of
d copies of the integers.
Definition 3.19. The ℓ1 norm on Zd is given by ∥x∥1 = |x1| + · · · + |xd|. The ℓ∞ norm is given by ∥x∥∞ = max1≤i≤d |xi|.
Definition 3.20. The d-dimensional square lattice is the (infinite) graph Ld = (Zd, Ed), with the edge set Ed consisting of pairs of (non-diagonally) adjacent nodes. More precisely, we set Ed = {⟨x, y⟩ | x, y ∈ Zd and ∥x − y∥1 = 1}.
For the remainder of the paper, (Ω, F, Pp ) will be the probability space of configurations of Ld with the product measure Pp .
Definition 3.21. A box B in Ld is a subgraph of Ld with vertex set V(B) = ([a1, b1] × [a2, b2] × · · · × [ad, bd]) ∩ Zd and edge set consisting of all edges in Ed with both endpoints in V(B). The boundary of B, denoted ∂B, is (as its name suggests) the subgraph of Ld with vertex set

V(∂B) = {x = (x1, . . . , xd) ∈ V(B) | there is an i such that xi ∈ {ai, bi}}

and edge set consisting of all edges in Ed with both endpoints in V(∂B).
Notation 3.22. Given n ∈ N, we define Λ(n) to be the box generated by [−n, n]d .
Definition 3.23. For each edge e ∈ Ed, we have as in Notation 2.20 the projection function pe, which we can consider as a random variable on Ω. Define L(n) = σ({pe | e ∈ Λ(n)}), the σ-algebra of events that depend only on the configurations of the edges inside Λ(n). Let L = ∪n∈N L(n).
Fact 3.24. The set of events L is an algebra, and F = σ(L).
The extra structure provided by the lattice lets us consider translations. Note
that the lattice itself is invariant under translation: none of the points of the lattice
are singled out.
Definition 3.25. Let z ∈ Zd.
1. If G is a subgraph of Ld, the translation of G by z is the subgraph G + z of Ld with vertex set V(G + z) = {x + z | x ∈ V(G)} and edge set consisting of the edges of G translated similarly.
2. Let ω be a configuration of Ld. The translation of ω by z is the configuration given by (ω + z)(e) = ω(e − z) for all e ∈ Ed.
3. Suppose that A is an event in the probability space (Ω = {0, 1}Ed, F, Pp). The translation of A by z is the event A + z = {ω | ω − z ∈ A}. We say that A is translation-invariant if for all z ∈ Zd, we have A + z = A.
4. A measure P is translation-invariant if P(A) = P(A + z) for all events A and all z ∈ Zd.
Examples 3.26. The following events are translation-invariant:
1. The event A = {all edges are open in ω}.
2. The event A = {there is an infinite open cluster in ω}.
The event B = {there is an infinite open cluster containing the origin in ω}, however, is not translation-invariant, since translating a configuration in which the
origin is part of an infinite open cluster may move the cluster away from the origin.
Fact 3.27. Product measure Pp is translation-invariant by definition.
Theorem 3.28 (Zero-one law). If A is a translation-invariant event, then either
Pp (A) = 0 or Pp (A) = 1.
Proof. Given e ∈ Ed, recall the projection pe : Ω → {0, 1} given by pe(ω) = ω(e), and recall from Definition 3.23 that L(n) = σ({pe | e ∈ Λ(n)}) is the σ-algebra of events that depend only on the edges in Λ(n).

Fix ε > 0. By Proposition 2.5, we know that L = ∪n∈N L(n) is an algebra. Furthermore, because any event can be expressed as a countable union or intersection of events depending only on the edges in finite boxes (see Proposition 2.21), we have that A ∈ σ(L) = F. Therefore, by Theorem 2.17, there is an n ∈ N and an event U ∈ L(n) so that Pp(A △ U) < ε, which implies |Pp(A) − Pp(U)| < ε and

(3.29)    |Pp(A)² − Pp(U)²| = |Pp(A) + Pp(U)| · |Pp(A) − Pp(U)| < 2ε.
Let z = (2n + 1, 0, 0, . . . , 0) ∈ Zd and let V = U + z. The events U and V depend on disjoint sets of edges, so they are independent and have identical probabilities under the product measure; hence Pp(U ∩ V) = Pp(U)Pp(V) = Pp(U)². Substituting into (3.29), we have

(3.30)    |Pp(A)² − Pp(U ∩ V)| < 2ε.
Note that, by several applications of Definition 3.25.3 and by the translation-invariance of A, we have

A △ V = {ω1 | ω1 ∈ A and ω1 ∉ V} ∪ {ω1 | ω1 ∈ V and ω1 ∉ A}
= {ω1 | ω1 ∈ A and ω1 − z ∉ U} ∪ {ω1 | ω1 − z ∈ U and ω1 ∉ A}
= {ω2 + z | ω2 + z ∈ A and ω2 ∉ U} ∪ {ω2 + z | ω2 ∈ U and ω2 + z ∉ A}
= {ω2 + z | ω2 ∈ A and ω2 ∉ U} ∪ {ω2 + z | ω2 ∈ U and ω2 ∉ A}
= (A △ U) + z,

and thus Pp(A △ V) = Pp(A △ U) by the translation-invariance of Pp. Note also that
A △ (U ∩ V) = (A ∩ (U ∩ V)c) ∪ (Ac ∩ U ∩ V)
= (A ∩ (Uc ∪ Vc)) ∪ (Ac ∩ U ∩ V)
= (A ∩ Uc) ∪ (A ∩ Vc) ∪ (Ac ∩ U ∩ V)
⊆ (A ∩ Uc) ∪ (Ac ∩ U) ∪ (A ∩ Vc) ∪ (Ac ∩ V)
= (A △ U) ∪ (A △ V).

Therefore, Pp(A △ (U ∩ V)) ≤ Pp(A △ U) + Pp(A △ V) = 2Pp(A △ U) < 2ε, so

(3.31)    |Pp(A) − Pp(U ∩ V)| < 2ε.

Combining (3.30) and (3.31) by the triangle inequality, we have that |Pp(A) − Pp(A)²| < 4ε. But this holds for all ε > 0, so Pp(A) = Pp(A)², so Pp(A) must be equal to 0 or to 1. □
Remark 3.32. The key mechanism in the above proof is the application of Theorem 2.17. The translation-invariant event A certainly cannot depend on any specific
region of the lattice, but Theorem 2.17 says that it can be approximated in measure by an event that depends only on some specific region of the lattice. This can
happen only if A happens almost always or almost never.
3.4. Planar Duality. An important peculiarity of the two-dimensional square lattice L2 is that the lattice is “dual” to a copy of itself translated by (1/2, 1/2). This
allows us to use arguments about the connectedness of the dual lattice to draw
conclusions about the connectedness of the original lattice.
Definition 3.33. We define the dual two-dimensional square lattice by L2d = L2 +
(1/2, 1/2) (Figure 1a).
Remark 3.34. The dual lattice L2d has a vertex at the center of every square in L2. Every edge e of L2 intersects exactly one edge ed of L2d and vice versa, so the map e ↦ ed is a bijection.
Definition 3.35. A configuration ω of L2 gives rise to a configuration ωd of L2d
given by ωd (ed ) = ω(e), with ed defined as in Remark 3.34. We will sometimes
omit the subscript d and speak of the configuration ω of L2d .
Figure 1. The lattice and dual lattice. (a) Nodes and edges of L2 (dark) and L2d (light). (b) Two open clusters (dark) in a configuration of L2 are “separated” by a closed path (light) in L2d.
Remark 3.36. Informally, if x and y are members of two different clusters of L2 or
of a subset of L2 , then there must be a closed path in L2d “separating” x from y
(Figure 1b). We omit the formalization and proof of this statement, but will use
this type of argument frequently in analyzing percolation on L2 .
4. Percolation
We consider percolation on the d-dimensional square lattice Ld . We will consider the sample space Ω consisting of all configurations ω of Ld , equipped with
the product σ-algebra F and product measure Pp . With respect to Pp , the individual edge-states ω(e) are independent, identically-distributed random variables
satisfying the law Pp (ω(e) = 1) = p.
4.1. Basic theory and the critical probability. Sometimes it will be more convenient to think about the size of the cluster containing a node in terms of the lengths of paths originating at that node, a concept we formalize next.

Definition 4.1. Let G = (V, E) be a graph. Given n ∈ N and x ∈ V, a self-avoiding walk of length n starting at x is a sequence W = (x = x0, x1, . . . , xn) of distinct vertices such that ⟨xi−1, xi⟩ ∈ E for all i = 1, . . . , n. Given a configuration ω of G, we say that W is open in ω if ω(⟨xi−1, xi⟩) = 1 for all i = 1, . . . , n.
Proposition 4.2. Given a configuration ω ∈ Ω, there is an open self-avoiding walk of length n starting at 0 for all n ∈ N if and only if |C0(ω)| = ∞.

Proof. If there is an open self-avoiding walk Wn of length n starting at 0 for each n ∈ N, then the union ∪n∈N Wn contains infinitely many vertices, each of which is certainly connected to 0. Conversely, if |C0(ω)| = ∞, then for any n ∈ N the open cluster containing the origin must contain a point at graph-theoretic distance n from the origin, and a self-avoiding walk from that point to the origin (which can be obtained by erasing the cycles from an arbitrary open path) must have length at least n. □
Definition 4.3. Given Ω with product measure Pp , the percolation probability θ(p)
is defined by θ(p) = Pp (|C0 | = ∞).
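θ(p) has no simple closed form, but the finite-volume probabilities Pp(0 ↔ ∂Λ(n)), which decrease to θ(p) as n → ∞, can be estimated by simulation. The following Monte Carlo sketch for d = 2 is our own illustration (all names and parameters are invented): it samples edge states lazily and searches for the boundary of Λ(n) by breadth-first search.

```python
import random
from collections import deque

def origin_reaches_boundary(n, p, rng):
    """One sample: is 0 connected to the boundary of Lambda(n) in a p-configuration of L^2?"""
    open_state = {}          # lazily sampled edge states, keyed by frozenset of endpoints

    def is_open(u, v):
        e = frozenset((u, v))
        if e not in open_state:
            open_state[e] = rng.random() < p
        return open_state[e]

    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if max(abs(x), abs(y)) == n:          # reached a vertex of the boundary of Lambda(n)
            return True
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if v not in seen and max(abs(v[0]), abs(v[1])) <= n and is_open((x, y), v):
                seen.add(v)
                queue.append(v)
    return False

def estimate(n, p, trials=200, seed=0):
    """Monte Carlo estimate of P_p(0 <-> boundary of Lambda(n))."""
    rng = random.Random(seed)
    return sum(origin_reaches_boundary(n, p, rng) for _ in range(trials)) / trials
```

For d = 2, estimates for p noticeably below 1/2 collapse toward 0 as n grows, while for p above 1/2 they stay bounded away from 0, consistent with the critical value derived in Section 4.4.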
Proposition 4.4. The percolation probability function θ : [0, 1] → [0, 1] is nondecreasing.

Proof. The function f : Ω → R given by f(ω) = 1{|C0(ω)|=∞} is certainly nondecreasing, since a configuration with an infinite cluster containing the origin will continue to have an infinite cluster containing the origin if more edges are opened. Therefore, by Theorem 3.7, θ(p) = Ep[f] is nondecreasing in p. □
Definition 4.5. The critical probability pc is defined by pc = sup(θ−1 (0)).
Remark 4.6. By Proposition 4.4, θ(p) = 0 for all p < pc and θ(p) > 0 for all p > pc .
It has been shown that θ(pc ) = 0 for d = 2 and d ≥ 19, and it is believed that this
result holds for 3 ≤ d ≤ 18, but the latter statement has not been proven.
Much of the remainder of this paper will be devoted to proving results about the
nature of percolation when p < pc and when p > pc . However, some of these results
would be vacuous if it turned out that pc = 0 or pc = 1. The next proposition and
two theorems show that while d = 1 is an uninteresting case, we have 0 < pc < 1
whenever d ≥ 2. This establishes the existence of criticality in percolation: the
behavior of the system changes abruptly at p = pc .
Proposition 4.7. For d = 1, we have pc = 1.
Proof. Let p < 1. For any ω ∈ Ω, we have |C0(ω)| = ∞ if and only if either ω(⟨n − 1, n⟩) = 1 for all n ∈ N or ω(⟨−n + 1, −n⟩) = 1 for all n ∈ N. Both of these events have probability limn→∞ pn = 0, so θ(p) = 0. □
Theorem 4.8. For d ≥ 2, we have pc > 0.

Proof. We show that pc > 0 by finding a p > 0 such that θ(p) = 0. By Proposition 4.2, |C0(ω)| = ∞ if and only if for each n ∈ N, there is an open self-avoiding walk of length n starting at 0. Let Wn be the set of all self-avoiding walks of length n starting at 0, and let σn = |Wn|. Let Nn(ω) be the number of open self-avoiding walks of length n in ω starting at 0. Then θ(p) = Pp(∩n∈N {Nn ≥ 1}), so for all n ∈ N,

θ(p) ≤ Pp(Nn ≥ 1) = Ep[1{Nn ≥ 1}] ≤ Ep[Nn] = Ep[ΣW∈Wn 1{W open}] = ΣW∈Wn Pp(W open) = σn p^n.
All that remains is to bound σn. The first step of a self-avoiding walk starting at the origin can be in any of 2d directions, and each subsequent step can go in at most 2d − 1 directions, since a self-avoiding walk certainly cannot backtrack its previous step. Therefore, σn ≤ 2d(2d − 1)^{n−1}, so

θ(p) ≤ 2d(2d − 1)^{n−1} p^n = (2d/(2d − 1)) (p(2d − 1))^n

for all n ∈ N. If p ∈ (0, 1/(2d − 1)), this implies that θ(p) = 0, so pc ≥ 1/(2d − 1) > 0. □

Theorem 4.9. For d ≥ 2, we have pc < 1.
Proof. We must find a p < 1 so that θ(p) > 0. If θ(p) > 0 for d = 2, then θ(p) > 0 for all d ≥ 2. This is because we can define an embedding ℓ : L2 → Ld by ℓ(x1, x2) = (x1, x2, 0, . . . , 0), mapping edges in L2 to corresponding edges in Ld. If there is an open cluster in ℓ(L2) ⊂ Ld containing the origin (which, by the properties of product measure, happens with probability θ(p) for d = 2), then there is certainly an open cluster in Ld containing the origin. Therefore, we can restrict our argument to d = 2, which allows us to use a planar duality argument to show the desired result.

Note that, as in Remark 3.36, we have |C0(ω)| < ∞ if and only if ωd has a connected “circuit” of closed edges encircling the origin in L2d. Let Qn be the set of all circuits of length n encircling the origin in L2d. If Q ∈ Qn, then Q must contain a node in the set {(k + 1/2, 1/2) | k ∈ {0, . . . , n − 1}}, which we can consider the “starting point” of Q. Then Q must have n edges, and there can be at most 4 possibilities for the direction of each edge (in all but the first case, no more than 3, but the better estimate is unnecessary). Therefore, |Qn| ≤ n4^n.
If we define the random variable Mn by Mn(ω) = |{Q ∈ Qn | Q is closed in ω}|,
we have

Pp(|C| < ∞) = Pp(⋃_{n=4}^∞ {Mn ≥ 1}) = Pp(Σ_{n=4}^∞ Mn ≥ 1) = Ep[1_{Σ_{n=4}^∞ Mn ≥ 1}]
≤ Ep[Σ_{n=4}^∞ Mn] = Σ_{n=4}^∞ Σ_{Q∈Qn} Ep[1_{Q closed}]
= Σ_{n=4}^∞ Σ_{Q∈Qn} Pp(Q closed) = Σ_{n=4}^∞ |Qn|(1 − p)^n ≤ Σ_{n=4}^∞ n(4(1 − p))^n.

By making p ∈ (0, 1) sufficiently close to 1, we can make this last sum less than 1,
which is what we want. □
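The counting bound σn ≤ 2d(2d−1)^{n−1} from Theorem 4.8 is easy to check empirically for small n. The sketch below is our own illustration, not part of the source; the function name count_saws is invented. It enumerates self-avoiding walks on Zd by depth-first search and compares σn against the bound for d = 2.

```python
def count_saws(n, d=2):
    """Count self-avoiding walks of length n starting at the origin of Z^d
    by exhaustive depth-first enumeration (exponential time; small n only)."""
    steps = []
    for i in range(d):
        for s in (1, -1):
            e = [0] * d
            e[i] = s
            steps.append(tuple(e))

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for step in steps:
            nxt = tuple(p + q for p, q in zip(pos, step))
            if nxt not in visited:
                total += extend(nxt, visited | {nxt}, remaining - 1)
        return total

    origin = (0,) * d
    return extend(origin, {origin}, n)

for n in range(1, 7):
    sigma_n = count_saws(n)      # sigma_n for d = 2
    bound = 4 * 3 ** (n - 1)     # 2d(2d-1)^{n-1} with d = 2
    print(n, sigma_n, bound)     # sigma_n never exceeds the bound
```

For d = 2 the first values are σ1 = 4, σ2 = 12, σ3 = 36, σ4 = 100; the count drops strictly below the bound (108 at n = 4) once walks can self-intersect in ways other than immediate backtracking, which is exactly the slack the proof does not need.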
4.2. Subcritical percolation: exponential decay. Percolation is subcritical
when p < pc. In this case, there is almost surely no infinite cluster, and
thus it is natural to ask what we can say about the size of the cluster C0 containing
the origin. The chief result is that the probability of C0 reaching the boundary of the
box of radius n decays exponentially in n.
Definition 4.10. Given p ∈ [0, 1], the mean cluster size of Ld is defined by χ(p) =
Ep [|C0 |], the expectation of the size of the cluster containing the origin.
Fact 4.11. If p > pc, then there is a nonzero probability that the cluster containing
the origin is infinite, so χ(p) = ∞.
An important result for establishing exponential decay, the proof[3] of which we
omit, is the following theorem.
Theorem 4.12. If p < pc , then χ(p) < ∞.
We can think of Theorem 4.12 as a statement about the coincidence of the
critical points of θ and of χ. Assuming this result, we prove the exponential decay
of subcritical percolation.
Theorem 4.13. If p < pc, then there is a κ(p) > 0 so that, for all n ≥ 1, we have
Pp(0 ↔ ∂Λ(n)) ≤ e^{−nκ(p)}.
Proof. By Theorem 4.12, we have χ(p) = Ep [|C0 |] < ∞.
Figure 2. Illustration of the disjoint-paths argument that lets us
use the BK inequality.
We first show that we can bound Pp (0 ↔ ∂Λ(n + m)) by Kp (m)Pp (0 ↔ ∂Λ(n))
for a function Kp . Moreover, we can find an m so that Kp (m) < 1, which allows
us to use the division algorithm to establish exponential decay.
Let m, n ∈ ℕ. In a configuration ω in which there is a path from 0 to ∂Λ(n + m),
there are also disjoint paths from 0 to some x ∈ ∂Λ(m) and from x to the translated
box boundary x + ∂Λ(n) (Figure 2). Because these paths are disjoint, we can find
a subset F ⊆ E so that ω_F ∈ {0 ↔ x} and ω_{E\F} ∈ {x ↔ x + ∂Λ(n)}. Therefore,
in the notation of Definition 3.13, we have

{0 ↔ ∂Λ(n + m)} ⊆ ⋃_{x∈∂Λ(m)} [{0 ↔ x} ◦ {x ↔ x + ∂Λ(n)}],

so the BK inequality (Theorem 3.16) says that

Pp(0 ↔ ∂Λ(n + m)) ≤ Σ_{x∈∂Λ(m)} Pp(0 ↔ x)Pp(x ↔ x + ∂Λ(n))
= Σ_{x∈∂Λ(m)} Pp(0 ↔ x)Pp(0 ↔ ∂Λ(n))
= Pp(0 ↔ ∂Λ(n)) Σ_{x∈∂Λ(m)} Pp(0 ↔ x)
= Pp(0 ↔ ∂Λ(n)) Ep[|{x ∈ ∂Λ(m) | 0 ↔ x}|].

We thus choose Kp(m) = Ep[|{x ∈ ∂Λ(m) | 0 ↔ x}|], and we want to analyze Kp
to find an M such that Kp(M) < 1. In fact, summing over all m, we have

Σ_{m=0}^∞ Kp(m) = Σ_{m=0}^∞ Σ_{x∈∂Λ(m)} Pp(0 ↔ x) = Σ_{x∈Zd} Pp(0 ↔ x) = Ep[|C0|] = χ(p) < ∞
by hypothesis. Thus, since Σ_{m=0}^∞ Kp(m) converges, lim_{m→∞} Kp(m) = 0, and in
particular, there is an M ∈ ℕ so that Kp(M) < 1, which we fix for the remainder
of the proof.
Let n ∈ N. By the division algorithm, we have q, r ∈ N (with 0 ≤ r < M ) so
that n = qM + r. Since a path from 0 to ∂Λ(n) = ∂Λ(qM + r) certainly intersects
∂Λ(qM ), we have
Pp(0 ↔ ∂Λ(n)) ≤ Pp(0 ↔ ∂Λ(qM))
≤ Kp(M)Pp(0 ↔ ∂Λ((q − 1)M))
⋮
≤ (Kp(M))^q = e^{q log(Kp(M))}.
Since Kp(M) < 1, we have log(Kp(M)) < 0. Also, since n < (q + 1)M, we have that
q > −1 + n/M = n(1/M − 1/n) ≥ n(1/M − 1/(M + 1)) for all n ≥ M + 1. Therefore, letting

κ′(p) = −(1/M − 1/(M + 1)) log(Kp(M)),

we have κ′(p) > 0 and

q log(Kp(M)) < −nκ′(p)

for all n ≥ M + 1, which implies that Pp(0 ↔ ∂Λ(n)) < e^{−nκ′(p)} for all n ≥ M + 1.
Since there are only a finite number of n ≤ M, we can choose an S > 0 so that Pp(0 ↔
∂Λ(n)) ≤ e^{−nS} for all n ≤ M, and then letting κ(p) = min{κ′(p), S} gives the
desired inequality. □
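The exponential decay of Theorem 4.13 is easy to observe numerically. The following sketch is our own illustration, not part of the proof; the helper name reaches_boundary is invented. It samples subcritical bond percolation on Λ(n) ⊂ Z², opening each edge lazily with probability p, and estimates Pp(0 ↔ ∂Λ(n)) by breadth-first search from the origin.

```python
import random
from collections import deque

def reaches_boundary(n, p, rng):
    """One percolation sample on the box Lambda(n) = [-n, n]^2: open each
    edge independently with probability p (lazily, on first visit) and test
    by BFS whether the open cluster of the origin reaches the boundary."""
    open_edge = {}

    def is_open(e):
        if e not in open_edge:
            open_edge[e] = rng.random() < p
        return open_edge[e]

    seen = {(0, 0)}
    q = deque([(0, 0)])
    while q:
        x, y = q.popleft()
        if max(abs(x), abs(y)) == n:
            return True  # reached the boundary of the box
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(nx), abs(ny)) > n or (nx, ny) in seen:
                continue
            if is_open(tuple(sorted([(x, y), (nx, ny)]))):
                seen.add((nx, ny))
                q.append((nx, ny))
    return False

rng = random.Random(0)
p, trials = 0.3, 2000
for n in (2, 4, 6, 8):
    freq = sum(reaches_boundary(n, p, rng) for _ in range(trials)) / trials
    print(n, freq)  # estimates of P_p(0 <-> boundary); roughly geometric decay
```

With p = 0.3 (well below pc = 1/2 for d = 2), the printed frequencies shrink by a roughly constant factor as n grows, consistent with the bound e^{−nκ(p)}.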
4.3. Supercritical percolation: uniqueness of the infinite cluster. If p > pc ,
then there is a nonzero probability of an infinite cluster containing the origin. By
the zero-one law (Theorem 3.28), for a given p there is either almost surely or
almost never an infinite cluster. Thus, if p > pc , there is almost surely an infinite
cluster. The goal of this section is to show that this infinite cluster is almost surely
unique.
We must first establish a technical result bounding the size of certain collections of partitions of a set, which we will use in the proof of the theorem to establish a relationship
between special points in the interior of a box and points on the boundary of the
box.
Definition 4.14.
1. Let Y be a set. A partition of Y is a collection P = {P(1), P(2), P(3)} of
three nonempty, disjoint subsets of Y such that P(1) ∪ P(2) ∪ P(3) = Y.
(Note that we consider two partitions of Y to be equal if they consist of
the same three subsets of Y, regardless of the ordering of the three subsets.)
2. Two partitions P and Q of a set Y are compatible if there are orderings of
P and Q so that P(1) ⊇ Q(2) ∪ Q(3).
3. A collection of partitions is compatible if the partitions are pairwise compatible.
Lemma 4.15. Suppose that S = {Si }i is a compatible collection of partitions of a
set Y . Then |S| ≤ |Y | − 2.
16
ALEXANDER DUNLAP
Proof. We proceed by induction on n = |Y|. Because elements of a partition must
be nonempty, there can only be one partition of Y if n = 3, so the lemma holds in
this case. For the rest of the proof, we will fix n ≥ 4 and assume that the lemma
holds for all Z with |Z| < n.

Let S ∈ S be an arbitrary partition. Fix an ordering S = {S(1), S(2), S(3)} of
the elements of S. Because S is compatible, for each T ∈ S \ {S} there is an ordering
of T and a unique i ∈ {1, 2, 3} so that S(i) ⊇ T(2) ∪ T(3); therefore, we can write
S \ {S} as the disjoint union S1 ∪ S2 ∪ S3, where T ∈ Si whenever there is
an ordering of T such that S(i) ⊇ T(2) ∪ T(3).

Fix i ∈ {1, 2, 3}. Let Zi = S(i) ∪ {∆}, where ∆ is an arbitrary object (not an
element of Y). Note that |Zi| = |S(i)| + 1 ≤ n − 1 < n, so by the inductive hypothesis
a compatible collection of partitions of Zi can contain no more than |Zi| − 2 =
|S(i)| − 1 partitions. If T ∈ Si, then we can define a partition T′ of Zi by

T′ = {(T(1) ∩ S(i)) ∪ {∆}, T(2), T(3)}.

It is clear that the map that takes T to T′ is injective and that the resulting
partitions of Zi are compatible. Thus, since a compatible collection of partitions
of Zi has at most |S(i)| − 1 elements, we have that |Si| ≤ |S(i)| − 1. This holds for
all i ∈ {1, 2, 3}, so |S| = |S1| + |S2| + |S3| + 1 ≤ (|S(1)| − 1) + (|S(2)| − 1) +
(|S(3)| − 1) + 1 = n − 2. □
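Lemma 4.15 can be verified by brute force for small Y. The sketch below is ours, not part of the source; it reads the compatibility condition of Definition 4.14 symmetrically (some block of one partition contains the union of two blocks of the other), enumerates all three-block partitions of a k-element set, and searches for the largest pairwise-compatible collection.

```python
from itertools import combinations

def partitions3(Y):
    """All unordered partitions of the finite set Y into three nonempty blocks."""
    elems = sorted(Y)
    n = len(elems)
    out = set()
    for r in range(1, n - 1):
        for A in combinations(elems, r):
            rest = [x for x in elems if x not in A]
            for s in range(1, len(rest)):
                for B in combinations(rest, s):
                    C = [x for x in rest if x not in B]
                    out.add(frozenset({frozenset(A), frozenset(B), frozenset(C)}))
    return list(out)

def compatible(P, Q):
    """P, Q compatible: some block of one contains the union of two blocks
    of the other (Definition 4.14, with both orderings free)."""
    for X, other in ((P, Q), (Q, P)):
        for blk in X:
            for b1, b2 in combinations(other, 2):
                if (b1 | b2) <= blk:
                    return True
    return False

def max_compatible(parts):
    """Size of the largest pairwise-compatible subcollection (brute force)."""
    best = 0
    for mask in range(1 << len(parts)):
        sel = [parts[i] for i in range(len(parts)) if mask >> i & 1]
        if all(compatible(a, b) for a, b in combinations(sel, 2)):
            best = max(best, len(sel))
    return best

for k in (3, 4):
    print(k, max_compatible(partitions3(range(k))))  # bounded by k - 2, per the lemma
```

For k = 4 there are six partitions (one doubleton plus two singletons each), and only complementary-doubleton pairs turn out to be compatible, so the largest compatible collection has size 2 = k − 2. The brute-force subset search is kept to k ≤ 4, since k = 5 already has 25 partitions and 2²⁵ candidate subcollections.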
We can now prove the main theorem of this section.
Theorem 4.16 (Uniqueness of the infinite cluster). If p > pc, then there is almost
surely exactly one infinite cluster. In other words, if N(ω) is the number of infinite
clusters in a configuration ω, then Pp({ω ∈ Ω | N(ω) = 1}) = 1.
Proof. We will use the notation ω^F and ω_F from 3.3.
Let S(n) be the diamond of radius n around the origin in Ld, so V(S(n)) =
{v ∈ Ld | ‖v‖₁ ≤ n} and E(S(n)) is the set of all edges between adjacent elements of
V(S(n)).
For any k ∈ N, the event Ek = {N (ω) = k} is translation-invariant, so by the
zero-one law (Theorem 3.28), we have Pp (Ek ) ∈ {0, 1}. Thus, there is a k ∈
Z≥0 ∪ {∞} so that Pp (Ek ) = 1, and our goal is to show that k = 1. By the
assumption that p > pc , we know that k ≥ 1.
Suppose first that k ∈ ℤ. For any natural number n, both of the two events
On = {ω ∈ Ω | ω = ω^{S(n)}} and Cn = {ω ∈ Ω | ω = ω_{S(n)}} are cylinders and thus
have nonzero measure under Pp. Let NS(n)(ω) be the number of distinct infinite
open clusters in ω that intersect S(n). Note that closing a finite number of edges
can only increase the number of infinite clusters intersecting S(n), so NS(n)(ω) ≤
NS(n)(ω_{S(n)}) for all n ∈ ℕ, ω ∈ Ω.
Suppose that Pp(N(ω) ≠ k | ω ∈ On) > 0. Then
Pp(N(ω) ≠ k) ≥ Pp(N(ω) ≠ k and ω ∈ On)
= Pp(N(ω) ≠ k | ω ∈ On)Pp(ω ∈ On) > 0,
contradicting the definition of k. Therefore (using an analogous argument for Cn),
Pp(N(ω) = k | ω ∈ On) = Pp(N(ω) = k | ω ∈ Cn) = 1, which implies that
Pp(N(ω^{S(n)}) = k) = Pp(N(ω_{S(n)}) = k) = 1.
The intersection of two almost-sure events is also almost sure, so we have
Pp(N(ω^{S(n)}) = k = N(ω_{S(n)})) = 1.
But if ω is such that N(ω^{S(n)}) = N(ω_{S(n)}), then ω can have no more than one
infinite open cluster intersecting S(n), since otherwise opening all of the edges of
S(n) (to form ω^{S(n)} from ω_{S(n)}) would merge those clusters and decrease the total
number of infinite clusters. Therefore, Pp(NS(n)(ω) ≥ 2) = 0.
If ω ∈ Ω is such that N(ω) ≥ 2, then there is an n ∈ ℕ so that NS(n)(ω) ≥ 2,
so {ω ∈ Ω | N(ω) ≥ 2} = ⋃_{n∈ℕ} {ω ∈ Ω | NS(n)(ω) ≥ 2}. Therefore, by continuity
of measures, Pp(N(ω) ≥ 2) = lim_{n→∞} Pp(NS(n)(ω) ≥ 2) = 0, which implies that
k = 1 if k ∈ ℤ.
We now need only to dismiss the case k = ∞. Given a configuration ω ∈ Ω,
we say that x ∈ Ld is a trifurcation of ω, and write Tx (ω), if the following three
conditions hold:
1. x is a member of an infinite cluster of ω.
2. x is an endpoint of exactly three open edges e1 = ⟨x, y1⟩, e2 = ⟨x, y2⟩, e3 = ⟨x, y3⟩.
3. In the configuration ω_{{e1,e2,e3}} created by declaring the edges surrounding
x to be closed, we have that y1, y2, y3 are members of three distinct infinite
clusters.
By translation-invariance of Pp, it is clear that Pp(Tx) = Pp(T0) for all x ∈ Ld.
Therefore,

Ep[Σ_{x∈S(n)} 1_{Tx}] = Σ_{x∈S(n)} Pp(Tx) = |S(n)|Pp(T0).
We wish to show that, if k = ∞, the expectation Ep[Σ_{x∈S(n)} 1_{Tx}] grows in
proportion to |S(n)| as n → ∞, which means we must prove that Pp(T0) ≠ 0. This
conclusion will lead to a contradiction, since we will show that Ep[Σ_{x∈S(n)} 1_{Tx}]
in fact cannot grow faster than |∂S(n)|, which grows more slowly than |S(n)|.
We assume that k = ∞, so Pp(N(ω) = ∞) = 1, which certainly implies that
Pp(N(ω) ≥ 3) = 1. Given a configuration ω such that N(ω) ≥ 3, there is clearly
an n ∈ ℕ so that NS(n)(ω) ≥ 3, which implies that NS(n)(ω_{S(n)}) ≥ 3. Therefore,
{ω | N(ω) ≥ 3} ⊆ ⋃_{n∈ℕ} {ω | NS(n)(ω_{S(n)}) ≥ 3}, so

1 = Pp(N(ω) ≥ 3) ≤ lim_{n→∞} Pp(NS(n)(ω_{S(n)}) ≥ 3) ≤ 1,

so lim_{n→∞} Pp(NS(n)(ω_{S(n)}) ≥ 3) = 1, so there is an m ∈ ℕ such that

Pp(NS(m)(ω_{S(m)}) ≥ 3) ≥ 1/2,

with 1/2 chosen as an arbitrary constant in (0, 1).
Suppose ω is a configuration such that NS(m)(ω_{S(m)}) ≥ 3. Then there are three
points x(ω), y(ω), z(ω) ∈ ∂S(m) so that x(ω), y(ω), z(ω) are in different infinite
clusters of ω. It can be shown geometrically that there is a set of edges Eω ⊆
E(S(m)) connecting x(ω), y(ω), z(ω) through disjoint paths to the origin so that 0
is a trifurcation of the configuration (ω^{Eω})_{E(S(m))\Eω} (which is given by taking
all edges of Eω to be open and all other edges of S(m) to be closed). Now

Pp(T0) ≥ Pp(NS(m)(ω_{S(m)}) ≥ 3 and ω = (ω^{Eω})_{E(S(m))\Eω})
= Pp(NS(m)(ω_{S(m)}) ≥ 3) Pp(ω = (ω^{Eω})_{E(S(m))\Eω} | NS(m)(ω_{S(m)}) ≥ 3)
≥ (1/2) Pp(ω = (ω^{Eω})_{E(S(m))\Eω} | NS(m)(ω_{S(m)}) ≥ 3)
> 0,

since {ω | ω = (ω^{Eω})_{E(S(m))\Eω}} is a cylinder event.
hP
i
Thus, we can write Ep
= K|S(n)|, where K = Pp (Tx ) > 0.
x∈S(n) 1Tx
However, we will useP
Lemma 4.15 to show that this is absurd, because for any
ω ∈ Ω, we must have x∈S(n) 1Tx (ω) < |∂S(n)|.
Let ω ∈ Ω and n ∈ ℕ. Fix an infinite cluster C that intersects S(n). Let
UC = ∂S(n) ∩ C. If x ∈ C ∩ S(n) is a trifurcation of ω, then closing the edges
around x separates C into three infinite clusters Cx(1), Cx(2), Cx(3) and thus gives
rise to a partition Px = {Px(i) = Cx(i) ∩ UC}_{i=1,2,3} of UC. Given two trifurcations
x, x′ ∈ C ∩ S(n) of ω, ordering Px and Px′ so that Cx(1) contains x′ and Cx′(1)
contains x ensures that Px(1) ⊇ Px′(2) ∪ Px′(3). Thus, in the language of Definition 4.14,
the class of partitions P = {Px | x ∈ C ∩ S(n) and x is a trifurcation of ω} is
compatible, so |P| ≤ |UC| − 2 by Lemma 4.15. Moreover, by the definition of
trifurcation, the map that takes x to Px is injective, so Σ_{x∈S(n)∩C} 1_{Tx}(ω) ≤
|UC| − 2 < |UC|. Summing over all clusters C, we have that Σ_{x∈S(n)} 1_{Tx}(ω) <
|∂S(n)| since the UCs are disjoint. But this means that, for large n, we cannot
possibly have Ep[Σ_{x∈S(n)} 1_{Tx}] = K|S(n)|, since |S(n)| grows in proportion to n^d
while |∂S(n)| grows in proportion to n^{d−1}. This dismisses the case k = ∞, so we
have proved that k = 1. □
4.4. The critical value in two dimensions. In general, it is very difficult to compute exactly the critical probability pc for a given lattice. For the two-dimensional
square lattice L2 , however, we can use planar duality and the results of the previous two sections to prove that pc = 1/2. We use separate arguments to show that
pc ≥ 1/2 and that pc ≤ 1/2.
Theorem 4.17. On L2 , we have pc ≥ 1/2.
Proof. Let p = 1/2. Suppose that pc < 1/2, so θ(1/2) > 0. For each n ∈ ℕ, define
the event An = {∂Λ(n) ↔ ∞}, and note that the events An form an increasing sequence.
Since p > pc, there is almost surely an infinite cluster in any configuration, and the
box Λ(n) will intersect this infinite cluster if n is sufficiently large, so P1/2(⋃n An) = 1.
Therefore, limn→∞ P1/2(An) = 1, so there is an N ∈ ℕ so that

(4.18)    P1/2(∂Λ(n) ↔ ∞) = P1/2(An) ≥ 1 − (1/8)^4
for all n ≥ N . Fix n = N + 1. Let AN , AS , AE , and AW be the events that the
north, south, east, and west sides of Λ(n), respectively, are members of infinite open
clusters. Note that these events have equal probability by symmetry. We have that
P1/2(Λ(n) ↮ ∞) = P1/2((A^N)^c ∩ (A^S)^c ∩ (A^E)^c ∩ (A^W)^c) ≥ [P1/2((A^N)^c)]^4

by Corollary 3.11 (since (A^N)^c, (A^S)^c, (A^E)^c, (A^W)^c are decreasing events), so

P1/2((A^N)^c) ≤ [1 − P1/2(Λ(n) ↔ ∞)]^{1/4} ≤ 1/8

by (4.18).
Consider next the dual box Λ(n)d in L2d with vertex set [−n, n − 1]² + (1/2, 1/2).
Let A^N_d, A^S_d, A^E_d, A^W_d be the events that the north, south, east, and west sides of
Λ(n)d contain a vertex of an infinite closed cluster in L2d. The dual box Λ(n)d has side
length n − 1 = N, so (4.18) applies to Λ(n)d as well, and thus P1/2((A^N_d)^c) ≤ 1/8.
Let A = A^N ∩ A^S ∩ A^E_d ∩ A^W_d. If A occurs, then by Theorem 4.16, there must be
an open path in L2 joining the north and south sides of Λ(n) and a closed path in L2d
joining the east and west sides of Λ(n)d. But this is impossible, since it would require
the two paths to cross. Therefore, P1/2(A) = 0. On the other hand, we have

P1/2(A^c) = P1/2((A^N)^c ∪ (A^S)^c ∪ (A^E_d)^c ∪ (A^W_d)^c)
≤ P1/2((A^N)^c) + P1/2((A^S)^c) + P1/2((A^E_d)^c) + P1/2((A^W_d)^c)
≤ 1/2,

so P1/2(A) ≥ 1/2, contradicting the fact proven above that P1/2(A) = 0. Therefore,
pc ≥ 1/2, as desired. □
Theorem 4.19. On L2 , we have pc ≤ 1/2.
Proof. Let p = 1/2. Suppose for the sake of contradiction that pc > 1/2. By
Theorem 4.13, there is a κ(1/2) > 0 so that P1/2 (0 ↔ ∂Λ(n)) ≤ exp(−nκ(1/2)) for
all n ∈ N.
Let B(n) be the modified box given by removing the edges on the east and west
sides of the box given by [0, n + 1] × [0, n]. Let An be the event that there is
an open path from the west side of B(n) to the east side of B(n). We consider
the dual box B(n)d in L2d generated by [0, n] × [0, n + 1] + (1/2, −1/2), with the
edges on the north and south sides removed. Let A′n be the event that there is a
closed path from the north side of B(n)d to the south side of B(n)d. It is clear
that P1/2(An) = P1/2(A′n) and that exactly one of An and A′n must occur (see
Remark 3.36), so

(4.20)    P1/2(An) = 1/2.
However, we also have

An ⊆ ⋃_{i=0}^{n} {(0, i) ↔ (0, i) + ∂Λ(n)},

since an open west–east crossing of B(n) starts at some vertex (0, i) with 0 ≤ i ≤ n
and must leave the box (0, i) + Λ(n). This implies

P1/2(An) ≤ Σ_{i=0}^{n} P1/2((0, i) ↔ (0, i) + ∂Λ(n))
= (n + 1)P1/2(0 ↔ ∂Λ(n))
≤ (n + 1) exp(−κ(1/2)n),

which will eventually fall below 1/2 as n grows large, contradicting (4.20). □
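The self-duality behind (4.20) can be seen in simulation: at p = 1/2, an open west–east crossing of the modified box B(n) occurs with probability exactly 1/2 for every n. The sketch below is our own illustration; the helper name crosses is invented. It samples bond percolation on B(n) and estimates the crossing frequency by BFS from the west side.

```python
import random
from collections import deque

def crosses(n, p, rng):
    """One sample of bond percolation on the modified box B(n): vertices
    [0, n+1] x [0, n], with the vertical edges on the west (x = 0) and
    east (x = n+1) sides removed. Returns True if there is an open path
    from the west side to the east side."""
    W, H = n + 2, n + 1  # vertex grid: x in 0..n+1, y in 0..n
    open_h = [[rng.random() < p for _ in range(H)] for _ in range(W - 1)]  # (x,y)-(x+1,y)
    open_v = [[rng.random() < p for _ in range(H - 1)] for _ in range(W)]  # (x,y)-(x,y+1)
    for x in (0, W - 1):  # remove the west- and east-side boundary edges
        for y in range(H - 1):
            open_v[x][y] = False
    seen = [[False] * H for _ in range(W)]
    q = deque()
    for y in range(H):  # start BFS from every west-side vertex
        seen[0][y] = True
        q.append((0, y))
    while q:
        x, y = q.popleft()
        if x == W - 1:
            return True  # reached the east side
        nbrs = []
        if x + 1 < W and open_h[x][y]:
            nbrs.append((x + 1, y))
        if x > 0 and open_h[x - 1][y]:
            nbrs.append((x - 1, y))
        if y + 1 < H and open_v[x][y]:
            nbrs.append((x, y + 1))
        if y > 0 and open_v[x][y - 1]:
            nbrs.append((x, y - 1))
        for nx, ny in nbrs:
            if not seen[nx][ny]:
                seen[nx][ny] = True
                q.append((nx, ny))
    return False

rng = random.Random(0)
trials = 2000
hits = sum(crosses(8, 0.5, rng) for _ in range(trials))
print(hits / trials)  # by self-duality, the crossing probability is exactly 1/2
```

The estimate hovers near 0.5 for any box size, while for p noticeably below (above) 1/2 the crossing frequency collapses to 0 (rises to 1) as n grows, which is the dichotomy the two theorems above exploit.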
Acknowledgments. I would like to thank my mentor, Mohammad Abbas Rezaei,
for his valuable guidance throughout this project, and Professor Peter May for his
work organizing and directing this REU.
References
[1] Patrick Billingsley. Probability and Measure. Second edition. John Wiley and Sons. 1986.
[2] R. M. Burton and M. Keane. Density and Uniqueness in Percolation. Communications in
Mathematical Physics 121, 501-505. 1989.
[3] Geoffrey Grimmett. Percolation. Second edition. Springer. 1999.
[4] Geoffrey Grimmett. Probability on Graphs: Random Processes on Graphs and Lattices. Statistical Laboratory, University of Cambridge. 2010.