© Birkhäuser Verlag, Basel, 2003
Annals of Combinatorics 7 (2003) 35-47
0218-0006/03/010035-13
Branching Processes with Negative Offspring
Distributions
Ioana Dumitriu1, Joel Spencer2, and Catherine Yan3∗
1Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139
USA
[email protected]
2Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
[email protected]
3Department of Mathematics, Texas A&M University, College Station, TX 77843, USA
[email protected]
Received January 3, 2003
AMS Subject Classification: 05A16, 05D40, 60J80
Abstract. A branching process is a mathematical description of the growth of a population in which each individual produces offspring according to stochastic laws. Branching processes can be modeled by one-dimensional random walks with non-negative integral step-sizes. In this paper we consider a random walk with a similar algebraic setting, but whose step-size distribution is allowed to take negative values. We give an exact formula for the limit probability that the random walk continues forever. Asymptotic results are also presented.
Keywords: branching processes, negative offspring
1. Introduction
A branching process is a mathematical description of the growth of a population in which each individual produces offspring according to stochastic laws. A typical process is of the following form. Consider a population of individuals developing from a single progenitor — the initial individual. The initial individual produces a random number of offspring, each of whom in turn produces a random number of offspring; and so the process continues as long as there are live individuals in the population. An interesting question is to find the probability that the population survives (or, respectively, becomes extinct). The branching process was proposed by Galton [5], and the probability of extinction was first obtained by Watson [17] by considering the probability generating function for the number of children in the nth generation. This mathematical model became known as
∗ Research supported by NSF grant DMS-0070574 and a Sloan fellowship. Part of the research was done during the author’s visit to the Institute for Advanced Study.
the Galton-Watson branching process, and it has been studied thoroughly in the literature, for example, [2, 6, 7, 10]. Some interesting details on the early history of branching processes can be found in [9].
Another model for branching processes is based on the interpretation of the random walk Sn − n and the branching process in terms of queueing theory, which is due to Kendall [8]. Here Sn = Z1 + Z2 + · · · + Zn , where the Zi ’s are independent random variables with the same distribution as the offspring. In this model, the random family of a branching process is represented by a queue with a single server, with the service time for each customer being a unit period. The queue starts with one customer, corresponding to the initial individual in the branching process. Each individual in the random family represents a different customer, who arrives and waits for service in the queue. A customer j arrives during the service period of i if j is a child of i. Let Yt be
the number of customers in the queue at the end of the t-th service period. Then Y0 = 1,
and
Yi = Yi−1 + Zi − 1 (i ≥ 1),
where the Zi are independent random variables with the offspring distribution. Note that the total progeny is represented by the length of the first busy period of the server, i.e., the smallest t such that Yt = 0. The process continues forever iff Yi > 0 for all i.
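This queueing recurrence is easy to simulate. The sketch below (the subcritical mean 0.8, seed, and trial count are illustrative choices of ours, not from the paper) estimates the mean length of the first busy period, which for a subcritical process equals 1/(1 − E[Z]):

```python
import math
import random

def sample_poisson(rng, mean):
    # Knuth's method: count uniform draws until their product falls below e^-mean.
    limit = math.exp(-mean)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def busy_period(rng, mean):
    # Queue walk: Y_0 = 1, Y_i = Y_{i-1} + Z_i - 1; the first busy period
    # (= total progeny of the branching process) is the least t with Y_t = 0.
    y, t = 1, 0
    while y > 0:
        y += sample_poisson(rng, mean) - 1
        t += 1
    return t

rng = random.Random(0)
mean = 0.8            # subcritical: E[Z] < 1, so the busy period is a.s. finite
trials = 50_000
avg = sum(busy_period(rng, mean) for _ in range(trials)) / trials
# For a subcritical process, E[total progeny] = 1/(1 - E[Z]) = 5 here.
print(round(avg, 2))
```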
By the above correspondence, the probabilities of extinction given by the Galton-Watson model and the random walk model are identical. Let Z be the offspring distribution. A basic fact about branching processes states that when E[Z] = c < 1, with probability 1 the process dies out; but when E[Z] = c > 1, there is a positive probability α that the process will continue forever [4]. Let f (x) be the probability generating function for Z. In the case c > 1, α can be computed as 1 − s where s, the probability of extinction, is the unique solution of s = f (s) in the interval [0, 1). In particular, if the offspring distribution is Poisson with mean 1 + ε, then the probability of survival is asymptotically 2ε (cf. [15]). An important condition in solving for such probabilities in Galton-Watson branching processes is that the possible values of the offspring distribution Z are non-negative integers.
In this paper we consider a random walk with a similar algebraic setting as the ones corresponding to branching processes. However, the step-size distribution can take negative values. Explicitly, let Z be a probability distribution with non-negative integral values and E[Z] = 1 − ε, where 0 < ε < 1. Define a random walk Y = Y0 , Y1 , . . . by the formula
Y0 = 1,    Yi = Yi−1 + 1 − Zi ,    (1.1)
where Z1 , Z2 , . . . are independent random variables, each with distribution Z. In particular, we consider the case where Z is distributed according to the Poisson law with mean 1 − ε, denoted by Po(1 − ε). Let T be the least t for which Yt ≤ 0. If no such t exists, we say that T = +∞. Since the expected step-size E[1 − Z] = ε > 0, with probability approaching 1, Yn > 0 as n → ∞. We will compute the probability that Yn > 0 for all n, or equivalently, Pr[T = +∞], in which case we say the random walk Y escapes. We also consider the asymptotics of Pr[T > N]. Note that this problem is essentially different from the one defined by branching processes, since the values of
the step-size random variable 1 − Z are mostly negative. Traditional methods (e.g., [4]) for Galton-Watson branching processes, based on the existence of a functional equation for the probability generating function, are not easily applied.
In this paper we find the exact value of the escaping probability for the random walk Y. We present two proofs. The first is a purely probabilistic one, based on the linearity of expectation. The second is more analytic, using a theorem of Otter and Dwass in branching processes, which is equivalent to the Lagrange inversion formula for formal power series. The second proof applies to more general offspring distributions Z, provided that both the expected value and variance of Z exist. It is also used to obtain asymptotics of Pr[T > N]. With suitable assumptions on Z, we prove that Pr[T = ∞ | T > N] is asymptotically 1 if N ≫ ε^{−2} , when ε → 0+ . We also give a formula that generalizes the Otter-Dwass theorem, and use it to prove that if Z = Po(1 − ε), then N ≫ ε^{−2} is both necessary and sufficient for Pr[T = ∞ | T > N] ∼ 1.
Generalizations of the Galton-Watson branching process have been studied in the statistical literature, for example, by Quine and Szczotka [12–14]. It is shown there that some standard branching process results hold in more general settings, e.g., when the offspring random variables are permitted to take negative integer values and are not necessarily i.i.d. The statistical work concentrated on the stochastic process {Xn , n = 0, 1, 2, . . .}, where Xn represents the size of the n-th generation in a branching process. Our model is different from theirs. Firstly, we use a different scale. We consider the limit behavior of the random walk as time evolves, which corresponds to the evolution of the size of a queueing system; there is no natural notion of generation in a queueing system. Secondly, we assume that the step-size random variables are i.i.d. with the distribution 1 − Z, where Z is non-negative. These assumptions enable us to get exact solutions instead of a general description. Thirdly, we take a different approach. Our proofs are more elementary, combining basic probability theory and combinatorial methods, and lead to applications in enumeration.
2. The Escaping Probability
Theorem 2.1. Fix ε > 0. For the random walk Y defined by formula (1.1) with Z = Po(1 − ε), the escaping probability Pr[T = +∞] is e^{1−ε} ε.
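A Monte Carlo sanity check of this value (a sketch; the parameter ε = 0.3, the absorbing height cutoff, and the step cap are heuristic choices of ours, not part of the theorem):

```python
import math
import random

def sample_poisson(rng, mean):
    # Knuth's method: count uniform draws until their product falls below e^-mean.
    limit = math.exp(-mean)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def escapes(rng, eps, height=100, cap=2000):
    # Walk (1.1): Y_0 = 1, Y_i = Y_{i-1} + 1 - Z_i with Z_i ~ Po(1 - eps).
    # Heuristic cutoffs (ours, not part of the theorem): a walk that climbs
    # to `height`, or is still alive after `cap` steps, is declared escaped;
    # at these parameters the misclassification probability is negligible.
    y, mean = 1, 1.0 - eps
    for _ in range(cap):
        y += 1 - sample_poisson(rng, mean)
        if y <= 0:
            return False
        if y >= height:
            return True
    return True

rng = random.Random(1)
eps, trials = 0.3, 10_000
est = sum(escapes(rng, eps) for _ in range(trials)) / trials
exact = math.exp(1 - eps) * eps  # Theorem 2.1: Pr[T = +infinity] = e^(1-eps) * eps
print(round(est, 3), round(exact, 3))
```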
2.1. The Probabilistic Proof
Let pi = Pr[Z = i] for i ≥ 0. Consider a new random walk Y′ defined by the same recurrence as (1.1), except that the beginning position is zero, i.e.,
Y′0 = 0,    Y′i = Y′i−1 + 1 − Zi ,    (2.1)
where Z1 , Z2 , . . . are independent random variables with the same distribution Z. Let T′ be the least t > 0 for which Y′t ≤ 0. If no such t exists, we say that T′ = +∞. Denote by s′ the escaping probability Pr[T′ = +∞] of Y′. It is clear that if s is the escaping probability of the original random walk Y, then s′ = p0 s and hence s = s′/p0 .
Let q = 1 − s′ be the probability of extinction for Y′. Let Wn be the indicator random variable for the event Y′n = 0, i.e., Wn = 1 if Y′n = 0 and Wn = 0 otherwise, and let W = ∑_{n=1}^∞ Wn . Then W is the number of t such that Y′t = 0. (It is possible that W = ∞.)
A crucial property of the random walk Y′ is that, in each step, Y′ can rise by at most one unit. Therefore, any random walk with W = k (k an integer, k ≥ 2) consists of k + 1 parts (see Figure 1): the first k parts are random walks that begin at zero and either drop to the x-axis, or drop below the x-axis and then move back to zero. The last part is a random walk that begins at 0 and remains above the x-axis for all t > 0. It is clear that the last part corresponds to a random walk defined as Y′ with T′ = ∞. This happens with probability s′.
Figure 1: A random walk Y′ starting at 0 with returns to zero at times t1 , t2 , . . . , tk .
In Y′, let T′′ be the least t such that t > 0 and Y′t = 0. Hence
Pr[W = k] = ∑_{t1 ,...,tk >0} s′ ∏_{i=1}^k Pr[T′′ = ti ] = s′ ( ∑_{t>0} Pr[T′′ = t] )^k = s′ (Pr[T′′ < ∞])^k .
If T′′ < ∞, then T′ < ∞. Conversely, we claim that if T′ < ∞, then with probability one T′′ < ∞. To prove the claim, first note that T′ < ∞ but T′′ = ∞ implies that there exists N > 0 such that for all n ≥ N, Y′n < 0. Let An , n ∈ N, be the event that Y′n < 0. Y′n has the distribution n − Sn , where Sn is distributed according to the Poisson law with mean n(1 − ε). For the Poisson distribution, we have the following large deviation result (e.g., see [1, Appendix A.15]).
Lemma 2.2. Let P have the Poisson distribution with mean µ. For δ > 0, we have
Pr[P ≤ µ(1 − δ)] ≤ e^{−δ² µ/2} ,    Pr[P ≥ µ(1 + δ)] ≤ (c_δ)^µ ,
where c_δ = e^δ (1 + δ)^{−(1+δ)} < 1 is a constant depending only on δ.
Using the lemma, Pr[An ] = Pr[Y′n < 0] = Pr[Sn > n] ≤ (c_δ)^{n(1−ε)} , where δ = ε/(1 − ε) and c_δ is a constant in (0, 1). It follows that ∑_{n=1}^∞ Pr[An ] converges. By the Borel-Cantelli Lemma, with probability one Y′n < 0 holds only for finitely many n. This proves the claim. Therefore Pr[T′′ < ∞] = Pr[T′ < ∞] = q and Pr[W = k] = q^k s′.
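Both bounds of Lemma 2.2 can be spot-checked against exact Poisson tail probabilities; the test points µ and δ below are arbitrary choices of ours:

```python
import math

def poisson_pmf(mu, j):
    # e^-mu * mu^j / j!, computed in log space for stability.
    return math.exp(-mu + j * math.log(mu) - math.lgamma(j + 1))

def lower_tail(mu, m):
    # Pr[P <= m] for P ~ Po(mu)
    return sum(poisson_pmf(mu, j) for j in range(m + 1))

def upper_tail(mu, m):
    # Pr[P >= m]
    return 1.0 - lower_tail(mu, m - 1)

checks = []
for mu in (10.0, 40.0):
    for delta in (0.2, 0.5):
        lo_exact = lower_tail(mu, math.floor(mu * (1 - delta)))
        lo_bound = math.exp(-delta**2 * mu / 2)
        c = math.exp(delta) * (1 + delta) ** (-(1 + delta))
        hi_exact = upper_tail(mu, math.ceil(mu * (1 + delta)))
        hi_bound = c ** mu
        checks.append(lo_exact <= lo_bound and hi_exact <= hi_bound)
print(all(checks))
```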
It follows that Pr[W ≥ k] = q^k . Hence
E[W ] = ∑_{k=1}^∞ Pr[W ≥ k] = q/(1 − q) = (1 − s′)/s′ .
On the other hand, by the linearity of expectation,
E[W ] = ∑_{n=1}^∞ E[Wn ] = ∑_{n=1}^∞ Pr[Y′n = 0].
Again, note that Y′n = n − Sn , where Sn has the Poisson distribution with mean n(1 − ε). Therefore
E[W ] = ∑_{n=1}^∞ Pr[Sn = n] = ∑_{n=1}^∞ e^{−(1−ε)n} ((1 − ε)n)^n / n!
= ∑_{n=1}^∞ (n^n / n!) ((1 − ε) e^{−(1−ε)})^n = t e^{−t} R′(t e^{−t}),    (2.2)
where t = 1 − ε and R(x) = (x e^{−x})^{<−1>} is the inverse function of y = x e^{−x} . (See, e.g., [16, Chap. 5.3].)
Let y = t e^{−t} . From R(t e^{−t}) = t, differentiating with respect to t, we have
R′(y)(e^{−t} − t e^{−t}) = 1,
that is,
R′(y) = e^t / (1 − t).
Substituting into (2.2), we have
(1 − s′)/s′ = E[W ] = t/(1 − t) = (1 − ε)/ε.
Therefore s′ = ε and s = ε/p0 = e^{1−ε} ε.
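The series identity behind (2.2), namely ∑_{n≥1} (n^n/n!)(t e^{−t})^n = t/(1 − t), can be confirmed numerically (the truncation point below is an arbitrary choice; the terms decay geometrically for 0 < t < 1):

```python
import math

def series(t, terms=5000):
    # sum_{n>=1} (n^n / n!) * (t e^{-t})^n, each term evaluated in log space.
    log_x = math.log(t) - t   # log of x = t e^{-t}
    total = 0.0
    for n in range(1, terms + 1):
        total += math.exp(n * math.log(n) - math.lgamma(n + 1) + n * log_x)
    return total

for t in (0.3, 0.5, 0.7):
    assert abs(series(t) - t / (1 - t)) < 1e-9
print("ok")
```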
Remark. Theorem 2.1 asserts that the escaping probability s′ is ε for the random walk Y′. Such a clean answer deserves an easy explanation. The following is an intuitive one:
In the infinite random walk Y′, for any i > 0, let Wi be the number of t such that Y′t = i. For a fixed i, almost surely the random walk will hit the line y = i, i.e., Pr[Wi ≥ 1] = 1. For any k > 1, similarly to the argument in the preceding proof, we have Pr[Wi = k] = q^{k−1} s′ and Pr[Wi ≥ k] = q^{k−1} . Therefore E[Wi ] = 1 + q + q² + · · · = 1/s′, and consequently
E[W1 + W2 + · · · + W_{Nε} ] = Nε/s′ .
On the other hand,
E[W1 + W2 + · · · + W_{Nε} ] = ∑_{j=0}^∞ Pr[1 ≤ Y′j ≤ Nε],
where Y′j has distribution j − Sj , with Sj = Z1 + Z2 + · · · + Zj ∼ Po( j(1 − ε)).
Using Lemma 2.2, for √N < j < N − K√N (K ≫ 1), we have
Pr[Y′j > Nε] = Pr[Sj < j − Nε] ≤ e^{−ε²(N−j)²/(2 j(1−ε))} ≤ e^{−ε²K²/(2(1−ε)(1−K/√N))} ,
Pr[Y′j ≤ 0] = Pr[Sj − j ≥ 0] ≤ (c_ε)^j ,
where c_ε < 1 is a constant. Taking K = N^{1/3} , we have
∑_{j=√N}^{N−N^{5/6}} Pr[1 ≤ Y′j ≤ Nε] ∼ N(1 + o(1)).
While for j > N + K√N, K = N^{1/3} , we have
Pr[1 ≤ Y′j ≤ Nε] ≤ Pr[Y′j ≤ Nε] = Pr[Sj − j(1 − ε) ≥ ( j − N)ε] ≤ (c_δ)^{ j(1−ε)} ,
where δ = (ε/(1 − ε)) · ( j − N)/ j. From Lemma 2.2, c_δ is decreasing as δ increases, and c_δ ∼ e^{−δ²/2} for δ small. So
c_δ ≤ c_{εN^{−1/6}/(1−ε)} ∼ e^{−ε²N^{−1/3}/(2(1−ε)²)} .
Thus Pr[1 ≤ Y′j ≤ Nε] ≤ e^{−ε²N^{−1/3} j/(2(1−ε))} , and
∑_{j=N+N^{5/6}}^∞ Pr[1 ≤ Y′j ≤ Nε] = O( e^{−ε²N^{1/3}/(2(1−ε))} ) = o(1).
Therefore
E[W1 + W2 + · · · + W_{Nε} ] = N(1 + o(1)) + O(N^{1/2}) + O(N^{5/6}) + o(1) = N(1 + o(1)).
Comparing the two expressions for E[W1 + W2 + · · · + W_{Nε} ], we have Nε/s′ ∼ N when N → ∞. This implies that the escaping probability s′ equals ε.
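The identity E[Wi ] = 1/s′ = 1/ε driving this argument can be spot-checked by simulation (a sketch; the level i = 5, horizon, seed, and trial count are heuristic choices of ours, and visits beyond the horizon are ignored as negligible at these parameters):

```python
import math
import random

def sample_poisson(rng, mean):
    # Knuth's method for Poisson sampling.
    limit = math.exp(-mean)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def visits_to_level(rng, eps, level=5, horizon=400):
    # Walk Y': Y'_0 = 0, Y'_i = Y'_{i-1} + 1 - Z_i with Z_i ~ Po(1 - eps).
    # Count visits to `level` within `horizon` steps; with upward drift eps,
    # later visits are vanishingly rare at these parameters.
    y, mean, count = 0, 1.0 - eps, 0
    for _ in range(horizon):
        y += 1 - sample_poisson(rng, mean)
        if y == level:
            count += 1
    return count

rng = random.Random(2)
eps, trials = 0.25, 5_000
avg = sum(visits_to_level(rng, eps) for _ in range(trials)) / trials
print(round(avg, 2))  # the remark predicts E[W_5] = 1/eps = 4
```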
2.2. The Analytic Proof
We compute the probability Pr[T > N] directly, using the Otter-Dwass theorem in the theory of branching processes. This proof applies to all integral distributions Z, provided that E(Z) and var(Z) exist.
In a branching process with offspring distribution Z, let #Fk be the total progeny in the process starting with k individuals. The following theorem presents a remarkable formula for the distribution of #Fk restricted to positive integers. This formula was discovered by Otter [11] for k = 1, and extended to all k ≥ 1 by Dwass [3].
Theorem 2.3 (Otter and Dwass). For all n, k = 1, 2, . . . , we have
Pr[#Fk = n] = (k/n) Pr[Sn = n − k],    (2.3)
where Sn = Z1 + Z2 + · · · + Zn .
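For a small offspring law the Otter-Dwass formula can be verified exactly by brute-force enumeration (the law on {0, 1, 2} below is an arbitrary test choice of ours, not from the paper):

```python
from fractions import Fraction
from itertools import product

# Hypothetical offspring law supported on {0, 1, 2}; any small test law works.
p = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

def weight(zs):
    # Probability of the i.i.d. sample (z_1, ..., z_n).
    w = Fraction(1)
    for z in zs:
        w *= p[z]
    return w

def prob_total_progeny(k, n):
    # Pr[#F_k = n]: the walk X_0 = k, X_j = X_{j-1} + z_j - 1 (the queue
    # representation of the branching process) first hits 0 at step n.
    total = Fraction(0)
    for zs in product(p, repeat=n):
        x, ok = k, True
        for j, z in enumerate(zs, start=1):
            x += z - 1
            if x <= 0 and j < n:
                ok = False
                break
        if ok and x == 0:
            total += weight(zs)
    return total

def prob_sum(n, m):
    # Pr[S_n = m] with S_n = Z_1 + ... + Z_n, by direct enumeration.
    return sum(weight(zs) for zs in product(p, repeat=n) if sum(zs) == m)

for n in range(1, 7):
    for k in range(1, n + 1):
        assert prob_total_progeny(k, n) == Fraction(k, n) * prob_sum(n, n - k)
print("ok")
```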
To prove Theorem 2.1, as in the first proof, we consider the random walk Y′ defined by
Y′0 = 0,    Y′i = Y′i−1 + 1 − Zi ,
where the Zi are independent with distribution Z, E(Z) = 1 − ε > 0, and var(Z) = σ. Again let T′ be the least t > 0 such that Y′t ≤ 0.
Lemma 2.4. For 1 ≤ k ≤ n, we have
Pr[T′ > n, Y′n = k] = (k/n) Pr[Sn = n − k].
Proof. For any walk {Y′0 , Y′1 , Y′2 , . . . , Y′n } satisfying Y′0 = 0, Y′i > 0 for i = 1, 2, . . . , n − 1 and Y′n = k, define the reversed walk X = {X0 , X1 , . . . , Xn } as follows: X0 = k, Xi = Y′_{n−i} . Then Xi = Y′_{n−i} = Y′_{n−i−1} + 1 − Z_{n−i} = X_{i+1} + 1 − Z_{n−i} . That is, X satisfies the recurrence X0 = k, X_{i+1} = Xi + Zi − 1, where the Zi are independent random variables with the identical distribution Z. In other words, {X0 , X1 , . . . , Xn } is a random walk corresponding to the Galton-Watson branching process with the offspring distribution Z and k individuals at the zeroth generation. The conditions Y′0 = 0, Y′i > 0 for i = 1, 2, . . . , n − 1 imply that Xi > 0 for i = 1, 2, . . . , n − 1, and Xn = 0. So this corresponds to the case where the total progeny in the branching process is exactly n, which happens with probability Pr[#Fk = n]. Now the lemma follows immediately from the Otter-Dwass theorem.
Proof of Theorem 2.1. For the random walk Y′, T′ > n means Y′1 = 1 (i.e., Z1 = 0). Hence for the random walk Y, p0 Pr[T > n, Yn = k] = Pr[T′ > n + 1, Y′_{n+1} = k], where p0 = Pr[Z = 0]. Therefore
Pr[T > n, Yn = k] = (1/p0 ) · (k/(n + 1)) Pr[S_{n+1} = n + 1 − k],    (2.4)
and
Pr[T > n] = (1/(p0 (n + 1))) ∑_{k=1}^{n+1} k Pr[S_{n+1} = n + 1 − k].    (2.5)
We evaluate the sum in Equation (2.5):
(1/(n + 1)) ∑_{k=1}^{n+1} k Pr[S_{n+1} = n + 1 − k]
= (1/(n + 1)) ∑_{k=0}^{n} (n + 1 − k) Pr[S_{n+1} = k]
= Pr[S_{n+1} ≤ n] − (1/(n + 1)) ∑_{k=0}^{n} k Pr[S_{n+1} = k]
= Pr[S_{n+1} ≤ n] − (1/(n + 1)) ( E[S_{n+1}] − ∑_{k>n} k Pr[S_{n+1} = k] )
= 1 − Pr[S_{n+1} ≥ n + 1] − (1 − ε) + (1/(n + 1)) ∑_{k>n} k Pr[S_{n+1} = k]
= ε − Pr[S_{n+1} ≥ n + 1] + (1/(n + 1)) ∑_{k>n} k Pr[S_{n+1} = k].
Note that E[S_{n+1}] = (n + 1)(1 − ε) and var(S_{n+1}) = (n + 1)σ. By Chebyshev’s inequality,
Pr[S_{n+1} ≥ n + 1] = Pr[S_{n+1} − (n + 1)(1 − ε) ≥ (n + 1)ε] ≤ σ/((n + 1)ε²) = o(1),
and
(1/(n + 1)) ∑_{k>n} k Pr[S_{n+1} = k] = Pr[S_{n+1} ≥ n + 1] + (1/(n + 1)) ∑_{k>n+1} Pr[S_{n+1} ≥ k]
≤ σ/((n + 1)ε²) + (1/(n + 1)) ∑_{t=1}^∞ (n + 1)σ/((n + 1)ε + t)²
∼ σ/((n + 1)ε²) + σ ∫_1^∞ dt/((n + 1)ε + t)²
= O(n^{−1}) = o(1).
Hence
Pr(T = ∞) = lim_{n→∞} Pr(T > n) = ε/p0 .
For the case where Z is the Poisson distribution with mean 1 − ε, p0 = e^{−(1−ε)} , and hence Pr[T = ∞] = e^{1−ε} ε.
Let pi = Pr[Z = i]. In the random walk Y, for 1 ≤ k ≤ n, we have
Pr[T > n, Yn = k] = ∑_{(z1 , z2 ,..., zn )} ∏_{i=1}^n p_{zi} ,
where (z1 , z2 , . . . , zn ) ranges over non-negative integers such that Yi > 0 for i = 1, 2, . . . , n − 1, and Yn = k. In terms of the zi , these conditions are equivalent to
z1 ≤ 1,
z1 + z2 ≤ 2,
· · ·
z1 + z2 + · · · + z_{n−1} ≤ n − 1,
z1 + z2 + · · · + zn = n + 1 − k.    (2.6)
When Z has the Poisson distribution with mean µ,
Pr[T > n, Yn = k] = ∑_{(z1 ,..., zn )} ∏_{i=1}^n e^{−µ} µ^{zi}/zi !
= e^{−µn} (µ^{n+1−k}/(n + 1 − k)!) ∑_{(z1 ,..., zn )} (n + 1 − k choose z1 , z2 , . . . , zn ),    (2.7)
where (z1 , z2 , . . . , zn ) satisfies condition (2.6). By Equation (2.4),
Pr[T > n, Yn = k] = e^µ (k/(n + 1)) Pr[S_{n+1} = n + 1 − k]
= e^µ (k/(n + 1)) e^{−(n+1)µ} ((n + 1)µ)^{n+1−k}/(n + 1 − k)! .    (2.8)
Combining (2.7) and (2.8), we get
∑_{(z1 ,..., zn )} (n + 1 − k choose z1 , z2 , . . . , zn ) = k(n + 1)^{n−k} ,    (2.9)
where (z1 , z2 , . . . , zn ) satisfies condition (2.6).
The sum on the left-hand side of Equation (2.9) has the following combinatorial interpretation. Put balls B1 , B2 , . . . , B_{n+1−k} into n different boxes. The sum counts the number of ways to do this so that the number of balls in the first i boxes does not exceed i, for i = 1, 2, . . . , n. Hence, we have
Corollary 2.5. Let 1 ≤ k ≤ n. The number of ways to put n + 1 − k distinct balls into n distinct boxes, so that the number of balls in the first i boxes does not exceed i, for all i = 1, 2, . . . , n, is k(n + 1)^{n−k} .
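Corollary 2.5 is easy to confirm by exhaustive enumeration for small n (the ranges below are arbitrary choices of ours):

```python
from itertools import product

def count_placements(n, k):
    # Place n+1-k distinct balls into n distinct boxes (box indices 0..n-1)
    # so that the first i boxes hold at most i balls, for every i = 1..n.
    balls = n + 1 - k
    count = 0
    for assign in product(range(n), repeat=balls):
        if all(sum(1 for b in assign if b < i) <= i for i in range(1, n + 1)):
            count += 1
    return count

for n in range(1, 6):
    for k in range(1, n + 1):
        assert count_placements(n, k) == k * (n + 1) ** (n - k)
print("ok")
```

For k = 1 this recovers the (n + 1)^{n−1} count of parking functions, consistent with the enumeration applications mentioned in the introduction.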
3. The Asymptotic Formula
In this section we consider the asymptotics of the escaping probability of Y when ε → 0+ . Using the Otter-Dwass theorem and Lemma 2.4, we estimate the escaping probability after a finite number of steps. More precisely, let Z^(ε) be a family of non-negative integral random variables such that, as ε → 0+ , Pr[Z^(ε) = 0] → p0 ∈ (0, 1) and E[Z^(ε) ] = 1 − cε + o(ε) for some positive constant c. Further, we assume a uniform bound M such that var(Z^(ε) ) ≤ M for all positive ε in some neighborhood of the origin. Under these conditions:
Theorem 3.1. Let the random walk Y^(ε) be defined by
Y^(ε)_0 = 1,    Y^(ε)_i = Y^(ε)_{i−1} + 1 − Z^(ε)_i ,
where the Z^(ε)_i are independent random variables with identical distribution Z^(ε) . Set S^(ε)_n = Z^(ε)_1 + Z^(ε)_2 + · · · + Z^(ε)_n , and let T^(ε) be the least t such that Y^(ε)_t ≤ 0. Then Pr[T^(ε) > N] ∼ cε/p0 if N ≫ ε^{−2} .
Proof. From the analytic proof of Theorem 2.1, we have
Pr[Z^(ε) = 0] Pr[T^(ε) > N] = 1 − E[Z^(ε) ] + (1/(N + 1)) ∑_{k>N+1} Pr[S^(ε)_{N+1} ≥ k].
Let K = Nε² . If N ≫ ε^{−2} , then K → ∞ as ε → 0+ . By Chebyshev’s inequality,
(1/(N + 1)) ∑_{k>N+1} Pr[S^(ε)_{N+1} ≥ k] ≤ (1/(N + 1)) ∑_{t≥1} (N + 1) var(Z^(ε) ) / ((N + 1)(1 − E[Z^(ε) ]) + t)²
≤ M/(N(1 − E[Z^(ε) ])) = Mε/(K(c + o(1))) = o(ε).
Hence
Pr[T^(ε) > N]/ε = (c + o(1))/(p0 + o(1)) → c/p0 .
Remark. A typical example is given by Z^(ε) = Po(1 − ε), in which case Pr[Z^(ε) = 0] = e^{−(1−ε)} → e^{−1} , E[Z^(ε) ] = 1 − ε, and var(Z^(ε) ) = 1 − ε ≤ 1 for ε ∈ [0, 1]. Taking, for example, N = ε^{−2} ln ln ε^{−1} , we have Pr[T^(ε) > N] ∼ eε.
As an extension of the Otter-Dwass theorem, we have the following formula for the random walk Y.
Theorem 3.2. In the random walk Y with any offspring distribution Z,
Pr[T = n, Yn = 0] = (1/n) Pr[Sn = n + 1].
Proof. Note that
Pr[T = n, Yn = 0] = ∑_{(z1 , z2 ,..., zn )} ∏_{i=1}^n p_{zi} ,
where the zi are non-negative integers such that Yi ≥ 1 for i = 1, 2, . . . , n − 1, and Yn = 0. Equivalently, the zi satisfy the condition
z1 ≤ 1,
z1 + z2 ≤ 2,
· · ·
z1 + z2 + · · · + z_{n−1} ≤ n − 1,
z1 + z2 + · · · + zn = n + 1.    (3.1)
Also note that
Pr[Sn = n + 1] = ∑_{(z1 , z2 ,..., zn )} ∏_{i=1}^n p_{zi} ,
where the zi are non-negative integers with z1 + · · · + zn = n + 1. Hence it is sufficient to show that for any sequence of non-negative integers α = (a1 , a2 , . . . , an ) with a1 + a2 + · · · + an = n + 1, there is exactly one cyclic shift αi = (ai , ai+1 , . . . , an , a1 , . . . , ai−1 ) satisfying condition (3.1). The proof is contained in the following two claims.
Claim 1. For any sequence α, there is at most one cyclic shift of α satisfying condition (3.1).
We prove this by contradiction. Assume both α = (a1 , a2 , . . . , an ) and αi = (ai , ai+1 , . . . , an , a1 , . . . , ai−1 ) satisfy condition (3.1), where i is an index between 2 and n. Then a1 + a2 + · · · + ai−1 ≤ i − 1, and ai + · · · + an ≤ n − i. Hence a1 + a2 + · · · + an ≤ n, contradicting the fact that a1 + a2 + · · · + an = n + 1. Note that Claim 1 also implies that all the conjugates of α are distinct.
Claim 2. For any sequence α = (a1 , a2 , . . . , an ) of non-negative integers with a1 + · · · + an = n + 1, there exists a cyclic shift αi satisfying condition (3.1).
Let xi = i − (a1 + a2 + · · · + ai ) for i = 1, 2, . . . , n − 1. If all xi ≥ 0, then α satisfies condition (3.1). Otherwise, let x = min{xi : i = 1, 2, . . . , n − 1}; then x < 0. Find the smallest index i such that xi = x. We prove that the cyclic shift α_{i+1} = (a_{i+1} , . . . , an , a1 , . . . , ai ) satisfies (3.1). Denote α_{i+1} by (b1 , b2 , . . . , bn ).
1. For 1 ≤ k ≤ n − 1 − i, b1 + b2 + · · · + bk = a_{i+1} + a_{i+2} + · · · + a_{k+i} . Since
(k + i) − (a1 + a2 + · · · + a_{k+i}) = x_{k+i} ≥ xi = i − (a1 + a2 + · · · + ai ),
we have b1 + b2 + · · · + bk ≤ k.
2. For k = n − i, we have
b1 + b2 + · · · + bk = a_{i+1} + a_{i+2} + · · · + an = (n + 1) − (a1 + a2 + · · · + ai ) = n + 1 − (i − x) ≤ n − i,
as x < 0.
3. For n − i < k < n, we have
b1 + b2 + · · · + bk = a_{i+1} + · · · + an + a1 + · · · + a_{k−(n−i)} = n + 1 − (i − x) + a1 + · · · + a_{k−(n−i)} .
Since k − (n − i) − (a1 + · · · + a_{k−(n−i)}) = x_{k−(n−i)} ≥ x + 1 (recall that i is the smallest index attaining the minimum x, and k − (n − i) < i), we have
b1 + b2 + · · · + bk ≤ n + 1 − i + x + k − (n − i) − (x + 1) = k.
This finishes the proof.
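The cycle-lemma argument of Claims 1 and 2 can be checked exhaustively for small n (the range below is an arbitrary choice of ours): every sequence of non-negative integers summing to n + 1 should have exactly one cyclic shift satisfying (3.1).

```python
from itertools import product

def satisfies(seq):
    # Condition (3.1): z_1 + ... + z_k <= k for k = 1..n-1 (the total
    # z_1 + ... + z_n = n + 1 is guaranteed by the enumeration below).
    partial = 0
    for k, z in enumerate(seq[:-1], start=1):
        partial += z
        if partial > k:
            return False
    return True

for n in range(2, 7):
    target = n + 1
    for seq in product(range(target + 1), repeat=n):
        if sum(seq) != target:
            continue
        shifts = [seq[i:] + seq[:i] for i in range(n)]
        # Exactly one cyclic shift satisfies (3.1), as Claims 1 and 2 assert.
        assert sum(satisfies(s) for s in shifts) == 1
print("ok")
```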
Using Theorem 3.2, we have the following result for Z = Po(1 − ε).
Theorem 3.3. In the random walk Y with Z = Po(1 − ε),
Pr[T > N] ∼ eε iff N ≫ ε^{−2} ,
or equivalently,
Pr[T = ∞ | T > N] ∼ 1 iff N ≫ ε^{−2} .
Proof. The sufficiency is proved in Theorem 3.1. For necessity, we need to show that if
∑_{i=N}^∞ Pr(T = i) = o(ε), then N ≫ ε^{−2} ,
where the summation does not contain the term Pr[T = +∞]. We show this by finding a lower bound for ∑_{i=N}^∞ Pr[T = i]. Note that when Z = Po(1 − ε), SN has the distribution Po(N(1 − ε)), and
Pr[T = N] ≥ Pr[T = N, YN = 0] = (1/N) Pr[SN = N + 1] = e^{−tN} t^{N+1} N^N / (N + 1)! ∼ (t/√(2π)) (t e^ε)^N / (N + 1)^{3/2} ,
where t = 1 − ε. Note that t e^ε ∼ 1 − ε²/2. So if N = Cε^{−2} for a constant C, then
∑_{i=N}^∞ Pr[T = i] ≥ (t/√(2π)) ∑_{i=N}^∞ (t e^ε)^i / (i + 1)^{3/2}
≥ (t/√(2π)) ∑_{i=N}^{2N} (t e^ε)^i / (i + 1)^{3/2}
≥ (t/√(2π)) (t e^ε)^{2N} ∑_{i=N}^{2N} (i + 1)^{−3/2}
∼ (2t/√(2πC)) e^{−C} (1 − 1/√2) ε.
That is, if N = Cε^{−2} , then
∑_{i=N}^∞ Pr[T = i] > C1 · ε
for a constant C1 > 0. This proves that for ∑_{i=N}^∞ Pr(T = i) = o(ε), it is necessary that N ≫ ε^{−2} .
References
1. N. Alon and J. Spencer, The Probabilistic Method, 2nd Edition, John Wiley & Sons, Inc.,
2000.
2. K.B. Athreya and P. Ney, Branching Processes, Springer-Verlag, 1972.
3. M. Dwass, The total progeny in a branching process, J. Appl. Probab. 6 (1969) 682–686.
4. W. Feller, An Introduction to Probability Theory and Its Applications, John Wiley & Sons,
Inc., 3rd Edition, 1970.
5. F. Galton, Problem 4001, Educational Times 1 April (1873) p. 17.
6. T.E. Harris, The Theory of Branching Processes, Springer-Verlag, New York, 1963.
7. P. Jagers, Branching Processes with Biological Applications, Wiley, 1975.
8. D.G. Kendall, Some problems in the theory of queues, J. Roy. Statist. Soc. Ser. B 13 (1951) 151–185.
9. D.G. Kendall, Branching processes since 1873, J. London Math. Soc. 41 (1966) 385–406.
10. C.J. Mode, Multitype Branching Processes, Theory and Applications, American Elsevier,
New York, 1971.
11. R. Otter, The multiplicative process, Ann. Math. Statist. 20 (1949) 206–224.
12. M.P. Quine and W. Szczotka, Generalizations of the Bienayme-Galton-Watson branching
process via its representation as an embedded random walk, Ann. Appl. Probab. 4 (4) (1994)
1206–1222.
13. M.P. Quine and W. Szczotka, Asymptotic normality of processes with a branching structure,
Comm. Statist. Stochastic Models 14 (4) (1998) 833–848.
14. M.P. Quine and W. Szczotka, Existence and positivity of the limit in processes with a branching structure, J. Appl. Probab. 36 (1) (1999) 132–138.
15. J. Spencer and C.H. Yan, An enumeration problem and branching processes, preprint.
16. R. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge University Press, 1999.
17. H.W. Watson, Solution to problem 4001, Educational Times 1 August (1873) 115–116.
18. C.H. Yan, On the enumeration of generalized parking functions, Congr. Numer. 147 (2000)
201–209.