Average-Case Analysis of Algorithms
and Data Structures
L’analyse en moyenne des algorithmes
et des structures de données
Jeffrey Scott Vitter¹ and Philippe Flajolet²
Abstract. This report is a contributed chapter to the Handbook of Theoretical Computer
Science (North-Holland, 1990). Its aim is to describe the main mathematical methods and
applications in the average-case analysis of algorithms and data structures. It comprises
two parts: First, we present basic combinatorial enumerations based on symbolic methods
and asymptotic methods with emphasis on complex analysis techniques (such as singularity
analysis, saddle point, Mellin transforms). Next, we show how to apply these general
methods to the analysis of sorting, searching, tree data structures, hashing, and dynamic
algorithms. The emphasis is on algorithms for which exact “analytic models” can be
derived.
Résumé. This report is a chapter appearing in the Handbook of Theoretical Computer Science (North-Holland, 1990). Its aim is to describe the principal methods and
applications of average-case complexity analysis of algorithms. It comprises two
parts. First, we present the methods of combinatorial enumeration based on the use of
symbolic methods, together with asymptotic techniques founded on complex analysis (singularity analysis, saddle point method, Mellin transform). Next, we describe the application
of these general methods to the analysis of sorting, searching, tree manipulation, hashing, and
dynamic algorithms. The emphasis in this presentation is on algorithms
for which exact "analytic models" exist.
¹ Dept. of Computer Science, Brown University, Providence, R.I. 02912, USA. Research was
also done while the author was on sabbatical at INRIA in Rocquencourt, France, and at École
Normale Supérieure in Paris. Support was provided in part by NSF research grant DCR–84–03613,
by an NSF Presidential Young Investigator Award with matching funds from an IBM
Faculty Development Award and an AT&T research grant, and by a Guggenheim Fellowship.
² INRIA, Domaine de Voluceau, Rocquencourt, B. P. 105, 78153 Le Chesnay, France.
Table of Contents

0. Introduction
1. Combinatorial Enumerations
   1.1. Overview
   1.2. Ordinary Generating Functions
   1.3. Exponential Generating Functions
   1.4. From Generating Functions to Counting
2. Asymptotic Methods
   2.1. Generalities
   2.2. Singularity Analysis
   2.3. Saddle Point Methods
   2.4. Mellin Transforms
   2.5. Limit Probability Distributions
3. Sorting Algorithms
   3.1. Inversions
   3.2. Insertion Sort
   3.3. Shellsort
   3.4. Bubble Sort
   3.5. Quicksort
   3.6. Radix-Exchange Sort
   3.7. Selection Sort and Heapsort
   3.8. Merge Sort
4. Recursive Decompositions and Algorithms on Trees
   4.1. Binary Trees and Plane Trees
   4.2. Binary Search Trees
   4.3. Radix-Exchange Tries
   4.4. Digital Search Trees
5. Hashing and Address Computation Techniques
   5.1. Bucket Algorithms and Hashing by Chaining
   5.2. Hashing by Open Addressing
6. Dynamic Algorithms
   6.1. Integrated Cost and the History Model
   6.2. Size of Dynamic Data Structures
   6.3. Set Union-Find Algorithms
References
Index and Glossary
Average-Case Analysis of Algorithms
and Data Structures
Jeffrey Scott Vitter and Philippe Flajolet
0. Introduction
Analyzing an algorithm means, in its broadest sense, characterizing the amount of computational resources that an execution of the algorithm will require when applied to data
of a certain type. Many algorithms in classical mathematics, primarily in number theory
and analysis, were analyzed by eighteenth and nineteenth century mathematicians. For
instance, Lamé in 1845 showed that Euclid's GCD algorithm requires at most ≈ log_φ n division
steps (where φ is the "golden ratio" (1 + √5)/2) when applied to numbers bounded
by n. Similarly, the well-known quadratic convergence of Newton’s method is a way of
describing its complexity/accuracy tradeoff.
This chapter presents analytic methods for average-case analysis of algorithms, with
special emphasis on the main algorithms and data structures used for processing nonnumerical data. We characterize algorithmic solutions to a number of essential problems,
such as sorting, searching, pattern matching, register allocation, tree compaction, retrieval
of multidimensional data, and efficient access to large files stored on secondary memory.
The first step required to analyze an algorithm A is to define an input data model and
a complexity measure:
1. Assume that the input to A is data of a certain type. Each commonly used data type
carries a natural notion of size: the size of an array is the number of its elements; the
size of a file is the number of its records; the size of a character string is its length;
and so on. An input model is specified by the subset In of inputs of size n and by a
probability distribution over In , for each n. For example, a classical input model for
comparison-based sorting algorithms is to assume that the n inputs are real numbers
independently and uniformly distributed in the unit interval [ 0, 1]. An equivalent
model is to assume that the n inputs form a random permutation of {1, 2, . . . , n}.
2. The main complexity measures for algorithms executed on sequential machines are
time utilization and space utilization. These may be either “raw” measures (such as
the time in nanoseconds on a particular machine or the number of bits necessary for
storing temporary variables) or “abstract” measures (such as the number of comparison operations or the number of disk pages accessed).
Figure 1. Methods used in the average-case analysis of algorithms.
Let us consider an algorithm A with complexity measure µ. The worst-case and
best-case complexities of algorithm A over I_n are defined in an obvious way. Determining
the worst-case complexity requires constructing extremal configurations that force µ_n, the
restriction of µ to I_n, to be as large as possible.
The average-case complexity is defined in terms of the probabilistic input model:

    µ̄_n[A] = E{µ_n[A]} = ∑_k k · Pr{µ_n[A] = k},

where E{·} denotes expected value and Pr{·} denotes probability with respect to the
probability distribution over I_n. Frequently, I_n is a finite set and the probabilistic model
assumes a uniform probability distribution over I_n. In that case, µ̄_n[A] takes the form

    µ̄_n[A] = (1/I_n) ∑_k k · J_{n,k},

where I_n = |I_n| and J_{n,k} is the number of inputs of size n with complexity k for algorithm A.
Average-case analysis then reduces to combinatorial enumeration.
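As a concrete illustration of the uniform-model formula, the short Python sketch below (an illustrative example; the function names and the choice of the number of inversions as the measure µ, anticipating Section 3.1, are ours) averages a complexity measure by brute-force enumeration of I_n, here taken to be the n! permutations of {1, . . . , n}.

    from itertools import permutations

    def inversions(perm):
        # complexity measure: number of inversions of the permutation
        return sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])

    def average_complexity(n):
        # uniform model over I_n = all n! permutations of {1, ..., n}
        perms = list(permutations(range(1, n + 1)))
        return sum(inversions(p) for p in perms) / len(perms)

    for n in range(2, 7):
        # the exact average number of inversions is n(n-1)/4
        print(n, average_complexity(n), n * (n - 1) / 4)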
The next step in the analysis is to express the complexity of the algorithm in terms
of standard functions like n^α (log n)^β (log log n)^γ, where α, β, and γ are constants, so that
the analytic results can be easily interpreted. This involves getting asymptotic estimates.
The following steps give the route followed by many of the average-case analyses that
appear in the literature (see Figure 1):
1. RECUR: To determine the probabilities or the expectations in exact form, start by
setting up recurrences that relate the behavior of algorithm A on inputs of size n to
its behavior on smaller (and similar) inputs.
2. SOLVE: Solve previous recurrences explicitly using classical algebra.
3. ASYMPT: Use standard real asymptotics to estimate those explicit expressions.
An important way to solve recurrences is via the use of generating functions:
4. GENFUN: Translate the recurrences into equations involving generating functions.
The coefficient of the nth term of each generating function represents a particular
probability or expectation. In general we obtain a set of functional equations.
5. EXPAND: Solve those functional equations using classical tools from algebra and
analysis, then expand the solutions to get the coefficients in explicit form.
The above methods can often be bypassed by the following more powerful methods,
which we emphasize in this chapter:
6. SYMBOL: Bypass the use of recurrences and translate the set-theoretic definitions
of the data structures or underlying combinatorial structures directly into functional
equations involving generating functions.
7. COMPLEX: Use complex analysis to translate the information available from the
functional equations directly into asymptotic expressions of the coefficients of the
generating functions.
The symbolic method (SYMBOL) is often direct and has the advantage of characterizing
the special functions that arise from the analysis of a natural class of related algorithms.
The COMPLEX method provides powerful tools for direct asymptotics from generating
functions. It has the intrinsic advantage in many cases of providing asymptotic estimates
of the coefficients of functions known only implicitly from their functional equations.
In Sections 1 and 2 we develop general techniques for the mathematical analysis of
algorithms, with emphasis on the SYMBOL and COMPLEX methods. Section 1 is devoted
to exact analysis and combinatorial enumeration. We present the primary methods used to
obtain counting results for the analysis of algorithms, with emphasis on symbolic methods
(SYMBOL). The main mathematical tool we use is the generating function associated
with the particular class of structures. A rich set of combinatorial constructions translates
directly into functional relations involving the generating functions. In Section 2 we discuss
asymptotic analysis. We briefly review methods from elementary real analysis and then
concentrate on complex analysis techniques (COMPLEX ). There we use analytic properties
of generating functions to recover information about their coefficients. The methods are
often applicable to functions known only indirectly via functional equations, a situation
that presents itself naturally when counting recursively defined structures.
In Sections 3–6, we apply general methods for analysis of algorithms, especially those
developed in Sections 1 and 2, to the analysis of a large variety of algorithms and data structures. In Section 3, we describe several important sorting algorithms and apply statistics
of inversion tables to the analysis of the iteration-based sorting algorithms. In Section 4,
we extend our approach of Section 1 to consider valuations on combinatorial structures,
which we use to analyze trees and structures with a tree-like recursive decomposition; this
includes plane trees, binary and multidimensional search trees, digital search trees, quicksort, radix-exchange sort, and algorithms for register allocation, pattern matching, and
tree compaction. In Section 5, we present a unified approach to hashing, address calculation techniques, and occupancy problems. Section 6 is devoted to performance measures
that span a period of time, such as the expected amortized time and expected maximum
data structure space used by an algorithm.
General References. Background sources on combinatorial enumerations and symbolic
methods include [Goulden and Jackson 1983] and [Comtet 1974]. General coverage of
complex analysis appears in [Titchmarsh 1939], [Henrici 1974], and [Henrici 1977], and
applications to asymptotics are discussed in [Bender 1974], [Olver 1974], [Bender and
Orszag 1978], and [De Bruijn 1981]. Mellin transforms are covered in [Doetsch 1955], and
limit probability distributions are studied in [Feller 1971], [Sachkov 1978], and [Billingsley 1986].
For additional coverage of average-case analysis of algorithms and data structures, the
reader is referred to Knuth’s seminal multivolume work The Art of Computer Programming [Knuth 1973a], [Knuth 1981], [Knuth 1973b], and to [Flajolet 1981], [Greene and
Knuth 1982], [Sedgewick 1983], [Purdom and Brown 1985], and [Flajolet 1988]. Descriptions of most of the algorithms analyzed in this chapter can be found in Knuth’s books,
[Gonnet 1984] and [Sedgewick 1988].
1. Combinatorial Enumerations
Our main objective in this section is to introduce useful combinatorial constructions that
translate directly into generating functions. Such constructions are called admissible. In
Section 1.2 we examine admissible constructions for ordinary generating functions, and
in Section 1.3 we consider admissible constructions for exponential generating functions,
which are related to the enumeration of labeled structures.
1.1. Overview
The most elementary structures may be enumerated using sum/product rules†
Theorem 0 [Sum-Product rule]. Let A, B, C be sets with cardinalities a, b, c. Then

    C = A ∪ B, with A ∩ B = ∅   =⇒   c = a + b;
    C = A × B                   =⇒   c = a · b.
Thus, the number of binary strings of length n is 2^n, and the number of permutations
of {1, 2, . . . , n} is n!.
In the next order of difficulty, explicit forms are replaced by recurrences when structures are defined in terms of themselves. For example, let Fn be the number of coverings
of the interval [1, n] by disjoint segments of length 1 and 2. By considering the two possibilities for the last segment used, we get the recurrence
    F_n = F_{n−1} + F_{n−2},   for n ≥ 2,   (1a)
with initial conditions F_0 = 0, F_1 = 1. Thus, from the classical theory of linear recurrences,
we find the Fibonacci numbers expressed in terms of the golden ratio φ:

    F_n = (1/√5) (φ^n − φ̂^n),   with φ, φ̂ = (1 ± √5)/2.   (1b)
This example illustrates recurrence methods (RECUR) in (1a) and derivation of explicit
solutions (SOLVE ) in (1b).
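As a quick numerical illustration (the helper names are ours), the Python sketch below checks the recurrence (1a) against the closed form (1b):

    from math import sqrt

    def fib_recurrence(n):
        # F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}   (1a)
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    phi = (1 + sqrt(5)) / 2
    phi_hat = (1 - sqrt(5)) / 2
    for n in range(10):
        closed_form = (phi**n - phi_hat**n) / sqrt(5)      # (1b)
        print(n, fib_recurrence(n), round(closed_form))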
Another example, which we shall discuss in more detail in Section 4.1, is the number Bn of plane binary trees with n internal nodes [Knuth 1973a]. By considering all
possibilities for left and right subtrees, we get the recurrence
    B_n = ∑_{k=0}^{n−1} B_k B_{n−k−1},   for n ≥ 1,   (2a)
with the initial condition B_0 = 1. To solve (2a), we introduce a generating function (GF):
Let B(z) = ∑_{n≥0} B_n z^n. From Eq. (2a) we get

    B(z) = 1 + z B²(z),   (2b)
† We also use the sum notation C = A + B to represent the union of A and B when A ∩ B = ∅.
and solving the quadratic equation for B(z), we get

    B(z) = (1 − √(1 − 4z)) / (2z).   (2c)
Finally, the Taylor expansion of (1 + x)^{1/2} gives us

    B_n = (1/(n + 1)) (2n choose n) = (2n)! / (n! (n + 1)!).   (2d)
In this case, we started with recurrences (RECUR) in (2a) and introduced generating
functions (GENFUN ), leading to (2b); solving and expanding (EXPAND) gave the explicit
solutions (2c) and (2d). (This example dates back to Euler; the Bn are called Catalan
numbers.)
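The following Python sketch (an illustrative check, with our own function names) computes the B_n from the recurrence (2a) and compares them with the closed form (2d):

    from math import comb

    def catalan_by_recurrence(n_max):
        # B_0 = 1, B_n = sum_{k=0}^{n-1} B_k B_{n-k-1}   (2a)
        B = [1]
        for n in range(1, n_max + 1):
            B.append(sum(B[k] * B[n - k - 1] for k in range(n)))
        return B

    B = catalan_by_recurrence(10)
    for n in range(11):
        # closed form (2d): B_n = (1/(n+1)) * C(2n, n)
        print(n, B[n], comb(2 * n, n) // (n + 1))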
The symbolic method (SYMBOL) that we are going to present can be applied to this
last example as follows: The class B of binary trees is defined recursively by the equation
    B = {□} ∪ ({○} × B × B),   (3a)

where □ and ○ represent external nodes and internal nodes, respectively. A standard
lemma asserts that disjoint unions and cartesian products of structures correspond respectively to sums and products of corresponding generating functions. Therefore, specification (3a) translates term by term directly into the generating function equation
B(z) = 1 + z · B(z) · B(z),
(3b)
which agrees with (2b).
Definition. A class of combinatorial structures C is a finite or countable set together
with an integer-valued function |·|_C, called the size function, such that for each n ≥ 0
the number C_n of structures in C of size n is finite. The counting sequence for class C
is the integer sequence {C_n}_{n≥0}. The ordinary generating function (OGF) C(z) and the
exponential generating function (EGF) Ĉ(z) of a class C are defined, respectively, by

    C(z) = ∑_{n≥0} C_n z^n   and   Ĉ(z) = ∑_{n≥0} C_n z^n / n!.   (4)
The coefficient of z^n in the expansion of a function f(z) (or simply, the nth coefficient
of f(z)) is written [z^n] f(z); we have C_n = [z^n] C(z) = n! [z^n] Ĉ(z).
The generating functions C(z) and Ĉ(z) can also be expressed as

    C(z) = ∑_{γ∈C} z^{|γ|}   and   Ĉ(z) = ∑_{γ∈C} z^{|γ|} / |γ|!,   (5)

which can be checked by counting the number of occurrences of z^n in the sums.
We shall adopt the notational convention that a class (say, C), its counting sequence
(say, C_n or c_n), its associated ordinary generating function (say, C(z) or c(z)), and its
associated exponential generating function (say, Ĉ(z) or ĉ(z)) are named by the same
group of letters.
The basic notion for the symbolic method is that of an admissible construction in which
the counting sequence of the construction depends only upon the counting sequences of
its components (see [Goulden and Jackson 1983], [Flajolet 1981], [Greene 1983]); such a
construction thus “translates” over generating functions. It induces an operator of a more
or less simple form over formal power series. For instance, let U and V be two classes of
structures, and let
    W = U × V   (6a)

be their cartesian product. If the size of an ordered pair w = (u, v) ∈ W is defined as
|w| = |u| + |v|, then by counting possibilities, we get

    W_n = ∑_{k≥0} U_k V_{n−k},   (6b)

so that (6a) has the corresponding (ordinary) generating function equation

    W(z) = U(z) V(z).   (6c)
Such a combinatorial (set-theoretic) construction that translates in the manner of (6a)–(6c)
is called admissible.
1.2. Ordinary Generating Functions
In this section we present a catalog of admissible constructions for ordinary generating
functions (OGFs). We assume that the size of an element of a disjoint union W = U ∪ V
is inherited from its size in its original domain; the size of a composite object (product,
sequence, subset, etc.) is the sum of the sizes of its components.
Theorem 1 [Fundamental sum/product theorem]. The disjoint union and cartesian
product constructions are admissible for OGFs:

    W = U ∪ V, with U ∩ V = ∅   =⇒   W(z) = U(z) + V(z);
    W = U × V                   =⇒   W(z) = U(z) V(z).
Proof. Use the recurrences W_n = U_n + V_n and W_n = ∑_{0≤k≤n} U_k V_{n−k}. Alternatively, use
Eq. (5) for OGFs, which yields for cartesian products

    ∑_{w∈W} z^{|w|} = ∑_{(u,v)∈U×V} z^{|u|+|v|} = ∑_{u∈U} z^{|u|} · ∑_{v∈V} z^{|v|}.
Let U be a class of structures that have positive size. Class W is called the sequence
class of class U, denoted W = U*, if W is composed of all sequences (u_1, u_2, . . . , u_k) with
u_j ∈ U. Class W is the (finite) powerset of class U, denoted W = 2^U, if W consists of all
finite subsets {u_1, u_2, . . . , u_k} of U (the u_j are distinct), for k ≥ 0.
Theorem 2. The sequence and powerset constructs are admissible for OGFs:

    W = U*    =⇒   W(z) = 1/(1 − U(z));
    W = 2^U   =⇒   W(z) = e^{Φ(U)(z)},   where Φ(f) = f(z)/1 − f(z²)/2 + f(z³)/3 − · · · .
Proof. Let ε denote the empty sequence. Then, for the sequence class of U, we have

    W = U* ≡ {ε} + U + (U × U) + (U × U × U) + · · · ;
    W(z) = 1 + U(z) + U(z)² + U(z)³ + · · · = (1 − U(z))^{−1}.          (7)

The powerset class W = 2^U is equivalent to an infinite product:

    W = 2^U = ∏_{u∈U} ({ε} + {u});
    W(z) = ∏_{u∈U} (1 + z^{|u|}) = ∏_n (1 + z^n)^{U_n}.                (8)
Computing logarithms and expanding, we get

    log W(z) = ∑_{n≥1} U_n log(1 + z^n) = ∑_{n≥1} U_n z^n − (1/2) ∑_{n≥1} U_n z^{2n} + · · · .
Other constructions can be shown to be admissible:
1. Diagonals and subsets with repetitions. The diagonal W = {(u, u) | u ∈ U} of U × U,
   written W = ∆(U × U), satisfies W(z) = U(z²). The class of multisets (or subsets
   with repetitions) of class U is denoted W = R{U}. It is isomorphic to ∏_{u∈U} {u}*, so
   that its OGF satisfies

       W(z) = e^{Ψ(U)(z)},   where Ψ(f) = f(z)/1 + f(z²)/2 + f(z³)/3 + · · · .   (9)
2. Marking and composition. If U is formed with "atomic" elements (nodes, letters,
   etc.) that determine its size, then we define the marking of U, denoted W = µ{U}, to
   consist of elements of U with one individual atom marked. Since W_n = n U_n, it follows
   that W(z) = z (d/dz) U(z). Similarly, the composition of U and V, denoted W = U[V],
   is defined as the class of all structures resulting from substitutions of atoms of U by
   elements of V, and we have W(z) = U(V(z)).
Examples. 1. Combinations. Let m be a fixed integer and J^(m) = {1, 2, . . . , m}, each
element of J^(m) having size 1. The generating function of J^(m) is J^(m)(z) = mz. The
class C^(m) = 2^{J^(m)} is the set of all combinations of J^(m). By Theorem 2, the generating
function of the number C_n^(m) of n-combinations of a set with m elements is

    C^(m)(z) = e^{Φ(J^(m))(z)} = exp(mz/1 − mz²/2 + mz³/3 − · · ·) = exp(m log(1 + z)) = (1 + z)^m,
and by extracting coefficients we find as expected

    C_n^(m) = (m choose n) = m! / (n! (m − n)!).
Similarly, for R^(m) = R{J^(m)}, the class of combinations with repetitions, we have from (9)

    R^(m)(z) = (1 − z)^{−m}   =⇒   R_n^(m) = (m + n − 1 choose m − 1).
2. Compositions and partitions. Let N = {1, 2, 3, . . .}, each i ∈ N having size i. The
sequence class C = N* is called the set of integer compositions. Since N(z) = z/(1 − z)
and C(z) = (1 − N(z))^{−1}, we have

    C(z) = (1 − z)/(1 − 2z)   =⇒   C_n = 2^{n−1},   for n ≥ 1.
The class P = R{N} is the set of integer partitions, and we have

    P = ∏_{α∈N} {α}*   =⇒   P(z) = ∏_{n≥1} 1/(1 − z^n).   (10)
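As a hedged numerical sketch of (10) (the series helpers below are ours and work with truncated coefficient lists), one can expand the product ∏_{n≥1} 1/(1 − z^n) and compare its coefficients with a direct count of partitions:

    N = 12   # truncation order for the series

    def series_product(a, b):
        # multiply two truncated power series (coefficient lists mod z^(N+1))
        c = [0] * (N + 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j <= N:
                    c[i + j] += ai * bj
        return c

    # P(z) = prod_{n>=1} 1/(1 - z^n), each factor expanded as 1 + z^n + z^{2n} + ...
    P = [1] + [0] * N
    for n in range(1, N + 1):
        factor = [1 if k % n == 0 else 0 for k in range(N + 1)]
        P = series_product(P, factor)

    def partitions_brute(n, largest=None):
        # direct count of partitions of n into parts <= largest
        if largest is None:
            largest = n
        if n == 0:
            return 1
        return sum(partitions_brute(n - k, k) for k in range(1, min(largest, n) + 1))

    for n in range(N + 1):
        print(n, P[n], partitions_brute(n))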
3. Formal languages. Combinatorial processes can often be encoded naturally as
strings over some finite alphabet A. Regular languages are defined by regular expressions
or equivalently by deterministic or nondeterministic finite automata. This is illustrated by
the following two theorems, based upon the work of Chomsky and Schützenberger [1963].
Further applications appear in [Berstel and Boasson 1989].
Theorem 3a [Regular languages and rational functions]. If L is a regular language, then
its OGF is a rational function L(z) = P(z)/Q(z), where P(z) and Q(z) are polynomials.
The counting sequence L_n satisfies a linear recurrence with constant coefficients, and we
have, when n ≥ n_0,

    L_n = ∑_j π_j(n) ω_j^n,

for a finite set of constants ω_j and polynomials π_j(z).
Proof Sketch. Let D be a deterministic automaton that recognizes L, and let S_j be
the set of words accepted by D when D is started in state j. The S_j satisfy a set of
linear equations (involving unions and concatenation with letters) constructed from the
transition table of the automaton. For generating functions, this translates into a set of
linear equations with polynomial coefficients that can be solved by Cramer's rule.
Theorem 3b [Context-free languages and algebraic functions]. If L is an unambiguous
context-free language, then its OGF is an algebraic function. The counting sequence L_n
satisfies a linear recurrence with polynomial coefficients: For a family q_j(z) of polynomials
and n ≥ n_0, we have

    L_n = ∑_{1≤j≤m} q_j(n) L_{n−j}.
Proof Sketch. Since the language is unambiguous, its counting problem is equivalent
to counting derivation trees. A production in the grammar S → aT bU + bU U a + abba
translates into S(z) = z² T(z) U(z) + z² U²(z) + z⁴, where S(z) is the generating function
associated with nonterminal S. We obtain a set of polynomial equations that reduces to
a single equation P(z, L(z)) = 0 through elimination. To obtain the recurrence, we use
Comtet's theorem [Comtet 1969] (see also [Flajolet 1987] for corresponding asymptotic
estimates).
4. Trees. We shall study trees in great detail in Section 4.1. All trees here are rooted.
In plane trees, subtrees under a node are ordered; in non-plane trees, they are unordered.
If G is the class of general plane trees with all node degrees allowed, then G satisfies an
equation G = {○} × G*, signifying that a tree is composed of a root followed by a sequence
of subtrees. Thus, we have

    G(z) = z/(1 − G(z))   =⇒   G(z) = (1 − √(1 − 4z))/2   and   G_n = (1/n) (2n − 2 choose n − 1).

If H is the class of general non-plane trees, then H = {○} × R{H}, so that H(z) satisfies
the functional equation

    H(z) = z e^{H(z) + H(z²)/2 + H(z³)/3 + · · ·}.   (11)

There are no closed form expressions for H_n = [z^n] H(z). However, complex analysis
methods make it possible to determine H_n asymptotically [Pólya 1937].
1.3. Exponential Generating Functions
Exponential generating functions are essentially used for counting well-labeled structures.
Such structures are composed of “atoms” (the size of a structure being the number of
its atoms), and each atom is labeled by a distinct integer. For instance, a labeled graph
of size n is just a graph over the set of nodes {1, 2, . . . , n}. A permutation (respectively,
circular permutation) can be viewed as a linear (respectively, cyclic) directed graph whose
nodes are labeled by distinct integers.
The basic operation over labeled structures is the partitional product [Foata 1974],
[Goulden and Jackson 1983], [Flajolet 1981, Chapter I], [Greene 1983]. The partitional
product of U and V consists of forming ordered pairs (u, v) from U × V and relabeling them
in all possible ways that preserve the order of the labels in u and v. More precisely, let
w ∈ W be a labeled structure of size q. A 1–1 function θ from {1, 2, . . . , q} to {1, 2, . . . , r},
where r ≥ q, defines a relabeling, denoted w′ = θ(w), where label j in w is replaced by θ(j).
Let u and v be two labeled structures of respective sizes m and n. The partitional product
of u and v is denoted by u ∗ v, and it consists of the set of all possible relabelings (u′, v′)
of (u, v) so that (u′, v′) = (θ_1(u), θ_2(v)), where θ_1 : {1, 2, . . . , m} → {1, 2, . . . , m + n},
θ_2 : {1, 2, . . . , n} → {1, 2, . . . , m + n} satisfy the following:
1. θ_1 and θ_2 are monotone increasing functions. (This preserves the order structure of u
   and v.)
2. The ranges of θ_1 and θ_2 are disjoint and cover the set {1, 2, . . . , m + n}.
The partitional product of two classes U and V is denoted W = U ∗ V and is the union of
all u ∗ v, for u ∈ U and v ∈ V.
Theorem 4 [Sum/Product theorem for labeled structures]. The disjoint union and
partitional product over labeled structures are admissible for EGFs:

    W = U ∪ V, with U ∩ V = ∅   =⇒   Ŵ(z) = Û(z) + V̂(z);
    W = U ∗ V                   =⇒   Ŵ(z) = Û(z) V̂(z).

Proof. Obvious for unions. For products, observe that

    W_q = ∑_{0≤m≤q} (q choose m) U_m V_{q−m},   (12)
since the binomial coefficient counts the number of partitions of {1, 2, . . . , q} into two sets
of cardinalities m and q − m. Dividing by q! we get

    W_q / q! = ∑_{0≤m≤q} (U_m / m!) · (V_{q−m} / (q − m)!).
The partitional complex of U is denoted U^⟨∗⟩. It is analogous to the sequence class
construction and is defined by

    U^⟨∗⟩ = {ε} + U + (U ∗ U) + (U ∗ U ∗ U) + · · · ,

and its EGF is (1 − Û(z))^{−1}. The kth partitional power of U is denoted U^⟨k⟩. The
abelian partitional power, denoted U^[k], is the collection of all sets {υ_1, υ_2, . . . , υ_k} such that
(υ_1, υ_2, . . . , υ_k) ∈ U^⟨k⟩. In other words, the order of components is not taken into account.
We can write symbolically U^[k] = (1/k!) U^⟨k⟩, so that the EGF of U^[k] is (1/k!) Û(z)^k. The abelian
partitional complex of U is defined analogously to the powerset construction:

    U^[∗] = {ε} + U + U^[2] + U^[3] + · · · .
Theorem 5. The partitional complex and abelian partitional complex are admissible for
EGFs:

    W = U^⟨∗⟩   =⇒   Ŵ(z) = 1/(1 − Û(z));
                                                      (13)
    W = U^[∗]   =⇒   Ŵ(z) = e^{Û(z)}.
Examples. 1. Permutations and Cycles. Let P be the class of all permutations, and let C
be the class of circular permutations (or cycles). By Theorem 5, we have P̂(z) = (1 − z)^{−1}
and P_n = n!. Since any permutation decomposes into an unordered set of cycles, we have
P = C^[∗], so that Ĉ(z) = log(1/(1 − z)) and C_n = (n − 1)!. This construction also shows
that the EGF for permutations having k cycles is (1/k!) log^k(1/(1 − z)), whose nth coefficient is
s_{n,k}/n!, where s_{n,k} is a Stirling number of the first kind.
Let Q be the class of permutations without cycles of size 1 (that is, without fixed
points). Let D be the class of cycles of size at least 2. We have D ∪ {(1)} = C, and hence
D̂(z) + z = Ĉ(z), so that D̂(z) = log(1 − z)^{−1} − z. Thus, we have

    Q̂(z) = e^{D̂(z)} = e^{−z} / (1 − z).   (14)
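The following Python sketch (an illustrative check with our own helper names) expands the EGF (14) as a truncated series and compares n! [z^n] Q̂(z) with a direct count of fixed-point-free permutations:

    from math import factorial
    from itertools import permutations

    def egf_coefficients_derangements(n_max):
        # expand Q(z) = e^{-z} / (1 - z) as a truncated power series;
        # Q_n = n! [z^n] Q(z) is the number of derangements
        exp_neg = [(-1) ** k / factorial(k) for k in range(n_max + 1)]
        geom = [1.0] * (n_max + 1)                    # coefficients of 1/(1 - z)
        q = [sum(exp_neg[i] * geom[n - i] for i in range(n + 1))
             for n in range(n_max + 1)]
        return [round(factorial(n) * q[n]) for n in range(n_max + 1)]

    def derangements_brute(n):
        # direct count of permutations of {0,...,n-1} without fixed points
        return sum(all(p[i] != i for i in range(n))
                   for p in permutations(range(n)))

    coeffs = egf_coefficients_derangements(8)
    for n in range(9):
        print(n, coeffs[n], derangements_brute(n))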
Similarly, the generating function for the class I of involutions (permutations with cycles
of lengths 1 and 2 only) is

    Î(z) = e^{z + z²/2}.   (15)
2. Labeled graphs. Let G be the class of all labeled graphs, and let K be the class of
connected labeled graphs. Then G_n = 2^{n(n−1)/2}, and K̂(z) = log Ĝ(z), from which we can
prove that K_n/G_n → 1, as n → ∞.
3. Occupancies and Set Partitions. We define the urn of size n, for n ≥ 1, to be
the structure formed from the unordered collection of the integers {1, 2, . . . , n}; the urn
of size 0 is defined to be the empty set. Let U denote the class of all urns; we have
Û(z) = e^z. The class U^⟨k⟩ represents all possible ways of throwing distinguishable balls
into k distinguishable urns, and its EGF is e^{kz}, so that as anticipated we have U_n^⟨k⟩ = k^n.
Similarly, the generating function for the number of ways of throwing n balls into k urns,
no urn being empty, is (e^z − 1)^k, and thus the number of ways is n! [z^n] (e^z − 1)^k, which
is equal to k! S_{n,k}, where S_{n,k} is a Stirling number of the second kind.
If S = V^[∗], where V is the class of nonempty urns, then an element of S of size n
corresponds to a partition of the set {1, 2, . . . , n} into equivalence classes. The number of
such partitions is a Bell number

    β_n = n! [z^n] exp(e^z − 1).   (16)
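As a hedged numerical sketch of (16) (the expansion used below, e^{−1} ∑_{k≥0} k^n/k!, is the one recalled in Section 1.4; the set-partition recurrence β_{n+1} = ∑_k C(n, k) β_k is a standard identity we add for comparison, not a statement from the report):

    from math import comb, factorial, exp

    def bell_from_egf(n_max, terms=60):
        # β_n = n! [z^n] exp(e^z - 1), computed via e^{-1} * sum_{k>=0} k^n / k!
        return [round(exp(-1) * sum(k ** n / factorial(k) for k in range(terms)))
                for n in range(n_max + 1)]

    def bell_by_recurrence(n_max):
        # β_{n+1} = sum_k C(n, k) β_k  (choose the block containing the new element)
        B = [1]
        for n in range(n_max):
            B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
        return B

    print(bell_from_egf(8))
    print(bell_by_recurrence(8))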
In the same vein, the EGF of surjections S = V^⟨∗⟩ (surjective mappings from {1, 2, . . . , n}
onto an initial segment {1, 2, . . . , m} of the integers, for some 1 ≤ m ≤ n) is

    Ŝ(z) = 1/(1 − (e^z − 1)) = 1/(2 − e^z).   (17)
For labeled structures, we can also define marking and composition constructions that
translate into EGFs. Greene [1983] has defined a useful boxing operator: C = A^□ ∗ B denotes
the subset of A ∗ B obtained by retaining only pairs (u, v) ∈ A ∗ B such that label 1 is in u.
This construction translates into the EGF

    Ĉ(z) = ∫_0^z Â′(t) B̂(t) dt.
1.4. From Generating Functions to Counting
In the previous section we saw how generating function equations can be written directly
from structural definitions of combinatorial objects. We discuss here how to go from the
functional equations to exact counting results, and then indicate some extensions of the
symbolic method to multivariate generating functions.
Direct Expansions from generating functions. When a GF is given explicitly as the
product or composition of known GFs, we often get an explicit form for the coefficients
of the GF by using classical rules for Taylor expansions and sums. Examples related to
previous calculations are the Catalan numbers (2), derangement numbers (14), and Bell
numbers (16):
    [z^n] 1/√(1 − 4z) = (2n choose n);   [z^n] e^{−z}/(1 − z) = ∑_{0≤k≤n} (−1)^k/k!;   [z^n] exp(e^z − 1) = e^{−1} ∑_{k≥0} k^n/k!.
Another method for obtaining coefficients of implicitly defined GFs is the method of indeterminate coefficients. If the coefficients of f(z) are sought, we translate the functional relation for f(z) into a relation over its coefficients. An important subcase is that of a first-order linear recurrence f_n = a_n + b_n f_{n−1}, whose solution can be found by iteration or summation factors:

    f_n = a_n + b_n a_{n−1} + b_n b_{n−1} a_{n−2} + b_n b_{n−1} b_{n−2} a_{n−3} + · · · .   (18)
Solution Methods for Functional Equations. Algebraic equations over GFs may be
solved explicitly if of low degree, and the solutions can then be expanded (see the Catalan
numbers (2d) in Section 1.1). For equations of higher degrees and some transcendental
equations, the Lagrange-Bürmann inversion formula is useful:
Theorem 6 [Lagrange-Bürmann inversion formula]. Let f(z) be defined implicitly by
the equation f(z) = z ϕ(f(z)), where ϕ(u) is a series with ϕ(0) ≠ 0. Then the coefficients
of f(z), its powers f(z)^k, and an arbitrary composition g(f(z)) are related to the
coefficients of the powers of ϕ(u) as follows:

    [z^n] f(z) = (1/n) [u^{n−1}] ϕ(u)^n;                     (19a)
    [z^n] f(z)^k = (k/n) [u^{n−k}] ϕ(u)^n;                   (19b)
    [z^n] g(f(z)) = (1/n) [u^{n−1}] ϕ(u)^n g′(u).            (19c)

Examples. 1. Abel identities. By (19a), f(z) = ∑_{n≥1} n^{n−1} z^n/n! is the expansion of
f(z) = z e^{f(z)}. By taking coefficients of e^{αf(z)} e^{βf(z)} = e^{(α+β)f(z)} we get the Abel identity

    (α + β)(n + α + β)^{n−1} = αβ ∑_k (n choose k) (k + α)^{k−1} (n − k + β)^{n−k−1}.
2. Ballot numbers. Letting b(z) = z + z b²(z) (which is related to B(z) defined in (2b)
by b(z) = z B(z²), see also Section 4.1) and ϕ(u) = 1 + u², we find that

    [z^n] B^k(z) = (k/(2n + k)) (2n + k choose n)

(these are the ballot numbers).
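The following Python sketch (an illustrative check, with our own helper names) verifies (19a) for the equation f(z) = z e^{f(z)} of Example 1 above: it solves the equation by fixed-point iteration on truncated series and compares n! [z^n] f(z) with the predicted n^{n−1}.

    from math import factorial

    N = 10  # truncation order

    def compose_exp(f):
        # truncated series of exp(f(z)), assuming f(0) = 0
        result = [0.0] * (N + 1)
        result[0] = 1.0
        term = [1.0] + [0.0] * N          # f^0 / 0!
        for k in range(1, N + 1):
            # term <- term * f / k, i.e., f^k / k!
            new = [0.0] * (N + 1)
            for i in range(N + 1):
                for j in range(N + 1 - i):
                    new[i + j] += term[i] * f[j]
            term = [c / k for c in new]
            for i in range(N + 1):
                result[i] += term[i]
        return result

    # solve f(z) = z * exp(f(z)) by iteration, starting from f = 0
    f = [0.0] * (N + 1)
    for _ in range(N + 2):
        e = compose_exp(f)
        f = [0.0] + e[:N]                 # multiply by z

    for n in range(1, N + 1):
        # Lagrange inversion (19a) predicts [z^n] f(z) = n^{n-1} / n!
        print(n, round(f[n] * factorial(n)), n ** (n - 1))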
Differential equations occur especially in relation to binary search trees, as we shall
see in Section 4.2. For the first-order linear differential equation (d/dz) f(z) = a(z) + b(z) f(z),
the variation of parameter (or integration factor) method gives us the solution

    f(z) = e^{B(z)} ∫_{z_0}^{z} a(t) e^{−B(t)} dt,   where B(z) = ∫_0^z b(u) du.   (20)

The lower bound z_0 is chosen to satisfy the initial conditions on f(z).
For other functional equations, iteration (or bootstrapping) may be useful. For
example, under suitable (formal or analytic) convergence conditions, the solution to
f(z) = a(z) + b(z) f(γ(z)) is

    f(z) = ∑_{k≥0} a(γ^((k))(z)) ∏_{0≤j≤k−1} b(γ^((j))(z)),   (21)

where γ^((k))(z) denotes the kth iterate γ(γ(· · · (γ(z)) · · ·)) of γ(z) (cf. Eq. (18)).
In general, the whole arsenal of algebra can be used on generating functions; the
methods above represent only the most commonly used techniques. Many equations still
escape exact solution, but asymptotic methods based upon complex analysis can often be
used to extract asymptotic information about the GF coefficients.
Multivariate generating functions. If we need to count structures of size n with a
certain combinatorial characteristic having value k, we can try to treat k as a parameter
(see the examples above with Stirling numbers). Let g_{n,k} be the corresponding counting
sequence. We may also consider bivariate generating functions, such as

    G(u, z) = ∑_{n,k≥0} g_{n,k} u^k z^n   or   G(u, z) = ∑_{n,k≥0} g_{n,k} u^k z^n / n!.
Extensions of previous translation schemes exist (see [Goulden 1983]). For instance, for
the Stirling numbers s_{n,k} and S_{n,k}, we have

    ∑_{n,k≥0} s_{n,k} u^k z^n/n! = exp(u log(1 − z)^{−1}) = (1 − z)^{−u};     (22a)
    ∑_{n,k≥0} S_{n,k} k! u^k z^n/n! = 1/(1 − u(e^z − 1)).                     (22b)
Multisets. An extension of the symbolic method to multisets is carried out in [Flajolet 1981]. Consider a class S of structures and for each σ ∈ S a “multiplicity” µ(σ).
The pair (S, µ) is called a multiset, and its generating function is by definition
S(z) = ∑_{σ∈S} µ(σ) z^{|σ|}, so that S_n = [z^n] S(z) is the cumulated value of µ over all structures of
size n. This extension is useful for obtaining generating functions of expected (or cumulated) values of parameters over combinatorial structures, since translation schemes based
upon admissible constructions also exist for multisets. We shall encounter such extensions
when analyzing Shellsort (Section 3.3), trees (Section 4), and hashing (Section 5.1).
2. Asymptotic Methods
In this section, we start with elementary asymptotic methods. Next we present complex
asymptotic methods, based upon singularity analysis and saddle point integrals, which
allow in most cases a direct derivation of asymptotic results for coefficients of generating functions. Then we introduce Mellin transform techniques that permit asymptotic
estimations of a large class of combinatorial sums, especially those involving certain arithmetic and number-theoretic functions. We conclude by a discussion of (asymptotic) limit
theorems for probability distributions.
2.1. Generalities
We briefly recall in this subsection standard real analysis techniques and then discuss
complex analysis methods.
Real Analysis. Asymptotic evaluation of the most elementary counting expressions may
be done directly, and a useful formula in this regard is Stirling's formula:

    n! ∼ √(2πn) (n/e)^n (1 + 1/(12n) + 1/(288n²) − 139/(51840n³) − · · ·).   (1)

For instance, the central binomial coefficient satisfies (2n choose n) = (2n)!/n!² ∼ 4^n/√(πn).
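As a quick numerical illustration of (1) (the helper name is ours), the Python sketch below compares n! with the truncated Stirling expansion:

    from math import factorial, pi, sqrt, e

    def stirling(n, terms=3):
        # n! ~ sqrt(2*pi*n) * (n/e)^n * (1 + 1/(12n) + 1/(288n^2) - 139/(51840n^3) - ...)
        corrections = [1, 1 / (12 * n), 1 / (288 * n ** 2), -139 / (51840 * n ** 3)]
        return sqrt(2 * pi * n) * (n / e) ** n * sum(corrections[: terms + 1])

    for n in (5, 10, 20):
        approx = stirling(n)
        exact = factorial(n)
        print(n, exact, approx, approx / exact)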
The Euler-Maclaurin summation formula applies when an expression involves a sum
at regularly spaced points (a Riemann sum) of a continuous function: such a sum is
approximated by the corresponding integral, and the formula provides a full expansion.
The basic form is the following:
Theorem 1 [Euler-Maclaurin summation formula]. If g(x) is C^∞ over [0, 1], then for
any integer m, we have

    (g(0) + g(1))/2 − ∫_0^1 g(x) dx
        = ∑_{1≤j≤m−1} (B_{2j}/(2j)!) (g^{(2j−1)}(1) − g^{(2j−1)}(0)) − ∫_0^1 g^{(2m)}(x) B_{2m}(x)/(2m)! dx,   (2a)

where B_j(x) ≡ j! [z^j] z e^{xz}/(e^z − 1) is a Bernoulli polynomial, and B_j = B_j(1) is a Bernoulli
number.
We can derive several formulæ by summing (2a). If {x} denotes the fractional part
of x, we have

    ∑_{0≤j≤n} g(j) − ∫_0^n g(x) dx
        = (1/2) g(0) + (1/2) g(n) + ∑_{1≤j≤m−1} (B_{2j}/(2j)!) (g^{(2j−1)}(n) − g^{(2j−1)}(0))
          − ∫_0^n g^{(2m)}(x) B_{2m}({x})/(2m)! dx,   (2b)
which expresses the difference between a discrete sum and its corresponding integral. By
a change of scale, for h small, setting g(x) = f(hx), we obtain the asymptotic expansion
of a Riemann sum, when the step size h tends to 0:

    ∑_{0≤jh≤1} f(jh) ∼ (1/h) ∫_0^1 f(x) dx + (f(0) + f(1))/2
                        + ∑_{j≥1} (B_{2j} h^{2j−1}/(2j)!) (f^{(2j−1)}(1) − f^{(2j−1)}(0)).   (2c)
Examples. 1. The harmonic numbers are defined by H_n = 1 + 1/2 + 1/3 + · · · + 1/n, and they
satisfy H_n = log n + γ + 1/(2n) + · · ·.
2. The binomial coefficient (2n choose n − k), for k < n^{2/3}, is asymptotically equal to the central
coefficient (2n choose n) times exp(−k²/n), which follows from estimating its logarithm. This
Gaussian approximation is a special case of the central limit theorem of probability theory.
Laplace's method for sums is a classical approach for evaluating sums S_n = ∑_k f(k, n)
that have a dominant term. First we determine the rank k_0 of the dominant term. We
can often show for "smooth" functions f(k, n) that f(k, n) ≈ f(k_0, n) φ((k − k_0)h), with
h = h(n) small (like 1/√n or 1/n). We conclude by applying the Euler-Maclaurin summation to φ(x). An example is the asymptotics of the Bell numbers defined in (1.16) [De
Bruijn 1981, page 108] or the number of involutions (1.15) [Knuth 1973, page 65]. There
are extensions to multiple sums involving multivariate Euler-Maclaurin summations.
Complex Analysis. A powerful method (and one that is often computationally simple)
is to use complex analysis to go directly from a generating function to the asymptotic form
of its coefficients. For instance, the EGF for the number of 2-regular graphs [Comtet 1974,
page 273] is

    f(z) = e^{−z/2 − z²/4} / √(1 − z),   (3)

and [z^n] f(z) is sought. A bivariate Laplace method is feasible. However, it is simpler to
notice that f(z) is analytic for complex z, except when z = 1. There a "singular expansion"
holds:

    f(z) ∼ e^{−3/4} / √(1 − z),   as z → 1.   (4a)

General theorems that we are going to discuss in the next section let us "transfer" an
approximation (4a) of the function to an approximation of the coefficients:

    [z^n] f(z) ∼ [z^n] e^{−3/4} / √(1 − z).   (4b)

Thus, [z^n] f(z) ∼ e^{−3/4} (−1)^n (−1/2 choose n) ∼ e^{−3/4}/√(πn).
2.2. Singularity Analysis
A singularity is a point at which a function ceases to be analytic. A dominant singularity
is one of smallest modulus. It is known that a function with positive coefficients that is
not entire always has a dominant positive real singularity. In most cases, the asymptotic
behavior of the coefficients of the function is determined by that singularity.
Location of singularities. The classical exponential-order formula relates the location
of singularities of a function to the exponential growth of its coefficients.
Theorem 2 [Exponential growth formula]. If f(z) is analytic at the origin and has
nonnegative coefficients, and if ρ is its smallest positive real singularity, then its coefficients
f_n = [z^n] f(z) satisfy

    (1 − ε)^n ρ^{−n} <_{i.o.} f_n <_{a.e.} (1 + ε)^n ρ^{−n},   (5)

for any ε > 0. Here "i.o." means infinitely often (for infinitely many values) and "a.e."
means "almost everywhere" (except for finitely many values).
Examples. 1. Let f(z) = 1/cos(z) (EGF for "alternating permutations") and g(z) =
1/(2 − e^z) (EGF for "surjections"). Then bounds (5) apply with ρ = π/2 and ρ = log 2,
respectively.
2. The solution f(z) of the functional equation f(z) = z + f(z² + z³) is the OGF of
2-3 trees [Odlyzko 1982]. Setting σ(z) = z² + z³, the functional equation has the following
formal solution, obtained by iteration (see Eq. (1.21)):

    f(z) = ∑_{m≥0} σ^((m))(z),   (6a)

where σ^((m))(z) is the mth iterate of σ(z). The sum in (6a) converges geometrically when
|z| is less than the smallest positive root ρ of the equation ρ = σ(ρ), and it becomes infinite
at z = ρ. The smallest possible root is ρ = 1/φ, where φ is the golden ratio (1 + √5)/2.
Hence, we have

    ((1 + √5)/2)^n (1 − ε)^n <_{i.o.} [z^n] f(z) <_{a.e.} ((1 + √5)/2)^n (1 + ε)^n.   (6b)

The bound (6b) and even an asymptotic expansion [Odlyzko 1982] are obtainable without an
explicit expression for the coefficients. See Theorem 4.7.
Nature of singularities. Another way of expressing Theorem 2 is as follows: we have
f_n ∼ θ(n) ρ^{−n}, where the subexponential factor θ(n) is i.o. larger than any decreasing
exponential and a.e. smaller than any increasing exponential. Common forms for θ(n) are
n^α log^β n, for some constants α and β. The subexponential factors are usually related to
the growth of the function around its singularity. (The singularity may be taken equal to 1
by normalization.)
Method [Singularity analysis]. Assume that f(z) has around its dominant singularity 1
an asymptotic expansion of the form

    f(z) = σ(z) + R(z),   with R(z) ≪ σ(z),   as z → 1,   (7a)

where σ(z) is in a standard set of functions that include (1 − z)^a log^b (1 − z), for constants
a and b. Then under general conditions Eq. (7a) leads to

    [z^n] f(z) = [z^n] σ(z) + [z^n] R(z),   with [z^n] R(z) ≪ [z^n] σ(z),   as n → ∞.   (7b)
Applications of this principle are based upon a variety of conditions on function f (z)
or R(z), giving rise to several methods:
1. Transfer methods require only growth information on the remainder term R(z), but
the approximation has to be established for z → 1 in some region of the complex plane.
Transfer methods largely originate in [Odlyzko 1982] and are developed systematically
in [Flajolet and Odlyzko 1989].
2. Tauberian theorems assume only that Eq. (7a) holds when z is real and less than 1
(that is, as z → 1− ), but they require a priori Tauberian side conditions (positivity,
monotonicity) to be satisfied by the coefficients fn and are restricted to less general
types of growth for R(z). (See [Feller 1971, page 447] and [Greene and Knuth 1982,
page 52] for a combinatorial application.)
3. Darboux’s method assumes smoothness conditions (differentiability) on the remainder
term R(z) [Henrici 1977, page 447].
Our transfer method approach is the one that is easiest to apply and the most flexible
for combinatorial enumerations. First, we need the asymptotic growth of coefficients of
standard singular functions. For σ(z) = (1 − z)^{−s}, where s > 0, by Newton's expansion
the nth coefficient in its Taylor expansion is (n + s − 1 choose n), which is ∼ n^{s−1}/Γ(s). For
many standard singular functions, like (1 − z)^{−1/2} log²(1 − z)^{−1}, we may use either Euler-Maclaurin summation on the explicit form of the coefficients or contour integration to find
σ_n ∼ (πn)^{−1/2} log² n. Next we need to "transfer" coefficients of remainder terms.
Theorem 3 [Transfer lemma]. If R(z) is analytic for |z| < 1 + δ for some δ > 0 (with
the possible exception of a sector around z = 1, where |Arg(z − 1)| < ε for some ε < π/2)
and if R(z) = O((1 − z)^r) as z → 1 for some real r, then

    [z^n] R(z) = O(n^{−r−1}).   (8b)
The proof proceeds by choosing a contour of integration made of part of the circle
|z| = 1+δ and the boundary of the sector, except for a small notch of diameter 1/n around
z = 1. Furthermore, when r ≤ −1, we need only assume that the function is analytic for
|z| ≤ 1, z ≠ 1.
Examples. 1. The EGF f(z) of 2-regular graphs is given in Eq. (3). We can expand the
exponential around z = 1 and get

    f(z) = e^{−z/2 − z²/4} / √(1 − z) = e^{−3/4} (1 − z)^{−1/2} + O((1 − z)^{1/2}),   as z → 1.   (9a)
The function f(z) is analytic in the complex plane slit along z ≥ 1, and Eq. (9a) holds
there in the vicinity of z = 1. Thus, by the transfer lemma with r = 1/2, we have

    [z^n] f(z) = e^{−3/4} (n − 1/2 choose n) + O(n^{−3/2}) = (e^{−3/4}/√π) n^{−1/2} + O(n^{−3/2}).   (9b)
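A hedged numerical sketch of (9b) (the series helpers below are ours): expand f(z) = e^{−z/2 − z²/4}/√(1 − z) as a truncated series and compare [z^n] f(z) with the estimate e^{−3/4}/√(πn).

    from math import comb, pi, sqrt, exp

    N = 60

    def exp_series(poly):
        # truncated series of exp(poly(z)), assuming poly[0] = 0
        result = [0.0] * (N + 1)
        result[0] = 1.0
        term = [1.0] + [0.0] * N
        for k in range(1, N + 1):
            new = [0.0] * (N + 1)
            for i in range(N + 1):
                for j in range(N + 1 - i):
                    new[i + j] += term[i] * poly[j]
            term = [c / k for c in new]
            for i in range(N + 1):
                result[i] += term[i]
        return result

    g = exp_series([0.0, -0.5, -0.25] + [0.0] * (N - 2))       # exp(-z/2 - z^2/4)
    inv_sqrt = [comb(2 * n, n) / 4 ** n for n in range(N + 1)]  # 1/sqrt(1 - z)

    # f_n = [z^n] f(z) is the convolution of the two series
    f = [sum(g[i] * inv_sqrt[n - i] for i in range(n + 1)) for n in range(N + 1)]

    for n in (10, 30, 60):
        asymptotic = exp(-0.75) / sqrt(pi * n)      # leading term of (9b)
        print(n, f[n], asymptotic, f[n] / asymptotic)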
2. The EGF of surjections was shown in (1.17) to be f(z) = (2 − e^z)^{−1}. It is analytic
for |z| ≤ 3, except for a simple pole at z = log 2, where local expansions show that

    f(z) = (1/(2 log 2)) · 1/(1 − z/log 2) + O(1),   as z → log 2,   (10a)

so that

    [z^n] f(z) = (1/2) (1/log 2)^{n+1} (1 + O(1/n)).   (10b)
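The estimate (10b) can be checked numerically with the short Python sketch below (an illustrative example; it obtains the Taylor coefficients of 1/(2 − e^z) from the relation (2 − e^z) f(z) = 1):

    from math import factorial, log

    N = 20
    # coefficients of f(z) = 1/(2 - e^z): for n >= 1, matching coefficients in
    # (2 - e^z) f(z) = 1 gives f_n = sum_{k=1}^{n} f_{n-k}/k!, with f_0 = 1
    f = [1.0]
    for n in range(1, N + 1):
        f.append(sum(f[n - k] / factorial(k) for k in range(1, n + 1)))

    for n in (5, 10, 20):
        asymptotic = 0.5 * (1 / log(2)) ** (n + 1)   # leading term of (10b)
        print(n, f[n], asymptotic, f[n] / asymptotic)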
3. A functional equation. The OGF of certain trees [Pólya 1937] f(z) = 1 + z + z² +
2z³ + · · · is known only via the functional equation

    f(z) = 1/(1 − z f(z²)).

It can be checked that f(z) is analytic at the origin. Its dominant singularity is a simple
pole ρ < 1 determined by cancellation of the denominator, ρ f(ρ²) = 1. Around z = ρ =
0.59475 . . ., we have

    f(z) = 1/(ρ f(ρ²) − z f(z²)) = 1/(c(ρ − z)) + O(1),   with c = (d/dz)[z f(z²)] |_{z=ρ}.   (11a)

Thus, with K = (cρ)^{−1} = 0.36071, we find that

    [z^n] f(z) = K ρ^{−n} (1 + O(1/n)).   (11b)
More precise expansions exist for coefficients of meromorphic functions (functions with
poles only), like the ones in the last two examples (for example, see [Knuth 1973, 5.3.1–3,4],
[Henrici 1977], and [Flajolet and Odlyzko 1989]). For instance, the error of approximation (11b) is less than 10^{−15} when n = 100. Finally, the OGF of 2–3 trees (6a) is amenable
to transfer methods, though extraction of singular expansions is appreciably more difficult
[Odlyzko 1982].
We conclude this subsection by citing the lemma at the heart of Darboux’s method
[Henrici 1977, page 447] and a classical Tauberian Theorem [Feller 1971, page 447].
Theorem 4 [Darboux's method]. If R(z) is analytic for |z| < 1, continuous for |z| ≤ 1,
and d times continuously differentiable over |z| = 1, then

    [z^n] R(z) = o(1/n^d).   (12)

For instance, if R(z) = (1 − z)^{5/2} H(z), where H(z) is analytic for |z| < 1 + δ, then
we can use d = 2 for Theorem 4 and obtain [z^n] R(z) = o(1/n²). The theorem is usually
applied to derive expansions of coefficients of functions of the form f(z) = (1 − z)^r H(z),
with H(z) analytic in a larger domain than f(z). Such functions can however be treated
directly by transfer methods (Theorem 3).
Theorem 5 [Tauberian theorem of Hardy–Littlewood–Karamata]. Assume that the function f(z) = ∑_{n≥0} f_n z^n has radius of convergence 1 and satisfies, for real z, 0 ≤ z < 1,

    f(z) ∼ (1/(1 − z)^s) L(1/(1 − z)),   as z → 1−,   (13a)

where s > 0 and L(u) is a function varying slowly at infinity, like log^b(u). If {f_n}_{n≥0} is
monotonic, then

    f_n ∼ (n^{s−1}/Γ(s)) L(n).   (13b)

An application to the function f(z) = ∏_k (1 + z^k/k) is given in [Greene and Knuth 1982,
page 52]; the function represents the EGF of permutations with distinct cycle lengths.
That function has a natural boundary at |z| = 1 and hence is not amenable to Darboux
or transfer methods.
Singularity analysis is used extensively in Sections 3–5 for asymptotics related to
sorting methods, plane trees, search trees, partial match queries, and hashing with linear
probing.
2.3. Saddle Point Methods
Saddle point methods are used for extracting coefficients of entire functions (which are
analytic in the entire complex plane) and functions that "grow fast" around their dominant
singularities, like exp(1/(1 − z)). They also play an important rôle in obtaining limit
distribution results and exponential tails for discrete probability distributions.
A Simple Bound. Assume that f(z) = ∑_n f_n z^n is entire and has positive coefficients.
Then by Cauchy's formula, we have

    f_n = (1/(2πi)) ∫_Γ f(z)/z^{n+1} dz.   (14)
We refer to (14) as a Cauchy coefficient integral. If we take as contour Γ the circle |z| = R,
we get an easy upper bound

    f_n ≤ f(R)/R^n,   (15)

since the maximum value of |f(z)|, for |z| = R, is f(R). The bound (15) is valid for
any R > 0. In particular, we have f_n ≤ min_{R>0} {f(R) R^{−n}}. We can find the minimum
value by setting (d/dR)(f(R) R^{−n}) = (f′(R) − f(R)(n/R)) R^{−n} = 0, which gives us the following
bound:
Theorem 6 [Saddle point bound]. If f(z) is entire and has positive coefficients, then for
all n, we have

    [z^n] f(z) ≤ f(R)/R^n,   (16)
where R = R(n) is the smallest positive real number such that

    R f′(R)/f(R) = n.   (17)
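For instance, for f(z) = e^z the saddle point equation (17) gives R = n, so (16) bounds [z^n] e^z = 1/n! by e^n/n^n. The Python sketch below (an illustrative check) shows that the bound overshoots by a factor of order √(2πn), as Stirling's formula predicts:

    from math import factorial, exp

    # Saddle point bound (16)-(17) for f(z) = e^z: R f'(R)/f(R) = R, so R = n,
    # giving [z^n] e^z = 1/n! <= f(R)/R^n = e^n / n^n.
    for n in (5, 10, 20, 40):
        exact = 1 / factorial(n)
        bound = exp(n) / n ** n
        print(n, exact, bound, bound / exact)   # ratio grows like sqrt(2*pi*n)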
Complete Saddle Point Analysis. The saddle point method is a refinement of the
technique we used to derive (15). It applies in general to integrals depending upon a large
parameter, of the form

    I = (1/(2πi)) ∫_Γ e^{h(z)} dz.   (18a)

A point z = σ such that h′(z) = 0 is called a saddle point owing to the topography of
|e^{h(z)}| around z = σ: There are two perpendicular directions at z = σ, one along which
the integrand |e^{h(z)}| has a local minimum at z = σ, and the other (called the axis of the
saddle point) along which the integrand has a local maximum at z = σ. The principal
steps of the saddle point method are as follows:
1. Show that the contribution of the integral is asymptotically localized to a fraction Γ_ε
   of the contour around z = σ traversed along its axis. (This forces ε to be not too
   small.)
2. Show that over this subcontour, h(z) is suitably approximated by h(σ) + ((z − σ)²/2) h″(σ).
   (This imposes a conflicting constraint that ε should not be too large.)
If points 1 and 2 can be established, then I can be approximated by

    I ≈ (1/(2πi)) ∫_{Γ_ε} exp(h(σ) + ((z − σ)²/2) h″(σ)) dz ≈ e^{h(σ)} / √(2π h″(σ)).   (18b)
Classes of functions such that the saddle point estimate (18b) applies to Cauchy coefficient integrals (14) are called admissible and have been described by several authors [Hayman 1956], [Harris and Schoenfeld 1968], [Odlyzko and Richmond 1985]. Cauchy coefficient
integrals (14) can be put into the form (18a), where h(z) = h_n(z) = log f(z) − (n + 1) log z,
and a saddle point z = R is a root of the equation h′(z) = (d/dz)(log f(z) − (n + 1) log z) =
f′(z)/f(z) − (n + 1)/z = 0. By the method of (18) we get the following estimate:

Theorem 7 [Saddle point method for Cauchy coefficient integrals]. If f(z) has positive
coefficients and is in a class of admissible functions, then

    f_n ∼ f(R) / (√(2π C(n)) R^{n+1}),   with C(n) = (d²/dz²) log f(z) |_{z=R} + (n + 1) R^{−2},   (19)

where the saddle point R is the smallest positive real number such that

    R f′(R)/f(R) = n + 1.   (20)
Examples. 1. We get Stirling's formula (1) by letting f(z) = e^z. The saddle point is
R = n + 1, and by Theorem 7 we have

    1/n! = [z^n] e^z ∼ e^{n+1} / (√(2π/(n + 1)) (n + 1)^{n+1}) ∼ (1/√(2πn)) (e/n)^n.
2. By (1.15), the number of involutions is given by

    I_n = n! [z^n] e^{z + z²/2} = (n!/(2πi)) ∫_Γ e^{z + z²/2} / z^{n+1} dz,

and the saddle point is R = √n − 1/2 + 5/(8√n) + · · ·. We choose ε = n^{−2/5}, so that for
z = R e^{iε}, we have (z − R)² h″(R) → ∞ while (z − R)³ h‴(R) → 0. Thus,

    I_n/n! ∼ (e^{√n − 1/4}/(2√(πn))) n^{−n/2} e^{n/2}.
The asymptotics of the Bell numbers can be done in the same way [De Bruijn 1981,
page 104].
3. A function with a finite singularity. For f(z) = e^{z/(1−z)}, Theorem 7 gives us

    f_n ≡ [z^n] exp(z/(1 − z)) ∼ C e^{d√n} / n^α.   (21)

A similar method can be applied to the integer partition function p(z) = ∏_{n≥1} (1 − z^n)^{−1},
though it has a natural boundary, and estimates (21) are characteristic of functions whose
logarithm has a pole-like singularity.
Specializing some of Hayman's results, we can define inductively a class H of admissible
functions as follows: (i) If p(z) denotes an arbitrary polynomial with positive coefficients,
then e^{p(z)} ∈ H. (ii) If f(z) and g(z) are arbitrary functions of H, then e^{f(z)}, f(z) · g(z),
f(z) + p(z), and p(f(z)) are also in H.
Several applications of saddle point methods appear in Section 5.1 in the analysis of
maximum bucket occupancy, extendible hashing, and coalesced hashing.
2.4. Mellin Transforms
The Mellin transform, a tool originally developed for analytic number theory, is useful
for analyzing sums where arithmetic functions appear or nontrivial periodicity phenomena
occur. Such sums often present themselves as expectations of combinatorial parameters or
generating functions.
Basic Properties. Let f(x) be a function defined for real x ≥ 0. Then its Mellin transform is a function f*(s) of the complex variable s defined by

    f*(s) = ∫_0^∞ f(x) x^{s−1} dx.   (22)
If f(x) is continuous and is O(x^α) as x → 0 and O(x^β) as x → ∞, then its Mellin transform
is defined in the "fundamental strip" −α < ℜ(s) < −β, which we denote by ⟨−α; −β⟩.
For instance the Mellin transform of e^{−x} is the well-known Gamma function Γ(s), with
fundamental strip ⟨0; +∞⟩, and the transform of ∑_{n≥k} (−x)^n/n!, for k > 0, is Γ(s) with
fundamental strip ⟨−k; −k + 1⟩. There is also an inversion theorem à la Fourier:
    f(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} f*(s) x^{−s} ds,   (23)

where c is taken arbitrarily in the fundamental strip.
The important principle for asymptotic analysis is that under the Mellin transform,
there is a correspondence between terms of asymptotic expansions of f(x) at 0 (respectively, +∞) and singularities of f*(s) in a left (respectively, right) half-plane. To see why
this is so, assume that f*(s) is small at ±i∞ and has only polar singularities. Then we can
close the contour of integration in (23) to the left (for x → 0) or to the right (for x → ∞)
and derive by Cauchy's residue formula

    f(x) = + ∑_σ Res(f*(s) x^{−s}; s = σ) + O(x^{−d}),   as x → 0;
                                                                      (24)
    f(x) = − ∑_σ Res(f*(s) x^{−s}; s = σ) + O(x^{−d}),   as x → ∞.

The sum in the first equation is extended to all poles σ where d ≤ ℜ(σ) ≤ −α; the sum in
the second equation is extended to all poles σ with −β ≤ ℜ(σ) ≤ d. Those relations have
the character of asymptotic expansions of f(x) at 0 and +∞: We observe that if f*(s) has
a kth-order pole at σ, then a residue in (24) is of the form Q_{k−1}(log x) x^{−σ}, where Q_{k−1}(u)
is a polynomial of degree k − 1.
There is finally an important functional property of the Mellin transform: If g(x) =
f(µx), then g*(s) = µ^{−s} f*(s). Hence, transforms of sums called "harmonic sums" decompose into the product of a generalized Dirichlet series ∑_k λ_k µ_k^{−s} and the transform f*(s) of
the basis function:

    F(x) = ∑_k λ_k f(µ_k x)   =⇒   F*(s) = (∑_k λ_k µ_k^{−s}) f*(s).   (25)
Asymptotics of Sums. The standard usage of Mellin transforms devolves from a combination of Eqs. (24) and (25):

Theorem 8 [Mellin asymptotic summation formula]. Assume that in (25) the transform f*(s) of f(x) is exponentially small towards ±i∞ with only polar singularities and
that the Dirichlet series is meromorphic of finite order. Then the asymptotic behavior of
a harmonic sum F(x) = ∑_k λ_k f(µ_k x), as x → 0 (respectively, x → ∞), is given by

    ∑_k λ_k f(µ_k x) ∼ ± ∑_σ Res((∑_k λ_k µ_k^{−s}) f*(s) x^{−s}; s = σ).   (26)

For an asymptotic expansion of the sum as x → 0 (respectively, as x → ∞), the sign
in (26) is "+" (respectively, "−"), and the sum is taken over poles to the left (respectively,
to the right) of the fundamental strip.
Examples. 1. An arithmetical sum. Let F(x) be the harmonic sum ∑_{k≥1} d(k) e^{−k²x²}, where d(k) is the number of divisors of k. Making use of (25) and the fact that the transform of e^{−x} is Γ(s) (so that the transform of e^{−x²} is (1/2)Γ(s/2)), we have

    F*(s) = (1/2) Γ(s/2) ∑_{k≥1} d(k) k^{−s} = (1/2) Γ(s/2) ζ²(s),    (27a)

where ζ(s) = ∑_{n≥1} n^{−s}. Here F*(s) is defined in the fundamental strip ⟨1; +∞⟩. To the left of this strip, it has a simple pole at s = 0 and a double pole at s = 1. By expanding Γ(s) and ζ(s) around s = 0 and s = 1, we get for any d > 0

    F(x) = −(√π log x)/(2x) + (3γ/4 − (log 2)/2) √π/x + 1/4 + O(x^d),    as x → 0.    (27b)
2. A sum with hidden periodicities. Let F(x) be the harmonic sum ∑_{k≥0} (1 − e^{−x/2^k}). The transform F*(s) is defined in the fundamental strip ⟨−1; 0⟩, and by (25) we find

    F*(s) = −Γ(s) ∑_{k≥0} 2^{ks} = − Γ(s) / (1 − 2^s).    (28a)

The expansion of F(x) as x → ∞ is determined by the poles of F*(s) to the right of the fundamental strip. There is a double pole at s = 0, and the denominator of (28a) gives simple poles at s = χ_k = 2kπi/log 2, for k ≠ 0. Each simple pole χ_k contributes a fluctuating term x^{−χ_k} = exp(2kπi log₂ x) to the asymptotic expansion of F(x). Collecting fluctuations, we have

    F(x) = log₂ x + P(log₂ x) + O(x^{−d}),    as x → ∞,    (28b)
where P (u) is a periodic function with period 1 and a convergent Fourier expansion.
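As a quick numerical check of (28b), the harmonic sum can be evaluated directly and compared with log₂ x; the difference settles into a small oscillation of period 1 in log₂ x. The following Python sketch is an illustration of ours, not part of the original text:

    import math

    def F(x, kmax=200):
        # truncate the harmonic sum; the neglected terms behave like x/2^k and are tiny
        return sum(1.0 - math.exp(-x / 2**k) for k in range(kmax))

    for x in (10.0, 100.0, 1000.0, 10000.0):
        print(x, F(x) - math.log(x, 2))   # fluctuates around a constant, period 1 in log2(x)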
Mellin transforms are the primary tool to study tries and radix exchange sort (Section 4.3). They are also useful in the study of certain plane tree algorithms (Section 4.1),
bubble sort (Section 3.4), and interpolation search and extendible hashing (Section 5.1).
2.5. Limit Probability Distributions
General references for this section are [Feller 1971] and [Billingsley 1986]. We recall that if X is a real-valued random variable (RV), then its distribution function is F(x) = Pr{X ≤ x}, and its mean and variance are µ_X = E{X} and σ²_X = var(X) = E{X²} − (E{X})², respectively. The kth moment of X is M_k = E{X^k}. We have µ_{X+Y} = µ_X + µ_Y, and when X and Y are independent, we have σ²_{X+Y} = σ²_X + σ²_Y. For a nonnegative integer-valued RV, its probability generating function (PGF) is defined by p(z) = ∑_{k≥0} p_k z^k, where p_k = Pr{X = k}; the mean and variance are respectively µ = p′(1) and σ² = p″(1) + p′(1) − (p′(1))². It is well known that the PGF of a sum of independent RVs is the product of their PGFs, and conversely, a product of PGFs corresponds to a sum of independent RVs.
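As a small numeric illustration of these PGF identities, the following Python sketch (ours, not from the original text; the binomial PGF and the helper name are chosen purely for the example) computes the mean and variance from the coefficients p_k:

    from math import comb

    def pgf_mean_var(p):
        """Mean and variance of a nonnegative integer RV from its PGF coefficients p_k."""
        d1 = sum(k * pk for k, pk in enumerate(p))            # p'(1)
        d2 = sum(k * (k - 1) * pk for k, pk in enumerate(p))  # p''(1)
        return d1, d2 + d1 - d1 ** 2

    n, q = 10, 1 / 3                                 # Binomial(n, q): p(z) = ((1-q) + qz)^n
    p = [comb(n, k) * q**k * (1 - q)**(n - k) for k in range(n + 1)]
    print(pgf_mean_var(p))                           # approximately (n*q, n*q*(1-q)) = (3.33..., 2.22...)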
A problem that naturally presents itself in the analysis of algorithms is as follows: Given a class C of combinatorial structures (such as trees, permutations, etc.), with X*_n a "parameter" over structures of size n (path length, number of inversions, etc.), determine the limit (asymptotic) distribution of the normalized variable X_n = (X*_n − µ_{X*_n})/σ_{X*_n}.
Simulations suggest that such a limit does exist in typical cases. The limit distribution
usually provides more information than does a plain average-case analysis. The following
two transforms are important tools in the study of limit distributions:
1. Characteristic functions (or Fourier transforms), defined for RV X by

    φ(t) = E{e^{itX}} = ∫_{−∞}^{+∞} e^{itx} dF(x).    (29)

For a nonnegative integer-valued RV X, we have φ(t) = p(e^{it}).

2. Laplace transforms, defined for a nonnegative RV X by

    g(t) = E{e^{−tX}} = ∫₀^{+∞} e^{−tx} dF(x).    (30)

The transform g(−t) is sometimes called the "moment generating function" of X since it is essentially the EGF of X's moments. For a nonnegative integer-valued RV X, we have g(t) = p(e^{−t}).
Limit Theorems. Under appropriate conditions the distribution functions F n (x) of a
sequence of RVs Xn converge pointwise to a limit F (x) at each point of continuity of F (x).
Such convergence is known as “weak convergence” or “convergence in distribution,” and
we denote it by F = lim Fn [Billingsley 1986].
Theorem 9 [Continuity theorem for characteristic functions]. Let X n be a sequence
of RVs with characteristic functions φn (t) and distribution functions Fn (x). If there is a
function φ(t) continuous at the origin such that lim φn (t) = φ(t), then there is a distribution
function F (x) such that F = lim Fn . Function F (x) is the distribution function of the RV
with characteristic function φ(t).
Theorem 10 [Continuity theorem for Laplace transforms]. Let X n be a sequence of RVs
with Laplace transforms gn (t) and distribution functions Fn (x). If there is some a such that
for all |t| ≤ a there is a limit g(t) = lim gn (t), then there is a distribution function F (x)
such that F = lim Fn . Function F (x) is the distribution function of the RV with Laplace
transform g(t).
Similar limit conditions exist when the moments of the X_n converge to the moments of a RV X, provided that the "moment problem" for X has a unique solution. A sufficient condition for this is ∑_{j≥0} E{X^{2j}}^{−1/(2j)} = +∞.
Generating functions. If the Xn are nonnegative integer-valued RVs, then they define a
sequence pn,k = Pr{Xn = k}, and the problem is to determine the asymptotic behavior of
the distributions πn = {pn,k }k≥0 (or the associated cumulative distribution functions Fn ),
as n → ∞. In simple cases, such as binomial distributions, explicit expressions are available
and can be treated using the real analysis techniques of Section 2.1.
In several cases, either the "horizontal" GFs p_n(u) or the "vertical" GFs q_k(z),

    p_n(u) = ∑_{k=0}^{∞} p_{n,k} u^k,        q_k(z) = ∑_{n=0}^{∞} p_{n,k} z^n,    (31)

have explicit expressions, and complex analysis methods can be used to extract their coefficients asymptotically.
Sometimes, only the bivariate generating function

    P(u, z) = ∑_{n,k≥0} p_{n,k} u^k z^n    (32)

has an explicit form, and a two-stage method must be employed.
Univariate Problems. The best-known application of univariate techniques is the central limit theorem. If X_n = A_1 + · · · + A_n is the sum of independent identically distributed RVs with mean 0 and variance 1, then X_n/√n tends to a normal distribution with unit variance. The classical proof [Feller 1971, page 515] uses characteristic functions: the characteristic function of X_n/√n is φ_n(t) = φ^n(t/√n), where φ(t) is the characteristic function of each A_j, and it converges to e^{−t²/2}, the characteristic function of the normal distribution.

Another proof, which provides information on the rate of convergence and on densities when the A_j are nonnegative integer-valued, uses the saddle point method applied to

    Pr{X_n = k} = (1/(2πi)) ∮ p(z)^n / z^{k+1} dz,    (33)

where p(z) is the PGF of each A_j [Greene and Knuth 1982].
A general principle is that univariate problems can often be solved using either continuity theorems or complex asymptotics (singularity analysis or saddle point) applied to
vertical or horizontal generating functions.
Examples. 1. A horizontal generating function. The probability that a random permutation of n elements has k cycles is [u^k] p_n(u), where

    p_n(u) = (1/n!) u(u + 1)(u + 2) · · · (u + n − 1).
As for the basic central limit theorem above, either characteristic functions or saddle point methods can be used to establish normality of the limiting distribution as n → ∞ (Goncharov's theorem). The same normality result holds for the distribution of inversions in permutations, for which

    p_n(u) = (1/n!) ∏_{1≤j≤n} (1 − u^j)/(1 − u).
Inversions will be studied in Section 3.1 in connection with sorting.
2. A vertical generating function. The probability that a random binary string of length n has no "1-run" of length k is [z^n] q_k(z), where

    q_k(2z) = (1 − z^k) / (1 − 2z + z^{k+1}).

A singularity analysis [Knuth 1978] can be used: the dominant singularity of q_k(z) is at z = ζ_k ≈ 1 + 2^{−k−1}, and we have [z^n] q_k(z) ≈ e^{−n/2^{k+1}}.
Bivariate Problems. For bivariate problems with explicit bivariate GFs (32), the following two-stage approach may be useful. First, we can often get a good approximation to the Cauchy coefficient integral

    p_n(u) = (1/(2πi)) ∮ P(z, u) / z^{n+1} dz
by treating u as a parameter and applying singularity analysis or the saddle point method.
Second, if u is real and close to 1 (for example, u = e^{−t}, with t close to 0), we may be able to conclude the analysis using the continuity theorem for Laplace transforms. If u is complex, |u| = 1 (that is, u = e^{it}), we try to use instead the continuity theorem for
characteristic functions. For instance, Bender [1973] and Canfield [1977] have obtained
general normality results for distributions corresponding to bivariate generating functions
of the form

    1/(1 − u g(z))        and        e^{u g(z)}.
These results are useful since they correspond to the distribution of the number of components in a sequence (or partitional complex) construct and an abelian partitional complex
construct, respectively.
Little is known about bivariate GFs defined only implicitly via nonlinear functional equations, a notable exception being [Jacquet and Régnier 1986], [Jacquet and
Régnier 1987]. Finally, other multivariate (but less analytical) techniques are used in the
analysis of random graph models [Bollobás 1985].
3. Sorting Algorithms
In this section we describe several important sorting algorithms, including insertion sort,
Shellsort, bubble sort, quicksort, radix-exchange sort, selection sort, heapsort, and merge
sort. Some sorting algorithms are more naturally described in an iterative fashion, while
the others are more naturally described recursively. In this section we analyze the performance of the “iterative” sorting algorithms in a unified way by using some basic notions
of inversion tables and lattice paths; we apply the combinatorial tools of Section 1 to the
study of inversions and left-to-right maxima, and we use the techniques of Section 2 to
derive asymptotic bounds.
We defer the analyses of the sorting algorithms that are more “recursive” in nature
until Section 4, where we exploit the close connections between their recursive structures
and the tree models studied in Section 4. Yet another class of sorting algorithms, those
based upon distribution sorting, will be described and analyzed in Section 5.
For purposes of average-case analysis of sorting, we assume that the input array (or
input file) x[1] x[2] . . . x[n] forms a random permutation of the n elements. A permutation
of a set of n elements is a 1–1 mapping from the set onto itself. Typically we represent
the set of n elements by {1, 2, . . . , n}. We use the notation σ = σ1 σ2 . . . σn to denote the
permutation that maps i to σi , for 1 ≤ i ≤ n. Our input data model is often justified
in practice, such as when the key values are generated independently from a common
continuous distribution.
3.1. Inversions
The common thread running through the analyses of many sorting algorithms is the connection between the running time of the algorithm and the number of inversions in the input. An inversion in permutation σ is an "out of order" pair (σ_k, σ_j) of elements, in which k < j but σ_k > σ_j. The number of inversions is thus a measure of the amount of disorder
in a permutation. Let us define the RV In to be the number of inversions; the number
of inversions in a particular permutation σ is denoted In [σ]. This concept was introduced
two centuries ago as a means of computing the determinant of a matrix A = (A i,j ):
    det A = ∑_{σ∈S_n} (−1)^{I_n[σ]} A_{1,σ_1} A_{2,σ_2} · · · A_{n,σ_n},    (1)
where Sn denotes the set of n! possible permutations.
Definition 1. The inversion table of the permutation σ = σ_1 σ_2 . . . σ_n is the ordered sequence

    b_1, b_2, . . . , b_n,    where b_k = |{ 1 ≤ j < σ_k^{−1} | σ_j > k }|.    (2a)

(Here σ^{−1} denotes the inverse permutation to σ; that is, σ_k^{−1} denotes the index of k in σ.) In other words, b_k is equal to the number of elements in the permutation σ that precede k but have value greater than k.
The number of inversions can be expressed in terms of inversion tables as follows:

    I_n[σ] = ∑_{1≤k≤n} b_k.
It is easy to see that there is a 1–1 correspondence between permutations and inversion tables.
An inversion table has the property that

    0 ≤ b_k ≤ n − k,    for each 1 ≤ k ≤ n,    (2b)

and moreover all such combinations are possible. We can thus view inversion tables as a cross product

    ∏_{1≤k≤n} {0, 1, . . . , n − k}.    (2c)
We associate each inversion table b_1, b_2, . . . , b_n with the monomial x_{b_1} x_{b_2} . . . x_{b_n}, and we define the generating function

    F(x_0, x_1, . . . , x_{n−1}) = ∑_{σ∈S_n} x_{b_1} x_{b_2} . . . x_{b_n},    (3)
which is the sum of the monomials over all n! permutations. By (2b) the possibilities
for bk correspond to the term (x0 + x1 + · · · + xn−k ), and we get the following fundamental
formula, which will play a central rôle in our analyses:
Theorem 1. The generating function defined in (3) satisfies
F (x0 , x1 , . . . , xn−1 ) = x0 (x0 + x1 ) . . . (x0 + x1 + · · · + xn−1 ).
Theorem 1 is a powerful tool for obtaining statistics related to inversion tables. For example, let us define I_{n,k} to be the number of permutations of n elements having k inversions. By Theorem 1, the OGF I_n(z) = ∑_k I_{n,k} z^k is given by

    I_n(z) = z^0 (z^0 + z^1)(z^0 + z^1 + z^2) . . . (z^0 + z^1 + · · · + z^{n−1}),    (4a)

since each monomial x_{b_1} x_{b_2} . . . x_{b_n} in (3) contributes z^{b_1+b_2+···+b_n} to I_n(z). We can convert (4a) into the PGF Φ_n(z) = ∑_k Pr{I_n = k} z^k by dividing by |S_n| = n!:

    Φ_n(z) = ∑_k (I_{n,k}/n!) z^k = ∏_{1≤k≤n} b_k(z),    where b_k(z) = (z^0 + z^1 + · · · + z^{n−k}) / (n − k + 1).    (4b)
The expected number of inversions I_n and the variance var(I_n) are thus equal to

    I_n = Φ′_n(1) = n(n − 1)/4;    (4c)

    var(I_n) = Φ″_n(1) + Φ′_n(1) − (Φ′_n(1))² = n(2n + 5)(n − 1)/72.    (4d)
The mean In is equal to half the worst-case value of In . Note from (4b) that Φn (z)
is a product of individual PGFs bk (z), which indicates by a remark at the beginning of
Section 2.5 that In can be expressed as the sum of independent RVs. This suggests another
way of looking at the derivation: The decomposition of In in question is the obvious one
based upon the inversion table (2a); we have In = b1 + b2 + · · · + bn , and the PGF of bk
is bk (z) given above in (4b). Eqs. (4c) and (4d) follow from summing b k and var(bk ), for
1 ≤ k ≤ n. By a generalization of the central limit theorem to sums of independent but
nonidentical RVs, it follows that (In − In )/σIn converges to the normal distribution, as
n → ∞.
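The product form (4b) is easy to check numerically. The following Python sketch (ours, not part of the original text) multiplies out the factors b_k(z) and verifies the mean (4c) and variance (4d) for a small n:

    from fractions import Fraction

    def inversion_pgf(n):
        """Coefficient list of Phi_n(z) = prod_k (z^0 + ... + z^(n-k)) / (n-k+1)."""
        coeffs = [Fraction(1)]
        for k in range(1, n + 1):
            m = n - k + 1                                    # block (z^0 + ... + z^(m-1)) / m
            new = [Fraction(0)] * (len(coeffs) + m - 1)
            for i, c in enumerate(coeffs):
                for j in range(m):
                    new[i + j] += c / m
            coeffs = new
        return coeffs

    n = 8
    pk = inversion_pgf(n)
    mean = sum(k * c for k, c in enumerate(pk))
    var = sum(k * k * c for k, c in enumerate(pk)) - mean ** 2
    print(mean, Fraction(n * (n - 1), 4))                    # both equal 14
    print(var, Fraction(n * (2 * n + 5) * (n - 1), 72))      # both equal 49/3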
Another RV important to our sorting analyses is the number of left-to-right minima,
denoted by Ln . For a permutation σ ∈ Sn , Ln [σ] is the number of elements in σ that are
less than all the preceding elements in σ. In terms of (2a), we have
    L_n[σ] = |{ 1 ≤ k ≤ n | b_k = σ_k^{−1} − 1 }|.
Let us define L_{n,k} to be the number of permutations of n elements having k left-to-right minima. By Theorem 1, the OGF L_n(z) = ∑_k L_{n,k} z^k is given by

    L_n(z) = z(z + 1)(z + 2) . . . (z + n − 1),    (5a)

since the contribution to L_n(z) from the x_j term in the factor (x_0 + x_1 + · · · + x_{k−1}) of Theorem 1 is z if j = k − 1 and 1 otherwise. The PGF Λ_n(z) = ∑_k Pr{L_n = k} z^k is thus

    Λ_n(z) = ∑_k (L_{n,k}/n!) z^k = ∏_{1≤k≤n} ℓ_k(z),    where ℓ_k(z) = (z + k − 1)/k.    (5b)
Taking derivatives as above, we get

    L_n = Λ′_n(1) = H_n;    (5c)

    var(L_n) = Λ″_n(1) + Λ′_n(1) − (Λ′_n(1))² = H_n − H_n^{(2)},    (5d)

where H_n = ∑_{1≤k≤n} 1/k is the nth harmonic number and H_n^{(2)} = ∑_{1≤k≤n} 1/k². The
mean Ln is much less than the worst-case value of Ln , which is n. As above, we can look
at this derivation in the way suggested by the product decomposition of Λ n (z) in (5b):
We can decompose Ln into a sum of independent RVs `1 + `2 + · · · + `n , where `k [σ] is 1
if σk is a left-to-right minimum, and 0 otherwise. The PGF for `k is `k (z) given in (5b),
and summing `k and var(`k ) for 1 ≤ k ≤ n gives (5c) and (5d). The central limit theorem
shows that Ln , when normalized, converges to the normal distribution.
The above information about In and Ln suffices for our purposes of analyzing sorting
algorithms, but it is interesting to point out that Theorem 1 has further applications. For
example, let Tn,i,j,k be the number of permutations σ ∈ Sn such that In [σ] = i, Ln [σ] = j,
and there are k left-to-right maxima. By Theorem 1, the OGF of Tn,i,j,k is
    T_n(x, y, z) = ∑_{i,j,k} T_{n,i,j,k} x^i y^j z^k
                 = yz (y + xz)(y + x + x²z) . . . (y + x + x² + · · · + x^{n−1}z).    (6)
3.2. Insertion Sort
Insertion sort is the method card players typically use to sort card hands. In the kth loop,
for 1 ≤ k ≤ n − 1, the first k elements x[1], . . . , x[k] are already in sorted order, and the
(k + 1)st element x[k + 1] is inserted into its proper place with respect to the preceding
elements.
In the simplest variant, called straight insertion, the correct position for x[k + 1] is
found by successively comparing x[k + 1] with x[k], x[k − 1], . . . until an element ≤ x[k + 1]
is found. The intervening elements are simultaneously bumped one position to the right
to make room. (For simplicity, we assume that there is a dummy element x[0] with
value −∞ so that an element ≤ x[k + 1] is always found.) When the values in the input
file are distinct, the number of comparisons in the kth loop is equal to 1 plus the number of
elements > x[k + 1] that precede x[k + 1] in the input. In terms of the inversion table (2a),
this is equal to 1+bx[k+1] . By summing on k, we find that the total number of comparisons
used by straight insertion to sort a permutation σ is In [σ] + n − 1. The following theorem
follows directly from (4c) and (4d):
Theorem 2. The mean and the variance of the number of comparisons performed by straight insertion when sorting a random permutation are n²/4 + 3n/4 − 1 and n(2n + 5)(n − 1)/72, respectively.
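A short simulation confirms the mean of Theorem 2 empirically. The following Python sketch (ours, not from the original text; the counting convention includes the comparison against the dummy element) counts comparisons over random permutations:

    import random

    def insertion_comparisons(a):
        a = list(a)
        comps = 0
        for k in range(1, len(a)):
            x, i = a[k], k - 1
            while True:
                comps += 1                       # compare x with a[i] (i < 0 plays the role of -infinity)
                if i < 0 or a[i] <= x:
                    break
                a[i + 1] = a[i]
                i -= 1
            a[i + 1] = x
        return comps

    n, trials = 20, 2000
    avg = sum(insertion_comparisons(random.sample(range(n), n)) for _ in range(trials)) / trials
    print(avg, n * n / 4 + 3 * n / 4 - 1)        # empirical mean vs. n^2/4 + 3n/4 - 1 = 114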
An alternative to straight insertion is to store the already-sorted elements in a binary
search tree; the kth loop consists of inserting element x[k + 1] into the tree. After all
n elements are inserted, the sorted order can be obtained via an inorder traversal. A
balanced binary search tree can be used to ensure O(n log n) worst-case time performance,
but the overhead of the balancing operations slows down the algorithm in practice. When
the tree is not required to be balanced, there is little overhead, and the average running time
is faster. We defer the analysis until our discussion of binary search trees in Section 4.2.
3.3. Shellsort
The main reason why straight insertion is relatively slow is that the items are inserted
sequentially; each comparison reduces the number of inversions (which is Θ(n 2 ) on the
average) by at most 1. Thus, the average running time is Θ(n2 ). D. L. Shell [1959]
proposed an efficient variant (now appropriately called Shellsort) in which the insertion
process is done in several passes of successive refinements. For a given input size n, the
passes are determined by an “increment sequence” ht , ht−1 , . . . , h1 , where h1 = 1. The
h_i pass consists of straight insertion sorts of each of the h_i subfiles

    subfile 1:    x[1]  x[1 + h_i]  x[1 + 2h_i]  . . .
    subfile 2:    x[2]  x[2 + h_i]  x[2 + 2h_i]  . . .
       · · ·
    subfile h_i:  x[h_i]  x[2h_i]  x[3h_i]  . . .    (7)
In the early passes (when the increments are typically large), elements can be displaced far
from their previous positions with only a few comparisons; the later passes “fine tune” the
placement of elements. The last pass, when h1 = 1, consists of a single straight insertion
sort of the entire array; we know from Section 3.2 that this is fast when the number of
remaining inversions is small.
Two-Ordered Permutations. A good introduction to the average-case analysis of Shellsort is the two-pass version with increment sequence (2, 1). We assume that the input is
a random permutation. Our measure of complexity is the total number of inversions encountered in the subfiles (7) during the course of the algorithm. For simplicity, we restrict
ourselves to the case when n is even.
The first pass is easy to analyze, since it consists of two independent straight insertion
sorts, each of size n/2. We call a permutation k-ordered if x[i] < x[i + k], for all 1 ≤ i ≤
n − k. At the end of the first pass, the permutation is 2-ordered, and by our randomness assumption it is easy to see that each of the (n choose n/2) possible 2-ordered permutations is equally likely. The analysis of the last pass consists in determining the average number of
inversions in a random 2-ordered permutation.
Theorem 3. The mean and the variance of the number of inversions I_{2n} in a random 2-ordered permutation of size 2n are

    I_{2n} = n 4^{n−1} / (2n choose n) ∼ (√π/4) n^{3/2},        var(I_{2n}) ∼ (7/30 − π/16) n³.
Proof. The starting point of the proof is the important 1–1 correspondence between
the set of 2-ordered permutations of 2n elements and the set of monotone paths from the
upper-left corner (0, 0) to the bottom-right corner (n, n) of the n-by-n lattice. The kth
step of the path is ↓ if k appears in an odd position in the permutation, and it is → if
k appears in an even position. The path for a typical permutation σ is given in Figure 1.
The sorted permutation has the “staircase path” shown by dotted lines. The important
property of this representation is that the number I2n [σ] of inversions in σ is equal to the
area between the staircase path and σ’s path.
There is an easy heuristic argument to show why I2n should be Θ(n3/2 ): Intuitively,
the first n steps of a random path from (0, 0) to (n, n) are like a random walk, and
similarly for the last n steps. (The transition probabilities are slightly different from those
for a random walk since the complete path is constrained to have exactly n ↓ moves and n → moves.) It is well known that random walks tend to be Θ(√n) units away from the diagonal after n steps, thus suggesting that the area between the walk and the staircase path is Θ(n^{3/2}).
An extension to the notions of admissibility in Section 1.2 provides an elegant and precise way to count the area, cumulated among all possible 2-ordered permutations. Let us define P to be the set of all possible paths of length ≥ 0 from (0, 0) to another point on the diagonal, and P̂ to be the subset of all "arches" (paths that meet the diagonal only at the endpoints) of positive length. Each path in P can be decomposed uniquely into a concatenation of zero or more arches in P̂. In the language of Theorem 1.2, P is the sequence class of P̂:

    P = P̂*.    (8a)

For example, the path p ∈ P in Figure 1 consists of four paths in P̂: from (0, 0) to (3, 3), from (3, 3) to (4, 4), from (4, 4) to (6, 6), and from (6, 6) to (8, 8). For reasons of symmetry,
Figure 1. Correspondence between 2-ordered permutations of 2n elements and monotone
paths from (0, 0) to (n, n), for n = 8. The dark path corresponds to the permutation
σ = 3 1 5 2 6 4 7 8 9 11 10 12 15 13 16 14. The dashed-line staircase path represents the sorted permutation. The number of inversions in σ (namely, 9) is equal to the shaded area between σ’s path
and the staircase path.
it is useful to look at paths that stay to one side (say, the right side) of the diagonal. The restrictions of P and P̂ to the right side of the diagonal are denoted S and Ŝ, respectively. As above, we have

    S = Ŝ*.    (8b)
Each path has the same number of ↓ moves as → moves. We define the size of path p,
denoted |p|, to be the number of ↓ moves in p. The other parameter of interest is the area
between p and the staircase path; we call this area the weight of p and denote it by ‖p‖. The size and weight functions are linear; that is, |pq| = |p| + |q| and ‖pq‖ = ‖p‖ + ‖q‖,
where pq denotes the concatenation of paths p and q.
Let P_{n,k} (respectively, P̂_{n,k}, S_{n,k}, Ŝ_{n,k}) be the number of paths p ∈ P (respectively, P̂, S, Ŝ) such that |p| = n and ‖p‖ = k. We define the OGF

    P(u, z) = ∑_{k,n} P_{n,k} u^k z^n,    (9)

and we define the OGFs P̂(u, z), S(u, z), and Ŝ(u, z) similarly. The mean and variance
of I2n can be expressed readily in terms of P (u, z):
    E{I_{2n}} = (1/(2n choose n)) [z^n] ∂P(u, z)/∂u |_{u=1};    (10a)

    E{I_{2n}(I_{2n} − 1)} = (1/(2n choose n)) [z^n] ∂²P(u, z)/∂u² |_{u=1};    (10b)

    var(I_{2n}) = E{I_{2n}(I_{2n} − 1)} + E{I_{2n}} − (E{I_{2n}})².    (10c)
We can generalize our admissibility approach in Theorem 1.2 to handle OGFs with two
variables:
Lemma 1. The sequence construct is admissible with respect to the size and weight functions:

    P = P̂*    ⟹    P(u, z) = 1 / (1 − P̂(u, z));    (11)

    S = Ŝ*    ⟹    S(u, z) = 1 / (1 − Ŝ(u, z)).    (12)
Proof of Lemma 1. An equivalent definition of the OGF P(u, z) is

    P(u, z) = ∑_{p∈P} u^{‖p‖} z^{|p|}.

Each nontrivial path p ∈ P can be decomposed uniquely into a concatenation p̂_1 p̂_2 . . . p̂_ℓ of nontrivial paths in P̂. By the linearity of the size and weight functions, we have

    P(u, z) = ∑_{ℓ≥0} ∑_{p̂_1,...,p̂_ℓ ∈ P̂} u^{‖p̂_1‖+···+‖p̂_ℓ‖} z^{|p̂_1|+···+|p̂_ℓ|}
            = ∑_{ℓ≥0} ( ∑_{p̂∈P̂} u^{‖p̂‖} z^{|p̂|} )^ℓ = ∑_{ℓ≥0} P̂(u, z)^ℓ = 1 / (1 − P̂(u, z)).

The proof of (12) is identical.
Lemma 1 gives us two formulæ relating the four OGFs P(u, z), P̂(u, z), S(u, z), and Ŝ(u, z). The following decomposition gives us another two relations, which closes the cycle and lets us solve for the OGFs: Every path ŝ ∈ Ŝ can be decomposed uniquely into the path → s ↓, for some s ∈ S, and the size and weight functions of ŝ and s are related by |ŝ| = |s| + 1 and ‖ŝ‖ = ‖s‖ + |s| + 1. Hence, we have

    Ŝ(u, z) = ∑_{s∈S} u^{‖s‖+|s|+1} z^{|s|+1} = uz ∑_{s∈S} u^{‖s‖} (uz)^{|s|} = uz S(u, uz).    (13)
Each path in P̂ is either in Ŝ or in the reflection of Ŝ about the diagonal, which we call refl(Ŝ). For ŝ ∈ Ŝ, we have |refl(ŝ)| = |ŝ| and ‖refl(ŝ)‖ = ‖ŝ‖ − |ŝ|, which gives us

    P̂(u, z) = ∑_{ŝ ∈ Ŝ ∪ refl(Ŝ)} u^{‖ŝ‖} z^{|ŝ|} = ∑_{ŝ∈Ŝ} ( u^{‖ŝ‖} z^{|ŝ|} + u^{‖ŝ‖−|ŝ|} z^{|ŝ|} ) = Ŝ(u, z) + Ŝ(u, z/u).    (14)
Equations (13) and (14) can be viewed as types of admissibility reductions, similar to
those in Lemma 1, except that in this case the weight functions are not linear (the relations between the weight functions involve the size function), thus introducing the uz and
z/u arguments in the right-hand sides of (13) and (14).
The four relations (11)–(14) allow us to solve for P(u, z). Substituting (13) into (12) gives

    S(u, z) = uz S(u, z) S(u, uz) + 1,    (15)

and substituting (13) into (14) and the result into (11), we get

    P(u, z) = ( uz S(u, uz) + z S(u, z) ) P(u, z) + 1.    (16)

Using (16), we can then express ∂P(u, z)/∂u |_{u=1} and ∂²P(u, z)/∂u² |_{u=1}, which we need for (10), in terms of derivatives of S(u, z) evaluated at u = 1, which in turn can be calculated from (15). These calculations are straightforward, but are best done with a symbolic algebra system.
Alternate Proof. We can prove the first part of Theorem 3 in a less elegant way by studying how the file is decomposed after the first pass into the two sorted subfiles

    X_1 = x[1] x[3] x[5] . . .        and        X_2 = x[2] x[4] x[6] . . . .

We can express I_{2n} as

    I_{2n} = (1/(2n choose n)) ∑_{1≤i≤n, 0≤j≤n−1} A_{i,j},    (17a)

where A_{i,j} is the total number of inversions involving the ith element of X_1 (namely, x[2i − 1]), among all 2-ordered permutations in which there are j elements in X_2 less than x[2i − 1]. The total number of such 2-ordered permutations is

    (i + j − 1 choose i − 1) (2n − i − j choose n − j),

and a simple calculation shows that each contributes |i − j| inversions to A_{i,j}. Substituting this into (17a), we get

    I_{2n} = (1/(2n choose n)) ∑_{1≤i≤n, 0≤j≤n−1} |i − j| (i + j − 1 choose i − 1) (2n − i − j choose n − j),    (17b)

and the rest of the derivation consists of manipulation of binomial coefficients. The derivation of the variance is similar.
The increment sequence (2, 1) is not very interesting in practice, because the first pass
still takes quadratic time, on the average. We can generalize the above derivation to show
that the average number In of inversions in a random h-ordered permutation is
    I_n = (2^{2q−1} q! q! / (2q + 1)!) ( (h choose 2) q(q+1) + (r choose 2)(q+1) − (h−r choose 2) q/2 ),    (18)
where q = ⌊n/h⌋ and r = n mod h [Hunt 1967]. This allows us to determine (for large h and larger n) the best two-pass increment sequence (h, 1). In the first pass, there are h insertion sorts of size ∼ n/h; by (4c), the total number of inversions in the subfiles is ∼ n²/(4h), on the average. By (18), we can approximate the average number of inversions encountered in the second pass by I_n ∼ (1/8)√(πh) n^{3/2}. The total number of inversions in both passes is thus ∼ n²/(4h) + (1/8)√(πh) n^{3/2}, on the average, which is minimized when h ∼ (16n/π)^{1/3}. The resulting expected running time is O(n^{5/3}).
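The following Python sketch (ours, not part of the original text) implements the two-pass (h, 1) Shellsort with the increment h ≈ (16n/π)^{1/3} suggested above; multiplying n by 8 multiplies the comparison count by roughly 8^{5/3} = 32:

    import math, random

    def shellsort_two_pass(a):
        """Two-pass Shellsort with increments (h, 1), h ~ (16n/pi)^(1/3); returns the comparison count."""
        a = list(a)
        n = len(a)
        comps = 0
        h = max(1, round((16 * n / math.pi) ** (1 / 3)))
        for gap in (h, 1):
            for k in range(gap, n):            # gap-insertion sort of each subfile
                x, i = a[k], k - gap
                while i >= 0:
                    comps += 1
                    if a[i] <= x:
                        break
                    a[i + gap] = a[i]
                    i -= gap
                a[i + gap] = x
        assert a == sorted(a)
        return comps

    for n in (1000, 8000):
        print(n, shellsort_two_pass(random.sample(range(n), n)))   # counts grow roughly like n^(5/3)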
When there are more than two passes, an interesting phenomenon occurs: if an h-sorted file is k-sorted, the file remains h-sorted. Yao [1980] shows how to combine this fact
with an extension of the approach used in (17a) and (17b) to analyze increments of the
form (h, k, 1), for constant values h and k. Not much else is known in the average case,
except when each increment is a multiple of the next. In that case, the running time can be reduced to O(n^{1.5+ε/2}), where ε = 1/(2t − 1) and t is the number of increments.

V. Pratt discovered that we get an O(n log² n)-time algorithm in the worst case if we use all the increments of the form 2^p 3^q, for p, q ≥ 0. For maximum efficiency, the increments need not be in decreasing order, but 2^{p+1} 3^q and 2^p 3^{q+1} should precede 2^p 3^q. This particular approach is typically not used in practice, since the number of increments (and hence the number of passes) is O(log² n). Sequences with only O(log n) increments that result in O(n^{1+ε}) running time are reported in [Incerpi and Sedgewick, 1985]. Lower bounds on the worst-case sorting time for various types of increment sequences with O(log n) increments are given in [Weiss and Sedgewick 1988], [Cypher 1989]. For example, Shellsort requires Ω(n log² n / log log n) worst-case time when the increment sequence is monotonically decreasing [Cypher 1989].
A possible application of Shellsort is to the construction of efficient networks for
sorting. Sorting networks operate via a sequence of pairwise comparison/exchanges, where
the choice of which pair of elements to compare next is made independently of the outcomes
of the previous comparisons. Comparison/exchanges that involve different elements can
be done in parallel by the network, so up to n/2 operations can be done simultaneously
in one parallel step. Sorting networks thus require Ω (log n) parallel steps (or depth).
K. E. Batcher developed practical sorting networks of depth (1/2)k², for n = 2^k, based upon
his odd-even merge and bitonic sort networks [Batcher 1968] [Knuth 1973b]. Recently,
Ajtai, Komlós, and Szemerédi [1983] solved a longstanding open problem by constructing
a sorting network of depth O(log n); a complete coverage is given in [Pippenger 1989].
The AKS sorting network is a theoretical breakthrough, but in terms of practicality the
network is not very useful since the coefficient implicit in the Big-oh term is huge. If
an O(n log n)-time Shellsort is found, it might be possible to modify it to yield a sorting
network of depth O(log n) that is practical. However, the lower bound result quoted above
[Cypher 1989] shows that, in order to find an O(n log n)-time Shellsort, the increment
sequence will have to be fundamentally different from those used in practice.
3.4. Bubble Sort
The bubble sort algorithm works by a series of passes. In each pass, some final portion
x[` + 1] x[` + 2] . . . x[n] of the array is already known to be in sorted order, and the largest
element in the initial part of the array x[1] x[2] . . . x[`] is “bubbled” to the right by a
sequence of ` − 1 pairwise comparisons
x[1] : x[2], x[2] : x[3], . . . , x[` − 1] : x[`].
In each comparison, the elements exchange place if they are out of order. The value of `
is reset to the largest t such that the comparison x[t] : x[t + 1] required an exchange, and
then the next pass starts.
The bubble sort algorithm itself is not very useful in practice, since it runs more slowly
than insertion sort and selection sort, yet is more complicated to program. However, its
analysis provides an interesting use of inversion statistics and asymptotic techniques. The
running time of bubble sort depends upon three quantities: the number of inversions I n ,
the number of passes An , and the number of comparisons Cn . The analysis of In has
already been given in Section 3.1.
Theorem 4 [Knuth, 1973b]. The average number of passes and comparisons done in
bubble sort, on a random permutation of size n, is
    A_n = n − √(πn/2) + O(1),        C_n = (1/2)( n² − n log n − (γ + log 2 − 1) n ) + O(√n).
Proof. Each pass in bubble sort reduces all the nonzero entries in the inversion table
by 1. There are at most k passes in the algorithm when each entry in the original inversion
table is ≤ k. The number of such inversion tables can be obtained via Theorem 1 by
substituting xi = δi≤k into F (x0 , x1 , . . . , xn ), which gives k! k n−k . We use the notation
δR to denote 1 if relation R is true, and 0 otherwise. Plugging this into the definition of
expected value, we get
    A_n = n + 1 − (1/n!) ∑_{0≤k≤n} k! k^{n−k}.    (19)

The summation can be shown to be equal to √(πn/2) − 2/3 + O(1/√n) by an application of the Euler-Maclaurin summation formula (Theorem 2.1).
The average number of comparisons Cn can be determined in a similar way. For the
moment, let us restrict our attention to the jth pass. Let cj be the number of comparisons
done in the jth pass. We have cj = ` − 1, where ` is the upper index of the subarray
processed in pass j. We can characterize ` as the last position in the array at the beginning
of pass j that contains an element which moved to the left one slot during the previous
pass. We denote the value of the element in position ` by i. It follows that all the elements
in positions ` + 1, ` + 2, . . . , n have value > i. We noted earlier that each nonzero entry
in the inversion table decreases by 1 in each pass. Therefore, the number of inversions for
element i at the beginning of the jth pass is bi − j + 1, and element i is located in array
position ` = i + bi − j + 1. This gives us cj = i + bi − j.
Without a priori knowledge of ` or i, we can calculate cj by using the formula
    c_j = max_{1≤i≤n} { i + b_i − j | b_i ≥ j − 1 }.    (20)
The condition bi ≥ j − 1 restricts attention to those elements that move left one place in
pass j − 1; it is easy to see that the term in (20) is maximized at the correct i. To make
use of (20), let us define fj (k) to be the number of inversion tables (2a) such that either
bi < j − 1 or i + bi − j ≤ k. We can evaluate fj (k) from Theorem 1 by substituting xi = 1
if bi < j − 1 or i + bi − j ≤ k, and xi = 0 otherwise. A simple calculation gives
    f_j(k) = (j + k)! (j − 1)^{n−j−k},    for 0 ≤ k ≤ n − j.
By the definition of expected value, we have

    C_n = (1/n!) ∑_{1≤j≤n, 0≤k≤n−j} k ( f_j(k) − f_j(k−1) )
        = (n+1 choose 2) − (1/n!) ∑_{1≤j≤n, 0≤k≤n−j} f_j(k)
        = (n+1 choose 2) − (1/n!) ∑_{0≤r<s≤n} s! r^{n−s}.    (21a)
The intermediate step follows by summation by parts. The summation in (21a) can be
simplified using the Euler-Maclaurin summation formula into series of sums of the form
    (1/m!) ∑_{1≤t<m} (m − t)! (m − t)^t t^q,    for q ≥ −1,    (21b)

where m = n + 1. By applying Stirling's approximation, we find that the summand in (21b) decreases exponentially when t > m^{1/2+ε}, and we can reduce the problem further to that of approximating F_q(1/m), where
    F_q(x) = ∑_{k≥1} e^{−k²x/2} k^q,    for q ≥ −1,    (22a)

as m → ∞. The derivation can be carried out using Laplace's method for sums and the Euler-Maclaurin summation formula (see Section 2.1), but things get complicated for the case q = −1. A more elegant derivation is to use Mellin transforms. The function F_q(x) is a harmonic sum, and its Mellin transform F_q*(s) is

    F_q*(s) = Γ(s) ζ(2s − q) 2^s,    (22b)
defined in the fundamental strip ⟨0; +∞⟩. The asymptotic expansion of F_q(1/m) follows by computing the residues in the left half-plane ℜ(s) ≤ 0. There are simple poles at s = −1, −2, . . . because of the term Γ(s). When q = −1, the Γ(s) and ζ(2s − q) terms combine to contribute a double pole at s = 0. When q ≥ 0, Γ(s) contributes a simple pole at s = 0, and ζ(2s − q) has a simple pole only at s = (q + 1)/2 > 0. Putting everything together, we find that
    (1/n!) ∑_{0≤r<s≤n} s! r^{n−s} = (1/2) n log n + (1/2)(γ + log 2) n + O(√n).    (23)
The formula for Cn follows immediately from (21a).
After k passes in the bubble sort algorithm, the number of elements known to be
in their final place is typically larger than k; the variable ` is always set to be as small
as possible so as to minimize the size of the array that must be considered in a pass.
The fact that (23) is O(n log n) implies that the number of comparisons for large n is not
significantly less than that of the more naïve algorithm in which the bubbling process in
the kth pass is done on the subarray x[1] x[2] . . . x[n − k + 1].
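The following Python sketch (ours, not part of the original text) runs the bubble sort just described on random permutations and compares the empirical averages with the leading terms of Theorem 4; the agreement is up to the lower-order terms:

    import math, random

    def bubble_stats(a):
        a = list(a)
        comps = passes = 0
        l = len(a)                              # a[l:] is already in its final position
        while l > 1:
            passes += 1
            last = 1
            for t in range(l - 1):              # compare a[t] : a[t+1]
                comps += 1
                if a[t] > a[t + 1]:
                    a[t], a[t + 1] = a[t + 1], a[t]
                    last = t + 1
            l = last                            # new bound: last position that was exchanged
        return passes, comps

    n, trials = 50, 400
    ap = ac = 0
    for _ in range(trials):
        p, c = bubble_stats(random.sample(range(n), n))
        ap += p; ac += c
    print(ap / trials, n - math.sqrt(math.pi * n / 2))
    print(ac / trials, (n * n - n * math.log(n) - (0.5772 + math.log(2) - 1) * n) / 2)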
3.5. Quicksort
We can get a more efficient exchange-based sorting algorithm by using a divide-and-conquer
approach. In the quicksort method, due to C. A. R. Hoare, an element s is chosen (say,
the first element in the file), the file is partitioned into the part ≤ s and the part > s, and
then each half is sorted recursively. The recursive decomposition can be analyzed using the
sophisticated tools we develop in Section 4.2 for binary search trees, which have a similar
decomposition, and so we defer the analysis until Section 4.2. The expected number of
comparisons is ∼ 2n log n. Analysis techniques and results for quicksort can be found in
[Knuth 1973b] and [Sedgewick 1977b].
Quicksort is an extremely good general-purpose sorting routine. A big drawback of
the version described above is its worst-case performance: it requires Θ(n 2 ) time to sort
a file that is already sorted or nearly sorted! A good way to guard against guaranteed
bad behavior is to choose the partitioning element s to be a random element in the current subfile, in which case our above analysis applies. Another good method is to choose s to be the median of three elements from the subfile (say, the first, middle, and last elements). This also has the effect of reducing the average number of comparisons to ∼ (12/7) n log n.
7 n log n.
If the smaller of the two subfiles is always processed first after each partition, then the
recursion stack contains at most log n entries. But by clever programming, we can simulate
the stack with only a constant amount of space, at a very slight increase in computing time
[Ďurian 1986], [Huang and Knuth 1986]. The analysis of quicksort in the presence of some
equal keys is given in [Sedgewick 1977a].
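A minimal sketch of the random-pivot variant in Python (ours, not part of the original text), counting comparisons and checking the exact mean 2(n + 1)H_n − 4n ∼ 2n log n:

    import random

    def quicksort_comparisons(a):
        """Quicksort with a random partitioning element; returns the number of comparisons."""
        if len(a) <= 1:
            return 0
        s = random.choice(a)
        smaller = [x for x in a if x < s]
        larger  = [x for x in a if x > s]
        # len(a) - 1 comparisons against the pivot, then recurse on both parts
        return len(a) - 1 + quicksort_comparisons(smaller) + quicksort_comparisons(larger)

    n, trials = 1000, 50
    avg = sum(quicksort_comparisons(random.sample(range(n), n)) for _ in range(trials)) / trials
    H = sum(1 / k for k in range(1, n + 1))
    print(avg, 2 * (n + 1) * H - 4 * n)          # exact mean 2(n+1)H_n - 4n, about 10986 for n = 1000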
3.6. Radix-Exchange Sort
The radix-exchange sort algorithm is an exchange-based algorithm that uses divide-and-conquer in a different way from quicksort. The recursive decomposition is based upon the
individual bits of the keys. In pass k, the keys are partitioned into two groups: the group
whose kth least significant bit is 0, and the group whose kth least significant bit is 1. The
partitioning is done in a “stable” manner so that the relative order of the keys within each
group is the same as before the partitioning. Then the 1-group is appended to the 0-group,
and the next pass begins. After t passes, where t is the number of bits in the keys, the
algorithm terminates.
In this case, the recursive decomposition is identical to radix-exchange tries, which
we shall study in a general context in Section 4.3, and the statistics of interest for radix-exchange sorting can be expressed directly in terms of corresponding parameters of tries.
We defer the details to Section 4.3.
3.7. Selection Sort and Heapsort
Selection sort in some respects is the inverse of insertion sort, because the order in which
the elements are processed is based upon the output order rather than upon the input order. In the kth pass, for 1 ≤ k ≤ n − 1, the kth smallest element is selected and is put into its final place x[k].
Straight Selection. In the simplest variant, called straight selection sort, the kth smallest
element is found by a sequential scan of x[k] x[k + 1] . . . x[n], and it changes places with
the current x[k]. Unlike insertion sort, the algorithm is not stable; that is, two elements
with the same value might be output in the reverse of the order that they appear in the
input.
The number of comparisons performed is always

    C_n = (n − 1) + (n − 2) + · · · + 1 = n(n − 1)/2.
The number of times a new minimum is found (the number of data movements) in the
kth pass is the number of left-to-right minima Ln−k+1 encountered in x[k] x[k + 1] . . . x[n],
minus 1. All permutations of {x[k], x[k + 1], . . . , x[n]} are equally likely. By (5c), the
average number of updates of the minimum over the course of the algorithm is
    ∑_{1≤k≤n−1} ( L_{n−k+1} − 1 ) = (n + 1)H_n − 2n = n log n + (γ − 2)n + log n + O(1).
The variance is much more difficult to compute, since the contributions L n−k+1 from
the individual passes are not independent. If the contributions were independent, then
by (5d) the variance would be ∼ n log n. Yao [1988] shows by relating the variance
to a geometric stochastic process that the variance is ∼ αn3/2 , and he gives the constant
α = 0.91 . . . explicitly in summation form.
Heapsort. A more sophisticated way of selecting the minimum, called heapsort, due to
J. W. J. Williams, is based upon the notion of tournament elimination. The n − k + 1 elements to consider in the kth pass are stored in a heap priority queue. A heap is a tree in
which the value of the root of each subtree is less than or equal to the values of the other
elements in the subtree. In particular, the smallest element is always at the root. The
heaps we use for heapsort have the nice property that the tree is always perfectly balanced,
except possibly for the rightmost part of the last level. This allows us to represent the
heap as an array h without need of pointers: the root is element h[1], and the children of
element h[i] are stored in h[2i] and h[2i + 1].
The kth pass of heapsort consists of outputing h[1] and deleting it from the array; the
element stored in h[n − k + 1] is then moved to h[1], and it is “filtered” down to its correct
place in O(log n) time. The creation of the initial heap can be done in linear time. The
worst-case time to sort the n elements is thus O(n log n). In the average case, the analysis
is complicated by a lack of randomness: the heap at the start of the kth pass, for k ≥ 2,
is not a random heap of n − k + 1 elements. Heaps and other types of priority queues are
discussed in Section 6.1 and [Mehlhorn and Tsakalidis 1989].
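The implicit array representation and the filtering step are easy to see in code. The following Python sketch (ours, not part of the original text) uses a min-heap stored in h[1..n] with the children of h[i] at h[2i] and h[2i + 1]:

    def heapsort(a):
        """Heapsort sketch: build a min-heap in linear time, then output the root n times."""
        n = len(a)
        h = [None] + list(a)                     # 1-based array; h[0] is unused

        def sift_down(i, size):                  # filter h[i] down to its correct place
            while 2 * i <= size:
                c = 2 * i
                if c + 1 <= size and h[c + 1] < h[c]:
                    c += 1
                if h[i] <= h[c]:
                    break
                h[i], h[c] = h[c], h[i]
                i = c

        for i in range(n // 2, 0, -1):           # linear-time heap construction
            sift_down(i, n)
        out = []
        for size in range(n, 0, -1):             # each pass: output root, refill from the last slot
            out.append(h[1])
            h[1] = h[size]
            sift_down(1, size - 1)
        return out

    print(heapsort([5, 2, 8, 1, 9, 3]))          # [1, 2, 3, 5, 8, 9]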
3.8. Merge Sort
The first sorting program ever executed on an electronic computer used the following
divide-and-conquer approach, known as merge sort: The file is split into two subfiles of
equal size (or nearly equal size), each subfile is sorted recursively, and then the two sorted
subfiles are merged together in the obvious linear fashion. When done from bottom-up,
the algorithm consists of several passes; in each pass, the sorted subfiles are paired off, and
each pair is merged together.
The linear nature of the merging process makes it ideal for input files in the form of
a linked list and for external sorting applications, in which the file does not fit entirely
in internal memory and must instead be stored in external memory (like disk or tape),
which is best accessed in a sequential manner. Typically, the merge is of a higher order
than 2; for example, four subfiles at a time might be merged together, rather than just
two. Considerations other than the number of comparisons, such as the rewind time on
tapes and the seek time on disks, affect the running time. An encyclopedic collection of
variants of merge sort and their analyses appears in [Knuth, 1973b]. Merge sort algorithms
that are optimal for external sorting with multiple disks are discussed in [Aggarwal and
Vitter, 1988], [Nodine and Vitter, 1990].
For simplicity, we restrict our attention to the number of comparisons performed
during a binary (order-2) merge sort, when n = 2^j, for some j ≥ 0. All the comparisons take place during the merges. For each 0 ≤ k ≤ j − 1, there are 2^k merges of pairs of sorted subfiles, each subfile of size n/2^{k+1} = 2^{j−k−1}. If we assume that all n! permutations are
equally likely, it is easy to see that, as far as relative order is concerned, the two subfiles in
each merge form a random 2-ordered permutation, independent of the other merges. The
number of comparisons to merge two random sorted subfiles of length p and q is p + q − µ,
where µ is the number of elements remaining to be output in one subfile when the other
subfile becomes exhausted. The probability that µ ≥ s, for s ≥ 1, is the probability that
the s largest elements are in the same subfile, namely,
    p^{s̲}/(p + q)^{s̲} + q^{s̲}/(p + q)^{s̲},    (24)

where a^{b̲} denotes the "falling power" a(a − 1) . . . (a − b + 1). Hence, we have

    µ = ∑_{s≥1} ( p^{s̲}/(p + q)^{s̲} + q^{s̲}/(p + q)^{s̲} ) = p/(q + 1) + q/(p + 1).

By (24), the total number of comparisons used by merge sort, on the average, is

    C_n = j 2^j − ∑_{1≤k≤j−1} 2^k · 2^{j−k}/(2^{j−k−1} + 1) = n log₂ n − αn + O(1),

where

    α = ∑_{n≥0} 1/(2^n + 1) = 1.2645 . . . .
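The expected value µ = p/(q + 1) + q/(p + 1) is easy to confirm by simulation; the following Python sketch (ours, not part of the original text) merges two random sorted subfiles and counts comparisons:

    import random

    def merge_comparisons(p, q):
        """Split a random set of p+q distinct keys into sorted subfiles of sizes p and q, then merge."""
        elems = random.sample(range(p + q), p + q)
        a, b = sorted(elems[:p]), sorted(elems[p:])
        i = j = comps = 0
        while i < len(a) and j < len(b):         # merging stops when one subfile is exhausted
            comps += 1
            if a[i] <= b[j]:
                i += 1
            else:
                j += 1
        return comps

    p = q = 16
    trials = 20000
    avg = sum(merge_comparisons(p, q) for _ in range(trials)) / trials
    print(avg, p + q - p / (q + 1) - q / (p + 1))   # expected p + q - mu, about 30.12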
The analysis when n is not a power of 2 involves arithmetic functions based upon
the binary representation of n and can be found in [Knuth, 1973b]. Batcher’s odd-even
merge and bitonic sort networks, which can be used to construct sorting networks of depth (1/2)(log₂ n)², are analyzed in [Batcher 1968], [Knuth 1973b], and [Sedgewick 1978]. Other
merging algorithms are covered in [Mehlhorn and Tsakalidis 1989].
4. Recursive Decompositions and Algorithms on Trees
In this section we develop a uniform framework for obtaining average-case statistics on
four classes of trees—binary and plane trees, binary search trees, radix-exchange tries,
and digital search trees. Our statistics, which include number of trees, number of nodes,
height, and path length, have numerous applications to the analysis of tree-based searching
and symbolic processing algorithms, as well as to the sorting algorithms whose analysis we
deferred from Section 2, such as quicksort, binary tree sort, and radix-exchange sort. Our
approach has two parts:
1. Each of the four classes of trees has a recursive formulation that lends itself naturally
to the symbolic generating function method described in Section 1. The statistic
of interest for each tree t corresponds naturally to a valuation function (VF) v[t].
The key idea which unifies our analyses is an extension of the admissibility concept
of Section 1: A recursive definition of the VF translates directly into a functional
equation involving the generating function. The type of generating function used
(either OGF or EGF) and the type of functional equation that results depend upon
the particular nature of the recursion.
2. We determine the coefficients of the GF based upon the functional equation resulting
from step 1. Sometimes an explicit closed form is obtained, but typically we apply the
asymptotic methods of Section 2, our particular approach depending upon the nature
of the functional equation.
4.1. Binary Trees and Plane Trees
Binary trees and plane trees provide a natural representation for many types of symbolic
expressions and recursive structures. This section studies statistical models under which all
trees of a given size are equally likely. Such models are not applicable to the study of binary
search trees, radix-exchange tries, and digital search trees, which we cover in Sections 4.2,
4.3, and 4.4, but when enriched slightly they provide good models for algorithms operating
on expression trees, term trees, and Lisp structures [Clark 1979].
We begin by considering the class B of binary trees defined in Section 1.1:

    B = {□} + ({○} × B × B),    (1)

where □ represents an external node and ○ an internal node. The size of a binary tree is the number of internal nodes in the tree.
The cartesian product decomposition in (1) suggests that we represent our statistic
of interest via an OGF. Further motivation for this choice is given in Eqs. (1.2) and (1.3).
We use v[t] to represent the valuation function v applied to tree t. We define v_n to be its cumulated value ∑_{|t|=n} v[t] among all trees of size n, and v(z) to be the OGF ∑_{n≥0} v_n z^n. The recursive decomposition of B leads directly to the following fundamental relations:
Theorem 1. The sum and recursive product valuation functions are admissible for the
class B of binary trees:
    v[t] = u[t] + w[t]                   ⟹    v(z) = u(z) + w(z);
    v[t] = u[t_left] · w[t_right]        ⟹    v(z) = z · u(z) · w(z),

where t_left and t_right denote the left and right subtrees of t.
The proof is similar to that of Theorem 1.1. The importance of Theorem 1 is due
to the fact that it provides an automatic translation from VF to OGF, for many VFs of
interest.
Examples. 1. Enumeration. The standard trick we shall use throughout this section for
counting the number of trees of size n in a certain class is to use the unit VF I[t] ≡ 1. For
example, let Bn , for n ≥ 0, be the number of binary trees with n internal nodes. By our
definitions above, Bn is simply equal to In , and thus the OGF B(z) is equal to I(z). We
can solve for B(z) via Theorem 1 if we use the following recursive definition of I[t],
I[t] = δ|t|=0 + I[tleft ] · I[tright ],
(2a)
which is a composition of the two types of VF expressions handled by Theorem 1. Here
δR denotes 1 if relation R is true, and 0 otherwise. Since B0 = 1, the OGF for δ|t|=0 is 1.
Theorem 1 translates (2a) into

    B(z) = 1 + z B(z)².    (2b)

The solution B(z) = (1 − √(1 − 4z))/(2z) follows by the quadratic formula. By expanding coefficients, we get B_n = (1/(n+1)) (2n choose n), as in Section 1.1.
2. Internal Path Length. The standard recursive tree traversal algorithm uses a stack
to keep track of the ancestors of the current node in the traversal. The average stack size,
amortized over the course of the traversal, is related to the internal path length of the tree,
divided by n. The VF corresponding to the cumulated internal path lengths among all
binary trees with n nodes can be expressed in the following form suitable for Theorem 1:
p[t] = |t| − 1 + δ|t|=0 + p[tleft ] + p[tright ]
= |t| − 1 + δ|t|=0 + p[tleft ] · I[tright ] + I[tleft ] · p[tright ].
(3a)
We computed the OGFs for I[t] ≡ 1 and δ_{|t|=0} in the last example, and the OGF for the size VF S[t] = |t| is easily seen to be S(z) = ∑_{n≥0} n B_n z^n = z B′(z). Applying Theorem 1, we have

    p(z) = z B′(z) − B(z) + 1 + 2z p(z) B(z),

which gives us

    p(z) = ( z B′(z) − B(z) + 1 ) / ( 1 − 2z B(z) ) = 1/(1 − 4z) − (1/z) ( (1 − z)/√(1 − 4z) − 1 ).    (3b)
We get pn by expanding (3b). The result is given below; the asymptotics follow from
Stirling’s formula.
Theorem 2. The cumulated internal path length over all binary trees of n nodes is

    p_n = 4^n − ((3n + 1)/(n + 1)) (2n choose n),

and the expected internal path length p_n/B_n is asymptotically n√(πn) − 3n + O(√n).
Theorem 2 implies that the time for a traversal from leaf to root in a random binary tree is O(√n), on the average.
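Theorem 2 can also be checked directly against the recursive decomposition: (3a) yields the recurrence p_n = ∑_{0≤k<n} ( p_k B_{n−1−k} + B_k p_{n−1−k} + (n−1) B_k B_{n−1−k} ). The following Python sketch (ours, not part of the original text) compares that recurrence with the closed form:

    from math import comb

    N = 8
    B = [1] + [0] * N
    P = [0] * (N + 1)
    for n in range(1, N + 1):
        B[n] = sum(B[k] * B[n - 1 - k] for k in range(n))
        P[n] = sum(P[k] * B[n - 1 - k] + B[k] * P[n - 1 - k]
                   + (n - 1) * B[k] * B[n - 1 - k] for k in range(n))
    closed = [4**n - (3 * n + 1) * comb(2 * n, n) // (n + 1) for n in range(N + 1)]
    print(P)
    print(closed)        # the two lists agree: 0, 0, 2, 14, 64, 256, ...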
In a similar derivation, Knuth [1973a] considers the bivariate OGF B(u, z) = ∑_{n,k≥0} b_{n,k} u^k z^n, where b_{n,k} is the number of binary trees with size n and internal path length k. It satisfies the functional equation B(u, z) = 1 + z B(u, uz)² (cf. (2b)). The expected internal path length and the variance can be formed in the usual way in terms of the coefficients of z^n in the derivatives of B(u, z), evaluated at u = 1.
The two examples above illustrate our general philosophy that it is useful to compute
the OGFs for a standard catalogue of valuation functions, so as to handle a large variety
of statistics. The most important VFs are clearly I[t] and S[t].
Another important class of trees is the class G of plane trees (also known as ordered
trees). Each tree in G consists of a root node and an arbitrary number of ordered subtrees.
This suggests the recursive definition

    G = {○} × G*,    (4)

where ○ represents a node, and G* = ∑_{k≥0} G^k is the sequence class of G, defined
in Section 1.2. The size of a tree is defined to be the number of its nodes. An interesting
subclass of G is the class T = T Ω of plane trees in which the degrees of the nodes are
constrained to be in some subset Ω of the nonnegative integers. We require that 0 ∈ Ω or
else the trees will be infinite. The class G is the special case of T Ω when Ω is the set of
nonnegative integers. It is possible to mimic (4) and get the corresponding representation
for T
X
T = { °} ×
T k,
(5)
k∈Ω
but we shall see that it is just as simple to deal directly with (4) by using the appropriate VF
to restrict the degrees. There are two important correspondences between B, G, and T {0,2} :
1. The set of binary trees of size n is isomorphic to the set of plane trees of size n + 1.
A standard technique in data structures illustrates the correspondence: We represent
a general tree of n + 1 nodes as a binary tree with no right subtree and with a binary
tree of n internal nodes as its left subtree; the left link denotes “first child” and the
right link denotes “next sibling.”
2. If we think of the bottommost nodes of trees in T {0,2} as “external nodes,” we get
a 1–1 correspondence between binary trees of size n and plane trees with degree
constraint {0, 2} of size 2n + 1.
Theorem 1 generalizes in a straightforward manner for G:
Theorem 3. The sum and recursive product valuation functions are admissible for the
class G of plane trees:
    v[t] = u[t] + w[t]                                        ⟹    v(z) = u(z) + w(z);
    v[t] = δ_{deg t∈Ω} ∏_{1≤i≤deg t} u_{i,deg t}[t_i]         ⟹    v(z) = z ∑_{k∈Ω} ∏_{1≤i≤k} u_{i,k}(z),

where t_1, t_2, . . . , t_{deg t} represent the subtrees attached to the root of t.
Examples. Enumerations. 1. The number G_n of plane trees with n nodes is obtained via the unit VF I[t] ≡ 1 = ∏_{1≤i≤deg t} I[t_i]. (Plane trees are always non-empty, so |t| ≥ 1.) By Theorem 3, we get G(z) = I(z) = z ∑_{k≥0} I(z)^k = z/(1 − I(z)). This implies G(z) = I(z) = (1/2)(1 − √(1 − 4z)) = z B(z), and thus we have G_{n+1} = B_n, which illustrates correspondence 1 mentioned above.
2. For the number T_n of trees of size n with degree constraint Ω, we apply to G the constrained unit VF I_Ω[t] = δ_{t∈T} = δ_{deg t∈Ω} ∏_{1≤i≤deg t} I_Ω[t_i]. For the special case Ω = {0, 2}, Theorem 3 gives us T(z) = I_Ω(z) = z + z I_Ω(z)². The solution to this quadratic equation is T(z) = I_Ω(z) = (1/(2z))(1 − √(1 − 4z²)) = z B(z²), and thus we have T_{2n+1} = B_n, illustrating correspondence 2.
3. For d ≥ 2, let us define the class D = D_d of d-ary trees to be D = {□} + ({○} × D^d). Binary trees are the special case d = 2. The number D_n of d-ary trees can be obtained by generalizing our derivation of B_n at the beginning of this section. The derivation we present, though, comes from generalizing correspondence 2 and staying within the framework of plane trees: Each d-ary tree corresponds to a plane tree of dn + 1 nodes with degree constraint Ω = {0, d}. The same derivation used in the preceding example gives T(z) = I(z) = z + z I(z)^d. By Lagrange-Bürmann inversion, with f(z) = T(z), ϕ(u) = 1 + u^d, we get

    D_n = T_{dn+1} = [z^{dn+1}] T(z) = (1/(dn + 1)) [u^{dn}] (1 + u^d)^{dn+1} = (1/(dn + 1)) (dn + 1 choose n).
In each of the examples above, the functional equation involving the OGF was simple
enough so that either the OGF could be solved in explicit closed form or else the Lagrange-Bürmann inversion theorem could be applied easily (that is, the coefficients of powers
of ϕ(u) were easy to determine). More advanced asymptotic methods are needed, for
example, to determine the number Tn of plane trees with arbitrary degree constraint Ω.
Let us assume for simplicity that Ω is aperiodic, that is, Ω consists of 0 and any sequence
of positive integers with a greatest common divisor of 1.
To count T_n, we start out as in the second example above. By applying Theorem 3, we get

    T(z) = z ω(T(z)),    (6)

where ω(u) = ∑_{k∈Ω} u^k. Lagrange-Bürmann inversion is of little help when ω(u) has
k∈Ω u . Lagrange-Bürmann inversion is of little help when ω(u) has
several terms, so we take another approach. The singularities of T (z) are of an algebraic
nature. We know from Section 2.2 that the asymptotic behavior of the coefficients T_n is
related to the dominant singularities of T (z), that is, the ones with smallest modulus. To
find the singularities of T (z), let us regard T (z) as the solution y of the equation
F (z, y) = 0,
where F (z, y) = y − zω(y).
(7)
The function y = T (z) is defined implicitly as a function of z. By the Implicit Function
Theorem, the solution y with y(z_0) = y_0 is analytically continuable at z_0 if F_y(z_0, y_0) ≠ 0,
where F_y(z, y) denotes the partial derivative with respect to y. Hence, the singularities
of y (the values of z where y is not analytic) are the values ρ where
    F(ρ, τ) = τ − ρ ω(τ) = 0,     F_y(ρ, τ) = 1 − ρ ω′(τ) = 0.                    (8)
This gives ρ = τ/ω(τ) = 1/ω′(τ), where τ is a root of the equation
    ω(τ) − τ ω′(τ) = 0.                                                           (9)
We denote the dominant singularity of T(z) on the positive real line by ρ*, and we
let τ* be the (unique) corresponding value of τ from (9). Since T(ρ*) = τ*, it follows that
τ* is real. If Ω is aperiodic, then by examining the power series equation corresponding
to (9), we see that τ* is the unique real solution to (9), and any other solution τ must
have larger modulus.
Around the point (ρ*, τ*), the dependence between y and z is locally of the form
    0 = F(z, y) = F_z(ρ*, τ*)(z − ρ*) + 0·(y − τ*) + (1/2) F_{yy}(ρ*, τ*)(y − τ*)²
                  + smaller order terms.                                          (10)
By iteration and bounding the coefficients, we can show that y(z) has the form
    y(z) = f(z) + g(z) √(1 − z/ρ*),                                               (11a)
where f(z) and g(z) are analytic at z = ρ*, and g(ρ*) = −√(2ω(τ*)/ω″(τ*)). Hence, we
have
    y(z) = f(z) + g(ρ*) √(1 − z/ρ*) + O((1 − z/ρ*)^{3/2}).                        (11b)
Theorem 2.2 shows that the contribution of f (z) to Tn is insignificant. By applying the
transfer lemma (Theorem 2.3), we get our final result:
Theorem 4 [Meir and Moon 1978]. If Ω is aperiodic, we have
    T_n ∼ c ρ^{−n} n^{−3/2},
where the constants c and ρ are given by c = √(ω(τ)/(2πω″(τ))) and 0 < ρ = τ/ω(τ) < 1,
and τ is the smallest positive root of the equation ω(τ) − τ ω′(τ) = 0.
For brevity, we expanded only the first terms of y(z) in (11b), but we could easily
have expanded y(z) further to get the full asymptotic expansion of Tn . In the periodic
case, which is also considered in [Meir and Moon 1978], the generating function T (z) has
more than one dominant singularity, and the contributions of these dominant singularities
must be added together.
In the rest of this section we show how several other parameters of trees can be
analyzed by making partial use of this approach. The asymptotics are often determined
via the techniques described in Sections 2.2–2.4.
Height of Plane Trees. For example, let us consider the expected maximum stack size
during a recursive tree traversal. (We earlier considered the expected stack size amortized
over the course of the traversal.) The maximum stack size is simply the height of the tree,
namely, the length of the longest path from the root to a node.
Theorem 5 [De Bruijn, Knuth, and Rice 1972]. The expected height H_n of a plane tree
with n nodes, where each tree in G is equally likely, is H_n = √(πn) + O(1).
Proof Sketch. The number G_n^{[h]} of plane trees with height ≤ h corresponds to using
the 0–1 VF G^{[h]}, which is defined to be 1 if the height of t is ≤ h, and 0 otherwise. It has
a recursive formulation
    G^{[h+1]}[t] = ∏_{1≤i≤deg t} G^{[h]}[t_i],                                    (12a)
which is in the right form to apply Theorem 3. From it we get
    G^{[h+1]}(z) = Σ_{k≥0} z (G^{[h]}(z))^k = z/(1 − G^{[h]}(z)),                 (12b)
where G^{[0]}(z) = z. Note the similarity to the generating function for plane trees, which
satisfies G(z) = z/(1 − G(z)). It is easy to transform (12b) into
    G^{[h+1]}(z) = z F_h(z)/F_{h+1}(z),   where F_{h+2}(z) = F_{h+1}(z) − z F_h(z).    (13)
The polynomials F_h(z) are Chebyshev polynomials. From the linear recurrence that F_h(z)
satisfies, we can express F_h(z) as a rational function of G^{[h]}(z). Then applying Lagrange-Bürmann inversion, we get the following expression:
    G_{n+1} − G_{n+1}^{[h]} = Σ_{j≥1} ( \binom{2n}{n+1−j(h+2)} − 2\binom{2n}{n−j(h+2)} + \binom{2n}{n−1−j(h+2)} ).    (14)
The expected tree height H_{n+1} is given by
    H_{n+1} = (1/G_{n+1}) Σ_{h≥1} h (G_{n+1}^{[h]} − G_{n+1}^{[h−1]}) = (1/G_{n+1}) Σ_{h≥0} (G_{n+1} − G_{n+1}^{[h]}).    (15)
By substituting (14), we see that the evaluation of (15) is related to sums of the form
    S_n = Σ_{k≥1} d(k) \binom{2n}{n−k} / \binom{2n}{n},                           (16)
where d(k) is the number of divisors of k. By Stirling's approximation, we can approximate S_n by T(1/√n), where
    T(x) = Σ_{k≥1} d(k) e^{−k²x²}.                                                (17)
The problem is thus to evaluate T (x) asymptotically as x → 0. This is one of the expansions
we did in Section 2.4 using Mellin transforms, where we found that
    T(x) ∼ −(√π/2)(log x)/x + (3γ/4 − (log 2)/2) √π/x + 1/4 + O(x^m),             (18)
for any m > 0, as x → 0. The theorem follows from the appropriate combination of terms
of the form (18).
Theorem 6 [Flajolet and Odlyzko 1982]. The expected height H_n of a plane tree with
degree constraint Ω, where Ω is aperiodic and each tree in T is equally likely, is ∼ λ√n,
where λ is a constant that depends upon Ω.
Proof Sketch. By analogy with (12), we use the VF T^{[h]}[t] = δ_{height t ≤ h}. This VF can
be expressed recursively as
    T^{[h+1]}[t] = δ_{deg t∈Ω} ∏_{1≤i≤deg t} T^{[h]}[t_i].                        (19a)
Theorem 3 gives us
    T^{[h+1]}(z) = z ω(T^{[h]}(z)),                                               (19b)
where T^{[0]}(z) = z and ω(u) = Σ_{k∈Ω} u^k. The generating function H(z) of the cumulated
height of trees is equal to
    H(z) = Σ_{h≥0} (T(z) − T^{[h]}(z)).                                           (20)
One way to regard (19b) is simply as the iterative approximation scheme to the fixed point
equation (6) that determines T (z). A delicate singularity analysis leads to the result.
To do the analysis, we need to examine the behavior of the iterative scheme near the
singularity z = ρ, which is an example of a singular iteration problem. We find in the
neighborhood of z = ρ that
    H(z) ∼ (d/(1 − z/ρ)) log(1/(1 − z/ρ)),
where d is the appropriate constant. The theorem follows directly.
Methods similar to those used in the proof of this theorem had been used by Odlyzko
to prove the following:
Theorem 7 [Odlyzko 1982]. The number E_n of balanced 2–3 plane trees with n "external"
nodes is ∼ (1/n) φ^n W(log n), where φ is the golden ratio (1 + √5)/2, and W(x) is a continuous
and periodic function.
This result actually extends to several families of balanced trees, which are used as
search structures with guaranteed O(log n) access time. The occurrence of the golden
ratio in Theorem 7 is not surprising, given our discussion in Section 2.2 of the equation
f(z) = z + f(z² + z³), which is satisfied by the OGF of E_n.
Pattern Matching. Another important class of algorithms on trees deals with pattern
matching, the problem of detecting all occurrences of a given pattern tree inside a larger
text tree, which occurs often in symbolic manipulation systems. Unlike the simpler case
of string matching, where linear-time worst-case algorithms are known, it is conjectured
that no linear-time algorithm exists for tree pattern matching.
The following straightforward algorithm, called sequential matching, has quadratic
running time in the worst case, but can be shown to run in linear time on the average. For
each node of the tree, we compare the subtree rooted at that node with the pattern tree
by doing simultaneous preorder traversals. Whenever a mismatch is found, the preorder
traversal is aborted, and the next node in the tree is considered. If a preorder traversal
successfully finishes, then a match is found.
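The following Python sketch of the sequential matching algorithm is illustrative only; the node representation and the convention that the pattern's external nodes act as wildcards (matching arbitrary subtrees) are assumptions of the sketch, not statements from the analysis.

    class Node:
        # A plane tree node: 'label' is the node symbol, 'children' its ordered subtrees.
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    WILDCARD = "*"     # hypothetical marker for the pattern's external (don't-care) nodes

    def match_at(t, p):
        # Simultaneous preorder comparison of the subtree rooted at t with pattern p;
        # it aborts (returns False) at the first mismatch, as described above.
        if p.label == WILDCARD:
            return True
        if t.label != p.label or len(t.children) != len(p.children):
            return False
        return all(match_at(tc, pc) for tc, pc in zip(t.children, p.children))

    def sequential_matching(root, pattern):
        # Try the pattern at every node of the text tree, in preorder.
        occurrences, stack = [], [root]
        while stack:
            node = stack.pop()
            if match_at(node, pattern):
                occurrences.append(node)
            stack.extend(reversed(node.children))
        return occurrences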
Theorem 8 [Steyaert and Flajolet 1983]. The expected running time of the sequential
matching algorithm, when applied to a fixed pattern P and all trees in T of size n, is ∼ cn,
where c is a function of the degree constraint Ω and the structure of pattern P and is
uniformly bounded by an absolute constant, for all P .
Proof Sketch. The proof depends upon a lemma that the probability that P occurs
at a random node in the tree is asymptotically τ^{e−1} ρ^{i}, where i and e are the numbers of
internal and external nodes in P . The algebraic part of the proof of the lemma is a direct
application of the method of Theorems 1 and 3 applied to multisets of trees. Generating
functions for the number of pattern matches have simple expressions in terms of T (z); a
singularity analysis finishes the proof.
The same type of analysis can be applied to a large variety of tree algorithms in a
semi-automatic way. One illustration is the following:
Theorem 9 [Flajolet and Steyaert 1987]. For any set Θ of operators and ∆ of differentiation rules with at least one “expanding rule,” the average-case complexity of the
symbolic differentiation algorithm is asymptotically c n^{3/2} + O(n), where the constant c
depends upon Θ and ∆.
Tree Compaction. A different kind of singular behavior occurs in the problem known
as common subexpression elimination or tree compaction, where a tree is compacted into
a directed acyclic graph by avoiding duplication of identical substructures. This has applications to the compaction of Lisp programs and to code optimization.
Theorem 10 [Flajolet, Sipala, and Steyaert 1987]. The expected size of the maximally
compacted dag representation of a random tree in T of size n is cn/√(log n) + O(n/log n),
where the constant c depends upon Ω.
The dominant singularity in this case is of the form 1/√((1 − z) log(1 − z)^{−1}). The
theorem shows that the space savings to be expected when compacting trees approaches 100
percent as n → ∞, though convergence is slow.
Register Allocation. The register allocation problem consists of finding an optimal strategy for evaluating expressions that can be represented by a tree. The optimal pebbling
strategy, due to Ershov, requires only O(log n) registers to evaluate a tree of size n. The
following theorem determines the coefficient in the average case for evaluating expressions
involving binary operators:
Theorem 11 [Flajolet, Raoult, and Vuillemin 1979], [Kemp 1979]. The expected optimum
number of registers to evaluate a random binary tree of size n is log_4 n + P(log_4 n) + o(1),
where P (x) is a periodic function with period 1 and small amplitude.
Proof Sketch. The analysis involves the combinatorial sum
    V_n = Σ_{k≥1} v_2(k) \binom{2n}{n−k},
where v_2(k) is the number of 2s in the decomposition of k into prime factors. If we
normalize and approximate the binomial coefficient by an exponential term, as in (17), we
can compute the approximation's Mellin transform
    (1/2) Γ(s/2) ζ(s)/(2^s − 1).
The set of regularly-spaced poles s = 2kπi/ log 2 corresponds to periodic fluctuations in
the form of a Fourier series.
4.2. Binary Search Trees
We denote by BST (S) the binary search tree formed by inserting a sequence S of elements.
It has the recursive decomposition
    BST(S) = ⟨BST(S_≤), s_1, BST(S_>)⟩,   if |S| ≥ 1;
    BST(S) = ⟨ ⟩,                          if |S| = 0,                            (21)
where s_1 is the first element in S, S_≤ is the subsequence of the other elements that
are ≤ s_1, and S_> is the subsequence of elements > s_1. An empty binary search tree is
represented by an external node.
The search for an element x proceeds as follows, starting with the root s 1 as the
current node y: We compare x with y, and if x < y we set y to be the left child of y, and
if x > y we set y to be the right child of y. The process is repeated until either x = y
(successful search) or else an external node is reached (unsuccessful search). (Note that
this process finds only the first element with value x. If the elements’ values are all distinct,
this is no problem; otherwise, the left path should be searched until a leaf or an element of
smaller value is reached.) Insertion is done by inserting the new element into the tree at
the point where the unsuccessful search ended. The importance of binary search trees to
sorting and range queries is that a linear-time inorder traversal will output the elements
in sorted order.
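A minimal Python sketch of the insertion and search procedures just described follows; it is illustrative only (ties are sent to the left subtree, matching the decomposition (21)), and the probe count returned by search is the cost measure used below.

    class BSTNode:
        def __init__(self, key):
            self.key = key
            self.left = None       # subtree of elements <= key
            self.right = None      # subtree of elements > key

    def insert(root, x):
        # Insert x at the external node where the unsuccessful search for x ends.
        if root is None:
            return BSTNode(x)
        if x <= root.key:
            root.left = insert(root.left, x)
        else:
            root.right = insert(root.right, x)
        return root

    def search(root, x):
        # Returns the number of probes (nodes compared) and whether x was found.
        probes, node = 0, root
        while node is not None:
            probes += 1
            if x == node.key:
                return probes, True          # successful search
            node = node.left if x < node.key else node.right
        return probes, False                 # unsuccessful search

    def inorder(root, out):
        # Linear-time inorder traversal: outputs the keys in sorted order.
        if root is not None:
            inorder(root.left, out)
            out.append(root.key)
            inorder(root.right, out)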
Well-known data structures, such as 2-3 trees, AVL trees, red-black trees, and self-adjusting search trees, do some extra work to ensure that the insert, delete, and query
operations can be done in O(log n) time, where n is the size of the tree. (In the first
three cases, the times are logarithmic in the worst case, and in the latter case they are
logarithmic in the amortized sense.) Balanced trees are discussed further in [Mehlhorn
and Tsakalidis 1989].
In this section we show that the same logarithmic bounds hold in the average case
without need for any balancing overhead. Our probability model assumes that the sequence S of n elements s1 , s2 , . . . , sn is picked by random sampling from a real interval,
or equivalently, as far as relative ordering is concerned, the elements form a random permutation of size n. The dynamic version of the problem, which corresponds to an average-case
amortized analysis, appears in Section 6.
We define BST to be the class of all binary search trees corresponding to permutations,
BST = {BST (σ) | σ ∈ Sn }. We use K to denote the random variable describing the size
of the left subtree; that is, |S≤ | = K and |S> | = n − 1 − K. By our probability model, the
splitting probabilities become
    Pr{K = k} = 1/n,   for 0 ≤ k ≤ n − 1.                                         (22)
One consequence of this is that not all trees in BST are equally likely to occur. For example,
the perfectly balanced tree of three nodes (which occurs for σ = 2 1 3 and σ = 2 3 1) is
twice as likely to occur as the tree for σ = 1 2 3.
The powerful valuation function method that we introduced in the last section applies
equally well to binary search trees. In this case, however, the nature of recurrence (21)
suggests that we use EGFs of cumulative values (or equivalently OGFs of expected values).
For a VF v[t], we let v_n be its expected value for trees of size n, and we define v(z) to be
the OGF Σ_{n≥0} v_n z^n.
Theorem 12. The sum and subtree product valuation functions are admissible for the
class BST of binary search trees:
    v[t] = u[t] + w[t]       =⇒   v(z) = u(z) + w(z);
    v[t] = u[t_≤] · w[t_>]   =⇒   v(z) = ∫_0^z u(t) w(t) dt,
where t_≤ and t_> denote the left and right subtrees of t.
The subtree product VF typically results in an integral equation over the OGFs, which
by differentiation can be put into the form of a differential equation. This differs from the
equations that resulted from Theorems 1 and 3, which we used in the last section for binary
and plane trees.
A good illustration of these techniques is to compute the expected number of probes C n
per successful search on a random binary search tree of size n. We assume that each of
the n elements is equally likely to be the object of the search. It is easy to see that C n
is equal to the expected internal path length pn , divided by n, plus 1, so it suffices to
compute pn . The recursive definition of the corresponding VF p[t] is
    p[t] = |t| − 1 + δ_{|t|=0} + p[t_≤] + p[t_>]
         = |t| − 1 + δ_{|t|=0} + p[t_≤] · I[t_>] + I[t_≤] · p[t_>],               (23a)
where I[t] ≡ 1 is the unit VF, whose OGF is I(z) = Σ_{n≥0} z^n = 1/(1 − z). The size VF
S[t] = |t| has OGF Σ_{n≥0} n z^n = z/(1 − z)². Theorem 12 translates (23a) into
    p(z) = z²/(1 − z)² + 2 ∫_0^z p(t)/(1 − t) dt.                                 (23b)
Differentiating (23b), we get a linear first-order differential equation
    p′(z) − 2p(z)/(1 − z) − 2z/(1 − z)³ = 0,                                      (23c)
which can be solved using the variation-of-parameter method (1.20) to get
    p(z) = −2 (log(1 − z) + z)/(1 − z)² = 2H′(z) − 2(1 + z)/(1 − z)²,
where H(z) = −log(1 − z)/(1 − z) is the OGF of the Harmonic numbers. The following
theorem results by extracting p_n = [z^n] p(z):
Theorem 13. The expected internal path length of a random binary search tree with n
internal nodes is
    p_n = 2(n + 1)H_n − 4n ∼ 2n log n + (2γ − 4)n + O(log n).
Theorem 13 shows that the average search time in a random binary search tree is
about 39 percent longer than in a perfectly balanced binary search tree.
There is also a short ad hoc derivation of p_n: In a random binary search tree, s_i
is an ancestor of s_j (for i ≠ j) when s_i is the first element of {s_{min{i,j}}, s_{min{i,j}+1}, ..., s_{max{i,j}}}
inserted into the tree, which happens with probability 1/(|i − j| + 1). Thus we have
p_n = Σ_{1≤i,j≤n, i≠j} 1/(|i − j| + 1), which readily yields the desired formula.
The expected internal path length pn has direct application to other statistics of
interest. For example, pn is the expected number of comparisons used to sort a sequence
of n elements by building a binary search tree and then performing an inorder traversal.
The expected number Un of probes per unsuccessful search (which is also the average
number of probes per insertion, since insertions are preceded by an unsuccessful search) is
the average external path length EP_n, divided by n + 1. The well-known correspondence
    EP_n = IP_n + 2n                                                              (24a)
between the external path length EP_n and the internal path length IP_n of a binary tree
with n internal nodes leads to
    U_n = (n/(n + 1)) (C_n + 1),                                                  (24b)
which readily yields an expression for Un via Theorem 13. We can also derive Un directly
as we did for Cn or via the use of PGFs. Yet another alternative is an ad hoc proof that
combines (24b) with a different linear relation between Un and Cn , namely,
    C_n = 1 + (1/n) Σ_{0≤i≤n−1} U_i.                                              (25)
Eq. (25) follows from the observation that the n possible successful searches on a tree of
size n retrace the steps taken during the n unsuccessful searches that were done when the
elements were originally inserted.
Quicksort. We can apply our valuation function machinery to the analysis of quicksort, as
mentioned in Section 3.5. Let qn be the average number of comparisons used by quicksort
to sort n elements. Quicksort works by choosing a partitioning element s (say, the first
element), dividing the file into the part ≤ s and the part > s, and recursively sorting
each subfile. The process is remarkably similar to the recursive decomposition of binary
search trees. The version of quicksort in [Knuth 1973b] and [Sedgewick 1977b] uses n + 1
comparisons to split the file into two parts. (Only n − 1 comparisons are needed, but the
extra two comparisons help speed up the rest of the algorithm in actual implementations.)
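For reference, here is a plain Python sketch of the quicksort variant just described, with the first element as the partitioning element. The sketch charges one comparison per remaining element (n − 1 per stage); the tuned implementations cited below charge n + 1, which only shifts lower-order terms.

    def quicksort(items):
        # Returns (sorted list, number of comparisons), using the first element as pivot.
        if len(items) <= 1:
            return list(items), 0
        pivot, rest = items[0], items[1:]
        smaller = [x for x in rest if x <= pivot]
        larger = [x for x in rest if x > pivot]
        comparisons = len(rest)               # one comparison per element of the rest
        left, c1 = quicksort(smaller)
        right, c2 = quicksort(larger)
        return left + [pivot] + right, comparisons + c1 + c2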
The initial conditions are q0 = q1 = 0. The corresponding VF q[t] is
    q[t] = |t| + 1 − δ_{|t|=0} − 2δ_{|t|=1} + q[t_≤] + q[t_>]
         = |t| + 1 − δ_{|t|=0} − 2δ_{|t|=1} + q[t_≤] · I[t_>] + I[t_≤] · q[t_>].  (26a)
As before, the OGFs for I[t] ≡ 1 and S[t] = |t| are 1/(1 − z) and z/(1 − z)². By the
translation of Theorem 12, we get
    q(z) = z/(1 − z)² + 1/(1 − z) − 1 − 2z + 2 ∫_0^z q(t)/(1 − t) dt;             (26b)
    q′(z) = 2/(1 − z)³ − 2 + 2q(z)/(1 − z).                                       (26c)
The linear differential equation (26c) can be solved via the variation-of-parameter method
to get
    q(z) = −2 log(1 − z)/(1 − z)² − 2/(3(1 − z)²) + (2/3)(1 − z) = 2H′(z) − 8/(3(1 − z)²) + (2/3)(1 − z).
We can then extract q_n = [z^n] q(z) to get
    q_n = 2(n + 1)(H_{n+1} − 4/3)                                                 (27a)
        ∼ 2n log n + 2n(γ − 4/3) + O(log n).                                      (27b)
In practice, quicksort can be optimized by stopping the recursion when the size of the
subfile is ≤ m, for some parameter m. When the algorithm terminates, a final insertion
sort is done on the file. (We know from Section 3.2 that insertion sort is very efficient when
the number of inversions is small.) The analysis of quicksort can be modified to give the
average running time as a function of n and m. The optimum m can then be determined,
as a function of n. This is done in [Knuth 1973b] and [Sedgewick 1977], where it is shown
that m = 9 is optimum in typical implementations once n gets large enough. The average
number of comparisons can be derived using the truncated VF
    q_m[t] = δ_{|t|>m}(|t| + 1) + q_m[t_≤] · I[t_>] + I[t_≤] · q_m[t_>]           (28)
(cf. (26a)). The truncated unit VF I_m[t] = δ_{|t|>m} and the truncated size VF S_m[t] =
δ_{|t|>m}|t| have the OGFs Σ_{n>m} z^n = z^{m+1}/(1 − z) and Σ_{n>m} n z^n = ((m + 1)z^{m+1} −
m z^{m+2})/(1 − z)², respectively. The rest of the derivation proceeds as before (and should
be done with a symbolic algebra system); the result (cf. (27b)) is
    q_{m,n} = 2(n + 1)(H_{n+1} − H_{m+2} + 1/2).                                  (29)
Height of Binary Search Trees. The analysis of the height of binary search trees
involves interesting equations over generating functions. By analogy with (12), let G_n^{[h]}
denote the probability that a random binary search tree of size n has height ≤ h. The
corresponding VF G^{[h]}[t] = δ_{height t ≤ h} is of the form
    G^{[h+1]}[t] = δ_{|t|=0} + G^{[h]}[t_≤] · G^{[h]}[t_>].                       (30a)
Theorem 12 translates this into
    G^{[h+1]}(z) = 1 + ∫_0^z (G^{[h]}(t))² dt,                                    (30b)
where G^{[0]}(z) = 1 and G(z) = G^{[∞]}(z) = 1/(1 − z). The sequence {G^{[h]}(z)}_{h≥0} forms a
sequence of Picard approximants to G(z). The OGF for the expected height is
    H(z) = Σ_{h≥0} (G(z) − G^{[h]}(z)).                                           (31)
It is natural to conjecture that H(z) has the singular expansion
    H(z) ∼ (c/(1 − z)) log(1/(1 − z)),                                            (32)
as z → 1, for some constant c, but no one has succeeded so far in establishing it directly.
Devroye [1986a] has determined the asymptotic form of H_n using the theory of branching
processes:
Theorem 14 [Devroye 1986a]. The expected height Hn of a binary search tree of size n is
∼ c log n, where c = 4.311070... is the root ≥ 2 of the equation (2e/c)^c = e.
Theorems 13 and 14 point out clearly that a random binary search tree is fairly
balanced, in contrast to the random binary trees B we studied in Section 4.1. The expected
height and path lengths of binary search trees are O(log n) and O(n log n), whereas by
Theorem 2 the corresponding quantities for binary trees are O(√n) and O(n√n).
Interesting problems in average-case analysis also arise in connection with balanced
search trees, but interest is usually focused on storage space rather than running time.
For example, a fringe analysis is used in [Yao 1978] and [Brown 1979] to derive upper and
lower bounds on the expected storage utilization and number of balanced nodes in random
2-3 trees and B-trees. These techniques can be extended to get better bounds, but the
computations become prohibitive.
Multidimensional Search Trees. The binary search tree structure can be generalized
in various ways to two dimensions. The most obvious generalization, called quad trees, uses
internal nodes of degree 4. The quad tree for a sequence S = s1 , s2 , . . . , sn of n inserted
elements is defined by
    Q(S) = ⟨s_1, Q(S_{>,>}), Q(S_{≤,>}), Q(S_{≤,≤}), Q(S_{>,≤})⟩,   if |S| ≥ 1;
    Q(S) = ⟨ ⟩,                                                      if |S| = 0.  (33)
Here each element s in S is a two-dimensional number, and the four quadrants determined
by s are denoted S>,> , S≤,> , S≤,≤ , and S>,≤ . Quad trees support general range searching,
and in particular partially specified queries of the form “Find all elements s = (s x , sy ) with
sx = c.” The search proceeds recursively to all subtrees whose range overlaps the query
range.
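The following Python sketch illustrates quad tree insertion and a partially specified query with s_x fixed; the quadrant encoding and the tie-breaking rule (coordinates ≤ the node's go to the "≤" side) are our own conventions, chosen to mirror (33).

    class QuadNode:
        def __init__(self, point):
            self.point = point                    # (x, y)
            self.child = {}                       # keyed by (x_greater, y_greater)

    def insert(root, p):
        if root is None:
            return QuadNode(p)
        key = (p[0] > root.point[0], p[1] > root.point[1])
        root.child[key] = insert(root.child.get(key), p)
        return root

    def partial_match(root, x, out):
        # "Find all elements with s_x = x": recurse into the two quadrants whose
        # x-range can still contain x; the other two are pruned away.
        if root is None:
            return
        if root.point[0] == x:
            out.append(root.point)
        x_greater = x > root.point[0]
        for y_greater in (False, True):
            partial_match(root.child.get((x_greater, y_greater)), x, out)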
Theorem 15 [Flajolet et al 1989]. The expected number of comparisons C_n for a partially
specified query in a quad tree of size n is b n^{(√17−3)/2} + O(1), where b is a positive real
number.
Proof Sketch. The splitting probabilities for quad trees are not in as simple a form as
in (22), but they can be determined readily. By use of the appropriate VF c[t], we get
    c_n = 1 + (4/(n(n + 1))) Σ_{0≤ℓ≤n−1} Σ_{0≤k≤ℓ} c_k.                           (34a)
In terms of the OGF d(z) = zc(z), this becomes a second-order differential equation
    d″(z) − (4/(z(1 − z)²)) d(z) = 2/(1 − z)³.                                    (34b)
It is not clear how to solve explicitly for d(z), but we can get asymptotic estimates for d n
based upon the fact that d(z) ∼ a(1 − z)^{−α}, as z → 1, for some positive real a and α.
We cannot determine a in closed form in general for this type of problem, but α can be
determined by substituting a(1 − z)^{−α} into (34b) to get the "indicial equation"
    α(α + 1) − 4 = 0,                                                             (35)
whose positive solution is α = (√17 − 1)/2. The transfer lemma (Theorem 2.3) gives us
our final result.
Quad trees can be generalized to k dimensions, k ≥ 2, but the degrees of the nodes
become 2^k, which is too large to be practical. A better alternative, called k–d trees, is a
binary search tree in which the splitting at each node on level i, i ≥ 0, is based upon
ordinate (i mod k) + 1 of the element stored there.
Theorem 16 [Flajolet and Puech 1986]. The expected number of elementary comparisons
needed for a partially specified query in a k–d tree of size n, in which s of the
k fields are specified, is ∼ a n^{1−s/k+ϑ(s/k)}, where ϑ(u) is the root ϑ ∈ [0, 1] of the equation
(ϑ + 3 − u)^u (ϑ + 2 − u)^{1−u} − 2 = 0.
Proof Sketch. The proof proceeds by first developing a system of integral equations for
the OGFs of expected costs using the appropriate VFs and applying Theorem 12. This
reduces to a differential system of order 2k − s. It cannot be solved explicitly in terms of
standard transcendental functions, but a singularity analysis can be done to get the result,
based upon a generalization of the approach for quad trees.
Data structures for multidimensional search and applications in computational geometry are given in [Yao 1989].
Heap-Ordered Trees. We conclude this section by considering heap-ordered trees, in
which the value of the root node of each subtree is ≤ the values of the other elements in
the subtree. We discussed the classical array representation of a perfectly balanced heap in
connection with the heapsort algorithm in Section 3.5. Heap-ordered trees provide efficient
implementations of priority queues, which support the operations insert, find min, and
delete min. Additional operations sometimes include merge and decrease key. Pagodas
[Françon, Viennot, Vuillemin 1978] are a direct implementation of heap-ordered trees that
also support the merge operation.
For the sequence S of n elements s1 , s2 , . . . , sn , we define HOT (S) to be the (canonical) heap-ordered tree formed by S. It has the recursive definition
    HOT(S) = ⟨HOT(S_left), s_{min(S)}, HOT(S_right)⟩,   if |S| ≥ 1;
    HOT(S) = ⟨ ⟩,                                        if |S| = 0,              (36)
where min(S) is the index of the rightmost smallest element in S, Sleft is the initial subsequence s1 , . . . , smin(S)−1 , and Sright is the final subsequence smin(S)+1 , . . . , sn . We
assume in our probability model that S is a random permutation of n elements. Analysis of parameters of heap-ordered trees and pagodas is similar to the analysis of binary
search trees, because of the following equivalence principle due to W. Burge [Burge 1972],
[Vuillemin 1980]:
Theorem 17. For each pair of inverse permutations σ and σ^{−1}, we have
    BST(σ) ≡_{shape} HOT(σ^{−1}),
where t ≡_{shape} u means that the unlabeled trees associated with trees t and u are identical.
For purposes of analysis, any parameter of permutations defined inductively over
the associated heap-ordered tree can thus be analyzed using the admissibility rules of
Theorem 12 for binary search trees. Heap-ordered trees in the form of cartesian trees can
also be used to handle a variety of 2-dimensional search problems [Vuillemin 1980].
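Theorem 17 is easy to check by brute force for small sizes; the Python sketch below (with our own shape encoding as nested pairs) compares the shapes of BST(σ) and HOT(σ^{−1}) over all permutations of up to six elements.

    from itertools import permutations

    def bst_shape(seq):
        # Shape of BST(seq) as nested pairs (left, right); None denotes an external node.
        if not seq:
            return None
        first, rest = seq[0], seq[1:]
        return (bst_shape([x for x in rest if x <= first]),
                bst_shape([x for x in rest if x > first]))

    def hot_shape(seq):
        # Shape of HOT(seq): the root is the rightmost smallest element.
        if not seq:
            return None
        m = max(i for i, x in enumerate(seq) if x == min(seq))
        return (hot_shape(seq[:m]), hot_shape(seq[m + 1:]))

    def inverse(perm):
        inv = [0] * len(perm)
        for pos, val in enumerate(perm, start=1):
            inv[val - 1] = pos
        return inv

    for n in range(1, 7):
        for sigma in permutations(range(1, n + 1)):
            assert bst_shape(list(sigma)) == hot_shape(inverse(list(sigma)))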
4.3. Radix-Exchange Tries
Radix-exchange tries are binary search trees in which the elements are stored in the external
nodes, and navigation through the trie at level i is based upon the ith bit of the search
argument. Bit 0 means “go left,” and bit 1 means “go right.” We assume for simplicity
that each element is a real number in [ 0, 1] of infinite precision. The trie T R(S) for a set S
of elements is defined recursively by
    TR(S) = ⟨TR(S_0), ◦, TR(S_1)⟩,   if |S| > 1;
    TR(S) = ⟨ ⟩,                     if |S| = 1;                                  (37)
    TR(S) = ⟨∅⟩,                     if |S| = 0,
where S0 and S1 are defined as follows: If we take the elements in S that have 0 as their
first bit and then throw away that bit, we get S0 , the elements in the left subtrie. The
set S1 of elements in the right subtrie is defined similarly for the elements starting with
a 1 bit. The elements are stored in the external nodes of the trie. When S has a single
element s, the trie consists of the external node with value s; an empty trie is represented
by the null external node ∅. The size of the trie is the number of external nodes.
The trie T R(S) does not depend upon the order in which the elements in S are
inserted; this is quite different from the case of binary search trees, where order can make
a big difference upon the shape of the tree. In tries, the shape of the trie is based upon
the distribution of the elements’ values.
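A small Python sketch of trie insertion and search follows. It is illustrative only: keys are taken to be reals in [0, 1), the bit extraction is ours, and the search cost returned is the number of bit inspections, the measure used in the analysis below.

    class Leaf:
        def __init__(self, key):
            self.key = key

    class Internal:
        def __init__(self):
            self.left = None      # bit 0
            self.right = None     # bit 1

    def bit(key, i):
        # i-th bit (i = 0, 1, 2, ...) of a key viewed as an infinite binary fraction in [0, 1).
        return int(key * (1 << (i + 1))) & 1

    def trie_insert(node, key, depth=0):
        # Elements live only in external nodes; inserting into an occupied leaf splits it.
        if node is None:
            return Leaf(key)
        if isinstance(node, Leaf):
            if node.key == key:
                return node
            node = trie_insert(Internal(), node.key, depth)
        if bit(key, depth) == 0:
            node.left = trie_insert(node.left, key, depth + 1)
        else:
            node.right = trie_insert(node.right, key, depth + 1)
        return node

    def trie_search(node, key, depth=0):
        inspections = 0
        while isinstance(node, Internal):
            inspections += 1
            node = node.left if bit(key, depth) == 0 else node.right
            depth += 1
        return inspections, isinstance(node, Leaf) and node.key == key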
We use the probability model that the values of the elements are independent and
uniform in the real interval [ 0, 1]. We define the class of all tries to be T R. The probability
that a trie of n elements has a left subtrie of size k and a right subtrie of size n − k is the
Bernoulli probability
    p_{n,k} = (1/2^n) \binom{n}{k}.                                               (38)
This suggests that we use EGFs of expected values to represent trie statistics. We denote
the expected value of VF v[t] among trees of size n by v_n and the EGF Σ_{n≥0} v_n z^n/n!
by v(z). The admissibility theorem takes yet another form:
Theorem 18. The sum and subtree product valuation functions are admissible for the
class T R of tries:
    v[t] = u[t] + w[t]       =⇒   v(z) = u(z) + w(z);
    v[t] = u[t_0] · w[t_1]   =⇒   v(z) = u(z/2) w(z/2),
where t_0 and t_1 represent the left and right subtries of t.
A framework for the analysis of tries via valuation functions is given in [Flajolet,
Régnier, and Sotteau 1985]. In typical cases, the EGFs for the VFs that we encounter are
in the form of difference equations that we can iterate.
The expected number of bit inspections per successful search in a trie with n external
nodes is equal to the expected external path length pn , divided by n. The following theorem
shows that the search times are logarithmic, on the average, when no balancing is done.
Theorem 19 [Knuth 1973b]. The average external path length p_n of a random trie of
size n is p_n = n log_2 n + (γ/log 2 + 1/2 + R(log_2 n)) n + O(√n), where R(u) is a periodic
function of small amplitude with period 1 and mean value 0.
Proof. The VF corresponding to external path length is
    p[t] = |t| − δ_{|t|=1} + p[t_0] · I[t_1] + I[t_0] · p[t_1].                   (39a)
The unit VF I[t] ≡ 1 has EGF Σ_{n≥0} z^n/n! = e^z, and the size VF S[t] = |t| has EGF
Σ_{n≥0} n z^n/n! = z e^z. By Theorem 18, (39a) translates to
    p(z) = z e^z − z + 2 e^{z/2} p(z/2).                                          (39b)
By iterating the recurrence and then extracting coefficients, we get
    p(z) = z Σ_{k≥0} (e^z − e^{z(1−1/2^k)});                                      (39c)
    p_n = n Σ_{k≥0} (1 − (1 − 1/2^k)^{n−1}).                                      (39d)
It is easy to verify the natural approximation p_n ∼ n P(n), where P(x) is the harmonic
sum
    P(x) = Σ_{k≥0} (1 − e^{−x/2^k}).                                              (40)
We have already derived the asymptotic expansion of P (x), as x → ∞, by use of Mellin
transforms in (2.28). The result follows immediately.
Theorem 19 generalizes to the biased case, where the bits of each element are independently 0 with probability p and 1 with probability q = 1 − p. The average external path
length is asymptotically (n log n)/H, where H is the entropy function H = −p log p−q log q.
In the case of unsuccessful searches, a similar approach shows that the average number of
bit inspections is ∼ (log n)/H. The variance is O(1) with fluctuation in the unbiased case
[Jacquet and Régnier 1986], [Kirschenhofer and Prodinger 1986]. Variance estimates in this
range of problems involve interesting connections with modular functions [Kirschenhofer
and Prodinger 1988]. The variance increases to ∼ c log n+O(1), for some constant c, in the
biased case [Jacquet and Régnier 1986]. The limiting distributions are studied in [Jacquet
and Régnier 1986] and [Pittel 1986]. The height of a trie has mean ∼ 2 log 2 n and variance O(1) [Flajolet 1983]. Limiting distributions of the height are studied in [Flajolet 1983]
and [Pittel 1986].
Another important statistic on tries, besides search time, is storage space. Unlike
binary search trees, the amount of auxiliary space used by tries, measured in terms of
the number of internal nodes, is variable. The following theorem shows that the average
number of internal nodes in a trie is about 44 percent more than the number of elements
stored in the trie.
Theorem 20 [Knuth 1973b]. The expected number i_n of internal nodes in a random
unbiased trie with n external nodes is (n/log 2)(1 + Q(log_2 n)) + O(√n), where Q(u) is a
periodic function of small amplitude with period 1 and mean value 0.
Proof. The VF corresponding to the number of internal nodes is
    i[t] = δ_{|t|>1} + i[t_0] · I[t_1] + I[t_0] · i[t_1].                         (41a)
Theorem 18 translates this to
    i(z) = e^z − 1 − z + 2 e^{z/2} i(z/2).                                        (41b)
By iterating the recurrence and then extracting coefficients, we get
    i(z) = Σ_{k≥0} 2^k (e^z − (1 + z/2^k) e^{(1−1/2^k)z});                        (41c)
    i_n = Σ_{k≥0} 2^k (1 − (1 − 1/2^k)^n − (n/2^k)(1 − 1/2^k)^{n−1}).             (41d)
We can approximate i_n to within O(√n) in a natural way by S(n), where
    S(x) = Σ_{k≥0} 2^k (1 − e^{−x/2^k}(1 + x/2^k)).                               (42a)
Equation (42a) is a harmonic sum, and its Mellin transform S*(s) can be computed readily:
    S*(s) = (s + 1)Γ(s)/(1 − 2^{s+1}),                                            (42b)
where the fundamental strip of S*(s) is ⟨−2; −1⟩. The result follows by computing the
residues in the right half-plane ℜ(s) ≥ −1. There is a simple pole at s = 0 due to Γ(s)
and poles at −1 + 2kπi/log 2 due to the denominator of (42b).
In the biased case, the expected number of internal nodes is ≈ n/H. The variance
for both the unbiased and biased case is O(n), which includes a fluctuating term [Jacquet
and Régnier 1987]; the distribution of the number of internal nodes is normal [Jacquet and
Régnier 1986].
Theorem 20, as do a number of the results about tries, generalizes to the case in which
each external node in the trie represents a page of secondary storage capable of storing
b ≥ 1 elements. Such tries are generally called b-tries. The analysis uses truncated VFs,
as in the second quicksort example in Section 4.2, to stop the recursion when the subtrie
has ≤ b elements. The result applies equally well to the extendible hashing scheme of [Fagin
et al 1979], where the trie is built upon the hashed values of the elements, rather than upon
the elements themselves. Extendible hashing will be considered further in Section 5.1.
Theorem 21 [Knuth 1973b]. The expected number of pages of capacity b needed to store a
file of n records using b-tries or extendible hashing is (n/(b log 2))(1 + R(log_2 n)) + O(√n),
where R(u) is periodic with period 1 and mean value 0.
Patricia Tries. Every external node in a trie of size ≥ 2 has a sibling, but that is not
generally the case for internal nodes. A more compact form of tries, called Patricia tries,
can be obtained by collapsing the internal nodes with no sibling. Statistics on Patricia
tries are analyzed in [Knuth 1973b] and [Kirschenhofer and Prodinger 1986].
Radix-Exchange Sorting. It is no accident that radix-exchange tries and the radixexchange sorting algorithm have a common name. Radix-exchange sorting is related to
tries in a way very similar to how quicksort is related to binary search trees, except that the
relationship is even closer. All the average-case analyses in this section carry over to the
analysis of radix-exchange sorting: The distribution of the number of partitioning stages
used by radix-exchange sorting to sort n numbers is the same as the distribution of the
number of internal nodes in a trie, and the distribution of the number of bit inspections
done by radix-exchange sorting is the same as the distribution of the external path length of
a trie.
4.4. Digital Search Trees
Digital search trees are like tries except that the elements are stored in the internal nodes,
or equivalently they are like binary search trees except that the branching at level i is
determined by the (i + 1)st bit rather than by a full element-to-element comparison. The
digital search tree DST (S) for a sequence S of inserted elements is defined recursively by
    DST(S) = ⟨DST(S_0), s_1, DST(S_1)⟩,   if |S| ≥ 1;
    DST(S) = ⟨ ⟩,                          if |S| = 0,                            (43)
where s1 is the first element of S, and the sequence S0 of elements in the left subtree is
formed by taking the elements in S − {s1 } that have 0 as the first bit and then throwing
away the first bit. The sequence S1 for the right subtree is defined symmetrically for the
elements with 1 as their first bit. Like binary search trees, the size of the tree is its number
of internal nodes, and its shape is sensitive to the order in which the elements are inserted.
The empty digital search tree is denoted by the external node. Our probability model
is the same as for tries, except that the probability that a tree of n elements has a left
subtree of size k and a right subtree of size n − k − 1 is \binom{n−1}{k}/2^{n−1}. The class of all digital
search trees is denoted DST.
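By way of illustration (with the same bit convention as in the trie sketch of Section 4.3), a digital search tree can be built as follows; this is a sketch under our own representation assumptions, not part of the original analysis.

    class DSTNode:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def bit(key, i):
        # i-th bit of a key viewed as an infinite binary fraction in [0, 1).
        return int(key * (1 << (i + 1))) & 1

    def dst_insert(root, key, depth=0):
        # Definition (43): the first element occupies the node; later elements branch
        # on successive bits but are themselves stored in the internal nodes they reach.
        if root is None:
            return DSTNode(key)
        if bit(key, depth) == 0:
            root.left = dst_insert(root.left, key, depth + 1)
        else:
            root.right = dst_insert(root.right, key, depth + 1)
        return root

    def dst_search(root, key, depth=0):
        # A full element comparison is made at every node visited.
        probes = 0
        while root is not None:
            probes += 1
            if root.key == key:
                return probes, True
            root = root.left if bit(key, depth) == 0 else root.right
            depth += 1
        return probes, False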
The nature of the decomposition in (43) suggests that we use EGFs of expectations
in our analysis, as in the last section, but the admissibility theorem takes a different form:
Theorem 22. The sum and subtree product valuation functions are admissible for the
class DST of digital search trees:
    v[t] = u[t] + w[t]       =⇒   v(z) = u(z) + w(z);
    v[t] = u[t_0] · w[t_1]   =⇒   v(z) = ∫_0^z u(t/2) w(t/2) dt,
where t_0 and t_1 denote the left and right subtrees of t.
where t0 and t1 denote the left and right subtrees of t.
Tries are preferred in practice over digital search trees, since the element comparison
done at each node in a digital search tree takes longer than the bit comparison done in a trie,
and the elements in a trie are kept in sorted order. We do not have space in this manuscript
to include the relevant analysis; instead we refer the reader to [Knuth 1973b], [Konheim and
Newman 1973], [Flajolet and Sedgewick 1986], and [Kirschenhofer and Prodinger 1986].
The key difference between the analysis of digital search trees and the analysis of tries
in the last section is that the equations over the EGFs that result from Theorem 22 are
typically difference-differential equations, to which the Mellin techniques that worked so
well for tries cannot be applied directly. Instead the asymptotics come by an application
due to S. O. Rice of the following classical formula from the calculus of finite differences;
the proof of the formula is an easy application of Cauchy’s formula.
Theorem 23. Let C be a closed curve encircling the points 0, 1, . . . , n, and let f (z) be
analytic inside C. Then we have
    Σ_k \binom{n}{k} (−1)^k f(k) = (1/(2πi)) ∫_C B(n + 1, −z) f(z) dz,
where B(x, y) = Γ(x)Γ(y)/Γ(x + y) is the classical Beta function.
Theorem 24 [Knuth 1973b], [Konheim and Newman 1973]. The expected internal path
length of a random digital search tree is
    (n + 1) log_2 n + ((γ − 1)/log 2 + 1/2 − α + P(log_2 n)) n + O(√n),
where γ = 0.57721... is Euler's constant, α = 1 + 1/3 + 1/7 + 1/15 + ··· = 1.606695..., and
P (u) is a periodic function with period 1 and very small amplitude.
5. Hashing and Address Computation Techniques
In this section we consider several well-known hashing algorithms, including separate chaining, coalesced hashing, uniform probing, double hashing, secondary clustering, and linear
probing, and we also discuss the related methods of interpolation search and distribution
sorting. Our machine-independent model of search performance for hashing is the number
of probes made into the hash table during the search. We are primarily interested in the
expected number of probes per search, but in some cases we also consider the distribution of the number of probes and the expected maximum number of probes among all the
searches in the table.
With hashing, searches can be performed in constant time, on the average, regardless
of the number of elements in the hash table. All hashing algorithms use a pre-defined hash
function
hash : {all possible elements} → {1, 2, . . . , m}
(1)
that assigns a hash address to each of the n elements. Hashing algorithms differ from one
another in how they resolve the collision that results when an element’s hash address is
already occupied. The two main techniques for resolving collisions are chaining (in which
links are used to explicitly link together elements with the same hash address) and open
addressing (where the search path through the table is defined implicitly). We study these
two classes of hashing algorithms in the next two sections.
We use the Bernoulli probability model for our average-case analysis: We assume
that all mn possible sequences of n hash addresses (or hash sequences) are equally likely.
Simulation studies confirm that this is a reasonable assumption for well-designed hash
functions. Further discussion of hash functions, including universal hash functions, appears
in [Mehlhorn and Tsakalidis 1989]. We assume that an unsuccessful search can begin at
any of the m slots in the hash table with equal probability, and the object of a successful
search is equally likely to be any of the n elements in the table. Each insertion is typically
preceded by an unsuccessful search to verify that the element is not already in the table,
and so for simplicity we shall identify the insertion time with the time for the unsuccessful
search. We denote the expected number of probes per unsuccessful search (or insertion)
in a hash table with n elements by Un , and the expected number of probes per successful
search by Cn .
5.1. Bucket Algorithms and Hashing by Chaining
Separate Chaining. One of the most obvious techniques for resolving collisions is to link
together all the elements with the same hash address into a list or chain. The generic name
for this technique is separate chaining. The first variant we shall study stores the chains in
auxiliary memory; the ith slot in the hash table contains a link to the start of the chain of
elements with hash address i. This particular variant is typically called indirect chaining,
because the hash table stores only pointers, not the elements themselves.
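The sketch below illustrates chaining in Python, with the chains stored as lists (so it is closest in spirit to direct chaining); the probe-counting comments indicate where indirect chaining, as analyzed below, charges one extra probe for following the initial link.

    class ChainedHashTable:
        def __init__(self, m, hash_fn=hash):
            self.m = m
            self.slots = [[] for _ in range(m)]     # slot i holds the chain for address i
            self.hash_fn = hash_fn

        def _address(self, x):
            return self.hash_fn(x) % self.m

        def search(self, x):
            # One probe per chain element examined; indirect chaining would add one
            # probe for the link from the table slot to the start of the chain.
            probes = 0
            for y in self.slots[self._address(x)]:
                probes += 1
                if y == x:
                    return probes, True
            return probes, False

        def insert(self, x):
            # Insertions are preceded by an unsuccessful search, as in the text.
            probes, found = self.search(x)
            if not found:
                self.slots[self._address(x)].append(x)
            return probes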
Search time clearly depends upon the number of elements in the chain searched. For
each 1 ≤ i ≤ m, we refer to the set of elements with hash address i as the ith bucket. We
define {}^m_n X_i (or simply X_i) in the Bernoulli model to be the RV describing the number of
elements in bucket i. This model is sometimes called the urn model, and the distribution
of Xi is called the occupancy distribution. Distributions of this sort appear in the analyses
of each of the chaining algorithms we consider in this section, and they serve to unify our
analyses. Urn models were discussed in Section 1.3.
An unsuccessful search on a chain of length k makes one probe per element, plus one
probe to find the link to the beginning of the chain. This allows us to express the expected
number of probes per unsuccessful search as
    U_n = E{ (1/m) Σ_{1≤i≤m} (1 + X_i) }.                                         (2a)
By symmetry, the expected values E{Xi } are the same for each 1 ≤ i ≤ m, so we can
restrict our attention to one particular bucket, say, bucket 1. (For simplicity, we shall
abbreviate X1 by X.) Eq. (2a) simplifies to
Un = 1 + E{X}.
(2b)
For successful searches, each chain of length k contributes 2 + 3 + ··· + (k + 1) = 3k/2 + k²/2
probes. The expected number of probes per successful search is thus
    C_n = E{ (1/n) Σ_{1≤i≤m} ((3/2)X_i + (1/2)X_i²) } = (m/n)((3/2)E{X} + (1/2)E{X²}).    (3)
We can compute (2b) and (3) in a unified way via the PGF
    X(u) = Σ_{k≥0} Pr{X = k} u^k.                                                 (4)
Eqs. (2b) and (3) for U_n and C_n are expressible simply in terms of derivatives of X(u):
    U_n = 1 + X′(1);     C_n = (m/n)(2X′(1) + (1/2)X″(1)).                        (5)
We shall determine X(u) by extending the admissible constructions we developed in
Section 1.3 for the urn model. This approach will be especially useful for our analyses
later in this section of the maximum bucket occupancy, extendible hashing, and coalesced
hashing. We consider the hash table as the m-ary partitional product of the individual
buckets
    H = B ∗ B ∗ ··· ∗ B.                                                          (6a)
The new twist here is that some of the elements in the table are "marked" according
to some rule. (We shall explain shortly how this relates to separate chaining.) We let
H_{k,n,m} be the number of the m^n hash sequences for which k of the elements are marked, and
we denote its EGF by
    Ĥ(u, z) = Σ_{k,n≥0} H_{k,n,m} u^k z^n/n!.                                     (6b)
By analogy with Theorem 1.4 for EGFs, the following theorem shows that the partitional
product in (6a) translates into a product of EGFs; the proof is similar to that of Theorem 1.4 and is omitted for brevity.
Theorem 1. If the number of marked elements in bucket i is a function of only the
number of elements in bucket i, then the EGF Ĥ(u, z) = Σ_{k,n≥0} H_{k,n,m} u^k z^n/n! can be
decomposed into
    Ĥ(u, z) = B̂_1(u, z) · B̂_2(u, z) · ... · B̂_m(u, z),                          (6c)
where
    B̂_i(u, z) = Σ_{t≥0} u^{f_i(t)} z^t/t!,                                       (6d)
and f_i(t) is the number of marked elements in bucket i when there are t elements in bucket i.
We are interested in the number of elements in bucket 1, so we adopt the marking rule
that all elements in bucket 1 are marked, and the other elements are left unmarked. In
terms of Theorem 1, we have f_1(t) = t and B̂_1(u, z) = Σ_{t≥0} u^t z^t/t! = e^{uz} for bucket 1, and
f_i(t) = 0 and B̂_i(u, z) = Σ_{t≥0} z^t/t! = e^z for the other buckets 2 ≤ i ≤ m. Substituting
this into Theorem 1, we have
    Ĥ(u, z) = e^{(m−1+u)z}.                                                       (7a)
We can obtain the EGF of X(u) by dividing each H_{k,n,m} term in (6b) by m^n, or equivalently
by replacing z by z/m. Combining this with (7a) gives
    Ĥ(u, z/m) = Σ_{n≥0} X(u) z^n/n! = e^{(m−1+u)z/m}.                             (7b)
Hence, we have
    X(u) = ((m − 1 + u)/m)^n;     X′(1) = n/m;     X″(1) = n(n − 1)/m².           (7c)
We get the following theorem by substituting the expressions for X′(1) and X″(1) into (5);
the term α = n/m is called the load factor.
Theorem 2. The expected number of probes per unsuccessful and successful search for
indirect chaining, when there are n elements in a hash table of m slots, is
    U_n = 1 + n/m = 1 + α;     C_n = 2 + (n − 1)/(2m) ∼ 2 + α/2.                  (8)
We can also derive (8) in an ad hoc way by decomposing X into the sum of n independent 0–1 RVs
    X = x_1 + x_2 + ··· + x_n,                                                    (9)
where x_i = 1 if the ith element goes into bucket 1, and x_i = 0 otherwise. Each x_i has the
same distribution as {}^m_1 X_1, and its PGF is clearly (m − 1 + u)/m. Since the PGF of a sum
of independent RVs is equal to the product of the PGFs, we get Eq. (7c) for X(u). We
can also derive the formula for Cn from the one for Un by noting that Eq. (4.25) for binary
search trees holds for separate chaining as well.
Examples. 1. Direct chaining. A more efficient version of separate chaining, called
direct chaining, stores the first element of each chain directly in the hash table itself. This
shortens each successful search time by one probe, and the expected unsuccessful search
time is reduced by Pr{X > 0} = 1 − X(0) = 1 − (1 − 1/m)n ∼ 1 − e−α probes. We get
    U_n = (1 − 1/m)^n + n/m ∼ e^{−α} + α;     C_n = 1 + (n − 1)/(2m) ∼ 1 + α/2.   (10)
2. Direct chaining with relocation. The above variant is wasteful of hash table slots,
because auxiliary space is used to store colliders even though there might be empty slots
available in the hash table. (And for that reason, the load factor α = n/m defined above is
not a true indication of space usage.) The method of direct chaining with relocation stores
all elements directly in the hash table. A special situation arises when an element x with
hash address hash(x) collides during insertion with another element x′. If x′ is the first
element in its chain, then x is inserted into an empty slot in the table and linked to the end
of the chain. Otherwise, x′ is not at the start of the chain, so x′ is relocated to an empty
slot in order to make room for x; the link to x′ from its predecessor in the chain must
be updated. The successful search time is the same as before. Unsuccessful searches can
start in the middle of a chain; each chain of length k > 0 contributes k(k + 1)/2 probes.
A search starting at one of the m − n empty slots takes one probe. This gives us
    U_n = X′(1) + (1/2)X″(1) + Pr{slot hash(x) is empty} = 1 + n(n − 1)/(2m²) ∼ 1 + α²/2.    (11)
The main difficulty with this algorithm is the overhead of moving elements, which can be
expensive for large record elements and might not be allowed if there are pointers to the
elements from outside the hash table. Updating the previous link requires either the use of
bidirectional or circular chains or recomputing the hash address of x′ and following links
until x′ is reached. None of these alternatives is attractive, and we shall soon consider a
better alternative called coalesced hashing, which has nearly the same number of probes
per search, but without the overhead.
Distribution Sorting. Bucketing can also be used to sort efficiently in linear time when
the values of the elements are smoothly distributed. Suppose for simplicity that the n values
are real numbers in the unit interval [0, 1). The distribution sort algorithm works by
breaking up the range of values into m buckets [0, 1/m), [1/m, 2/m), ..., [(m−1)/m, 1); the elements
are partitioned into the buckets based upon their values. Each bucket is sorted using
selection sort (cf. Section 3.5) or some other simple quadratic sorting method. The sorted
buckets are then appended together to get the final sorted output.
Selection sort uses \binom{k}{2} comparisons to sort k elements. The average number of comparisons C_n used by distribution sort is thus
    C_n = E{ Σ_{1≤i≤m} \binom{X_i}{2} } = E{ Σ_{1≤i≤m} X_i(X_i − 1)/2 } = (1/2) Σ_{1≤i≤m} X_i″(1).    (12a)
By (7c), we have Cn = n(n − 1)/(2m) when the values of the elements are independently
and uniformly distributed. The other work done by the algorithm takes O(n + m) time,
so this gives a linear-time sorting algorithm when we choose m = Θ(n). (Note that this
does not contradict the well-known Ω(n log n) lower bound on the average sorting time in
the comparison model, since this is not a comparison-based algorithm.)
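A direct Python rendering of distribution sort follows (illustrative; the default choice m = n and the use of selection sort within buckets follow the description above).

    def selection_sort(a):
        a = list(a)
        for i in range(len(a)):
            j = min(range(i, len(a)), key=a.__getitem__)
            a[i], a[j] = a[j], a[i]
        return a

    def distribution_sort(values, m=None):
        # Values are assumed to lie in [0, 1); bucket i receives the values in [i/m, (i+1)/m).
        n = len(values)
        m = m if m is not None else max(n, 1)
        buckets = [[] for _ in range(m)]
        for v in values:
            buckets[min(int(v * m), m - 1)].append(v)
        out = []
        for b in buckets:
            out.extend(selection_sort(b))     # simple quadratic sort within each bucket
        return out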
The assumption that the values are independently and uniformly distributed is not
always easy to justify, unlike for the case of hashing, because there is no hash function to
scramble the values; the partitioning is based upon the elements’ raw values. Suppose the
elements are independently distributed according to density function f (x). In the following
analysis, suggested by R. M. Karp [Knuth 1973b, 5.2.1–38], [Devroye 1986b], we assume
that ∫_0^1 f(x)² dx < ∞, which assures that f(x) is well-behaved. For each n we choose m so
that n/m → α, for some positive constant α, as n → ∞. We define p_i = ∫_{A_i} f(x) dx to be
the probability that an element falls into the ith bucket A_i = [(i−1)/m, i/m). For general f(x),
Eq. (7c) for X_i(u) becomes
    X_i(u) = ((1 − p_i) + p_i u)^n.                                               (12b)
By (12a) and (12b) we have
    C_n = \binom{n}{2} Σ_{1≤i≤m} p_i² = \binom{n}{2} Σ_{1≤i≤m} (∫_{A_i} f(x) dx)².    (12c)
1≤i≤m
The last summation in (12c) can be bounded by an application of Jensen’s inequality,
treating f (x) as a RV with x uniformly distributed:
    Σ_{1≤i≤m} (∫_{A_i} f(x) dx)² = (1/m²) Σ_{1≤i≤m} (∫_{A_i} f(x) d(mx))²
                                  ≤ (1/m²) Σ_{1≤i≤m} ∫_{A_i} f(x)² d(mx)
                                  = (1/m) ∫_0^1 f(x)² dx.                         (12d)
We can show that the upper bound (12d) is asymptotically tight by computing a corresponding lower bound. We have
    n Σ_{1≤i≤m} (∫_{A_i} f(x) dx)² = (n/m) ∫_0^1 f_n(x)² dx,
where f_n(x) = m p_i, for x ∈ A_i, is the histogram approximation of f(x), which converges
to f (x) almost everywhere. By Fatou’s lemma, we get the lower bound
    lim inf_{n→∞} (n/m) ∫_0^1 f_n(x)² dx = α lim inf_{n→∞} ∫_0^1 f_n(x)² dx ≥ α ∫_0^1 lim inf_{n→∞} f_n(x)² dx = α ∫_0^1 f(x)² dx.
Substituting this approximation and (12d) into (12c), we find that the average number of
comparisons is
    C_n ∼ (αn/2) ∫_0^1 f(x)² dx.                                                  (12e)
The coefficient of the linear term in (12e) is proportional to ∫_0^1 f(x)² dx, which can
be very large. The erratic behavior due to nonuniform f (x) can be alleviated by one level
of recursion, in which the above algorithm is used to sort the individual buckets: Let us
assume that m = n. If the number X_i of elements in bucket i is more than 1, we sort the
bucket by breaking up the range [(i−1)/n, i/n) into X_i subbuckets, and proceed as before. The
surprising fact, which can be shown by techniques similar to those above, is that C_n is
bounded by n/2 in the limit, regardless of f(x) (assuming of course that our assumption
∫_0^1 f(x)² dx < ∞ is satisfied).
Theorem 3 [Devroye 1986b]. The expected number of comparisons C_n done by two-level
bucketing to sort n elements that are independently distributed with density function f(x),
which satisfies ∫_0^1 f(x)² dx < ∞, is
    C_n ∼ (n/2) ∫_0^1 e^{−f(x)} dx ≤ n/2.
The variance and higher moments of the number of probes are also small. If the unit
interval assumption is not valid and the values of the elements are not bounded, we can
redefine the interval to be [xmin , xmax ] and apply the same basic idea given above. The
analysis becomes a little more complicated; details appear in [Devroye 1986b].
For the actual implementation, a hashing scheme other than separate chaining can be
used to store the elements in the table. Sorting with linear probing is discussed at the end
of the section. An application of bucketing to fast sorting on associative secondary storage
devices appears in [Lindstrom and Vitter 1985]. A randomized algorithm that is optimal
for sorting with multiple disks is given in [Vitter and Shriver, 1990].
Interpolation Search. Newton’s method and the secant method are well-known schemes
for determining the leading k bits of a zero of a continuous function f (x) in O(log k) iterations. (By “zero,” we mean a solution x to the equation f (x) = 0.) Starting with an initial
approximation x0 , the methods produce a sequence of refined approximations x1 , x2 , . . .
that converge to a zero x∗ , assuming that f (x) is well-behaved. The discrete analogue is
called interpolation search, in which the n elements are in a sorted array, and the goal is
to find the element x∗ with a particular value c. Variants of this method are discussed in
[Mehlhorn and Tsakalidis 1989].
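For concreteness, here is a short Python sketch of interpolation search on a sorted array; the probe count it returns corresponds to the comparison measure of Theorem 4 below, and the handling of equal endpoint values is our own choice.

    def interpolation_search(a, c):
        # Search the sorted array a for the value c; returns (index or None, probes).
        lo, hi, probes = 0, len(a) - 1, 0
        while lo <= hi and a[lo] <= c <= a[hi]:
            if a[hi] == a[lo]:
                pos = lo                       # all remaining values are equal
            else:
                # Interpolate the probe position from the endpoint values,
                # the discrete analogue of the secant method mentioned above.
                pos = lo + int((c - a[lo]) * (hi - lo) / (a[hi] - a[lo]))
            probes += 1
            if a[pos] == c:
                return pos, probes
            if a[pos] < c:
                lo = pos + 1
            else:
                hi = pos - 1
        return None, probes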
Theorem 4 [Yao and Yao 1976], [Gonnet, Rogers, and George 1980]. The average
number of comparisons per successful search using interpolation search on a sorted array is ∼ log2 log2 n, assuming that the elements’ values are independently and identically
distributed.
This bound is similar to that of the continuous case; we can think of the accuracy k as
being log2 n, the number of bits needed to specify the array position of x∗ . A more detailed
probabilistic analysis connecting interpolation search with Brownian motion appears in
[Louchard 1983].
Proof Sketch. We restrict ourselves to considering the upper bound. Gonnet, Rogers,
and George [1980] show by some probabilistic arguments that the probability that at least
k probes are needed to find x∗ is bounded by
    p_k(t) = ∏_{1≤i≤k} (1 − (1/2) e^{−t 2^{−i}}),                                 (13a)
where t = log(πn/8). Hence, the expected number of probes is bounded by
    F(t) = Σ_{k≥0} p_k(t),                                                        (13b)
which can be expressed in terms of the harmonic sum
    F(t) = (1/Q(t)) Σ_{k≥0} Q(t 2^{−k}),                                          (13c)
where
    Q(t) = ∏_{i≥1} (1 − (1/2) e^{−t 2^{−i}})^{−1}.                                (13d)
The sum in (13c) is a harmonic sum to which Mellin transforms can be applied to get
    F(t) ∼ log_2 t + α + P(log_2 t) + o(1),     as t → ∞,                         (13e)
where α is a constant and P(u) is a periodic function associated with the poles at χ_k =
2kπi/log 2. The log_2 log_2 n bound follows by substituting t = log(πn/8) into (13e).
Maximum Bucket Occupancy. An interesting statistic that lies between average-case
and worst-case search times is the expected number of elements in the largest bucket (also
known as the maximum bucket occupancy). It has special significance in parallel processing
applications in which elements are partitioned randomly into buckets and then the buckets
are processed in parallel, each in linear time; in this case, the expected maximum bucket
occupancy determines the average running time.
We can make use of the product decomposition (6a) and Theorem 1 to count the
number of hash sequences that give a hash table with ≤ b elements in each bucket. We
mark all the elements in a bucket if the bucket has > b elements; otherwise, the elements are
left unmarked. In this terminology, our quantity of interest is simply the number H_{0,n,m} of hash sequences with no marked elements:

H_{0,n,m} = n!\,[u^0 z^n]\,\widehat H(u, z) = n!\,[z^n]\,\widehat H(0, z) = n!\,[z^n] \bigl( \widehat B_1(0,z) \cdot \widehat B_2(0,z) \cdots \widehat B_m(0,z) \bigr),    (14a)
where

\widehat B_i(0, z) = \sum_{0\le n\le b} \frac{z^n}{n!},    (14b)

for 1 ≤ i ≤ m. The sum in (14b) is the truncated exponential, which we denote by e_b(z).
Hence, the number of hash sequences with ≤ b elements per bucket is
H_{0,n,m} = n!\,[z^n] \bigl( e_b(z) \bigr)^m.    (14c)

We use q_n^{[b]} to denote the probability that a random hash sequence puts at most b elements into each bucket. As in (7b), we can convert from the enumeration H_{0,n,m} to the probability q_n^{[b]} by replacing z in (14c) by z/m:

q_n^{[b]} = n!\,[z^n] \Bigl( e_b\Bigl( \frac{z}{m} \Bigr) \Bigr)^m.    (14d)
There is a close relation between the EGF of Bernoulli statistics and the corresponding
Poisson statistic:
Theorem 5. If \widehat g(z) = \sum_{n\ge0} g_n z^n/n! is the EGF for a measure g_n (for example, probability, expectation, moment) in the Bernoulli model, then e^{-\alpha}\,\widehat g(\alpha) is the corresponding measure if the total number of elements X_1 + \cdots + X_m is Poisson with mean α.
Proof. The measure in the Poisson model is the conditional expectation of the measure
in the Bernoulli model, namely,
\sum_{n\ge0} g_n \Pr\{ X_1 + \cdots + X_m = n \} = \sum_{n\ge0} g_n\, \frac{e^{-\alpha} \alpha^n}{n!} = e^{-\alpha}\, \widehat g(\alpha).
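As a hedged numerical illustration of Theorem 5 (not part of the original text), the following sketch compares the Bernoulli-model probability q_n^{[b]} of (14d) with its Poissonized counterpart (e^{-α} e_b(α))^m at α = n/m; all function names are ours.

```python
from math import exp, factorial

def trunc_exp_coeffs(b, m, n):
    """Coefficients of e_b(z/m) = sum_{j<=b} (z/m)^j / j!, up to degree n."""
    return [1.0 / (factorial(j) * m**j) if j <= b else 0.0 for j in range(n + 1)]

def poly_pow_coeff(coeffs, m, n):
    """Coefficient of z^n in (sum_j coeffs[j] z^j)^m, by repeated convolution."""
    result = [1.0] + [0.0] * n
    for _ in range(m):
        new = [0.0] * (n + 1)
        for i, ri in enumerate(result):
            if ri == 0.0:
                continue
            for j in range(0, n + 1 - i):
                new[i + j] += ri * coeffs[j]
        result = new
    return result[n]

def bernoulli_prob(n, m, b):
    """q_n^[b]: probability that no bucket receives more than b of the n elements (m buckets)."""
    return factorial(n) * poly_pow_coeff(trunc_exp_coeffs(b, m, n), m, n)

def poisson_prob(alpha, m, b):
    """(e^{-alpha} e_b(alpha))^m: the same measure when each bucket is Poisson(alpha)."""
    eb = sum(alpha**j / factorial(j) for j in range(b + 1))
    return (exp(-alpha) * eb) ** m

n, m, b = 20, 10, 4
print(bernoulli_prob(n, m, b), poisson_prob(n / m, m, b))   # the two values are close
```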
We shall use Theorem 5 and direct our attention to the Poisson model, where the
number of elements in each bucket is Poisson distributed with mean α, and hence the
total number of elements is a Poisson RV with mean mα. (The analysis of the Bernoulli
case can be handled in much the same way that we shall handle the analysis of extendible
hashing later in this section, so covering the Poisson case will present us with a different
perspective.)
We let M_α denote the maximum number of elements per bucket in the Poisson model, and we use p_α^{[b]} to denote the probability that M_α ≤ b. By Theorem 5, we have

p_\alpha^{[b]} = \bigl( e^{-\alpha} e_b(\alpha) \bigr)^m.
(This can also be derived directly by noting that the m buckets are independent and that
the Poisson probability that a given bucket has ≤ b elements is e^{-α} e_b(α).) What we want is to compute the expected maximum bucket occupancy

\overline M_\alpha = \sum_{b\ge1} b \bigl( p_\alpha^{[b]} - p_\alpha^{[b-1]} \bigr) = \sum_{b\ge0} \bigl( 1 - p_\alpha^{[b]} \bigr).    (15)
We shall consider the case α = o(log m) (although the basic principles of our analysis
will apply for any α). A very common occurrence in occupancy RVs is a sharp rise in the distribution p_α^{[b]} from ≈ 0 to ≈ 1 in the "central region" near the mean value. Intuitively, a good approximation for the mean is the value b̃ such that p_α^{[b̃]} is sufficiently away from 0 and 1. We choose the value b̃ > α such that

\frac{e^{-\alpha} \alpha^{\tilde b + 1}}{(\tilde b + 1)!} \le \frac{1}{m} < \frac{e^{-\alpha} \alpha^{\tilde b}}{\tilde b!}.    (16a)
When α = Θ(1), it is easy to see from (16a) that b̃ ∼ Γ^{-1}(m) ∼ (log m)/log log m. (Here Γ^{-1}(y) denotes the inverse of the Gamma function Γ(x).) We define λ using the left-hand side of (16a) so that

\frac{\lambda}{m} = \frac{e^{-\alpha} \alpha^{\tilde b + 1}}{(\tilde b + 1)!}.    (16b)
In particular we have α/(b̃+1) < λ ≤ 1. The following bound illustrates the sharp increase
in p_α^{[b]} as a function of b in the vicinity b ≈ b̃:

p_\alpha^{[\tilde b + k]} = \bigl( e^{-\alpha} e_{\tilde b + k}(\alpha) \bigr)^m = \Bigl( 1 - e^{-\alpha} \sum_{b > \tilde b + k} \frac{\alpha^b}{b!} \Bigr)^m \sim \Bigl( 1 - \frac{e^{-\alpha} \alpha^{\tilde b + k + 1}}{(\tilde b + k + 1)!} \Bigr)^m \sim \Bigl( 1 - \frac{\lambda \alpha^k}{m\,\tilde b^k} \Bigr)^m \sim e^{-\lambda \alpha^k / \tilde b^k}.    (17)
The approximation is valid uniformly for k = o(\sqrt{\tilde b}). The expression 1 − p_α^{[b̃+k]} continues to decrease exponentially as k → ∞. The maximum bucket size is equal to b̃ with probability ∼ e^{-λ} and to b̃ + 1 with probability ∼ 1 − e^{-λ}. The net effect is that we can get an asymptotic approximation for \overline M_α by approximating 1 − p_α^{[b]} in (15) by a 0–1 step function with the step at b = b̃:

\overline M_\alpha \sim \sum_{0\le b<\tilde b} (1) + \sum_{b\ge\tilde b} (0) \sim \tilde b.    (18)
The same techniques can be applied for general α. The asymptotic behavior of \overline M_α for the Bernoulli and Poisson models is the same.
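The determination of b̃ and λ from (16a)–(16b) is entirely mechanical; the following short sketch (ours, not from the original) computes them for a given α and m and thereby the leading estimate \overline M_α ≈ b̃ of (18).

```python
from math import exp, factorial

def bucket_max_estimate(alpha, m):
    """Return (b_tilde, lam): the smallest integer b > alpha with
    exp(-alpha) * alpha**(b+1) / (b+1)! <= 1/m  (cf. (16a)),
    and lam = m * exp(-alpha) * alpha**(b+1) / (b+1)!  (cf. (16b))."""
    b = int(alpha) + 1
    while exp(-alpha) * alpha ** (b + 1) / factorial(b + 1) > 1.0 / m:
        b += 1
    lam = m * exp(-alpha) * alpha ** (b + 1) / factorial(b + 1)
    return b, lam

b_tilde, lam = bucket_max_estimate(alpha=1.0, m=10**6)
# the maximum occupancy is b_tilde with probability ~ exp(-lam) and b_tilde + 1 otherwise
print(b_tilde, lam)
```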
Theorem 6 [Kolchin et al 1978]. In the Bernoulli model with n elements inserted in
m buckets (α = n/m) or in the Poisson model in which the occupancy of each bucket is an
independent Poisson RV with mean α, the expected maximum bucket occupancy is
\overline M_\alpha \sim \begin{cases}
\dfrac{\log m}{\log\log m}, & \text{if } \alpha = \Theta(1); \\[1ex]
\tilde b, & \text{if } \alpha = o(\log m); \\[1ex]
\alpha, & \text{if } \alpha = \omega(\log m),
\end{cases}
where b̃ is given by (16a).
When α gets large, α = ω(log m), the bucket occupancies become fairly uniform; the difference M_α − α converges in probability to ∼ \sqrt{2\alpha \log m}, provided that α = m^{O(1)}.
Extendible Hashing. A quantity related to maximum bucket occupancy is the expected
directory size used in extendible hashing, in which the hash table is allowed to grow and
shrink dynamically [Fagin et al 1979], [Larson 1978]. Each slot in the hash table models
a page of secondary storage capable of storing up to b elements. If a bucket overflows,
the number of buckets in the table is successively doubled until each bucket has at most
b elements. The directory acts as a b-trie, based upon the infinite precision hash addresses
of the elements (cf. Section 4.3). For this reason, the analyses of directory size and trie
height are very closely related.
At any given time, the directory size is equal to the number of buckets in the table,
which is always a power of 2. The probability π_n^{[h]} that the directory size is ≤ 2^h is

\pi_n^{[h]} = n!\,[z^n] \Bigl( e_b\Bigl( \frac{z}{2^h} \Bigr) \Bigr)^{2^h}.    (19a)
This is identical to (14d) with m = 2^h, except that in this case m is the parameter that varies, and the bucket capacity b stays fixed. We can also derive (19a) via the admissibility theorem for tries (Theorem 4.18): π_n^{[h]} is the probability that the height of a random b-trie is ≤ h. The EGF π^{[h]}(z) = \sum_{n\ge0} \pi_n^{[h]} z^n/n! satisfies
\pi^{[h]}(z) = \Bigl( \pi^{[h-1]}\Bigl( \frac{z}{2} \Bigr) \Bigr)^2; \qquad \pi^{[0]}(z) = e_b(z).    (19b)

Hence, (19a) follows.
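Recursion (19b) can be iterated numerically on truncated EGF coefficient arrays; the small sketch below (ours, under the Bernoulli model) computes π_n^{[h]} for h = 0, 1, 2, … in this way.

```python
from math import factorial

def directory_size_probs(n, b, h_max):
    """Return [pi_n^[0], ..., pi_n^[h_max]] from recursion (19b):
    pi^[h](z) = (pi^[h-1](z/2))^2 with pi^[0](z) = e_b(z), truncated at degree n."""
    pi = [1.0 / factorial(j) if j <= b else 0.0 for j in range(n + 1)]   # coeffs of e_b(z)
    probs = [factorial(n) * pi[n]]
    for _ in range(h_max):
        half = [c / 2**j for j, c in enumerate(pi)]     # substitute z -> z/2
        new = [0.0] * (n + 1)
        for i in range(n + 1):
            for j in range(n + 1 - i):
                new[i + j] += half[i] * half[j]         # square the EGF
        pi = new
        probs.append(factorial(n) * pi[n])              # pi_n^[h] = n! [z^n] pi^[h](z)
    return probs

print(directory_size_probs(n=12, b=2, h_max=6))   # nondecreasing in h, tends to 1
```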
Theorem 7 [Flajolet 1983]. In the Bernoulli model, the expected directory size in extendible hashing for bucket size b > 1 when there are n elements present is
S_n \sim \Biggl( \frac{\Gamma\bigl( 1 - \tfrac1b \bigr)}{(\log 2)\,(b+1)!^{1/b}} + Q\Bigl( \Bigl( 1 + \frac1b \Bigr) \log_2 n \Bigr) \Biggr)\, n^{1 + 1/b},
where Q(u) is a periodic function with period 1 and mean value 0.
Proof Sketch. We can express the average directory size S_n in terms of π_n^{[h]} in a way similar to (15):

S_n = \sum_{h\ge1} 2^h \bigl( \pi_n^{[h]} - \pi_n^{[h-1]} \bigr) = \sum_{h\ge0} 2^h \bigl( 1 - \pi_n^{[h]} \bigr).    (20)
The first step in the derivation is to apply the saddle point method of Section 2.3. We
omit the details for brevity. As for the maximum bucket occupancy, the probabilities π_n^{[h]} change quickly from ≈ 0 to ≈ 1 in a "central region," which in this case is where h = h̃ = (1 + 1/b) log_2 n. By saddle point, we get the uniform approximation

\pi_n^{[h]} \sim \exp\Bigl( \frac{-n^{b+1}}{(b+1)!\,2^{bh}} \Bigr),    (21)

for |h − h̃| < log_2 log n. When h ≥ h̃ + log_2 log n, approximation (21) is no longer valid, but the terms 1 − π_n^{[h]} continue to decrease exponentially with respect to h. Hence, we can substitute approximation (21) into (20) without affecting the leading terms of S_n. We get S_n ∼ T\bigl( n^{b+1}/(b+1)! \bigr), where T(x) is the harmonic sum
T(x) = \sum_{h\ge0} 2^h \Bigl( 1 - e^{-x/2^{bh}} \Bigr),    (22a)
whose Mellin transform is

T^*(s) = \frac{-\Gamma(s)}{1 - 2^{bs+1}},    (22b)

in the fundamental strip ⟨−1; −1/b⟩. The asymptotic expansion of T(x), as x → ∞, is given by the poles of T^*(s) to the right of the strip. There is a simple pole at s = 0 due to Γ(s) and simple poles at s = −1/b + 2kπi/(b log 2) due to the denominator. The result follows immediately from (2.24).
Theorem 7 shows that the leading term of S_n oscillates with (1 + 1/b) log_2 n. An intuition as to why there is oscillation can be found in the last summation in (20). The sum samples the terms 1 − π_n^{[h]} at each nonnegative integer h using the exponential weight function 2^h. The value of h where the approximation (21) changes quickly from ≈ 0 to ≈ 1 is close to an integer value only when (1 + 1/b) log_2 n is close to an integer, thus providing the periodic effect.
It is also interesting to note from Theorem 7 that the directory size is asymptotically superlinear, that is, the directory becomes larger than the file itself when n is large!
Fortunately, convergence is slow, and the nonlinear growth of S_n is not noticeable in practice when b is large enough, say b ≥ 40. Similar results for the Poisson model appear in
[Régnier 1981].
The same techniques apply to the analysis of the expected height H_n of tries:

H_n = \sum_{h\ge1} h \bigl( \pi_n^{[h]} - \pi_n^{[h-1]} \bigr) = \sum_{h\ge0} \bigl( 1 - \pi_n^{[h]} \bigr).    (23)

This is the same as (20), but without the weight factor 2^h. (When trie height grows by 1,
directory size doubles.)
Theorem 8 [Flajolet 1983]. The expected height in the Bernoulli model of a random
b-trie of n elements is
H_n = \Bigl( 1 + \frac1b \Bigr) \log_2 n + \frac12 + \frac{\gamma - \log\,(b+1)!}{b \log 2} + P\Bigl( \Bigl( 1 + \frac1b \Bigr) \log_2 n \Bigr) + o(1),
where P (u) is periodic with period 1, small amplitude, and mean value 0.
In the biased case, where 0 occurs with probability p and 1 occurs with probability q =
1 − p, we have
\pi^{[h]}(z) = \pi^{[h-1]}(pz) \cdot \pi^{[h-1]}(qz),

which gives us

\pi^{[h]}(z) = \prod_{0\le k\le h} \bigl( e_b(p^k q^{h-k} z) \bigr)^{\binom{h}{k}}

(cf. (19a) and (19b)). Multidimensional versions of extendible hashing have been studied
in [Régnier 1985].
Coalesced Hashing. We can bypass the problems of direct chaining with relocation by
using hashing with coalescing chains (or simply coalesced hashing). Part of the hash table is
dedicated to storing elements that collide when inserted. We redefine m′ to be the number of slots in the hash table. The range of the hash function is restricted to {1, 2, . . . , m}. We call the first m slots the address region; the bottom m′ − m slots, which are used to store colliders, comprise the cellar.
When a collision occurs during insertion (because the element’s hash address is already
occupied), the element is inserted instead into the largest-numbered empty slot in the hash
table and is linked to the end of the chain it collided with. This means that the colliding
record is stored in the cellar if the cellar is not full. But if there are no empty slots in
the cellar, the element ends up in the address region. In the latter case, elements inserted
later could collide with this element, and thus their chains would “coalesce.”
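The insertion rule just described is easy to code directly; the following is a minimal sketch (ours) of late-insertion coalesced hashing with a cellar, in which the slot numbering convention and the hash function are illustrative assumptions, and no check is made for a completely full table.

```python
class CoalescedHash:
    """Late-insertion coalesced hashing: m address slots plus a cellar of m_prime - m slots."""
    def __init__(self, m, m_prime):
        self.m, self.m_prime = m, m_prime
        self.key = [None] * (m_prime + 1)   # slots 1..m_prime; slot 0 unused
        self.next = [0] * (m_prime + 1)     # 0 marks the end of a chain
        self.free = m_prime                 # candidate for the largest-numbered empty slot

    def _hash(self, x):
        return 1 + hash(x) % self.m         # hash addresses are 1..m (illustrative choice)

    def insert(self, x):
        i = self._hash(x)
        if self.key[i] is None:
            self.key[i] = x
            return
        while self.next[i]:                 # walk to the end of the chain x collided with
            i = self.next[i]
        while self.key[self.free] is not None:
            self.free -= 1                  # largest-numbered empty slot (cellar fills first)
        self.key[self.free] = x
        self.next[i] = self.free            # link the collider to the end of the chain

    def search(self, x):
        i = self._hash(x)
        while i:
            if self.key[i] == x:
                return True
            i = self.next[i]
        return False
```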
If the cellar size is chosen so that it can accommodate all the colliders, then coalesced hashing reduces to separate chaining. It is somewhat surprising that average-case
performance can be improved by choosing a smaller cellar size so that coalescing usually
occurs. The intuition is that by making the address region larger, the hash addresses of
the elements are spread out over a larger area, which helps reduce collisions. This offsets
the disadvantages of coalescing, which typically occurs much later. We might be tempted
to go to the extreme and eliminate the cellar completely (this variant is called standard coalesced hashing), but performance deteriorates. The theorem below gives the expected search times as a function of the load factor α = n/m′ and the address factor β = m/m′. We can use the theorem to determine the optimum β. It turns out that β_opt is a function
of α and of the type of search, but the compromise value β = 0.86 gives near optimum
search performance for a large range of α and is recommended for general use.
Theorem 9 [Vitter 1983]. The expected number of probes per search for coalesced hashing
in an m′-slot table with address size m = βm′ and with n = αm′ elements is

U_n \sim \begin{cases}
e^{-\alpha/\beta} + \dfrac{\alpha}{\beta}, & \text{if } \alpha \le \lambda\beta; \\[1ex]
\dfrac{\alpha}{\beta} + \dfrac14 \bigl( e^{2(\alpha/\beta - \lambda)} - 1 \bigr) \Bigl( 3 - \dfrac2\beta + 2\lambda \Bigr) - \dfrac12 \Bigl( \dfrac\alpha\beta - \lambda \Bigr), & \text{if } \alpha \ge \lambda\beta;
\end{cases}
C_n \sim \begin{cases}
1 + \dfrac{\alpha}{2\beta}, & \text{if } \alpha \le \lambda\beta; \\[1ex]
1 + \dfrac{\beta}{8\alpha} \Bigl( e^{2(\alpha/\beta - \lambda)} - 1 - 2\Bigl( \dfrac\alpha\beta - \lambda \Bigr) \Bigr) \Bigl( 3 - \dfrac2\beta + 2\lambda \Bigr) + \dfrac14 \Bigl( \dfrac\alpha\beta + \lambda \Bigr) + \dfrac\lambda4 \Bigl( 1 - \dfrac{\lambda\beta}\alpha \Bigr), & \text{if } \alpha \ge \lambda\beta,
\end{cases}

where λ is the unique nonnegative solution to the equation e^{-λ} + λ = 1/β.
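Evaluating these formulas numerically is how one locates a good address factor in practice; the sketch below (ours) solves e^{-λ} + λ = 1/β by bisection and evaluates the two cases of Theorem 9 as reconstructed above.

```python
from math import exp

def solve_lambda(beta):
    """Unique nonnegative root lam of exp(-lam) + lam = 1/beta (requires 0 < beta <= 1)."""
    lo, hi = 0.0, 1.0 / beta                  # the left-hand side is increasing in lam
    for _ in range(100):
        mid = (lo + hi) / 2
        if exp(-mid) + mid < 1.0 / beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def coalesced_costs(alpha, beta):
    """Asymptotic (U_n, C_n) of Theorem 9 for load factor alpha and address factor beta."""
    lam = solve_lambda(beta)
    if alpha <= lam * beta:
        U = exp(-alpha / beta) + alpha / beta
        C = 1 + alpha / (2 * beta)
    else:
        e = exp(2 * (alpha / beta - lam))
        U = (alpha / beta
             + 0.25 * (e - 1) * (3 - 2 / beta + 2 * lam)
             - 0.5 * (alpha / beta - lam))
        C = (1
             + beta / (8 * alpha) * (e - 1 - 2 * (alpha / beta - lam)) * (3 - 2 / beta + 2 * lam)
             + 0.25 * (alpha / beta + lam)
             + lam / 4 * (1 - lam * beta / alpha))
    return U, C

print(coalesced_costs(alpha=0.9, beta=0.86))   # the compromise value recommended above
```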
The method described above is formally known as late-insertion coalesced hashing
(LICH). Vitter and Chen [1987] also analyze two other methods called early-insertion coalesced hashing (EICH) and varied-insertion coalesced hashing (VICH). In EICH, a colliding
element is inserted immediately after its hash address in the chain, by rerouting pointers.
VICH uses the same rule, except when there is a cellar slot in the chain following the
element’s hash address; in that case, the element is linked into the chain immediately after
the last cellar slot in the chain. VICH requires slightly fewer probes per search than the
other variants and appears to be optimum among all possible linking methods. Deletion
algorithms and implementation issues are also covered in [Vitter and Chen 1987].
Proof Sketch. We first consider the unsuccessful search case. We count the number
m^{n+1} U_n of probes needed to perform all possible unsuccessful searches in all possible hash tables of n elements. Each chain of length ℓ that has t elements in the cellar (which we call an (ℓ, t)-chain) contributes

\delta_{\ell=0} + \ell + (\ell - t - 1) + (\ell - t - 2) + \cdots + 1 = \delta_{\ell=0} + \ell + \binom{\ell - t}{2}    (24a)
probes, and we have
m^{n+1} U_n = m^{n+1} p_n + \sum_{\ell, t} \ell\, c_n(\ell, t) + \sum_{\ell, t} \binom{\ell - t}{2}\, c_n(\ell, t),    (24b)
where p_n is the probability that the hash address of the element is unoccupied, and c_n(ℓ, t) is the number of (ℓ, t)-chains. The second term is easily seen to be n m^n, since there are n elements in each hash table. The evaluations of the first term m^{n+1} p_n and the third term

S_n = \sum_{\ell, t} \binom{\ell - t}{2}\, c_n(\ell, t)    (24c)
are similar; for brevity we restrict our attention to the latter. There does not seem to be
a closed form expression for c_n(ℓ, t), but we can develop a recurrence for c_n(ℓ, t) which we can substitute into (24c) to get

S_n = (m+2)^{n-1} \sum_{0\le j\le n-1} \Bigl( \frac{m}{m+2} \Bigr)^j (j - m' + m)\, \frac{F_j}{m^j},    (25)

where F_j is the number of hash sequences of j elements that yield full cellars. In terms of probabilities, F_j/m^j is the probability that the cellar is full after j insertions.
This provides a link to the occupancy distributions studied earlier. We define the
RV N_j to be the number of elements that collide when inserted, in a hash table of j elements. The cellar is full after j insertions if and only if N_j ≥ m′ − m; by taking probabilities, we get

\frac{F_j}{m^j} = \Pr\{ N_j \ge m' - m \}.    (26)
This expresses F_j/m^j as a tail of the probability distribution for N_j. We can determine the distribution by applying Theorem 1. We let H_{k,j,m} be the number of hash sequences of j elements for which k elements are marked. Our marking rule is that all elements that collide when inserted are marked; that is, for each bucket (hash address) i with s elements, there are f_i(s) = s − 1 + δ_{s=0} marked elements. By Theorem 1, we have
\widehat H(u, z) = \sum_{k, j\ge0} H_{k,j,m}\, u^k\, \frac{z^j}{j!} = \prod_{1\le i\le m} \widehat B_i(u, z),    (27a)

where

\widehat B_i(u, z) = \sum_{t\ge0} u^{t - 1 + \delta_{t=0}}\, \frac{z^t}{t!} = \frac{e^{uz} - 1 + u}{u},    (27b)

for each 1 ≤ i ≤ m. Substituting (27b) into (27a), we get

\widehat H(u, z) = \Bigl( \frac{e^{uz} - 1 + u}{u} \Bigr)^m.    (27c)
This allows us to solve for F_j/m^j = Pr{N_j ≥ m′ − m}:

\Pr\{ N_j = k \} = \frac{H_{k,j,m}}{m^j} = j!\,[u^k z^j]\, \widehat H\Bigl( u, \frac{z}{m} \Bigr);    (28a)

F_j/m^j = 1 - j!\,[u^{m'-m-1} z^j]\, \frac{1}{1-u}\, \widehat H\Bigl( u, \frac{z}{m} \Bigr).    (28b)
We can get asymptotic estimates for (28b) by use of the saddle point method as in our
analysis of extendible hashing or by Chernoff bounds as in [Vitter 1983] and [Vitter and
Chen 1987]; the details are omitted for brevity. The distribution F_j/m^j increases sharply from ≈ 0 to ≈ 1 in the "central region" where

\overline N_j \approx m' - m.    (29a)
The expected number of collisions \overline N_j is

\overline N_j = j!\,[z^j]\, \frac{\partial}{\partial u} \widehat H\Bigl( u, \frac{z}{m} \Bigr) \Bigr|_{u=1} = j - m + m \Bigl( 1 - \frac{1}{m} \Bigr)^j.    (29b)
We can solve for the value of j = j̃ that satisfies (29a) by combining (29a) and (29b) and using the ratios α̃ = j̃/m′ and β = m/m′. We have

e^{-\tilde\alpha} + \tilde\alpha \approx \frac{1}{\beta}.    (29c)
In a way similar to (18), we approximate F_j/m^j by a 0–1 step function with the step at j = j̃, and we get

S_n \sim (m+2)^{n-1} \sum_{\tilde\jmath \le j \le n-1} \Bigl( \frac{m}{m+2} \Bigr)^j (j - m' + m),    (30)

which can be summed easily. The error of approximation can be shown to be negligible, as in the analysis of maximum bucket occupancy and extendible hashing. The analysis of the term m^{n+1} p_n in (24b) is based upon similar ideas and is omitted for brevity.
For the case of successful searches, the formula for Cn follows from a formula similar
to Eq. (4.25) for binary search trees and separate chaining:
C_n = 1 + \frac{1}{n} \sum_{0\le i\le n-1} ( U_i - p_i ),    (31)

where p_i is the term in (24b).
A unified analysis of all three variants of coalesced hashing—LICH, EICH, and
VICH—is given in [Vitter and Chen, 1987]. The maximum number of probes per search
among all the searches in the same table, for the special case of standard LICH (when there
is no cellar and m = m′), is shown in [Pittel 1987b] to be log_c n − 2 log_c log n + O(1) with probability ∼ 1 for successful searches and log_c n − log_c log n + O(1) with probability ∼ 1 for unsuccessful searches, where c = 1/(1 − e^{-α}).
5.2. Hashing by Open Addressing
An alternative to chaining is to probe an implicitly defined sequence of locations while
looking for an element. If the element is found, the search is successful; if an “open”
(empty) slot is encountered, the search is unsuccessful, and the new element is inserted
into the empty slot that terminated the search.
Uniform Probing. A simple scheme to analyze, which serves as a good approximation
to more practical methods, is uniform probing. The probing sequence for each element x is
a random permutation h1 (x)h2 (x) . . . hm (x) of {1, 2, . . . , m}. For an unsuccessful search,
the probability that k probes are needed when there are n elements in the table is
p_k = \frac{n^{\underline{k-1}}\,(m-n)}{m^{\underline{k}}} = \binom{m-k}{m-n-1} \Big/ \binom{m}{n}.    (32a)

Hence, we have

U_n = \sum_{k\ge0} k\, p_k = \frac{1}{\binom{m}{n}} \sum_{k\ge0} k \binom{m-k}{m-n-1}.    (32b)
We split k into two parts k = (m + 1) − (m − k + 1), because we can handle each separately;
the m − k + 1 term gets “absorbed” into the binomial coefficient.
\begin{aligned}
U_n &= \frac{m+1}{\binom{m}{n}} \sum_{k\ge0} \binom{m-k}{m-n-1} - \frac{1}{\binom{m}{n}} \sum_{k\ge0} (m-k+1) \binom{m-k}{m-n-1} \\
&= m + 1 - \frac{m-n}{\binom{m}{n}} \sum_{k\ge0} \binom{m-k+1}{m-n} \\
&= m + 1 - \frac{m-n}{\binom{m}{n}} \binom{m+1}{m-n+1} \\
&= \frac{m+1}{m-n+1}.
\end{aligned}    (32c)
Successful search times for open addressing algorithms satisfy a relation similar to (4.25):

C_n = \frac{1}{n} \sum_{0\le i\le n-1} U_i.    (33)
Putting this all together, we get the following theorem:
Theorem 10. The expected number of probes per unsuccessful and successful search for
uniform probing, when there are n = mα elements in a hash table of m slots, is
U_n = \frac{m+1}{m-n+1} \sim \frac{1}{1-\alpha}; \qquad C_n = \frac{m+1}{n} \bigl( H_{m+1} - H_{m-n+1} \bigr) \sim \frac{1}{\alpha} \log \frac{1}{1-\alpha}.
The asymptotic formula U_n ∼ 1/(1 − α) = 1 + α + α² + ⋯ has the following intuitive interpretation: With probability α we need more than one probe, with probability α² we need more than two probes, and so on. The expected maximum search time per hash table
is studied in [Gonnet 1981].
Double Hashing and Secondary Clustering. The practical limitation on uniform
probing is that computing several hash functions is very expensive. However, the performance of uniform probing can be approximated well by use of just two hash functions. In
the double hashing method, the ith probe is made at slot
h_1(x) + i\,h_2(x) \bmod m, \qquad \text{for } 0 \le i \le m-1;    (34)
that is, the probe sequence starts at slot h1 (x) and steps cyclically through the table
with step size h2 (x). (For simplicity, we have renumbered the m table slots to be 0, 1,
. . . , m − 1.) The value of the second hash function must be relatively prime to m so
that the probe sequence (34) gives a full permutation. Guibas and Szemerédi [1978] show
using interesting probabilistic techniques that when α < 0.319 the number of probes per
search is asymptotically equal to that of uniform probing, and this has been extended to
all fixed α < 1 [Lueker and Molodowitch 1988].
It is often faster in practice to use only one hash function and to define h_2(x) implicitly in terms of h_1(x). For example, if m is prime, we could set

h_2(x) = \begin{cases} 1, & \text{if } h_1(x) = 0; \\ m - h_1(x), & \text{if } h_1(x) > 0. \end{cases}    (35)
A useful approximation to this variant is the secondary clustering model, in which the
initial probe location h1 (x) uniquely determines the remaining probe locations h2 h3 . . . hm ,
which form a random permutation of the other m − 1 slots.
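For reference, here is a brief sketch (ours) of the double-hashing probe sequence (34) with the derived second hash function (35); m is assumed prime so that every step size is relatively prime to m, and the insertion routine shown is only illustrative.

```python
def probe_sequence(x, m, h1, h2=None):
    """Yield the probe slots (h1(x) + i*h2(x)) mod m for i = 0..m-1 (slots numbered 0..m-1)."""
    start = h1(x) % m
    if h2 is None:
        # variant (35): derive the step size from h1; requires m prime
        step = 1 if start == 0 else m - start
    else:
        step = h2(x) % m
    for i in range(m):
        yield (start + i * step) % m

def insert(table, x, h1):
    """Insert x into an open-addressing table (a list of length m, with None for empty)."""
    m = len(table)
    for slot in probe_sequence(x, m, h1):
        if table[slot] is None:
            table[slot] = x
            return slot
    raise RuntimeError("table full")
```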
Theorem 11 [Knuth 1973b]. The expected number of probes per unsuccessful and successful search for hashing with secondary clustering, when there are n = mα elements in a
hash table of m slots, is
U_n \sim \frac{1}{1-\alpha} - \alpha + \log\frac{1}{1-\alpha}; \qquad C_n \sim 1 + \log\frac{1}{1-\alpha} - \frac\alpha2.
The proof is a generalization of the method we use below to analyze linear probing.
The number of probes per search for secondary clustering is slightly more than for double
hashing and uniform probing, but slightly less than for linear probing.
Linear Probing. Perhaps the simplest implementation of open addressing is a further
extension of (34) and (35), called linear probing, in which the unit step size h_2(x) ≡ 1 is
used. This causes primary clustering in the table, because all the elements with the same
hash address follow the same probing sequence.
Theorem 12 [Knuth 1973b]. The expected number of probes per unsuccessful and successful search for linear probing, when there are n = mα elements in a hash table of m slots,
is
U_n = \frac12 \bigl( 1 + Q_1(m, n) \bigr); \qquad C_n = \frac12 \bigl( 1 + Q_0(m, n-1) \bigr),    (36)

where

Q_r(m, n) = \sum_{k\ge0} \binom{r+k}{k} \frac{n^{\underline{k}}}{m^k}.
If α = n/m is a constant bounded away from 1, then we have

U_n \sim \frac12 \Bigl( 1 + \frac{1}{(1-\alpha)^2} \Bigr); \qquad C_n \sim \frac12 \Bigl( 1 + \frac{1}{1-\alpha} \Bigr).

For full tables, we have

U_{m-1} = \frac{m+1}{2}; \qquad C_m \sim \frac12 \sqrt{\frac{\pi m}{2}} + \frac13 + \frac{1}{24} \sqrt{\frac{\pi}{2m}} + O(1/m).
Proof. The derivation of the exact formulæ for Un and Cn is an exercise in combinatorial
manipulation. The number of the m^n hash sequences such that slot 0 is empty is

f(m, n) = \Bigl( 1 - \frac{n}{m} \Bigr) m^n.    (37)
By decomposing the hash table into two separate parts, we find that the number of hash sequences such that position 0 is empty, positions 1 through k are occupied, and position k + 1 is empty is

g(m, n, k) = \binom{n}{k} f(k+1, k)\, f(m-k-1, n-k).    (38)
The probability that k + 1 probes are needed for an unsuccessful search is thus

p_k = \frac{1}{m^n} \sum_{k\le j\le n} g(m, n, j).    (39)
The formulæ (36) for U_n = \sum_{0\le k\le n} (k+1)\, p_k and C_n = \frac{1}{n} \sum_{0\le i\le n-1} U_i in terms of Q_0(m, n−1) and Q_1(m, n) follow by applications of Abel's identity (cf. Section 1.4). When α is bounded below 1, then we can evaluate Q_r(m, n) asymptotically by approximating n^{\underline{k}}/m^k in the summations by n^k/m^k = α^k. The interesting case as far as
analysis is concerned is for the full table when α ≈ 1. It is convenient to define the new
notation
Q_{\{b_k\}}(m) = \sum_{k\ge0} b_k\, \frac{(m-1)^{\underline{k}}}{m^k}.    (40a)

The link between our new and old notation is

Q_{\{\binom{r+k}{k}\}}(m) = Q_r(m, m-1).    (40b)

Note that the Q functions each have only a finite number of nonzero terms. The following powerful theorem provides asymptotic expansions for several choices of {b_k}:
Theorem 13 [Knuth and Schönhage 1978]. We have
Q_{\{b_k\}}(m) = \frac{m!}{m^{m-1}}\, [z^m]\, A\bigl( y(z) \bigr),    (41a)

where y(z) is defined implicitly by the equation

y(z) = z\, e^{y(z)},    (41b)

and A(u) = \sum_{k\ge0} b_k u^{k+1}/(k+1) is the antiderivative of B(u) = \sum_{k\ge0} b_k u^k.
Proof of Theorem 13. The proof uses an application of Lagrange-Bürmann inversion
(Theorem 1.6) applied to y(z). The motivation is based upon the fact that sums of the
form (40a) are associated with the implicitly defined function y(z) in a natural way; the
number of labeled oriented trees on m vertices is m^{m−1}, and its EGF is y(z). Also, y(z)
was used in Section 1.4 to derive Abel’s identity, and Abel’s identity was used above to
derive (36). By applying Lagrange-Bürmann inversion to the right-hand side of (41a), with
ϕ(u) = eu , g(u) = A(u), and f (z) = y(z), we get
\begin{aligned}
\frac{m!}{m^{m-1}}\, [z^m]\, A\bigl( y(z) \bigr) &= \frac{m!}{m^{m-1}}\, \frac{1}{m}\, [u^{m-1}]\, e^{um} A'(u) \\
&= \frac{(m-1)!}{m^{m-1}}\, [u^{m-1}]\, e^{um} B(u) \\
&= \frac{(m-1)!}{m^{m-1}} \sum_{0\le k\le m-1} b_k\, \frac{m^{m-k-1}}{(m-k-1)!} \\
&= \sum_{0\le k\le m-1} b_k\, \frac{(m-1)^{\underline{k}}}{m^k} \\
&= Q_{\{b_k\}}(m).
\end{aligned}
By (40b) the sequences {b_k} corresponding to Q_0(m, m−1) and Q_1(m, m−1) have generating functions B_0(u) = 1/(1−u) and B_1(u) = 1/(1−u)^2, and the corresponding antiderivatives are A_0(u) = \log\bigl( 1/(1-u) \bigr) and A_1(u) = 1/(1−u). By applying Theorem 13, we get

Q_0(m, m-1) = \frac{m!}{m^{m-1}}\, [z^m]\, \log\frac{1}{1 - y(z)};    (42a)

Q_1(m, m-1) = \frac{m!}{m^{m-1}}\, [z^m]\, \frac{1}{1 - y(z)}.    (42b)
It follows from the method used in Theorem 4.4 to count the number T_n of plane trees with degree constraint Ω that the dominant singularity of y(z) implicitly defined by (41b) is z = 1/e, where we have the expansion y(z) = 1 - \sqrt{2}\,\sqrt{1 - ez} + O(1 - ez). Hence, z = 1/e is also the dominant singularity of \log\bigl( 1/(1 - y(z)) \bigr), and we get directly

\log\frac{1}{1 - y(z)} = \frac12 \log\frac{1}{1 - ez} - \frac12 \log 2 + O\bigl( \sqrt{1 - ez} \bigr).    (43)
The approximation for C_m follows by extracting coefficients from (43). The formula U_{m−1} = (m + 1)/2 can be obtained by an analysis of the right-hand side of (42), or
more directly by counting the number of probes needed for the m unsuccessful searches in
a hash table with only one empty slot. This completes the proof of Theorem 12.
The length of the maximum search per table for fixed α < 1 is shown in [Pittel 1987a]
to be Θ(log n), on the average. Amble and Knuth [1974] consider a variation of linear
probing, called ordered hashing, in which each cluster of elements in the table is kept in
the order that would occur if the elements were inserted in increasing order by element
value. Elements are relocated within a cluster, if necessary, during an insertion. The
average number of probes per unsuccessful search decreases to Cn−1 .
Linear probing can also be used for sorting, in a way similar to distribution sort
described earlier, except that the elements are stored in a linear probing hash table rather
than in buckets. The n elements in the range [a, b) are inserted into the table using the
hash function h_1(x) = ⌊m(x − a)/(b − a)⌋. The elements are then compacted and sorted via
insertion sort. The hash table size m should be chosen to be somewhat larger than n (say,
n/m ≈ 0.8), so that insertion times for linear probing are fast. A related sorting method
described in [Gonnet and Munro 1981] uses ordered hashing to keep the elements in sorted
order, and thus the final insertion sort is not needed.
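A compact sketch (ours, not from [Gonnet and Munro 1981]) of the sorting idea just described follows; the table-size factor and helper names are illustrative assumptions, and the wrap-around of linear probing can displace a few keys to the front of the table, which the final insertion sort repairs.

```python
def linear_probing_sort(keys, a, b, table_factor=1.25):
    """Sort keys from [a, b) by scattering them with the interpolating hash into a
    linear-probing table, compacting, and finishing with insertion sort."""
    n = len(keys)
    m = max(1, int(table_factor * n))          # n/m around 0.8 keeps probe sequences short
    table = [None] * m
    for x in keys:
        i = min(int(m * (x - a) / (b - a)), m - 1)
        while table[i] is not None:            # linear probing, wrapping around
            i = (i + 1) % m
        table[i] = x
    out = [x for x in table if x is not None]  # compact the occupied slots
    for j in range(1, n):                      # insertion sort finishes the job
        x, k = out[j], j - 1
        while k >= 0 and out[k] > x:
            out[k + 1] = out[k]
            k -= 1
        out[k + 1] = x
    return out
```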
6. Dynamic Algorithms
In this section we study performance measures that reflect the dynamic performance of
a data structure over an interval of time, during which several operations may occur. In
Section 6.1, we consider priority queue algorithms and analyze their performance integrated
over a sequence of operations. The techniques apply to other data structures as well,
including dictionaries, lists, stacks, and symbol tables. In Section 6.2 we analyze the
maximum size attained by a data structure over time, which has important applications
to preallocating resources. We conclude in Section 6.3 with the probabilistic analysis of
union-find algorithms.
6.1. Integrated Cost and the History Model
Dynamic data structures such as lists, search trees, and heap-ordered trees can be analyzed
in a dynamic context under the effect of sequences of operations. Knuth [1977] considers
various models of what it means for deletions to “preserve randomness,” and this has been
applied to the study of various data structures (for example, see [Vitter and Chen, 1987]).
Françon [1978] proposed a model called the “history model” which amounts to analyzing
dynamic structures under all possible evolutions up to order-isomorphism. Using combinatorial interpretations of continued fractions and orthogonal polynomials [Flajolet 1980],
several data structures, including dictionaries, priority queues, lists, stacks, and symbol
tables, can be analyzed under this model [Flajolet, Françon, and Vuillemin 1980]. In
this section, we shall present an overview of the theory, with special emphasis on priority
queues.
A priority queue (see Section 4.2) supports the operations of inserting an element x
into the queue (which we denote by I(x)) and deletion of the minimum element (which we
denote by D). An example of a particular sequence of operations is
s = I(3.1) I(1.7) I(2.9) I(3.7) D D I(3.4).
(1)
Such a sequence s consists of a schema IIIIDDI, from which we see that s causes the
queue size to increase by 3. If we restrict attention to structures that operate by comparison
between elements, the effect of s on an initially empty queue is fully characterized by the
following information: The second operation I(1.7) inserts an element smaller than the
first, the third operation I(2.9) inserts an element that falls in between the two previous
ones, and so on. We define the history associated with a sequence s to consist of the
schema of s in which each operation is labeled by the rank of the element on which it
operates (relative to the current state of the structure). If we make the convention that
ranks are numbered starting at 0, then all delete minimum operations must be labeled
by 0, and each insert is labeled between 0 and k, where k is the size of the priority queue
at the time of the insert. The history associated with (1) is
I_0\, I_0\, I_1\, I_3\, D_0\, D_0\, I_1.    (2)

We let H denote the set of all histories containing as many inserts as delete minimum operations, and we let H_n be the subset of those that have length n. We define h_n = |H_n| and h(z) = \sum_{n\ge0} h_n z^n.
Theorem 1. The OGF of priority queue histories has the continued fraction expansion
h(z) = \cfrac{1}{1 - \cfrac{1\cdot z^2}{1 - \cfrac{2\cdot z^2}{1 - \cfrac{3\cdot z^2}{1 - \cfrac{4\cdot z^2}{\;\ddots\;}}}}}.    (3)
Theorem 1 is a special case of a general theorem of [Flajolet 1980] that expresses
generating functions of labeled schemas in terms of continued fractions. Another special
case is the enumeration of plane trees, given in Section 4.1.
Returning to priority queue histories, from a classical theorem of Gauss (concerning the continued fraction expansion of quotients of hypergeometric functions) applied to
continued fraction (3), we find
h_{2n} = 1 \times 3 \times 5 \times \cdots \times (2n-1),

with h_{2n+1} = 0. Thus the set of histories has an explicit and simple counting expression.
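This count is easy to verify directly; the short sketch below (ours) enumerates histories by dynamic programming over the current queue size, each insert at size k carrying k + 1 possible rank labels and each delete minimum exactly one, and checks the closed form h_{2n} = 1·3·5⋯(2n−1).

```python
def count_histories(length):
    """Number of priority-queue histories of the given length that start and end
    with an empty queue."""
    ways = {0: 1}                  # ways[k] = number of labeled prefixes ending at queue size k
    for _ in range(length):
        new = {}
        for k, w in ways.items():
            new[k + 1] = new.get(k + 1, 0) + w * (k + 1)   # insert: ranks 0..k
            if k > 0:
                new[k - 1] = new.get(k - 1, 0) + w          # delete minimum: rank 0 only
        ways = new
    return ways.get(0, 0)

for n in range(1, 6):
    print(count_histories(2 * n))   # 1, 3, 15, 105, 945 = 1*3*5*...*(2n-1)
```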
Let us define the height of a history to be the maximum size that the priority queue attains
over the course of the operations. From the same theory, it follows that the OGF h^{[k]}(z) for histories with height bounded by an integer k is the kth convergent of (3):

h^{[k]}(z) = \frac{P_k(z)}{Q_k(z)},    (4)
where P_k and Q_k are closely related to Hermite polynomials. From (3) and (4), it is possible to determine generating functions for extended sets of histories (such as the set of histories H^{⟨k⟩} that start at size 0 and end at size k) and then to find the number of times a given operation is performed on a priority queue structure of size k in the course of all histories of H_n. The expressions involve the continued fraction h(z) in (3) and its convergents given in (4). From there, for a given priority queue structure, we can compute the integrated cost K_n defined as the expected cost of a random history in H_n: If CI_k (respectively, CD_k) is the individual expected cost of an insert (respectively, delete minimum) operation on the priority queue structure when it has size k, then we have

K_n = \frac{1}{h_n} \sum_k \bigl( CI_k \cdot NI_{n,k} + CD_k \cdot ND_{n,k} \bigr),    (5)
where NI_{n,k} (respectively, ND_{n,k}) is the number of times operation insert (respectively, delete minimum) occurs at size k inside all histories in H_n. Manipulations with EGFs make it possible to express the final result in simple form. For instance, we have the following two EGFs of histories H with a simple expression:

\sum_{n\ge0} h_n \frac{z^n}{n!} = e^{z^2/2} \qquad \text{and} \qquad \sum_{n\ge0} h_{2n} \frac{z^n}{n!} = \frac{1}{\sqrt{1 - 2z}}.
The main theorem is that the following two GFs
C(x) = \sum_{k\ge0} \bigl( CI_k + CD_{k+1} \bigr)\, x^k \qquad \text{and} \qquad K(z) = \sum_{n\ge0} K_{2n}\, h_{2n}\, \frac{z^n}{n!},    (6)

where C(x) is an OGF of individual costs and K(z) is a modified EGF of integrated costs (after normalization by h_n), are closely related:
Theorem 2. The GFs C(x) and K(z) defined above satisfy
K(z) = \frac{1}{\sqrt{1 - 2z}}\, C\Bigl( \frac{z}{1 - z} \Bigr).    (7)
If we plug into (7) the OGF C(x) corresponding to a particular implementation of
priority queues, and then extract coefficients, we get the integrated cost for that implementation. For instance, for histories of length 2n for sorted lists (SL) and binary search
trees (BST), we have
K_{2n}^{SL} = \frac{n(n+5)}{6} \qquad \text{and} \qquad K_{2n}^{BST} = n \log n + O(n).    (8)
A variety of other dynamic data structures, including dictionaries, lists, stacks, and symbol
tables, can be analyzed under the history model with these techniques. Each data type
is associated with a continued fraction of the form (3), a class of orthogonal polynomials
(such as Laguerre, Meixner, Chebyshev, and Poisson-Charlier) related to (4), and finally a
transformation analogous to (6) that describes the transition from a GF of individual costs
to the corresponding GF of integrated costs and that is usually expressed as an integral
transform.
6.2. Size of Dynamic Data Structures
We can model the effect of insertions and deletions upon the size of a dynamic data
structure by regarding the ith element as being a subinterval [si , ti ] of the unit interval;
the ith element is “born” at time si , “dies” at time ti , and is “living” when t ∈ [si , ti ]. At
time t, the data structure must store the elements that are “living” at time t.
It is natural to think of the data structure as a statistical queue, as far as size is
concerned. Let us denote the number of living elements at time t by Size(t). If we think
of the elements as horizontal intervals, then Size(t) is just the number of intervals “cut”
by the vertical line at position t. In many applications, such as in VLSI artwork analysis,
for example, the number of living elements at any given time tends to be the square root
of the total number of elements; thus for purposes of storage efficiency the data structure
should expunge dead elements.
In the hashing with lazy deletion (HwLD) data structure, we assume that each element
has a unique key. The data structure supports dynamic searching of elements by key
value, which is useful in several applications. The elements are stored in a hash table of H
buckets, based upon the hash addresses of their keys. Typically separate chaining is used.
The distinguishing feature of HwLD is that an element is not deleted as soon as it dies;
the “lazy deletion” strategy deletes a dead element only when a later insertion accesses the
same bucket. The number H of buckets is chosen so that the expected number of elements
per bucket is small. HwLD is thus more time-efficient than doing “vigilant-deletion,” at a
cost of storing some dead elements.
Expected Queue Sizes. We define Use(t) to be the number of elements in the HwLD
data structure at time t; that is, Use(t) = Size(t) + Waste(t), where Waste(t) is the
number of dead elements stored in the data structure at time t. Let us consider the
M/M/∞ queueing model, in which the births form a Poisson process, and the lifespans of
the intervals are independently and exponentially distributed.
Theorem 3 [Feller 1968], [Van Wyk and Vitter 1986]. In the stationary M/M/∞ model,
both Size and Use − H are identically Poisson distributed with mean λ/µ, where λ is the
birth rate of the intervals and 1/µ is the average lifetime per element.
Proof. We define the notation p_{m,n}(t) = Pr{Size(t) = m, Waste(t) = n} for m, n ≥ 0, and p_{m,n}(t) = 0 otherwise. In the M/M/∞ model, we have

\begin{aligned}
p_{m,n}(t + \Delta t) ={}& \bigl( (1 - \lambda\,\Delta t)(e^{-\mu\,\Delta t})^m + o(\Delta t) \bigr)\, p_{m,n}(t) \\
&+ \bigl( (1 - \lambda\,\Delta t)(m+1)(1 - e^{-\mu\,\Delta t})(e^{-\mu\,\Delta t})^m + o(\Delta t) \bigr)\, p_{m+1,n-1}(t) \\
&+ \delta_{n=0} \bigl( \lambda\,\Delta t + o(\Delta t) \bigr) \sum_{j\ge0} p_{m-1,j}(t) + o(\Delta t).
\end{aligned}    (9a)
By expanding the exponential terms in (9a) and rearranging, and letting ∆t → 0, we get
p'_{m,n}(t) = (-\lambda - m\mu)\, p_{m,n}(t) + (m+1)\mu\, p_{m+1,n-1}(t) + \delta_{n=0}\, \lambda \sum_{j\ge0} p_{m-1,j}(t).    (9b)
In the stationary model, the probabilities p_{m,n}(t) are independent of t, and thus the left-hand side of (9b) is 0. For notational simplicity we shall drop the dependence upon t. The rest of the derivation proceeds by considering the multidimensional OGF P(z, w) = \sum_{m,n} p_{m,n} z^m w^n. Equation (9b) becomes

\mu (z - w)\, \frac{\partial P(z, w)}{\partial z} = -\lambda P(z, w) + \lambda z P(z, 1).    (9c)
This provides us with the distribution of Size:
\Pr\{ \mathit{Size} = m \} = [z^m]\, P(z, 1) = [z^m]\, e^{(z-1)\lambda/\mu} = \frac{(\lambda/\mu)^m}{m!}\, e^{-\lambda/\mu}.    (10)
To find the distribution of Use, we replace w by z in (9c), which causes the left-hand-side
of (9c) to become 0. We get
\Pr\{ \mathit{Use} = k \} = [z^k]\, P(z, z) = [z^k]\, z P(z, 1).    (11)

The rest follows from (10).
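A small simulation sketch (ours, not from the original) of the stationary M/M/∞ model illustrates Theorem 3 by estimating the mean of Size at a fixed time and comparing it with λ/μ; the horizon, parameters, and sample count below are arbitrary choices made long enough for the process to be near stationarity.

```python
import random

def simulate_size(lam, mu, horizon=50.0, seed=0):
    """Simulate an M/M/infinity system (Poisson(lam) births, Exp(mu) lifetimes) on [0, horizon]
    and return Size(horizon), the number of elements still alive at time horizon."""
    rng = random.Random(seed)
    t, deaths = 0.0, []
    while True:
        t += rng.expovariate(lam)                # next birth time
        if t >= horizon:
            break
        deaths.append(t + rng.expovariate(mu))   # that element's death time
    return sum(1 for d in deaths if d > horizon)

lam, mu = 20.0, 2.0
samples = [simulate_size(lam, mu, seed=s) for s in range(500)]
print(sum(samples) / len(samples), lam / mu)     # empirical mean vs lambda/mu
```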
Maximum Queue Size. A more interesting statistic, which has direct application to
matters of storage preallocation, is the maximum value of Size(t) and of Use(t) as the
time t varies over the entire unit interval.
Orthogonal polynomials arise in an interesting way when considering the more general model of a stationary birth-and-death process, which is a Markov process in which
transitions from level k are allowed only to levels k + 1 and k − 1. The infinitesimal birth
and death rates at level k are denoted by λ_k and μ_k:

\Pr\{ \mathit{Size}(t + \Delta t) = j \mid \mathit{Size}(t) = k \} = \begin{cases}
\lambda_k\, \Delta t + o(\Delta t), & \text{if } j = k+1; \\
\mu_k\, \Delta t + o(\Delta t), & \text{if } j = k-1; \\
o(\Delta t), & \text{otherwise.}
\end{cases}
For the special case of the M/M/∞ model, we have λ0 = λ1 = · · · = λ and µk = kµ; for
the M/M/1 model, we have λ0 = λ1 = · · · = λ and µ0 = µ1 = · · · = µ.
Theorem 4 [Mathieu and Vitter 1988]. The distribution of max_{0≤t≤1}{Size(t)} can be expressed simply in terms of Chebyshev polynomials (for the M/M/1 process) and Poisson-Charlier polynomials (for the M/M/∞ process). For several types of linear birth-and-death processes, of the form λ_k = αk + β, μ_k = γk + δ, the relevant polynomials Q_j(x) can be expressed in terms of either Laguerre polynomials or Meixner polynomials of the second kind.
It is interesting to note that orthogonal polynomials arose in a similar way in Section 6.1. The formulæ referred to in Theorem 4 can be used to calculate the distributions
of max0≤t≤1 {Size(t)} numerically, but they do not seem to yield asymptotics directly.
Instead we rely upon a different approach:
Theorem 5 [Kenyon-Mathieu and Vitter 1989]. In the stationary M/M/∞ probabilistic
model, assuming either that µ → 0 or that µ = Ω(1) and λ → ∞, we have
E\Bigl\{ \max_{t\in[0,1]} \mathit{Size}(t) \Bigr\} \sim \begin{cases}
\dfrac\lambda\mu, & \text{if } f(\lambda,\mu) \to 0; \\[1ex]
d\, \dfrac\lambda\mu, & \text{if } f(\lambda,\mu) \to c; \\[1ex]
\dfrac{f(\lambda,\mu)}{\ln f(\lambda,\mu)}\, \dfrac\lambda\mu, & \text{if } f(\lambda,\mu) \to \infty,
\end{cases}    (12)

where f(λ, μ) = (ln λ)/(λ/μ) and the constant d is defined implicitly from the constant c by d ln d − d = c − 1. In the first case, ln λ = o(λ/μ), we also have

E\Bigl\{ \max_{t\in[0,1]} \mathit{Use}(t) \Bigr\} \sim \frac\lambda\mu + H.    (13)
When ln λ = o(λ/µ), Theorem 5 says that the expected maximum value of Size(t)
(respectively, Use(t)) is asymptotically equal to the maximum of its expected value. For example, in VLSI artwork applications, we might have λ = n and μ = \sqrt{n}, so that the average number of living elements at any given time is λ/μ = \sqrt{n}; by Theorem 5, the
expected maximum data structure size is asymptotically the same. Kenyon-Mathieu and
86
/
Average-Case Analysis of Algorithms and Data Structures
Vitter [1989] also study the expected maximum under history models as in Section 6.1 and
under other probabilistic models.
It is no accident that the theorem is structurally similar to Theorem 5.6 for maximum
bucket occupancy. The quantity maxt∈[ 0,1] {Size(t)} can be regarded as the continuous
counterpart of the maximum bucket occupancy. The proof below makes use of that relation.
Proof Sketch. For brevity, we consider only the analysis of E{max_{t∈[0,1]} Size(t)}. We shall concentrate primarily on case 1, in which ln λ = o(λ/μ). The lower bound follows immediately by Theorem 3:

E\Bigl\{ \max_{t\in[0,1]} \mathit{Size}(t) \Bigr\} \ge E\{ \mathit{Size}(0) \} = \frac\lambda\mu.
We get the desired upper bound by looking at the following discretized version of the problem: Let us consider a hash table with m = gμ slots, where gμ is an integer and g → ∞ very slowly, as λ → ∞. The jth slot, for 1 ≤ j ≤ gμ, represents the time interval ((j − 1)/(gμ), j/(gμ)]. For each element we place an entry into each slot whose associated time interval intersects the element's lifetime. If we define N(j) to be the number of elements in slot j, we get the following upper bound:

\max_{0\le t\le1} \{ \mathit{Size}(t) \} \le \max_{1\le j\le g\mu} \{ N(j) \}.

The slot occupancies N(j) are Poisson distributed with mean \frac\lambda\mu \bigl( 1 + \frac1g \bigr). However they are
not independent, so our analysis of maximum bucket occupancy in Theorem 5.6 does not
apply to this case. The main point of the proof is showing that the lack of independence
does not significantly alter the expected maximum:
E\Bigl\{ \max_{1\le j\le g\mu} N(j) \Bigr\} \sim \frac\lambda\mu.

This gives us the desired upper bound, which completes the proof for case 1. The formula for E{max_{t∈[0,1]} Use(t)} can be derived in the same way.
This approach, when applied to cases 2 and 3 of Theorem 5, gives upper bounds on E{max_{t∈[0,1]} Size(t)} that are off by a factor of 2. To get asymptotic bounds, different techniques are used, involving probabilistic arguments that the distribution of max_{t∈[0,1]}{Size(t)} is peaked in some "central region" about the mean. The technique
is similar in spirit to those used in Section 5.1 for the analyses of extendible hashing,
maximum bucket occupancy, and coalesced hashing.
The probabilistic analysis of maximum size has also been successfully carried out for a
variety of combinatorial data structures, such as dictionaries, linear lists, priority queues,
and symbol tables, using the history model discussed in Section 6.1.
6.3. Set Union-Find Algorithms
The set union-find data type is useful in several computer applications, such as computing minimum spanning trees, testing equivalence of finite state machines, performing
unification in logic programming and theorem proving, and handling COMMON blocks
in FORTRAN compilers. The operation union(x, y) merges the equivalence classes (or
simply components) containing x and y and chooses a unique “representative” element for
the combined component. Operation find (x) returns the representative of x’s component,
and make set(x) creates a singleton component {x} with representative x.
Union-find algorithms have been studied extensively in terms of worst-case and amortized performance. Tarjan and van Leeuwen [1984] give matching upper and lower amortized bounds of Θ(n + mα(m + n, n)) for the problem, where n is the number of make set
operations, m is the number of finds, and α(a, b) denotes a functional inverse of Ackermann’s function. The lower bound holds in a separable pointer machine model, and
the upper bound is achieved by the well-known tree data structure that does weighted
merges and path compression. A more extensive discussion appears in [Mehlhorn and
Tsakalidis 1989].
In this section we study the average-case running time of more simple-minded algorithms, called “quick find” (QF) and “quick find weighted” (QFW). The data structure
consists of an array called rep, with one slot per element; rep[x] is the representative for
element x. Each find can thus be done in constant time. In addition, the elements in each
component are linked together in a list. In the QF algorithm, union(x, y) is implemented
by setting rep[z] := rep[x], for all z in y’s component. The QFW algorithm is the same,
except that when x’s component is smaller than y’s, we take the quicker route and set
rep[z] := rep[y], for all z in x’s component. An auxiliary array is used to keep track of the
size of each component.
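A direct sketch (ours) of the two union rules just described appears below; the update counter records exactly the quantity that the performance measures of this section count.

```python
class QuickFind:
    def __init__(self, n, weighted=False):
        self.rep = list(range(n))                  # rep[x] = representative of x's component
        self.members = [[x] for x in range(n)]     # members[r] lists the component with rep r
        self.weighted = weighted                   # False: QF, True: QFW
        self.updates = 0                           # number of rep[] writes, the cost measure

    def find(self, x):
        return self.rep[x]

    def union(self, x, y):
        rx, ry = self.rep[x], self.rep[y]
        if rx == ry:
            return
        if self.weighted and len(self.members[rx]) < len(self.members[ry]):
            rx, ry = ry, rx                        # QFW: relabel the smaller component
        for z in self.members[ry]:
            self.rep[z] = rx
            self.updates += 1
        self.members[rx].extend(self.members[ry])
        self.members[ry] = []
```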
Since all finds take constant time, we shall confine our attention to the union operations. We consider n − 1 unions performed on a set of n elements, so that we end up
with a single component of size n. Our performance measure, which we denote by T_n^{QF} and T_n^{QFW}, is the total number of updates to slots of rep made during the unions. We
consider three models of “random” input.
Random Graph Model. Probably the most realistic model was proposed in [Yao 1976], based upon the random graph model of [Erdös and Rényi 1960]. Each of the \binom{n}{2} undirected
edges between n vertices “fires” independently, governed by a Poisson process. Each order
of firings is thus equally likely. When an edge {x, y} fires, we execute union(x, y) if x and y
are in different components.
Theorem 6 [Knuth and Schönhage 1978], [Bollobás and Simon 1985]. The average number of updates done by QF and QFW in the random graph model is

T_n^{QF} = \frac{n^2}{8} + o\bigl( n(\log n)^2 \bigr); \qquad T_n^{QFW} = cn + o(n/\log n),

where c = 2.0847….
We shall restrict ourselves to showing that T_n^{QF} ∼ n²/8 and T_n^{QFW} = O(n) using the
derivation from [Knuth and Schönhage 1978]. The techniques in [Bollobás and Simon 1985]
are needed to determine the coefficient c and to get better bounds on the second-order
terms. In addition, [Bollobás and Simon 1985] consider sequences of fewer than n unions.
They show that, on the average, QF performs (1/2 − ε)n unions in O(n log n) time, and
QFW does k unions in O(k) time, for any k ≤ n.
Proof Sketch. The proof is based upon the intuition from [Erdös and Renyi, 1959] that
with probability 1 − O(1/log n) a random graph on n vertices with (1/2) n log n + (1/2) cn edges, where c is a constant, consists of one giant connected component of size ≥ n − log log n and a set of isolated vertices. The graph is connected with probability e^{-e^{-c}}. In terms
of union operations, it is very likely that the last few unions joined the giant component
to singleton components; the cost for each such union would be O(n) for QF and O(1)
for QFW. The proof of Theorem 6 consists in showing that this behavior extends over the
entire sequence of unions.
For QFW, we find by recurrences and asymptotic approximations that
E_{n,k,m} = O\Bigl( \frac{n}{k^{3/2}\, m^{3/2}\, (k+m)^{3/2}} \Bigr), \qquad \text{for } k, m < n^{2/3} \text{ and } k, m > n^{2/3},    (14)

where E_{n,k,m} is the expected number of times a component of size k is merged with one of size m. Hence,

T_n^{QFW} = \sum_{1\le k, m<n} \min\{k, m\}\, E_{n,k,m} \le \sum_{1\le k\le m<n} k\, \bigl( E_{n,k,m} + E_{n,m,k} \bigr).    (15)
For the portion of the sum to which (14) applies, we can bound (15) by O(n). For the rest
of the range, in which 1 ≤ k ≤ n^{2/3} ≤ m < n, the sum is bounded by n, since each element can be merged at most once from a component of size < n^{2/3} into one of size ≥ n^{2/3}. The
analysis of QF is similar.
Several combinatorial algorithms have been designed and analyzed using the random
graph model. For example, Babai, Erdös, and Selkow [1980] give an algorithm for testing
graph isomorphism that runs in O(n²) average time, though all known algorithms require
exponential time in the worst case.
Random Spanning Tree Model. Each sequence of union operations corresponds to a
“union tree,” in which the directed edge hx, yi means that the component with representative y is merged into the component with representative x. In the random spanning tree
model, all possible union trees are equally likely; there are n^{n−2} possible unoriented trees
and (n − 1)! firing orders of the edges in each tree.
Theorem 7 [Yao 1976], [Knuth and Schönhage 1978]. The average number of updates
done by QF and QFW in the random spanning tree model is
T_n^{QF} = \sqrt{\frac\pi8}\, n^{3/2} + O(n \log n); \qquad T_n^{QFW} = \frac1\pi\, n \log n + O(n).
Proof Sketch. An admissibility argument similar to those in Section 4 allows us to
compute the probability p_{n,k} that the last union does a merge of components of sizes k and n − k:

p_{n,k} = \frac{1}{2(n-1)} \binom{n}{k} \Bigl( \frac{k}{n} \Bigr)^{k-1} \Bigl( \frac{n-k}{n} \Bigr)^{n-k-1}.    (16)
And it is easy to show that
T_n = c_n + 2 \sum_{0<k<n} p_{n,k}\, T_k,    (17)

where c_n = \sum_{0<k<n} k\, p_{n,k} for QF and c_n = \sum_{0<k<n} \min\{k, n-k\}\, p_{n,k} for QFW. By symmetry we have \sum_{0<k<n} k\, p_{n,k} = n/2, and arguments similar to those used for Theorem 6 show that \sum_{0<k<n} \min\{k, n-k\}\, p_{n,k} = (2n/\pi)^{1/2} + O(1). Recurrence (17) is in a special linear form that allows us to solve it "by repertoire": the solution of (17) for c_n = a_n + b_n is the sum of the solutions for c_n = a_n and for c_n = b_n. Hence, if we can find a "basis" of different c_n for which (17) can be solved easily, then we can solve (17) for QF and QFW by linear combinations of the basis functions. It turns out that the basis in our case is the set of Q-functions Q_{\{1/k^r\}}(n), for r = −1, 0, 1, 2, …, which we studied in connection with linear probing in Section 5.2.
Random Components Model. In the simplest model, and also the least realistic, we
assume that at any given time each pair of existing components is equally likely to be
merged next. The union tree in this framework is nothing more than a random binary
search tree, which we studied extensively in Section 4.2. Admissibility arguments lead
directly to the following result:
Theorem 8 [Doyle and Rivest 1976]. The average number of updates done by QF and
QFW in the random components model is
T_n^{QF} = n(H_n - 1) = n \log n + O(n);

T_n^{QFW} = nH_n - \tfrac12\, n H_{\lfloor n/2\rfloor} - \lceil n/2\rceil = \tfrac12\, n \log n + O(n).
Acknowledgment
We thank Don Knuth for several helpful comments and suggestions.
References
A. Aggarwal and J. S. Vitter [1988]. “The Input/Output Complexity of Sorting and
Related Problems,” Communications of the ACM 31(9), September 1988, 1116–1127.
O. Amble and D. E. Knuth [1974]. “Ordered Hash Tables,” The Computer Journal
17(2), May 1974, 135–142.
M. Ajtai, J. Komlós, and E. Szemerédi [1983]. “An O(n log n) Sorting Network,” Proceedings of the 15th Annual Symposium on Theory of Computer Science, Boston, May 1983,
1–9.
L. Babai, P. Erdös, and S. M. Selkow [1980]. “Random Graph Isomorphisms,” SIAM
Journal on Computing 9, 628–635.
K. E. Batcher [1968]. “Sorting Networks and Their Applications,” Proceedings of the
AFIPS Spring Joint Computer Conference 32, 1968, 307-314.
E. Bender [1973]. “Central and Local Limit Theorems Applied to Asymptotic Enumeration,” Journal of Combinatorial Theory, Series A 15, 1973, 91-111.
E. Bender [1974]. “Asymptotic Methods in Enumeration,” SIAM Review 16, 1974, 485–
515.
C. Bender and S. Orszag [1978]. Advanced Mathematical Methods for Scientists and
Engineers. McGraw-Hill, 1978.
J. Berstel and L. Boasson [1989]. “Context-Free Languages,” in this handbook.
P. Billingsley [1986]. Probability and Measure, Academic Press, 1986.
B. Bollobás [1985]. Random Graphs, Academic Press, New York 1985.
B. Bollobás and I. Simon [1985]. “On the Expected Behavior of Disjoint Set Union
Algorithms,” Proceedings of the 17th Annual Symposium on Theory of Computing, Providence, R. I., May 1985, 224–231.
M. R. Brown [1979]. “A Partial Analysis of Random Height-Balanced Trees,” SIAM
Journal on Computing 8(1), February 1979, 33–41.
W. H. Burge [1972]. “An Analysis of a Tree Sorting Method and Some Properties of a
Set of Trees,” Proceedings of the 1st USA–Japan Computer Conference, 1972, 372–378.
E. R. Canfield [1977]. “Central and Local Limit Theorems for Coefficients of Binomial
Type,” Journal of Combinatorial Theory, Series A 23, 1977, 275-290.
N. Chomsky and M. P. Schützenberger [1963]. “The Algebraic Theory of ContextFree Languages,” Computer Programming and Formal Languages, edited by P. Braffort
and D. Hirschberg, North-Holland, 1963, 118–161.
D. W. Clark [1979]. “Measurements of Dynamic List Structure Use in Lisp,” IEEE
Transactions on Software Engineering SE–5(1), January 1979, 51–59.
L. Comtet [1969]. “Calcul pratique des coefficients de Taylor d’une fonction algébrique,”
L’Enseignement Mathématique 10, 1969, 267-270.
L. Comtet [1974]. Advanced Combinatorics. Reidel, Dordrecht, 1974.
R. Cypher [1989]. “A Lower Bound on the Size of Shellsort Sorting Networks,” Proceedings
of the 1st Annual ACM Symposium on Parallel Algorithms and Architectures, Santa Fe,
NM, June 1989, 58–63.
N. G. De Bruijn [1981]. Asymptotic Methods in Analysis. Dover, New York, 1981.
N. G. De Bruijn, D. E. Knuth, and S. O. Rice [1972]. “The Average Height of
Planted Plane Trees,” in Graph Theory and Computing, edited by R.-C. Read, Academic
Press, New York, 1972, 15–22.
L. Devroye [1986a]. “A Note on the Height of Binary Search Trees,” Journal of the ACM
33, 1986, 489–498.
L. Devroye [1986b]. Lecture Notes on Bucket Algorithms, Birkhäuser, Boston, 1986.
G. Doetsch [1955]. Handbuch der Laplace Transformation. Volumes 1–3. Birkhäuser Verlag, Basel, 1955.
J. Doyle and R. L. Rivest [1976]. “Linear Expected Time of a Simple Union-Find
Algorithm,” Information Processing Letters 5, 1976, 146–148.
B. Ďurian [1986]. “Quicksort without a Stack,” Proceedings of the 12th Annual Symposium on Mathematical Foundations of Computer Science, Lecture Notes in Computer
Science 233, 1986, 283–289.
P. Erdös and A. Rényi [1960]. “On the Evolution of Random Graphs,” Publ. of the
Math. Institute of the Hungarian Academy of Sciences 5, 1960, 17–61.
R. Fagin, J. Nievergelt, N. Pippenger, and H. R. Strong [1979]. “Extendible
Hashing—A Fast Access Method for Dynamic Files,” ACM Transactions on Database
Systems 4(3), September 1979, 315–344.
W. Feller [1968]. An Introduction to Probability Theory and Its Applications, Volume 1,
Wiley, New York, third edition 1968.
W. Feller [1971]. An Introduction to Probability Theory and Its Applications, Volume 2,
Wiley, New York, second edition 1971.
Ph. Flajolet [1980]. “Combinatorial Aspects of Continued Fractions,” Discrete Mathematics 32, 1980, 125-161.
Ph. Flajolet [1981]. Analyse d’algorithmes de manipulation d’arbres et de fichiers, in
Cahiers du BURO 34–35, 1981, 1–209.
Ph. Flajolet [1983]. “On the Performance Evaluation of Extendible Hashing and Trie
Searching,” Acta Informatica 20, 1983, 345–369.
Ph. Flajolet [1987]. “Analytic Models and Ambiguity of Context-Free Languages,” Theoretical Computer Science 49, 1987.
Ph. Flajolet [1988]. “Mathematical Methods in the Analysis of Algorithms and Data
Structures,” Trends in Theoretical Computer Science, edited by E. Börger, Computer Science Press, 1988, 225–304.
Ph. Flajolet, J. Françon, and J. Vuillemin [1980]. “Sequence of Operations Analysis
for Dynamic Data Structures,” Journal of Algorithms 1(2), 1980, 111–141.
Ph. Flajolet, G. Gonnet, C. Puech, and M. Robson [1989]. “Analytic Variations
on Quad Trees,” draft, 1989.
Ph. Flajolet and A. M. Odlyzko [1982]. “The Average Height of Binary Trees and
Other Simple Trees,” J. Computer and System Sciences 25, 1982, 171-213.
Ph. Flajolet and A. M. Odlyzko [1989]. “Singularity Analysis of Generating Functions,” SIAM Journal on Discrete Mathematics 3(1), 1990.
Ph. Flajolet and C. Puech [1986] “Partial Match Retrieval of Multidimensional Data,”
Journal of the ACM 33(2), April 1986, 371–407.
Ph. Flajolet, J.-C. Raoult, and J. Vuillemin [1979]. “The Number of Registers
Required to Evaluate Arithmetic Expressions,” Theoretical Computer Science 9, 1979,
99–125.
Ph. Flajolet, M. Régnier and R. Sedgewick [1985]. “Some Uses of the Mellin
Integral Transform in the Analysis of Algorithms,” Combinatorics on Words, Springer
NATO ASI Series F, Volume 12, Berlin, 1985.
Ph. Flajolet, M. Régnier and D. Sotteau [1985]. “Algebraic Methods for Trie
Statistics,” Annals of Discrete Mathematics 25, 1985, 145–188.
Ph. Flajolet and R. Sedgewick [1986]. “Digital Search Trees Revisited,” SIAM Journal on Computing 15(3), August 1986, 748–767.
Ph. Flajolet, P. Sipala, and J.-M. Steyaert [1987]. “Analytic Variations on
the Common Subexpression Problem,” Proceedings of the 17th Annual International
Colloquium on Automata, Languages, and Programming (ICALP), Warwick, England,
July 1990. Published in Lecture Notes in Computer Science, Springer-Verlag, Berlin.
Ph. Flajolet and J.-M. Steyaert [1987]. “A Complexity Calculus for Recursive Tree
Algorithms,” Mathematical Systems Theory 19, 1987, 301–331.
D. Foata [1974]. La série génératrice exponentielle dans les problèmes d’énumération,
Series S. M. S., Montreal University Press, 1974.
J. Françon [1978]. “Histoire de fichiers,” RAIRO Inform. Theor. 12, 1978, 49–67.
J. Françon, G. Viennot, and J. Vuillemin [1978]. “Description and Analysis of an
Efficient Priority Queue Representation,” Proceedings of the 19th Annual Symposium on
Foundations of Computer Science, Ann Arbor, October 1978, 1–7.
G. H. Gonnet [1981]. “Expected Length of the Longest Probe Sequence in Hash Code
Searching,” Journal of the ACM 28(2), April 1981, 289–304.
G. H. Gonnet [1984] Handbook of Algorithms and Data Structures. Addison-Wesley,
Reading, 1984.
G. H. Gonnet and J. I. Munro [1981]. “A Linear Probing Sort and its Analysis,”
Proceedings of the 13th Annual Symposium on Theory of Computing, Milwaukee, WI,
May 1981, 90–95.
G. H. Gonnet, L. D. Rogers, and A. George [1980]. “An Algorithmic and Complexity
Analysis of Interpolation Search,” Acta Informatica 13(1), January 1980, 39–46.
I. Goulden and D. Jackson [1983]. Combinatorial Enumerations. Wiley, New York,
1983.
D. H. Greene [1983]. “Labelled Formal Languages and Their Uses,” Stanford University,
Technical Report STAN-CS-83-982, 1983.
D. H. Greene and D. E. Knuth [1982]. Mathematics for the Analysis of Algorithms.
Birkhäuser, Boston, second edition 1982.
L. J. Guibas and E. Szemerédi [1978]. “The Analysis of Double Hashing,” Journal of
Computer and System Sciences 16(2), April 1978, 226–274.
B. Harris and L. Schoenfeld [1968]. “Asymptotic Expansions for the Coefficients of
Analytic Functions,” Illinois J. Math. 12, 1968, 264–277.
W. K. Hayman [1956]. “A Generalization of Stirling’s Formula,” J. Reine und Angewandte
Mathematik 196, 1956, 67–95.
P. Henrici [1974]. Applied and Computational Complex Analysis. Volume 1. Wiley, New
York, 1974.
P. Henrici [1977]. Applied and Computational Complex Analysis. Volume 2. Wiley, New
York, 1977.
B.-C. Huang and D. E. Knuth [1986]. “A One-Way, Stackless Quicksort Algorithm,”
BIT 26, 1986, 127–130.
D. H. Hunt [1967]. Bachelor’s thesis, Princeton University, April 1967.
J. M. Incerpi and R. Sedgewick [1985]. “Improved Upper Bounds on Shellsort,” Journal of Computer and System Sciences 31, 1985, 210–224.
Ph. Jacquet and M. Régnier [1986]. “Trie Partitioning Process: Limiting Distributions,” in Proceedings of CAAP ’86, Lecture Notes in Computer Science 214, 1986, 196–
210.
Ph. Jacquet and M. Régnier [1987]. “Normal Limiting Distribution of the Size of
Tries,” in Performance ’87, Proceedings of 13th International Symposium on Computer
Performance, Brussels, December 1987.
R. Kemp [1979]. “The Average Number of Registers Needed to Evaluate a Binary Tree
Optimally,” Acta Informatica 11(4), 1979, 363–372.
R. Kemp [1984]. Fundamentals of the Average Case Analysis of Particular Algorithms.
Teubner-Wiley, Stuttgart, 1984.
C. M. Kenyon-Mathieu and J. S. Vitter [1989]. “General Methods for the Analysis
of the Maximum Size of Data Structures,” Proceedings of the 16th Annual International
Colloquium on Automata, Languages, and Programming (ICALP), Stresa, Italy, July 1989.
Published in Lecture Notes in Computer Science, Springer-Verlag, Berlin.
P. Kirschenhofer and H. Prodinger [1986]. “Some Further Results on Digital Search
Trees,” Proceedings of the 13th Annual International Colloquium on Automata, Languages,
and Programming (ICALP), Rennes, France, July 1986. Published in Lecture Notes in
Computer Science, Volume 226, Springer-Verlag, Berlin.
P. Kirschenhofer and H. Prodinger [1988]. “On Some Applications of Formulæ of
Ramanujan in the Analysis of Algorithms,” preprint, 1988.
D. E. Knuth [1973a]. The Art of Computer Programming. Volume 1: Fundamental Algorithms. Addison-Wesley, Reading, MA, second edition 1973.
D. E. Knuth [1973b]. The Art of Computer Programming. Volume 3: Sorting and Searching. Addison-Wesley, Reading, MA, 1973.
D. E. Knuth [1977]. “Deletions that Preserve Randomness,” IEEE Transactions on Software Engineering SE-3, 1977, 351–359.
D. E. Knuth [1978]. “The Average Time for Carry Propagation,” Indagationes Mathematicæ 40, 1978, 238–242.
D. E. Knuth [1981]. The Art of Computer Programming. Volume 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA, second edition 1981.
D. E. Knuth and A. Schönhage [1978]. “The Expected Linearity of a Simple Equivalence Algorithm,” Theoretical Computer Science 6, 1978, 281–315.
V. F. Kolchin, B. A. Sevast’yanov, and V. P. Chistyakov [1978]. Random Allocations. V. H. Winston & Sons, Washington, 1978.
A. G. Konheim and D. J. Newman [1973]. “A Note on Growing Binary Trees,” Discrete
Mathematics 4, 1973, 57–63.
P.-A. Larson [1978]. “Dynamic Hashing,” BIT 20(2), 1978, 184–201.
E. E. Lindstrom and J. S. Vitter [1985]. “The Design and Analysis of BucketSort
for Bubble Memory Secondary Storage,” IEEE Transactions on Computers C–34(3),
March 1985.
G. Louchard [1983]. “The Brownian Motion: a Neglected Tool for the Complexity Analysis of Sorted Table Manipulation,” RAIRO Theoretical Informatics 17, 1983, 365–385.
G. S. Lueker and M. Molodowitch [1988]. “More Analysis of Double Hashing,” Proceedings of the 20th Annual Symposium on Theory of Computing, Chicago, IL, May 1988,
354–359.
C. M. Mathieu and J. S. Vitter [1988]. “Maximum Queue Size and Hashing with
Lazy Deletion,” Proceedings of the 20th Annual Symposium on the Interface of Computing
Science and Statistics, Reston, VA, April 1988, 743–748.
K. Mehlhorn and A. Tsakalidis [1989]. “Data Structures,” in this handbook.
A. Meir and J. W. Moon [1978]. “On the Altitude of Nodes in Random Trees,” Canadian
Journal of Mathematics 30, 1978, 997–1015.
M. H. Nodine and J. S. Vitter [1990]. “Greed Sort: Optimal Sorting with Multiple
Disks,” Technical Report CS–90–04, Brown University, February 1990.
A. M. Odlyzko [1982]. “Periodic Oscillations of Coefficients of Power Series that Satisfy
Functional Equations,” Advances in Math. 44, 1982, 180–205.
A. M. Odlyzko and L. B. Richmond [1985]. “Asymptotic Expansions for the Coefficients of Analytic Generating Functions,” Æquationes Mathematicæ 28, 1985, 50–63.
F. W. J. Olver [1974]. Asymptotics and Special Functions, Academic Press, 1974.
N. J. Pippenger [1989]. “Communication Networks,” in this handbook.
B. Pittel [1986]. “Paths in a Random Digital Tree: Limiting Distributions,” Advances in
Applied Probability 18, 1986, 139–155.
B. Pittel [1987a]. “Linear Probing: The Probable Largest Search Time Grows Logarithmically with the Number of Records,” Journal of Algorithms 8(2), June 1987, 236–249.
B. Pittel [1987b]. “On Probabilistic Analysis of a Coalesced Hashing Algorithm,” Annals
of Probability 15(2), July 1987.
G. Pólya [1937]. “Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und
chemische Verbindungen,” Acta Mathematica 68, 1937, 145–254. Translated in G. Pólya
and R. C. Read, Combinatorial Enumeration of Groups, Graphs, and Chemical Compounds, Springer-Verlag, New York, 1987.
P. W. Purdom and C. A. Brown [1985]. The Analysis of Algorithms. Holt, Rinehart
and Winston, New York, 1985.
M. Régnier [1981]. “On the Average Height of Trees in Digital Search and Dynamic
Hashing,” Information Processing Letters 13, 1981, 64–66.
M. Régnier [1985]. “Analysis of Grid File Algorithms,” BIT 25, 1985, 335–357.
V. N. Sachkov [1978]. Verojatnosnie Metody v Kombinatornom Analize, Nauka, Moscow,
1978.
R. Sedgewick [1977a]. “Quicksort with Equal Keys,” SIAM Journal on Computing 6(2),
June 1977, 240–267.
R. Sedgewick [1977b]. “The Analysis of Quicksort Programs,” Acta Informatica 7, 1977,
327–355.
R. Sedgewick [1978]. “Data Movement in Odd-Even Merging,” SIAM Journal on Computing 7(3), August 1978, 239–272.
R. Sedgewick [1983]. “Mathematical Analysis of Combinatorial Algorithms,” in Probability Theory and Computer Science, edited by G. Louchard and G. Latouche, Academic
Press, London, 1983.
R. Sedgewick [1988]. Algorithms. Addison-Wesley, Reading, second edition 1988.
D. L. Shell [1959]. “A High-Speed Sorting Procedure,” Communications of the ACM
2(7), July 1959, 30–32.
J.-M. Steyaert and Ph. Flajolet [1983]. “Patterns and Pattern-Matching in Trees:
an Analysis,” Information and Control 58, 1983, 19–58.
R. E. Tarjan and J. van Leeuwen [1984]. “Worst-Case Analysis of Set Union Algorithms,” Journal of the ACM 31(2), April 1984, 245–281.
E. C. Titchmarsh [1939]. The Theory of Functions, Oxford University Press, London,
second edition 1939.
C. J. Van Wyk and J. S. Vitter [1986]. “The Complexity of Hashing with Lazy
Deletion,” Algorithmica 1(1), March 1986, 17–29.
J. S. Vitter [1983]. “Analysis of the Search Performance of Coalesced Hashing,” Journal
of the ACM 30(2), April 1983, 231–258.
J. S. Vitter and W.-C. Chen [1987]. Design and Analysis of Coalesced Hashing. Oxford
University Press, New York, 1987.
J. S. Vitter and E. A. M. Shriver [1990]. “Optimal I/O with Parallel Block Transfer,”
Proceedings of the 22nd Annual Symposium on Theory of Computing, Baltimore, MD,
May 1990.
J. Vuillemin [1980]. “A Unifying Look at Data Structures,” Communications of the ACM
23(4), April 1980, 229–239.
M. A. Weiss and R. Sedgewick [1988]. “Tight Lower Bounds for Shellsort,” Technical
Report CS–TR–137–88, Princeton University, February 1988.
A. C.-C. Yao [1976]. “On the Average Behavior of Set Merging Algorithms,” Proceedings
of the 8th Annual ACM Symposium on Theory of Computing, 1976, 192–195.
A. C.-C. Yao [1978]. “On Random 2-3 Trees,” Acta Informatica 9(2), 1978, 159–170.
A. C.-C. Yao [1980]. “An Analysis of (h, k, 1) Shellsort,” Journal of Algorithms 1(1),
1980, 14–50.
A. C.-C. Yao [1988]. “On Straight Selection Sort,” Technical Report CS–TR–185–88,
Princeton University, October 1988.
A. C.-C. Yao and F. F. Yao [1976]. “The Complexity of Searching an Ordered Random
Table,” Proceedings of the 17th Annual Symposium on Foundations of Computer Science,
Houston, October 1976, 173–177.
F. F. Yao [1989]. “Computational Geometry,” in this handbook.
Index and Glossary
Each index term is followed by a list of the sections relevant to that term. If a large
number of sections are relevant, only one or two sections are included, typically where
that term is first defined or used.
Admissible constructions, 1, 4, 5.1.
Analysis of algorithms, 0.
   asymptotics, 2.
   average case, 0.
   dynamic, 6.
   enumerations, 0, 1.
   hashing, 5, 6.2.
   sorting, 3, 4.2, 4.3, 5.1.
   searching, 4, 5, 6.
   trees, 1, 2, 3.7, 4, 6.1, 6.3.
Analytic function, 2.
Asymptotic analysis, 2.
Average-case complexity, 0.
Binary search trees, 3.7, 4.2, 6.1, 6.3.
Binary trees, 1, 4.1.
Bubble sort, 3.4.
Bucket algorithms, 5.1, 6.2.
Characteristic functions, 2.5.
Coalesced hashing, 5.1.
Combinatorial enumerations, 0, 1.
Complex analysis, 2.
Complexity measure, 0.
Continued fractions, 6.1.
Counting, 1.
Darboux’s method, 2.2.
Data model, 0.
Data structures, 0.
Digital search, 4.3, 4.4, 5.1.
Direct chaining, 5.1.
Distribution sort, 5.1.
Double hashing, 5.2.
Dynamic data model, 6.
Enumerations, 0, 1.
Euler-Maclaurin summation formula, 2.1.
Extendible hashing, 5.1.
Fourier transforms, 2.5.
Functional equations, 0, 1.4, 2.2.
Generating function (GF), 0, 1.
   exponential generating function (EGF), 1.3.
   ordinary generating function (OGF), 1.2.
Hash function, 5.
Hashing, 5, 6.2.
   coalesced hashing, 5.1.
   extendible hashing, 5.1.
   hashing with lazy deletion, 6.2.
   direct chaining, 5.1.
   double hashing, 5.2.
   linear probing, 5.2.
   maximum bucket occupancy, 5.1, 6.2.
   open addressing, 5.2.
   separate chaining, 5.1.
Heaps, 3.7, 4.2, 6.1.
Heapsort, 3.7.
History model, 6.1, 6.2.
Insertion sort, 3.2.
Integrated cost, 6.1.
Interpolation search, 5.1.
Inversions, 3.
k-d trees, 4.2.
Labeled combinatorial structures, 1.3.
Lagrange-Bürmann inversion, 1.4, 4.1, 5.2.
Laplace's method, 2.1.
Laplace transforms, 2.5.
Limit distribution, 2.5, 4.2, 4.3, 5.1.
Linear probing, 5.2.
Load factor, 5.1, 5.2.
Maximum bucket occupancy, 5.1, 6.2.
Maximum size of data structures, 5.1, 6.2.
Mellin transforms, 2.4, 3.4, 4.1, 4.3, 5.1.
Merge sort, 3.3, 3.8.
Meromorphic functions, 2.2, 2.4.
Moment generating functions, 2.5.
Multidimensional search, 4.2.
Networks, 3.3, 3.8.
Occupancy distribution, 5.1, 6.2.
Open addressing, 5.2.
Path length, 4.
Patricia tries, 4.3.
Pattern matching, 4.1.
Permutations, 1.3, 3.
Priority queues, 3.7, 4.2, 6.1, 6.2.
Probabilistic analysis, 6.3.
Quad trees, 4.2.
Quicksort, 3.5, 4.2.
Radix-exchange sort, 3.6, 4.3.
Register allocation, 4.1.
Saddle point method, 2.3, 2.5, 5.1.
Searching, 4, 5, 6.
Search trees, 3.7, 4.2, 6.1, 6.3.
Selection sort, 3.7.
Separate chaining, 5.1.
Set union-find, 6.3.
Shellsort, 3.3.
Singularity, 2.2.
Sorting, 3, 4.2, 4.3, 5.1.
   bubble sort, 3.4.
   distribution sort, 5.1.
   heapsort, 3.7.
   insertion sort, 3.2.
   merge sort, 3.3, 3.8.
   networks, 3.3, 3.8.
   quicksort, 3.5, 4.2.
   radix-exchange sort, 3.6, 4.3.
   selection sort, 3.7.
   Shellsort, 3.3.
Stirling numbers, 1.3, 1.4.
Stirling's formula, 2.1, 2.3.
Symbolic differentiation, 4.1.
Tauberian theorems, 2.2.
Taylor expansion, 1.1, 1.4.
Transfer lemma, 2.2, 4.1, 4.2.
Trees, 1, 2, 3.7, 4, 6.1, 6.3.
Tree compaction, 4.1.
Tries, 4.3, 5.1.
Urn model, 1.3, 5.1.
Valuation function (VF), 4.