JOURNAL OF ALGORITHMS 1, 187-208 (1980)

Average Running Time of the Fast Fourier Transform

PERSI DIACONIS

Bell Laboratories, Murray Hill, New Jersey; and Stanford University, Stanford, California

Received May 9, 1979; and in revised form October 29, 1979
We compare several algorithms for computing the discrete Fourier transform of n numbers. The number of "operations" of the original Cooley-Tukey algorithm is approximately 2n A(n), where A(n) is the sum of the prime divisors of n. We show that the average number of operations satisfies (1/x) Σ_{n≤x} 2n A(n) ~ (π²/9)(x²/log x). The average is not a good indication of the number of operations. For example, it is shown that for about half of the integers n less than x, the number of "operations" is less than n^{1.61}. A similar analysis is given for Good's algorithm and for two algorithms that compute the discrete Fourier transform in O(n log n) operations: the chirp-z transform and the mixed-radix algorithm that computes the transform of a series of prime length p in O(p log p) operations.
1. INTRODUCTION
The main results of this paper give approximations to the running time of several algorithms for computation of the discrete Fourier transform (DFT) of n numbers. In Section 2 we discuss the need for exact computation of the DFT versus "padding." We also describe the available algorithms for computing the DFT. Direct computation of the DFT is shown to involve approximately 2n² operations (multiplications and additions). If an algorithm is to be used for many different values of n, the average running time is of interest. For direct computation, the average is

  (1/x) Σ_{n≤x} 2n² ~ (2/3) x².
Several variants of the fast Fourier transform (FFT) involve approximately 2n A(n) operations. Here A(n) is the sum of the prime divisors of n counted with multiplicity (so A(12) = 2 + 2 + 3 = 7). In Section 3 we show that the average number of operations satisfies

  (1/x) Σ_{n≤x} 2n A(n) ~ (π²/9)(x²/log x).
Thus, on the average, these versions of the FFT do not seem to speed things up very much. We will argue that the average is a bad indication of the size of n A(n). Theorem 3 shows that the proportion of integers n less than x such that n A(n) is smaller than n^{1+y} tends to a limit L(y):

  (1/x) |{n ≤ x : n A(n) ≤ n^{1+y}}| → L(y).

The distribution function L(y) is supported on 0 ≤ y ≤ 1 and, for example, L(0.61) = 0.5. Thus, approximately half of the integers less than or equal to x have n A(n) ≤ n^{1.61}. The results in Section 3 show that, up to lower-order terms, Good's version of the FFT has the same average case behavior as n A(n).
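As a quick illustration, A(n) can be computed by trial division and the claim L(0.61) ≈ 0.5 can be checked empirically; the following minimal Python sketch does this (the cutoff x = 10^5 is an arbitrary choice, and convergence to the limit is slow):

```python
def A(n):
    """A(n) = sum of the prime divisors of n, counted with multiplicity; A(12) = 2+2+3 = 7."""
    total, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            total += d
            n //= d
        d += 1
    return total + (n if n > 1 else 0)

# proportion of n <= x with n*A(n) <= n**1.61, equivalently A(n) <= n**0.61
x = 100_000
count = sum(1 for n in range(2, x + 1) if A(n) <= n ** 0.61)
print(count / x)   # tends to L(0.61) = 0.5 as x grows; convergence is slow
```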
Section 4 analyzes two algorithms for computing the DFT in O(n log n) operations. These are the chirp-z algorithm and the mixed-radix algorithm which uses the chirp-z (or number theoretic) transform for series of prime length. Neither approach dominates. Both algorithms have average running time proportional to x log₂ x. For the chirp-z approach, the "constant" of proportionality is a bounded oscillating function of x which oscillates around the constant of proportionality of the mixed-radix approach. For individual n, the better algorithm can speed things up by a factor of 1½ to 2.
Some Notation

Throughout this paper, p is a prime. Σ_{p|n} denotes a sum over the distinct prime divisors of n, and Σ_{pᵃ‖n} denotes a sum over the prime powers pᵃ exactly dividing n (so that Σ_{pᵃ‖n} a·p is a sum over the prime divisors of n counted with multiplicity). The O, o notation will be used, with, for example, O_k meaning that the implied constant depends on k; f(x) ~ g(x) means f(x)/g(x) → 1. We write ⌊x⌋ for the largest integer less than or equal to x, ⌈x⌉ for the smallest integer greater than or equal to x, and {x} for the fractional part of x. The number of elements in a finite set S is denoted |S|.
2. THE FAST FOURIER TRANSFORM
The discrete Fourier transform of n real numbers x_0, x_1, ..., x_{n-1} is the sequence

  φ(k) = Σ_{j=0}^{n-1} x_j ω_n^{jk},   k = 0, 1, 2, ..., n - 1;   ω_n = e^{2πi/n}.   (2.1)
The usual assumption is that the numbers ω_n^{jk} are stored (or available for free). Then, for each k, direct computation of φ(k) involves n multiplications and n additions to good approximation. Computing φ(k) for k = 0, 1, 2, ..., n - 1 involves approximately n² multiplications and n² additions. We will say that approximately 2n² operations are involved for direct computation.
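A direct transcription of (2.1) makes the 2n² count concrete; the short Python sketch below (an illustration, with the twiddle factors ω_n^{jk} precomputed as the text assumes) does n multiplications and n additions per output value:

```python
import cmath

def dft_direct(x):
    """phi(k) = sum_j x_j * w^(j*k), w = exp(2*pi*i/n): n multiplies and n adds per k."""
    n = len(x)
    w = [cmath.exp(2j * cmath.pi * m / n) for m in range(n)]   # stored twiddle factors
    return [sum(x[j] * w[(j * k) % n] for j in range(n)) for k in range(n)]
```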
The FFT is a collection of algorithms for computing the DFT. The basic papers on the FFT are collected together in [14]. A discussion from a modern algorithmic point of view with applications and references is in [1, Chap. 7].
It is useful to divide the ideas behind FFT algorithms into two types.
Type 1 concerns methods of “pasting together” transforms of shorter
series. Type 2 concerns methods of transforming the sum in (2.1) into a
convolution.
First consider the Type 1 ideas. When n is composite, some of the products x_j ω_n^{jk} are calculated many times. Suppose n = pq. The Cooley-Tukey and Sande-Tukey algorithms allow computation of the DFT via computation of p transforms of length q and q transforms of length p. If the shorter transforms are computed directly, this leads to 2pq² + 2qp² = 2n(p + q) operations approximately. In general, when n = Π_{i=1}^{r} p_i^{a_i}, the number of operations is

  2n Σ_{pᵃ‖n} a·p = 2n A(n).   (2.2)
We will see later that it is possible to compute the shorter transforms in O(p log p) operations instead of the O(p²) operations that direct computation entails. Direct computation is suggested by many writers and implemented in published algorithms such as that of Singleton [18].
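The splitting behind (2.2) is easy to exhibit. The following Python sketch (names such as dft_cooley_tukey are chosen here for illustration) uses the index mapping j = qa + b, k = k₁ + pk₂ and falls back to direct computation at prime lengths, so it performs roughly 2n·A(n) multiply-adds:

```python
import cmath

def dft_direct(x):
    n = len(x)
    w = [cmath.exp(2j * cmath.pi * m / n) for m in range(n)]
    return [sum(x[j] * w[(j * k) % n] for j in range(n)) for k in range(n)]

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def dft_cooley_tukey(x):
    """Split n = p*q (p the smallest prime factor): q direct transforms of length p,
    a twiddle by w_n^(b*k1), then p transforms of length q, done recursively.
    Prime lengths are computed directly, giving roughly 2n*A(n) operations in all."""
    n = len(x)
    p = smallest_prime_factor(n)
    if p == n:                                    # prime length: direct computation
        return dft_direct(x)
    q = n // p
    inner = [dft_direct([x[q * a + b] for a in range(p)]) for b in range(q)]
    out = [0j] * n
    for k1 in range(p):
        twiddled = [inner[b][k1] * cmath.exp(2j * cmath.pi * b * k1 / n) for b in range(q)]
        col = dft_cooley_tukey(twiddled)          # length-q transform over b
        for k2 in range(q):
            out[k1 + p * k2] = col[k2]
    return out

xs = [float(j) for j in range(12)]
assert max(abs(a - b) for a, b in zip(dft_cooley_tukey(xs), dft_direct(xs))) < 1e-9
```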
Another way of linking together shorter transforms, suggested by Good [20], also falls under the Type 1 ideas. Good's algorithm requires that the lengths of the shorter series be relatively prime. For n = Π_{i=1}^{r} p_i^{a_i}, the algorithm computes the DFT of series of length p_i^{a_i}. If the transforms of length p_i^{a_i} are computed directly using 2p_i^{2a_i} operations, then the number of operations is approximately

  2n Σ_{i=1}^{r} p_i^{a_i} = 2n G(n).   (2.3)

The equality in (2.3) defines the function G(n).
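A small sketch of G(n), next to the A(n) of the earlier sketch, makes the difference between the two operation counts visible (the function name is illustrative):

```python
def G(n):
    """G(n) = sum of the prime-power factors p_i**a_i of n, as in (2.3)."""
    total, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            q = 1
            while n % d == 0:
                q *= d
                n //= d
            total += q
        d += 1
    return total + (n if n > 1 else 0)

# A(12) = 2+2+3 = 7 and G(12) = 4+3 = 7, but A(8) = 6 while G(8) = 8:
# the counts 2n*A(n) and 2n*G(n) agree for squarefree n and differ only
# through repeated prime factors.
```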
Expressions (2.2) and (2.3) may be regarded as the dominant term of the result of a more careful count of operations as presented by Rose [16]. When n is a power of 2, n = 2^k, A(n) = 2k, and from (2.2) the number of operations is 4n log₂ n. This is the oft-quoted result "the FFT allows computation of the DFT in O(n log n) operations." As we have seen, this statement holds only when n is a power of 2.
When n is not a power of 2, the technique of padding a series by zeros to the next highest power of 2 can be used. If m is the smallest power of 2 larger than n, the DFT of the sequence of length m, (x_0, x_1, x_2, ..., x_{n-1}, 0, 0, ..., 0), is computed. This yields φ̂(k) = Σ_{j=0}^{m-1} x_j ω_m^{jk}, k = 0, 1, 2, ..., m - 1, instead of φ(k) as defined by (2.1). In many applications, φ̂ can be used as effectively as φ.
The difference between φ and φ̂ is sometimes important. One example occurs in spectral analysis, where one looks at φ(k) in the hope of detecting periodic oscillations in the sequence x_j. If the period divides n, the difference between φ and φ̂ can matter. To give a simple example, let a be a positive integer and for 0 ≤ j < n, define x_j by

  x_j = 1   if j is a multiple of a,
      = 0   otherwise.

Then φ(k) = Σ_{j=0}^{⌈n/a⌉-1} ω_n^{ajk}. If a divides n, then an easy computation shows that

  φ(k) = n/a   if k is a multiple of n/a,
       = 0     otherwise.

Thus, φ(k) clearly identifies a series of period a. This clear identification is destroyed if n is not a multiple of a, for then

  φ(k) = (ω_n^{ak⌈n/a⌉} - 1)/(ω_n^{ak} - 1)

and φ(k) is never 0. In the language of spectral analysis, there is leakage at all frequencies. Such leakage can cause problems, and n is often chosen as a multiple of a period of interest instead of a convenient power of 2. Further discussion of the need for exact computation of the DFT can be found in [6, 18].
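The effect is easy to see numerically. The sketch below (assuming numpy is available) evaluates φ for the indicator sequence of multiples of a = 3, once with n = 12, where the period divides n, and once with n = 13, where it does not; the lengths are arbitrary choices for illustration:

```python
import numpy as np

def phi(x):
    n = len(x)
    j = np.arange(n)
    return np.array([np.sum(x * np.exp(2j * np.pi * j * k / n)) for k in range(n)])

a = 3
for n in (12, 13):
    x = np.array([1.0 if j % a == 0 else 0.0 for j in range(n)])
    print(n, np.round(np.abs(phi(x)), 3))
# n = 12: spikes of height n/a = 4 at k = 0, 4, 8 and zeros elsewhere
# n = 13: |phi(k)| is nonzero at every k, "leakage at all frequencies"
```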
We next turn to Type 2 ideas. These involve transforming the summation index in the sum (2.1) for φ(k) to convert the sum into a convolution. Convolutions for a series of any length n can be computed exactly by using the FFT on an appropriately extended series.
For example, the chirp-z approach discussed by Rabiner et al. [13] and Aho et al. [2] makes the change of variables jk = (j² + k² - (j - k)²)/2. Then

  φ(k) = ω_n^{k²/2} Σ_{j=0}^{n-1} (x_j ω_n^{j²/2}) ω_n^{-(j-k)²/2}.   (2.4)

This is a convolution of the sequence (x_j ω_n^{j²/2}) with the sequence (ω_n^{-j²/2}), premultiplied by ω_n^{k²/2}. The idea does not depend on n being prime.

When n is prime, Rader [15] proposed using a primitive root g (mod n) to do the transformation. By definition, g is an integer such that g^k ≢ 1 (mod n), k = 1, 2, ..., n - 2, and g^{n-1} ≡ 1 (mod n). Make the transformation j = g^a, k = g^b (mod n). Then

  φ(k) = φ(g^b) = x_0 + Σ_{a=1}^{n-1} x_{g^a} ω_n^{g^{a+b}}.   (2.5)

The sum is a convolution of the sequence (x_{g^a}) with the sequence (ω_n^{g^a}). It is worth pointing out that an integer n can be factored in less than O(n^{1/2}) operations and that primitive roots can be found in less than O(n^{1/2}) time. Lehmer [12] gives the number theoretic details.
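A compact way to see (2.4) in action is the usual zero-padded circular-convolution arrangement (Bluestein's); the following sketch, written against numpy's FFT with the paper's sign convention e^{2πi/n}, checks itself against direct computation for a prime length:

```python
import numpy as np

def dft_chirpz(x):
    """phi(k) = sum_j x_j exp(2*pi*i*j*k/n), evaluated via the chirp-z identity (2.4)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    j = np.arange(n)
    w_half = np.exp(1j * np.pi * j * j / n)        # w^(j^2/2), w = exp(2*pi*i/n)
    a = x * w_half
    M = 1
    while M < 2 * n - 1:                            # pad so the circular convolution is exact
        M *= 2
    chirp = np.exp(-1j * np.pi * j * j / n)         # w^(-j^2/2)
    b = np.zeros(M, dtype=complex)
    b[:n] = chirp
    b[M - n + 1:] = chirp[1:][::-1]                 # b[M-m] = w^(-m^2/2) for m = 1..n-1
    conv = np.fft.ifft(np.fft.fft(np.pad(a, (0, M - n))) * np.fft.fft(b))[:n]
    return w_half * conv                            # premultiply by w^(k^2/2)

rng = np.random.default_rng(0)
x = rng.standard_normal(11)                         # n = 11, a prime
direct = np.array([np.sum(x * np.exp(2j * np.pi * np.arange(11) * k / 11)) for k in range(11)])
assert np.allclose(dft_chirpz(x), direct)
```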
After transforming to a convolution, the FFT can be used to compute the convolution on an extended series. The extended series must be of length approximately the smallest power of 2 larger than 2n. For definiteness, we will use the algorithm given by Rader [15]. This requires the use of the FFT on a series of length the smallest power of 2 larger than or equal to 2n - 4. Three FFTs are required to perform the convolutions. To get a simple approximation to the number of operations of these algorithms we will neglect lower-order terms.

(2.6) Define T(n) to be the smallest power of 2 larger than or equal to 2n - 4. Let

  C(n) = 3 T(n) log₂ T(n)   if n ≠ 2^k,
       = n log₂ n           if n = 2^k.
(2.7) ASSUMPTION. The number of operations of the chirp-z transform (or the approach using (2.5)) is well approximated by βC(n) for some fixed β > 0. A reasonable further assumption is to take β = 2 (for additions and multiplications).
Since C(n) is O(n log n), when Assumption (2.7) is valid, the DFT of n numbers can be computed in O(n log n) operations, even when n is prime.

The availability of methods for computing the DFT in O(n log n) operations immediately suggests a question. Which is more efficient: direct use of the chirp-z idea on a series of length n, or splitting a series of length n into pieces of prime length, computation of the shorter transforms by a fast algorithm, and pasting the pieces back together using a mixed-radix algorithm? A detailed analysis of this problem is given in Section 4. To compare the approaches we make a simple approximation to the number of operations required by the mixed-radix approach. The approximation is based on (2.7) and (2.2).
(2.8) ASSUMPTION. The mixed-radix approach which uses a fast algorithm to compute series of prime length uses βF(n) = βn Σ_{pᵃ‖n} a·(C(p)/p) operations, with β as in (2.7).
Some numerical examples of C(n) and F(n) are given in Section 4. Note that in making comparisons of the relative size of βC(n) and βF(n), β cancels out. We also note that none of the asymptotic comparisons depend on the use of 2n - 4 in (2.6). Using 2n - c for any fixed c leads to essentially the same results.
3. AVERAGE RUNNING TIME OF MIXED-RADIX ALGORITHMS THAT COMPUTE THE FFT OF PRIMES IN O(p²) OPERATIONS
In this section, approximations for the mean, variance, and distribution
of the number of operations of some mixed-radix algorithms are derived.
As explained in Section 2, 2n A(n) and 2n G(n) are reasonable approximations to the number of operations used by the Cooley-Tukey
and Good
algorithms, respectively.
Theorems 1 and 2 provide approximations to the first and second moments of n A(n) and n G(n). Here, if H(n) is any function, the first and second moments of H(n) are

  μ_H(x) = (1/x) Σ_{n≤x} H(n)   and   s_H(x) = (1/x) Σ_{n≤x} H(n)².

The variance of H(n) can be computed from the first and second moments via

  σ²_H(x) = (1/x) Σ_{n≤x} (H(n) - μ_H(x))² = s_H(x) - (μ_H(x))².

In Theorems 1 and 2 we write ζ(s) for Riemann's zeta function.
THEOREM 1. Let n A(n) be defined by (2.2) and n G(n) by (2.3). As x tends to infinity, there is a c > 0 such that the first moment of n A(n) or n G(n) equals

  ∫_{1/2}^{1} ζ(s+1) (x^{s+1}/(s+2)) ds + O(x² exp(-c (log x)^{1/2})).   (3.1)

For any fixed k ≥ 1,

  ∫_{1/2}^{1} ζ(s+1) (x^{s+1}/(s+2)) ds = x² Σ_{j=1}^{k} a_j/(log x)^j + O_k(x²/(log x)^{k+1}),   (3.2)

with a_1 = ζ(2)/3 = π²/18; the remaining coefficients a_j are obtained by repeated integration by parts.
THEOREM 2. As x tends to infinity, there is a c > 0 such that the second moment of n A(n) or n G(n) equals

  ∫_{1/2}^{1} ζ(s+2) (x^{s+3}/(s+4)) ds + O(x⁴ exp(-c (log x)^{1/2})).   (3.3)

For any fixed k ≥ 1,

  ∫_{1/2}^{1} ζ(s+2) (x^{s+3}/(s+4)) ds = x⁴ Σ_{j=1}^{k} b_j/(log x)^j + O_k(x⁴/(log x)^{k+1}),   (3.4)

with b_1 = ζ(3)/5; the remaining coefficients b_j are obtained by repeated integration by parts.
The first moment and variance are not good indicators of the sizes of n A(n) and n G(n), for the variance is close to the square of the first moment. This suggests that these functions have fluctuations about their mean which are of the same size as their mean.

The next theorem gives the limiting distribution of n A(n) and n G(n). The results show that the proportion of numbers such that n A(n) (or n G(n)) is smaller than n^{1+z} tends to a limit.

THEOREM 3. As x → ∞, for any fixed z, 0 ≤ z ≤ 1,

  (1/x) |{n ≤ x : n A(n) ≤ n^{1+z}}| → L(z),   (3.5)

  (1/x) |{n ≤ x : n G(n) ≤ n^{1+z}}| → L(z),   (3.6)

where L(z) is the distribution function of an absolutely continuous measure on [0, 1] with density L'(z). The density satisfies L'(z) = (1/z) ρ(1/z - 1), where ρ(y) = 1 for 0 < y ≤ 1 and ρ(y) satisfies the differential-difference equation y ρ'(y) = -ρ(y - 1).
Remarks. A few values of L(z) are

  z     0.33  0.47  0.61  0.78  0.95
  L(z)  0.05  0.25  0.50  0.75  0.95

The density L'(y) is drawn in Fig. 1.
FIG. 1. Graph of L'(y).
The function ρ(y) was introduced by Dickman [10] in connection with the largest prime divisor of n. It is thoroughly discussed by de Bruijn [8, 9], Billingsley [7], and Knuth and Trabb Pardo [11]. Bellman and Kotkin [5] and Van de Lune and Wattel [19] give tables of ρ(y) which were used to compute L(z) and L'(z) as given above.
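The values quoted in the Remarks can be reproduced directly from the defining equations. The minimal Python sketch below tabulates ρ from the differential-difference equation and integrates the density L'(z) numerically; the step sizes and the truncation of ρ at large arguments are arbitrary choices:

```python
def dickman_rho(Y=10.0, h=1e-3):
    """Tabulate Dickman's rho: rho(y) = 1 on [0,1], y*rho'(y) = -rho(y-1)."""
    m = int(round(1.0 / h))                   # steps per unit length
    rho = [1.0] * (int(round(Y / h)) + 1)
    for i in range(m + 1, len(rho)):
        y = i * h
        # rho(y) = rho(y-h) - integral_{y-h}^{y} rho(t-1)/t dt (trapezoid rule)
        rho[i] = rho[i - 1] - 0.5 * h * (rho[i - 1 - m] / (y - h) + rho[i - m] / y)
    return rho, h

def L(z, rho, h):
    """L(z) = integral_0^z rho(1/t - 1)/t dt, the limit law in Theorem 3."""
    steps, total = 20_000, 0.0
    dt = z / steps
    for k in range(steps):
        t = (k + 0.5) * dt
        i = int(round((1.0 / t - 1.0) / h))
        total += (rho[i] if i < len(rho) else 0.0) / t * dt   # rho is ~0 past the table
    return total

rho, h = dickman_rho()
print([round(L(z, rho, h), 2) for z in (0.33, 0.47, 0.61, 0.78, 0.95)])
# compare with the tabulated values of L(z) above
```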
Theorems 1, 2, and 3 are closely related to two other number theoretic functions which we now define. Let n = Π_{i=1}^{r} p_i^{a_i} be the prime decomposition of n. Suppose p_1 < p_2 < ... < p_r.

(3.7) Define A*(n) = Σ_{i=1}^{r} p_i. We also write A*(n) = Σ_{p|n} p.

(3.8) Define P_k(n) to be the kth largest prime divisor of n. Thus, P_1(n) = p_r, P_2(n) = P_1(n/P_1(n)), P_3(n) = P_1(n/[P_1(n)·P_2(n)]), ..., with the convention that P_i(n) = 1 if n has fewer than i prime divisors.

The functions A(n) and A*(n) have been discussed by Alladi and Erdős [3]. They prove the following theorem, which will be needed in the proof of Theorem 3.

THEOREM (Alladi and Erdős). For all m ≥ 1, as x → ∞,

  Σ_{n≤x} P_m(n) ~ k_m x^{1+1/m}/log x,

where k_m is a rational multiple of ζ(1 + (1/m)).
When m = 1, this gives Σ_{n≤x} P_1(n) ~ k_1 (x²/log x). Alladi and Erdős show that k_1 = π²/12. They also show that

  Σ_{n≤x} [A(n) - A*(n)] = x log log x + O(x).   (3.9)
Saffari [17, p. 213] has given an asymptotic expansion for the mean of A*(n). The techniques of this paper yield somewhat more precise results:

THEOREM 4. As x tends to infinity, there is a c > 0 such that the first moment of A(n), A*(n), G(n), or P_1(n) equals

  ∫_{1/2}^{1} ζ(s+1) (x^s/(s+1)) ds + O(x exp(-c (log x)^{1/2})).   (3.10)

For any fixed k ≥ 1,

  ∫_{1/2}^{1} ζ(s+1) (x^s/(s+1)) ds = x Σ_{j=1}^{k} c_j/(log x)^j + O_k(x/(log x)^{k+1}),   (3.11)

with c_1 = ζ(2)/2 = π²/12; the remaining coefficients c_j are obtained by repeated integration by parts.
THEOREM 5. As x tends to infinity, there is a c > 0 such that the second moment of A(n), A*(n), G(n), or P_1(n) equals

  ∫_{1/2}^{1} ζ(s+2) (x^{s+1}/(s+2)) ds + O(x² exp(-c (log x)^{1/2})).   (3.12)

For any fixed k ≥ 1,

  ∫_{1/2}^{1} ζ(s+2) (x^{s+1}/(s+2)) ds = x² Σ_{j=1}^{k} d_j/(log x)^j + O_k(x²/(log x)^{k+1}),   (3.13)

with d_1 = ζ(3)/3; the remaining coefficients d_j are obtained by repeated integration by parts.
Remarks. Theorems 1, 2, 4, and 5 are all proved by using slight modifications of the proof of the prime number theorem. By using the usual modifications of this proof, it is possible to improve the error terms. Using the Riemann hypothesis, the error terms can be further improved. For example, I believe that, on the Riemann hypothesis, the error term in (3.10) becomes O(x^{1/2}(log x)²). The results are given with the error involving (log x)^{1/2} to allow the proof to rely on the proof of the prime number theorem given by Ayoub [4] without modification.

As with n A(n) and n G(n), the mean and variance are not good indicators of the sizes of A(n), A*(n), G(n), and P_1(n). Asymptotically, these functions all have the same distribution. The next result implies Theorem 3.
THEOREM 6. Let H(n) be any one of the functions A(n), A*(n), G(n), or P_1(n). As x tends to infinity, for any fixed y, 0 ≤ y ≤ 1,

  (1/x) |{n ≤ x : H(n) ≤ n^y}| → L(y),

where L(y) was defined in Theorem 3.
Proof of Theorems 1,2,4, and 5. The approach used here is the classical
technique using Dirichlet series. The following identities are needed:
LEMMA 7. Let A(n), G(n), A*(n), and P_1(n) be as defined in (2.2), (2.3), (3.7), and (3.8), respectively. Then

  Σ_{n=1}^{∞} A*(n)/n^s = ζ(s) Σ_p p^{1-s},   re s > 2,   (3.14)

  Σ_{n=1}^{∞} (A*(n))²/n^s = ζ(s) [ Σ_p p^{2-s}(1 - p^{-s}) + (Σ_p p^{1-s})² ],   re s > 3,   (3.15)

  Σ_{n=1}^{∞} A(n)/n^s = ζ(s) Σ_p p/(p^s - 1),   re s > 2,   (3.16)

  Σ_{n=1}^{∞} A(n)²/n^s = ζ(s) [ Σ_p p^{s+2}/(p^s - 1)² + (Σ_p p/(p^s - 1))² ],   re s > 3,   (3.17)

  Σ_{n=1}^{∞} G(n)/n^s = ζ(s) Σ_p (1 - p^{-s}) p^{1-s} (1 - p^{1-s})^{-1},   re s > 2,   (3.18)

  Σ_{n=1}^{∞} G(n)²/n^s = ζ(s) { Σ_p [ (1 - p^{-s}) p^{2-s} (1 - p^{2-s})^{-1} - ((1 - p^{-s}) p^{1-s} (1 - p^{1-s})^{-1})² ] + (Σ_p (1 - p^{-s}) p^{1-s} (1 - p^{1-s})^{-1})² },   re s > 3,   (3.19)

  Σ_{n=2}^{∞} P_1(n)/n^s = ζ(s) Σ_p p^{1-s} Π_{q>p} (1 - q^{-s}),   re s > 2,   (3.20)

  Σ_{n=2}^{∞} P_1(n)²/n^s = ζ(s) Σ_p p^{2-s} Π_{q>p} (1 - q^{-s}),   re s > 3.   (3.21)
Lemma 7 can be proved by a number of arguments. The following approach seems to be useful somewhat generally.
Proof of Lemma 7. For fixed real s > 1, define a probability measure P_s on the space Ω = {1, 2, 3, ...} by P_s(j) = (1/ζ(s))(1/j^s), where ζ(s) = Σ_{j=1}^{∞} 1/j^s. For each prime number p, let X_p : Ω → Ω ∪ {0} be defined by X_p(n) = p if p | n, X_p(n) = 0 otherwise. Thus P_s(X_p = p) = 1/p^s, P_s(X_p = 0) = 1 - 1/p^s. The random variables X_p are easily seen to be independent, with E(X_p) = 1/p^{s-1} and var(X_p) = (1/p^{s-2})(1 - 1/p^s). Let A* = Σ_p X_p.

For s > 1, the Borel-Cantelli lemma implies that A* is almost surely finite. A* has finite mean if and only if s > 2. A* has finite variance if and only if s > 3. For s > 2,

  (1/ζ(s)) Σ_{n=1}^{∞} A*(n)/n^s = E(A*) = Σ_p E(X_p) = Σ_p 1/p^{s-1}.

This implies (3.14). To prove (3.15) note that for s > 3,

  (1/ζ(s)) Σ_{n=1}^{∞} (A*(n))²/n^s - (E(A*))² = var(A*) = Σ_p var(X_p) = Σ_p (1/p^{s-2})(1 - 1/p^s).

To prove (3.16) and (3.17) consider the random variables Y_p : Ω → Ω ∪ {0} where, if n = Π_p p^{a_p(n)}, Y_p(n) = a_p(n) p. Thus for a = 0, 1, 2, ..., P_s(Y_p = ap) = (1 - 1/p^s)(1/p^{as}). It is straightforward to check that E(Y_p) = p/(p^s - 1) and var(Y_p) = p^{s+2}/(p^s - 1)². To prove (3.18) and (3.19), consider the random variables Z_p : Ω → Ω ∪ {0} where, if n = Π_p p^{a_p(n)}, Z_p(n) = p^{a_p(n)} when a_p(n) ≥ 1 and Z_p(n) = 0 otherwise. Thus, P_s(Z_p = 0) = 1 - 1/p^s and, for a = 1, 2, ..., P_s(Z_p = p^a) = (1 - 1/p^s)(1/p^{as}). Again it is easy to check that

  E(Z_p) = (1 - 1/p^s)(1/p^{s-1})(1 - 1/p^{s-1})^{-1}   and   E(Z_p²) = (1 - 1/p^s)(1/p^{s-2})(1 - 1/p^{s-2})^{-1}.

The argument used to prove (3.14) and (3.15) now leads to a proof of (3.18) and (3.19). Finally, to prove (3.20) and (3.21), use the fact that for any prime p,

  Π_{q≤p} (1 - q^{-s})^{-1} = ζ(s) Π_{q>p} (1 - q^{-s}),

the product on the right being over all primes q larger than p. It follows that

  Σ_{n : P_1(n) = p} 1/n^s = (1/p^s) Π_{q≤p} (1 - q^{-s})^{-1} = ζ(s) (1/p^s) Π_{q>p} (1 - q^{-s}).

Since also Σ_n P_1(n)/n^s = Σ_p p Σ_{n : P_1(n) = p} 1/n^s, and similarly with P_1(n)² and p², (3.20) and (3.21) follow.

The arguments prove the identities for all large real s and thus, by analytic continuation, for all s such that the right sides are analytic. The validity for the half planes given follows easily from the known behavior of the function Σ_p 1/p^s (see, for example, [4, Chap. 2, Sect. 4, (16)]). □
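Identities like (3.16) are easy to sanity-check numerically by truncating both sides; the sketch below does this at s = 3, where the tails are negligible (the truncation point N is an arbitrary choice):

```python
from math import isqrt

def A(n):
    total, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            total += d
            n //= d
        d += 1
    return total + (n if n > 1 else 0)

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(N) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, N + 1) if sieve[p]]

s, N = 3.0, 50_000
lhs = sum(A(n) / n**s for n in range(2, N + 1))
rhs = sum(1.0 / n**s for n in range(1, N + 1)) * sum(p / (p**s - 1) for p in primes_up_to(N))
print(lhs, rhs)   # the two sides of (3.16) agree up to the truncation error (~1/N)
```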
Proof of Theorem 1. The argument used here will follow Landau's proof of the prime number theorem as presented by Ayoub [4, Chap. 2]. Since we make constant use of Ayoub's arguments, the reader is advised to follow the present proof with a copy of Ayoub's book in hand.
First consider the function n A(n). The identity (3.16) together with Theorem 3.1 of Ayoub [4] for expressing the sum of the coefficients of a Dirichlet series yields, for nonintegral x and any α > 3,

  Σ_{n≤x} n A(n) = (1/2πi) ∫_{α-i∞}^{α+i∞} f(s) (x^s/s) ds,   (3.22)

with

  f(s) = ζ(s - 1) Σ_p p/(p^{s-1} - 1).

Changing the variable of integration from s to s + 2 in (3.22) gives, for nonintegral x and any α > 1,

  Σ_{n≤x} n A(n) = (1/2πi) ∫_{α-i∞}^{α+i∞} g(s) (x^{s+2}/(s + 2)) ds,   (3.23)

with

  g(s) = ζ(s + 1) Σ_p p/(p^{s+1} - 1).
In what follows the path re s = α in (3.23) will be deformed so that part of it lies slightly to the left of the line re s = 1. We now show that Ayoub's bounds for log ζ(s) apply to g(s). Observe that for re s > ½, the function ζ(s + 1) is uniformly bounded, in absolute value, by ζ(3/2). Further,

  Σ_p p/(p^{s+1} - 1) = Σ_p 1/p^s + Σ_p 1/(p^s(p^{s+1} - 1)) = log ζ(s) + h(s),   (3.24)

where

  h(s) = Σ_p 1/(p^s(p^{s+1} - 1)) - Σ_p Σ_{m≥2} (1/m) p^{-ms}.

Thus h(s) is analytic in the half plane re s > ½ and uniformly bounded in any half plane re s > b, with b > ½.

Suppose the path of integration is now deformed exactly as in [4, Chap. 2, Sect. 5]. Ayoub's arguments yield bounds for all parts of the path except along the cut running from 1 down to b and back, where b = 1 - c log^{-9} T as in [4, Eq. (1), p. 65]. Along the cut, make the substitution g(s) = ζ(s + 1) log ζ(s) + ζ(s + 1) h(s), with h(s) defined by (3.24). Since ζ(s + 1) h(s) is analytic and single valued along the cut, the integral of this part along the upper side of the cut cancels the integral along the lower side. From here, the argument in [4, p. 69] yields

  (1/2πi) ∫_cut = ∫_{1/2}^{1} ζ(s + 1) (x^{s+2}/(s + 2)) ds + O(x³ exp(-c (log x)^{1/2})).   (3.25)

The last equality follows from the choice of b given in [4, p. 70]. This completes the proof of (3.1). Equation (3.2) follows by routine integration by parts.
The argument for n G(n) is virtually the same, and is omitted.

The arguments for Theorems 2, 4, and 5 are also virtually identical to the proof of Theorem 1. In each case, the identity for the Dirichlet series is used together with the inversion theorem (3.1) of Ayoub [4] as in (3.22). Then a change of variables is made to move the path of integration to re s = α with α > 1. Again the integrand differs from log ζ(s) by a bounded analytic function. Thus, Ayoub's argument can be used to bound the integrals away from s = 1. Along the cut the argument given for Theorem 1 holds essentially word for word. Further details are omitted. □
Proof of Theorems 3 and 6. It is useful to have another way to express the limiting relations to be proved.

LEMMA 8. Let H(n) denote one of the functions A*(n), A(n), P_1(n), or G(n). Then the following two conditions are equivalent as x → ∞:

  (1/x) |{n ≤ x : H(n) ≤ n^y}| → L(y)   for 0 < y ≤ 1,   (3.26)

  (1/x) |{n ≤ x : H(n) ≤ x^y}| → L(y)   for 0 < y ≤ 1.   (3.27)

Proof. Heuristically, Lemma 8 is true because most integers less than x are "large." We argue that (3.27) implies (3.26): Clearly {n ≤ x : H(n) ≤ n^y} ⊂ {n ≤ x : H(n) ≤ x^y}. But,

  {n ≤ x : H(n) ≤ x^y} = {n ≤ x/log x : H(n) ≤ x^y}
      ∪ {x/log x ≤ n ≤ x : H(n) ≤ x^y, H(n) > n^y}
      ∪ {x/log x ≤ n ≤ x : H(n) ≤ n^y}
      = S_1 ∪ S_2 ∪ S_3.

The set S_1 is negligible, and

  S_2 ⊂ {n ≤ x : x^y/(log x)^y ≤ H(n) ≤ x^y}.

If (3.27) holds, this last set, and so S_2, has density 0. Finally, S_3 differs from {n ≤ x : H(n) ≤ n^y} by a set of density 0. This completes the proof that (3.27) implies (3.26). The proof of the reverse implication is similar and is omitted. □
The results for A(n) and G(n) given in Theorem 6 imply Theorem 3; thus, we need only prove Theorem 6. Theorem 6, for the function P_1(n), was proved by de Bruijn [9]. Nice discussions and simplified proofs are given by Billingsley [7] and Knuth and Trabb Pardo [11]. The idea of the proof of Theorem 6 is to use the known results for P_1(n) by showing that A(n), A*(n), and G(n) differ from P_1(n) by a "small" amount.

We now prove Theorem 6 for A(n). Recall that we write P_i(n) for the ith largest prime divisor of n. Let y ∈ (0, 1) be fixed, and choose an integer m so large that 1/m < y. Observe that

  {n ≤ x : Σ_{i=1}^{m} P_i(n) ≤ x^y - x^{1/m}} ∩ {n ≤ x : A(n) - Σ_{i=1}^{m} P_i(n) ≤ x^{1/m}}
      ⊂ {n ≤ x : A(n) ≤ x^y} ⊂ {n ≤ x : Σ_{i=1}^{m} P_i(n) ≤ x^y}.   (3.28)

Write Q_x(S) for the proportion of integers n ≤ x such that n ∈ S. Taking S as the smallest set in (3.28),

  Q_x(Σ_{i=1}^{m} P_i(n) ≤ x^y - x^{1/m}) - Q_x(A(n) - Σ_{i=1}^{m} P_i(n) > x^{1/m})
      ≤ Q_x(A(n) ≤ x^y) ≤ Q_x(Σ_{i=1}^{m} P_i(n) ≤ x^y).   (3.29)

Now, Markov's inequality for positive random variables (for X > 0, P(X > c) ≤ E(X)/c), together with the theorem of Alladi and Erdős quoted above, implies

  Q_x(A(n) - Σ_{i=1}^{m} P_i(n) > x^{1/m}) → 0   as x → ∞.   (3.30)

Next we observe that the limiting distribution of Σ_{i=1}^{m} P_i(n) is the same as the limiting distribution of P_1(n). This follows from the inclusions

  {n ≤ x : P_1(n) ≤ x^y} ⊃ {n ≤ x : Σ_{i=1}^{m} P_i(n) ≤ x^y} ⊃ {n ≤ x : m P_1(n) ≤ x^y},

along with de Bruijn's result, which implies |{n ≤ x : m P_1(n) ≤ x^y}| ~ x L(y) for y and m fixed as x → ∞. Using the last observation, (3.30), and (3.29) in (3.28) completes the proof for A(n).

The proof of Theorem 6 for A*(n) follows from the result just proved for A(n) via Markov's inequality together with the estimate (3.9) of Alladi and Erdős for the mean of the difference A(n) - A*(n).
The last stage in the proof of Theorem 6 is to show that G(n) has the same limiting distribution as A*(n). The idea of the proof is to show that for almost all integers n, G(n) and A*(n) differ by at most a bounded amount. Toward this end, define a_p(n) as the largest number a such that p^a | n. For integers m and Z > 0, let

  S_{m,Z} = {n : 0 ≤ a_p(n) < m for 2 ≤ p ≤ Z, and 0 ≤ a_p(n) ≤ 1 for p > Z}.

It is straightforward to show that S_{m,Z} has density

  δ_{m,Z} = Π_{p≤Z} (1 - 1/p^m) · Π_{p>Z} (1 - 1/p²).

Since the product Π_p(1 - 1/p²) converges, δ_{m,Z} can be made arbitrarily close to 1 by choosing Z and then m suitably large. Since G(n) ≥ A*(n), Q_x{G(n) ≤ n^y} ≤ Q_x{A*(n) ≤ n^y}. For the opposite inequality, let y be fixed and note that on S_{m,Z},

  0 ≤ G(n) - A*(n) ≤ Σ_{p≤Z} p^m = c(m, Z).

Then,

  Q_x{G(n) ≤ n^y} ≥ Q_x({S_{m,Z}} ∩ {G(n) ≤ n^y})
      ≥ Q_x({S_{m,Z}} ∩ {A*(n) ≤ n^y - c(m, Z)})
      ≥ Q_x{A*(n) ≤ n^y - c(m, Z)} - Q_x{S^c_{m,Z}}.

Now, Q_x{A*(n) ≤ n^y - c(m, Z)} → L(y) as x → ∞. By choosing m and Z suitably large, Q_x{S^c_{m,Z}} can be made arbitrarily small. This completes the proof of Theorem 6 for G(n). □
4. TO SPLIT OR NOT TO SPLIT?
The FFT of n numbers can be computed by using the chirp-z transform directly or by splitting n into prime factors, computing the FFT for each factor, and then putting the pieces back together. These two approaches were described in Section 2. Both approaches work in O(n log n) time. A more careful comparison will now be presented.

As explained in Assumptions (2.7) and (2.8) of Section 2, a reasonable approximation for the number of operations used by the two algorithms is βC(n) for the chirp-z transform and βF(n) for the full factorization transform. Here β > 0 is a constant which may be taken as 2 and, if T(n) is the smallest power of 2 larger than or equal to 2n - 4, C(n) and F(n) are defined by

  C(n) = 3 T(n) log₂ T(n)   if n ≠ 2^k,
       = n log₂ n           if n = 2^k,   (4.1)

  F(n) = n Σ_{pᵃ‖n} a·(C(p)/p).   (4.2)

As a numerical example, when n = 100,

  C(100) = 3 · 256 · log₂ 256 = 6144,

while

  F(100) = 100 [2(C(2)/2) + 2(C(5)/5)] = 100 [2(2/2) + 2(3·8·3/5)] = 3080.
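The two counts are easy to tabulate; the Python sketch below reads T(n) as the smallest power of 2 that is at least 2n - 4, the reading that reproduces the worked example above and the composite rows of Table I below:

```python
def C(n):
    """Chirp-z count (4.1): 3*T*log2(T), T the smallest power of 2 with T >= 2n-4;
    n*log2(n) when n is itself a power of 2."""
    if (n & (n - 1)) == 0:
        return n * (n.bit_length() - 1)
    T = 1
    while T < 2 * n - 4:
        T *= 2
    return 3 * T * (T.bit_length() - 1)

def F(n):
    """Mixed-radix count (4.2): n times the sum of C(p)/p over the prime divisors
    of n, counted with multiplicity."""
    total, m, d = 0.0, n, 2
    while d * d <= m:
        while m % d == 0:
            total += C(d) / d
            m //= d
        d += 1
    if m > 1:
        total += C(m) / m
    return n * total

print(C(100), round(F(100)))        # 6144 3080, as in the worked example
print(round(F(1000)), round(F(1008)))   # 46200 35712, matching those rows of Table I
print(C(1009))                      # 67584, the chirp-z count quoted in the note to Table I
```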
Table I gives F(n) for 1000 ≤ n ≤ 1025. For each n in Table I (except 1024), C(n) = 67,584. The results in Table I suggest that the better algorithm speeds things up by a factor of approximately 1.5. Neither approach dominates. It is not clear how the approximations used to form Table I compare with actual running times.
Some information about the two approaches can be gleaned from a comparison of averages. Recall that we write ⌈x⌉ for the smallest integer larger than x, ⌊x⌋ for the largest integer smaller than x, and {x} for the fractional part of x.
THEOREM 7. Let C(n) and F(n) be defined by (4.1) and (4.2). For x > 0 define

  w(x) = ½                       if {log₂(⌈x⌉ - 2)} = 0,
       = 2^{-{log₂(⌊x⌋ - 2)}}    otherwise.

As x tends to infinity,

  (1/x) Σ_{n≤x} C(n) = 12 w(x)(1 - (2/3) w(x)) x log₂ x + O(x),   (4.3)

  (1/x) Σ_{n≤x} F(n) = (3/log 2) x log₂ x + O(x log log x).   (4.4)

The factor w(x) oscillates boundedly between ½ and 1, so that 12 w(x)(1 - (2/3) w(x)) oscillates between 4 and 4.5, while 3/log 2 = 4.33.
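Theorem 7 can be checked numerically with the same C(n) and F(n) as in the previous sketch (repeated here so the block runs on its own); the test point x is an arbitrary choice, and the agreement is only to leading order because of the error terms:

```python
import math

def C(n):
    if (n & (n - 1)) == 0:
        return n * (n.bit_length() - 1)
    T = 1
    while T < 2 * n - 4:
        T *= 2
    return 3 * T * (T.bit_length() - 1)

def F(n):
    total, m, d = 0.0, n, 2
    while d * d <= m:
        while m % d == 0:
            total += C(d) / d
            m //= d
        d += 1
    if m > 1:
        total += C(m) / m
    return n * total

def w(x):
    frac = math.log2(x - 2) % 1.0          # {log2(x - 2)}
    return 0.5 if frac == 0.0 else 2.0 ** (-frac)

x = 60_000
avg_C = sum(C(n) for n in range(1, x + 1)) / x
avg_F = sum(F(n) for n in range(1, x + 1)) / x
print(avg_C / (x * math.log2(x)), 12 * w(x) * (1 - 2 * w(x) / 3))  # (4.3): agree up to O(1/log x)
print(avg_F / (x * math.log2(x)), 3 / math.log(2))                 # (4.4): slow drift toward 4.33
```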
TABLE I
Number of Operations Using Full Factorization of n with β = 1

  n      Factorization    F(n)
  1000   2³·5³            46,200
  1001   7·11·13          108,096
  1002   2·3·167          85,950
  1003   17·59            74,016
  1004   2²·251           57,304
  1005   3·5·67           108,642
  1006   2·503            62,446
  1007   19·53            112,128
  1008   2⁴·3²·7          35,712
  1009   1009             122,880
  1010   2·5·101          76,994
  1011   3·337            94,182
  1012   2²·11·23         96,872
  1013   1013             122,880
  1014   2·3·13²          40,482
  1015   5·7·29           82,776
  1016   2³·127           52,200
  1017   3²·113           59,364
  1018   2·509            62,458
  1019   1019             122,880
  1020   2²·3·5·17        47,568
  1021   1021             122,880
  1022   2·7·73           115,070
  1023   3·11·31          84,702
  1024   2¹⁰              10,240
  1025   5²·41            96,720

Note. Direct use of the chirp-z idea uses 67,584 operations.
Proof of Theorem 7. To prove (4.3) it is easiest to approximate the distribution of C(n) and then calculate the mean from the distribution. The largest value C(n) can take as n ranges over 1 ≤ n ≤ x is determined by J = J(x) = ⌈log₂(2⌊x⌋ - 4)⌉; C(n) takes the values 3J·2^J, 3(J - 1)·2^{J-1}, .... When n = 2^k, C(n) = k·2^k. Since this happens only once for each value of k, these values will not affect the computation; that is, (1/x) Σ_{2^k≤x} k·2^k = O(log x), and the error in (4.3) is O(x). Excepting powers of 2, C(n) = 3k·2^k for 2^{k-2} + 2 < n ≤ 2^{k-1} + 2. Write

  f(x) = J(x) - 2 - log₂ x
       = -1                          if x = 2^k + 2,
       = -{log₂(⌊x⌋ - 2)} + O(1/x)   otherwise,

so that 2^{f(x)} = 2^{J-2}/x = w(x) + O(1/x).
Then

  (1/x) |{n ≤ x : C(n) = 3J·2^J}| = 1 - w(x) + O(1/x),

and similarly, for 1 ≤ k ≤ J,

  (1/x) |{n ≤ x : C(n) = 3(J - k)·2^{J-k}}| = w(x)·2^{-k} + O(1/x).

The error term is uniform in k. The mean of C(n) is

  3J·2^J (1 - w(x) + O(1/x)) + 3(J - 1)·2^{J-1} (w(x)/2 + O(1/x)) + 3(J - 2)·2^{J-2} (w(x)/4 + O(1/x)) + ... + O(log x)
     = 3J·2^J (1 - (2/3) w(x)) + O(x)
     = 3 · 4 w(x)(1 - (2/3) w(x)) x log₂ x + O(x)
     = 12 w(x)(1 - (2/3) w(x)) x log₂ x + O(x).

This proves (4.3).
To prove (4.4) write

  Σ_{n≤x} F(n) = Σ_{pᵃ≤x} (C(p)/p) p^a · (1/2) ⌊x/p^a⌋ (⌊x/p^a⌋ + 1).   (4.5)

We now argue from (4.5) to the approximation

  Σ_{n≤x} F(n) = (x²/2) Σ_{p≤x} C(p)/p² + O(x²).   (4.6)

To derive (4.6), we need the prime number theorem in the crude form Σ_{p≤x} log p = O(x) (see [4, Theorem 6.2], for example). We also use C(p) = O(p log p). First, ⌊x/p^a⌋² = (x/p^a)² + O(x/p^a), so

  Σ_{pᵃ≤x} (C(p)/p) p^a ⌊x/p^a⌋² = x² Σ_{pᵃ≤x} C(p)/(p·p^a) + O( x Σ_{pᵃ≤x} C(p)/p ).   (4.7)

But

  Σ_{pᵃ≤x} C(p)/p = O( Σ_{pᵃ≤x} log p ) = O(x).   (4.8)

Using (4.8) in (4.7), and again for the term involving ⌊x/p^a⌋ on the right side of (4.5), gives

  Σ_{n≤x} F(n) = (x²/2) Σ_{pᵃ≤x} C(p)/(p·p^a) + O(x²).   (4.9)

The proof of (4.6) is completed by the bound that the terms with a ≥ 2 contribute at most O( Σ_p (log p)/p³ ) = O(1) to the sum in (4.9).

Next we need the following bound: for w < z, as w → ∞,

  Σ_{w<p≤z} 1/p² = (1/(w log w) - 1/(z log z)) + O(1/(w log² w)).   (4.10)

To prove (4.10) write π(t) for the number of primes ≤ t. Then,

  Σ_{w<p≤z} 1/p² = π(z)/z² - π(w)/w² + 2 ∫_w^z π(t)/t³ dt.

Using the prime number theorem in the form π(t) = t/log t + O(t/(log t)²),

  π(z)/z² - π(w)/w² = 1/(z log z) - 1/(w log w) + O(1/(w log² w));

also,

  2 ∫_w^z π(t)/t³ dt = 2 ∫_w^z dt/(t² log t) + O( ∫_w^z dt/(t² log² t) )
     = 2/(w log w) - 2/(z log z) + O(1/(w log² w)).

This completes the proof of (4.10).
Finally, we complete our argument from (4.6) to (4.4). Write L = L(x) = ⌊log₂ x⌋. We have

  Σ_{p≤x} C(p)/p² = Σ_k 3k·2^k Σ_{2^{k-2}+2 < p ≤ 2^{k-1}+2} 1/p² + 3(L + 2)·2^{L+2} Σ_{2^L+2 < p ≤ x} 1/p² + O(1),   (4.11)

where k runs over the complete dyadic ranges contained in [1, x]. The approximation (4.10) gives

  Σ_{2^L+2 < p ≤ x} 1/p² = O(1/(2^L L))

and

  Σ_{2^{k-2}+2 < p ≤ 2^{k-1}+2} 1/p² = 1/(2^{k-2}(k - 2) log 2) - 1/(2^{k-1}(k - 1) log 2) + O(1/(2^k k²)).

Using these bounds in (4.11) leads to

  Σ_{p≤x} C(p)/p² = (6/log 2) Σ_{k≤L} (1 + O(1/k)) + O(1)
     = 6 log₂ x / log 2 + O(log log x).

Using this in (4.6) completes the proof of Theorem 7. □
ACKNOWLEDGMENTS

David Freedman, Andrew Odlyzko, Lawrence Rabiner, Don Rose, Larry Shepp, and Charles Stein all helped at crucial times. I also thank a patient, careful referee.
REFERENCES

1. A. Aho, J. Hopcroft, and J. Ullman, "The Design and Analysis of Computer Algorithms," Addison-Wesley, Reading, Mass., 1974.
2. A. Aho, K. Steiglitz, and J. Ullman, Evaluating polynomials at fixed sets of points, SIAM J. Comput. 4 (1975), 533-539.
3. K. Alladi and P. Erdős, On an additive arithmetic function, Pacific J. Math. 71 (1977), 275-294.
4. R. Ayoub, "An Introduction to the Analytic Theory of Numbers," Amer. Math. Soc., Providence, R.I., 1963.
5. R. Bellman and B. Kotkin, On the numerical solution of a differential-difference equation arising in analytic number theory, Math. Comp. 16 (1962), 473-475.
6. G. Bergland, A guided tour of the fast Fourier transform, IEEE Trans. Audio Electroacoust. AU-16 (1968), 66-76.
7. P. Billingsley, On the distribution of large prime divisors, Period. Math. Hungar. 2 (1972), 283-289.
8. N. G. de Bruijn, On the number of uncancelled elements in the sieve of Eratosthenes, Indag. Math. 12 (1950), 247-256.
9. N. G. de Bruijn, On a function occurring in the theory of primes, J. Indian Math. Soc. 15 (1951), 25-32.
10. K. Dickman, On the frequency of numbers containing prime factors of a certain relative magnitude, Ark. Mat. Astronom. Fysik 22A, No. 10 (1930), 1-14.
11. D. Knuth and L. Trabb Pardo, Analysis of a simple factorization algorithm, Theoret. Comput. Sci. 3 (1976), 321-348.
12. D. Lehmer, Computer technology applied to the theory of numbers, in "Studies in Number Theory," pp. 117-151, Math. Assoc. Amer. (distributed by Prentice-Hall, Englewood Cliffs, N.J.), 1969.
13. L. Rabiner, R. Schafer, and C. Rader, The chirp-z transform and its applications, Bell System Tech. J. 48 (1969), 1249-1292.
14. L. Rabiner and C. Rader, "Digital Signal Processing," IEEE Press, New York, 1972.
15. C. Rader, Discrete Fourier transforms when the number of data samples is prime, Proc. IEEE 56 (1968), 1107-1108.
16. D. Rose, "Matrix Identities of the Fast Fourier Transform," Linear Algebra Appl., in press.
17. B. Saffari, Sur quelques applications de la "méthode de l'hyperbole" de Dirichlet à la théorie des nombres premiers, Enseignement Math. 14 (1969), 205-224.
18. R. Singleton, An algorithm for computing the mixed radix fast Fourier transform, IEEE Trans. Audio Electroacoust. AU-17 (1969), 158-161.
19. J. van de Lune and E. Wattel, On the numerical solution of a differential-difference equation arising in analytic number theory, Math. Comp. 23 (1969), 417-421.
20. I. J. Good, The interaction algorithm and practical Fourier analysis, J. Roy. Statist. Soc. Ser. B 20 (1958), 361-372.