Asymptotic estimates of elementary probability distributions

Hsien-Kuei Hwang
Academia Sinica, Taiwan

To appear in Studies in Applied Mathematics

May 27, 1996

Abstract. Several new asymptotic estimates (with precise error bounds) are derived for Poisson and binomial distributions as the parameters tend to infinity. The analytic methods used are also applicable to other discrete distribution functions.

AMS 1991 Mathematics Subject Classification: 60A99, 33B20.
Key words and phrases: Poisson distribution, binomial distribution, uniform asymptotic approximations, saddlepoint method, Poissonization.

1 Introduction

Finding asymptotically efficient approximations to discrete probability distribution functions is a classic subject in probability theory. The general problem is as follows. Given a random variable $X$ depending on a certain large real parameter, say $N$, with probability distribution $P(X = j) = a_j(N)$ ($j \in \mathbb{Z}$), find asymptotic approximations for the distribution function $\sum_{j \le m} a_j(N)$, as $N \to \infty$ and for all possible values of $m$ (depending on $N$). In general, when $m$ becomes large, the exact summation is practically not very useful. Thus there is a need to find simpler asymptotic estimates. For example, if $X$ is a binomial random variable with mean $np$, then $a_j(N) = a_j(n) = \binom{n}{j} p^j q^{n-j}$; and if $X$ is a Poisson random variable with mean $N = \lambda$, then $a_j(\lambda) = e^{-\lambda}\lambda^j/j!$.

For elementary probability distributions like Poisson, binomial, hypergeometric, etc., a large number of asymptotic estimates and numerical approximations have been derived in the literature, due both to their intrinsic interest and, more importantly, to their wide applications to practical problems. We refer the reader to the recent monograph by Johnson, Kotz and Kemp [25], and to Molenaar [31] for further information and references. In this article, we introduce general analytic methods for deriving new estimates of elementary probability distribution functions.
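The two distribution functions just mentioned can of course be evaluated by direct summation, which is the baseline against which the asymptotic estimates of this article are measured. A minimal sketch (the function names are ours, not from the paper), using the standard stable device of updating each term from its predecessor:

```python
import math

def poisson_cdf(m, lam):
    """Pi_m(lambda) = sum_{0<=j<=m} e^{-lam} lam^j / j!, by iterated term ratios."""
    term = math.exp(-lam)          # j = 0 term
    total = term
    for j in range(1, m + 1):
        term *= lam / j            # term_j = term_{j-1} * lam/j
        total += term
    return total

def binomial_cdf(m, n, p):
    """B_m(n) = sum_{0<=j<=m} C(n,j) p^j q^{n-j}, by iterated term ratios."""
    q = 1.0 - p
    term = q ** n                  # j = 0 term
    total = term
    for j in range(1, m + 1):
        term *= (n - j + 1) / j * (p / q)
        total += term
    return total
```

Each call costs $O(m)$ operations, and for large parameters the individual terms span many orders of magnitude; this is one reason why simpler asymptotic estimates with explicit error bounds are preferable.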
These methods are best described by the Poisson and binomial distributions, to which most of our analyses are devoted. All these methods are based on Cauchy's integral formula for the coefficients of analytic functions, from which asymptotic expansions of the quantity in question are derived by suitably choosing the path of integration (according to the saddlepoint of the integrand) and then evaluating the contribution of the integral. The underlying idea, which consists of expanding the integrand at the saddlepoint, is a rather fruitful one and has been applied in many different contexts with satisfactory estimates (cf. [40, 37, 44, 22]).

In addition to the two classical distributions, our methods can also be applied to the many existing Poisson and binomial variants, mixtures, and convolutions, cf. [25, Chaps. 3 and 4], and to other discrete distribution functions. Our techniques for deriving numerical bounds are also suitable for other Poisson approximation problems, cf. [5, 38, 45].

This article is organized as follows. We first list some known asymptotic estimates concerning the Poisson distribution function in the next section. Then we state and prove our new results in Section 2.2. Application of these methods to the technique of Poissonization is briefly discussed in Section 2.3. A parallel study of the binomial distribution, with fewer details, is given in Section 3. We then briefly compare the different expansions derived in this article and indicate applications of our methods to combinatorial and arithmetical problems in the final section.

Notation. The notation $[z^j]f(z)$ represents the coefficient of $z^j$ in the Taylor expansion of $f$; $(x)_j = x(x-1)\cdots(x-j+1)$ if $j \ge 1$ and $(x)_0 = 1$ for any real $x$.

2 Poisson distribution

Consider a Poisson random variable $X$ with mean $\lambda > 0$:
\[ P(X = j) = e^{-\lambda}\frac{\lambda^j}{j!} \qquad (j \ge 0). \]
Let us denote by $\Pi_m(\lambda)$ the distribution function of $X$:
\[ \Pi_m(\lambda) = \sum_{0 \le j \le m} e^{-\lambda}\frac{\lambda^j}{j!} \qquad (m \ge 0). \]
In many problems in number theory (cf.
[44, Chap. II.6]) and in combinatorics (cf. [22, Chaps. 3, 5, 9]), $\lambda$ is a large parameter. Moreover, once the different asymptotic behaviours of $\Pi_m(\lambda)$ have been explicitly characterized, we can employ $\Pi_m(\lambda)$ as a "primitive asymptotic approximant" for more sophisticated problems. Besides the classical Poisson approximations (cf. [5]), let us mention the distribution of integers $\le x$ with a given number of prime factors (cf. [4, 44]) and the number of components in decomposable combinatorial structures (cf. [21]). Thus we investigate the asymptotic behaviour of $\Pi_m(\lambda)$ as $\lambda \to \infty$ and $m$ runs through its possible values (depending on $\lambda$). When $\lambda$ is bounded and $m \to \infty$, the asymptotic behaviour of $\Pi_m(\lambda)$ can be easily derived by the usual saddlepoint method, cf. [10, 24, 26, 50]. For completeness, we include the resulting formula at the end of Section 2.2.

2.1 Known results

Let us first list some known asymptotic estimates of $\Pi_m(\lambda)$ in the literature. They are not intended to be complete but are chosen according to the variation of the second parameter $m$. For more information on other types of approximations, see the monographs [20, 31, 25, 5], the articles [2, 13], and the less known (in the probability literature) article by Norton [32], where a rather complete account (before 1976) of the asymptotics of $\Pi_m(\lambda)$ is given. Henceforth, $\Phi(x)$ denotes the standard normal distribution function:
\[ \Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^2/2}\,dt \qquad (x \in \mathbb{R}). \]

1. The classical central limit theorem (cf. [25, p. 162]):
\[ \Pi_m(\lambda) = \Phi\Bigl(\frac{m - \lambda + \frac12}{\sqrt{\lambda}}\Bigr) + O\bigl(\lambda^{-1/2}\bigr), \]
as $\lambda \to \infty$, uniformly for $m = \lambda + O(\sqrt{\lambda})$. For more precise Edgeworth expansions (with or without continuity correction and error bounds), see [9, 12, 32, 34] and [25, p. 162].

2. Cramér-type large deviations (cf. [32] [27, p. 100] [22, Chap.
3]):
\[ 1 - \Pi_m(\lambda) = (1 - \Phi(x))\exp\Bigl(-\lambda H\Bigl(\frac{x}{\sqrt{\lambda}}\Bigr)\Bigr)\Bigl(1 + O\Bigl(\frac{x+1}{\sqrt{\lambda}}\Bigr)\Bigr) \qquad (m = \lambda + x\sqrt{\lambda}), \]
\[ \Pi_m(\lambda) = \Phi(-x)\exp\Bigl(-\lambda H\Bigl(-\frac{x}{\sqrt{\lambda}}\Bigr)\Bigr)\Bigl(1 + O\Bigl(\frac{x+1}{\sqrt{\lambda}}\Bigr)\Bigr) \qquad (m = \lambda - x\sqrt{\lambda}), \]
uniformly for $x \ge 0$, $x = o(\sqrt{\lambda})$, where
\[ H(y) = (1+y)\log(1+y) - y - \frac{y^2}{2} = \sum_{j \ge 3}\frac{(-1)^j y^j}{j(j-1)}, \tag{1} \]
the latter equality holding for $-1 < y \le 1$. Effective versions of these results can be found in [32].

3. Uniform asymptotic expansions for the incomplete gamma function: our $\Pi_m(\lambda)$ is related to the incomplete gamma function by
\[ \Pi_m(\lambda) = Q(m+1, \lambda) \qquad (m \ge 0), \tag{2} \]
where
\[ Q(a, \lambda) = \frac{1}{\Gamma(a)}\int_{\lambda}^{\infty} t^{a-1} e^{-t}\,dt, \]
$\Gamma$ being the gamma function. The asymptotic behaviour of $Q$ has been extensively studied in the literature, most notably by Temme [39, 40]; see also [49]. For our purpose, let us mention the following expansion from [40]:
\[ \Pi_{m-1}(\lambda) \sim 1 - \Phi(\eta\sqrt{m}) - \frac{e^{-m\eta^2/2}}{\sqrt{2\pi m}}\sum_{j \ge 0} b_j m^{-j}, \tag{3} \]
as $m \to \infty$ and $\lambda > 0$, where $r = m/\lambda$,
\[ \eta = \operatorname{sign}(1-r)\sqrt{2\Bigl(\frac{1}{r} - 1 + \log r\Bigr)}, \]
and the $b_j$ are bounded coefficients depending on $r$. In particular,
\[ b_0 = \frac{1}{\eta} - \frac{r}{1-r}; \qquad b_1 = \frac{1}{\eta^3} - \frac{r(r^2 + 10r + 1)}{12(1-r)^3}. \]
Note that each $b_j$ has a removable singularity at $r = 1$. It should be mentioned that (3) is also derivable by classical methods for uniform asymptotic expansions of integrals having a saddlepoint and a simple pole (one being allowed to approach the other); see [47, 7, 36, 29, 11, 24] and [50, pp. 356–360]. In particular, error bounds for (3) are discussed in [29, 39, 40].

4. By the definition of $\Pi_m(\lambda)$,
\[ \Pi_m(\lambda) = e^{-\lambda}\frac{\lambda^m}{m!}\sum_{0 \le j \le m}\frac{(m)_j}{\lambda^j}, \tag{4} \]
which is itself an asymptotic expansion for $m = o(\lambda)$.

2.2 New results

First of all, from (4), we have roughly
\[ \Pi_m(\lambda) \approx e^{-\lambda}\frac{\lambda^m}{m!}\sum_{0 \le j \le m} m^j \lambda^{-j} \approx e^{-\lambda}\frac{\lambda^m}{m!}\,\frac{1}{1 - m/\lambda}, \tag{5} \]
and we expect that the last expression provides a better approximation to $\Pi_m(\lambda)$ for certain ranges of $m$ than the first term in (5). On the other hand, when $m = 0$, the two "$\approx$" become "$=$". Hence this formal approximation might be uniformly valid for $0 \le m < \lambda$. This is roughly so, as we now state.
Theorem 1. If $1 \le m \le \lambda - A\sqrt{\lambda}$, where $A > 0$, then $\Pi_m(\lambda)$ satisfies
\[ \Pi_m(\lambda) = e^{-\lambda}\frac{\lambda^m}{m!}\,\frac{1}{1-r}\Bigl(1 + \sum_{2 \le j < \nu}\frac{j!\,\tau_j(m)}{(\lambda-m)^j} + R_\nu\Bigr), \tag{6} \]
where $r = m/\lambda$, $\nu \ge 2$, and $\tau_j(m)$ is a polynomial in $m$ of degree $[j/2]$ defined by
\[ \tau_j(m) = [z^j]\bigl(e^{-z}(1+z)\bigr)^m \qquad (j \ge 0;\ m \ge 0). \tag{7} \]
The error term $R_\nu$ satisfies
\[ |R_\nu| < K_\nu\,\frac{e^{1/(12m)}\,m^{\nu/2}}{(\lambda-m)^\nu} \qquad (m \ge 1;\ \nu \ge 2), \tag{8} \]
with $K_2 = \sqrt{\pi/2}$ and, for $\nu \ge 3$, $K_\nu = 2^{(\nu+2)/2}\,\Gamma((\nu+1)/2)/\sqrt{\pi}$. In particular, $|R_\nu| < K_\nu\, e^{1/12} A^{-\nu}$.

The first few values of $\tau_j$ are given as follows:
\[ \tau_2(m) = -\frac{m}{2}, \quad \tau_3(m) = \frac{m}{3}, \quad \tau_4(m) = \frac{m(m-2)}{8}, \]
\[ \tau_5(m) = -\frac{m(5m-6)}{30}, \quad \tau_6(m) = -\frac{m(3m^2 - 26m + 24)}{144}. \]

The method of proof extends the original one by Selberg [37] to an asymptotic expansion as in [23], the error term being further improved here.

Proof. By Cauchy's integral formula,
\[ \Pi_m(\lambda) = \frac{e^{-\lambda}}{2\pi i}\oint_{|z|=\zeta}\frac{1}{1-z}\,z^{-m-1} e^{\lambda z}\,dz \qquad (0 < \zeta < 1). \tag{9} \]
Take $\zeta = r$ and expand the factor $(1-z)^{-1}$ at the saddlepoint $z = r$:
\[ \frac{1}{1-z} = \sum_{0 \le j < \nu}\frac{(z-r)^j}{(1-r)^{j+1}} + \frac{(z-r)^\nu}{(1-z)(1-r)^\nu} \qquad (\nu \ge 1). \]
Substituting this expansion into (9) yields
\[ \Pi_m(\lambda) = \sum_{0 \le j < \nu}\frac{e^{-\lambda}}{(1-r)^{j+1}}\,\frac{1}{2\pi i}\oint_{|z|=r}(z-r)^j z^{-m-1} e^{\lambda z}\,dz + Y_\nu, \]
where
\[ Y_\nu = \frac{e^{-\lambda}}{(1-r)^\nu}\,\frac{1}{2\pi i}\oint_{|z|=r}\frac{(z-r)^\nu}{1-z}\,z^{-m-1} e^{\lambda z}\,dz. \]
By expanding the factor $(z-r)^j$ and computing the residues, we obtain
\[ \frac{1}{2\pi i}\oint_{|z|=r}(z-r)^j z^{-m-1} e^{\lambda z}\,dz = \frac{j!}{m!}\,\lambda^{m-j}\,\tau_j(m), \]
where, in particular, $\tau_0(m) = 1$ and $\tau_1(m) = 0$, this last relation motivating the choice of $r$ ($= m/\lambda$). The error term is then estimated by Laplace's method, and the estimate is better if preceded by an integration by parts. By using the relation
\[ z^{-m-1} e^{\lambda z} = \frac{1}{\lambda(z-r)}\,\frac{d}{dz}\,z^{-m} e^{\lambda z}, \tag{10} \]
we obtain
\[ Y_\nu = -\frac{e^{-\lambda}}{\lambda(1-r)^\nu}\,\frac{1}{2\pi i}\oint_{|z|=r} z^{-m} e^{\lambda z}\,\frac{(z-r)^{\nu-2}\bigl((\nu-2)(1-z) + 1-r\bigr)}{(1-z)^2}\,dz. \]
Thus
\[ |Y_\nu| \le \frac{(\nu-1)\,e^{-\lambda+m}\,r^{\nu-m-1}}{\lambda(1-r)^{\nu+1}}\,\eta_{\nu-2}, \tag{11} \]
where, for $\mu \ge 0$,
\[ \eta_\mu = \frac{1}{2\pi}\int_{-\pi}^{\pi}\bigl|e^{it}-1\bigr|^\mu\, e^{-m(1-\cos t)}\,dt = \frac{2^{\mu/2}}{\pi}\int_0^\pi (1-\cos t)^{\mu/2}\, e^{-m(1-\cos t)}\,dt. \]
If $\mu = 0$ we use the elementary inequalities
\[ \frac{2t^2}{\pi^2} \le 1 - \cos t \le \frac{t^2}{2} \qquad (|t| \le \pi), \]
and we obtain
\[ \eta_0 < \frac{1}{\pi}\int_0^\infty e^{-2mt^2/\pi^2}\,dt = \sqrt{\frac{\pi}{8m}}, \]
whenever $m \ge 1$. Although the same arguments apply to the case when $\mu \ge 1$, the bound obtained is weaker than the required one. We proceed instead as follows. Carrying out the change of variables $y = (1-\cos t)/2$ and an integration by parts yields
\[ \eta_\mu = \frac{2^\mu}{\pi}\int_0^1 \frac{y^{(\mu-1)/2}\, e^{-2my}}{\sqrt{1-y}}\,dy = \frac{2^{\mu+1}}{\pi}\int_0^1 \Bigl(\frac{\mu-1}{2} - 2my\Bigr)\sqrt{1-y}\;y^{(\mu-3)/2}\, e^{-2my}\,dy. \]
By using the inequality $\sqrt{1-y} \le 1 - y/2$ for $0 \le y \le 1$, the substitution $v = 2my$, and a further integration by parts, we obtain
\[ \eta_\mu < 2^{(\mu+1)/2}\,\pi^{-1}\,\Gamma\bigl((\mu+1)/2\bigr)\,m^{-(\mu+1)/2} \qquad (\mu \ge 1). \]
The quantity $\eta_0$ is essentially the modified Bessel function of order 0, $I_0(z)$ (cf. [48, p. 373] and [5, p. 263]): $\eta_0 = e^{-m} I_0(m)$. By considering properties of the function $x^{1/2} e^{-x} I_0(x)$, we have
\[ \eta_0\sqrt{m} \le c_0 \quad\text{with } c_0 = 0.46882\ldots, \tag{12} \]
for all $m \ge 1$. Note that $\sqrt{\pi/8} = 0.62665\ldots$

From these bounds and the inequality (cf. [48, p. 253] or [5, p. 263])
\[ e^m m^{-m-1/2}\, m! < \sqrt{2\pi}\, e^{1/(12m)} \qquad (m \ge 1), \]
it follows that
\[ |R_\nu| = (1-r)\,e^{\lambda}\lambda^{-m}\, m!\,|Y_\nu| < K_\nu\,\frac{e^{1/(12m)}\, m^{\nu/2}}{(\lambda-m)^\nu}, \]
as required. This completes the proof.

Remarks. 1. From (2) and the well-known continued fraction representation of the incomplete gamma function (cf. [16, p. 136]), we have
\[ \Pi_m(\lambda) = e^{-\lambda}\frac{\lambda^{m+1}}{m!}\;\cfrac{1}{\lambda - \cfrac{m}{1 + \cfrac{1}{\lambda - \cfrac{m-1}{1 + \cfrac{2}{\lambda - \cfrac{m-2}{1 + \ddots}}}}}}\;. \]
Useful estimates for $\Pi_m(\lambda)$ may be derived from this representation, as in [46, pp. 53–56] and [3] for the binomial distribution.

2. The polynomials $\tau_j(m)$ are related to the Laguerre polynomials
\[ L_n^{(\alpha)}(x) = [z^n]\,(1-z)^{-\alpha-1}\exp\bigl(-xz/(1-z)\bigr) \]
by $\tau_j(m) = L_j^{(m-j)}(m)$, and thus satisfy the recurrence
\[ (j+1)\,\tau_{j+1}(m) = -j\,\tau_j(m) - m\,\tau_{j-1}(m) \qquad (j \ge 1); \qquad \tau_0(m) = 1,\ \tau_1(m) = 0. \tag{13} \]
These relations are computationally more useful than the defining equation (7). The $\tau_j(m)$'s are also related to Tricomi polynomials (cf. [42]) or Charlier polynomials (cf.
[5, 36, 8]); see these cited papers and the references therein for asymptotics of this class of polynomials.

3. For the incomplete gamma function, an expansion similar to (6) without explicit error bound was derived by Tricomi (cf. [16, p. 140 (4)]) by a different method.

The fact that $\tau_1(m) = 0$ makes the leading term in (6) rather powerful, as the convergence rate (taking $\nu = 2$) is of order $m/\lambda^2$, which becomes $\lambda^{-2}$ for $m = O(1)$. One may ask if such a phenomenon can be repeated, so that one would have an expansion whose successive terms are of order $m^j\lambda^{-2j}$ (the terms in (6) are of order $m^{[j/2]}\lambda^{-j}$). Adapting an idea due to Franklin and Friedman [17] for integrals of the form
\[ J = \int_0^\infty t^{\nu-1} e^{-\lambda t} f(t)\,dt, \]
we can answer the above question affirmatively.

Theorem 2. Let $r_j = (m-j)/\lambda$, $j \ge 0$. The distribution function $\Pi_m(\lambda)$ satisfies the identity
\[ \Pi_m(\lambda) = e^{-\lambda}\frac{\lambda^m}{m!}\Bigl(\frac{1}{1-r_0} + \sum_{1 \le j \le m}\frac{(-1)^j}{\lambda^{2j}}\,f_j(r_j)\,(m)_j\Bigr), \tag{14} \]
for $0 \le m \le \lambda - 1$, where $f_0(z) := (1-z)^{-1}$ and, for $j \ge 0$,
\[ f_{j+1}(z) = \frac{d}{dz}\,\frac{f_j(z) - f_j(r_j)}{z - r_j}. \tag{15} \]
Moreover, the representation (14) is itself a uniform asymptotic expansion of $\Pi_m(\lambda)$ for $1 \le m \le \lambda - A\sqrt{\lambda}$, $A > 0$:
\[ \Pi_m(\lambda) = e^{-\lambda}\frac{\lambda^m}{m!}\Bigl(\frac{1}{1-r_0} + \sum_{1 \le j < \nu}\frac{(-1)^j}{\lambda^{2j}}\,f_j(r_j)\,(m)_j + R_\nu^*\Bigr), \tag{16} \]
for any $1 \le \nu < m$, where
\[ |R_\nu^*| < M_\nu^*\,\frac{e^{1/(12(m-\nu))}\,\lambda\,(m)_\nu}{(\lambda-m)^{2\nu+1}} \qquad (m \ge 1;\ \nu \ge 1), \tag{17} \]
with $M_\nu^* = \pi(2\nu)!/(2^{\nu+1}\nu!)$.

No simple general expression for $f_j(r_j)$ seems available. In particular, setting $r = m/\lambda$, we have
\[ \lambda^{-2} f_1(r_1) = \frac{1}{(1-r)(\lambda-m+1)^2}, \]
\[ \lambda^{-4} f_2(r_2) = \frac{3\lambda - 3m + 4}{(1-r)(\lambda-m+1)^2(\lambda-m+2)^3}, \]
\[ \lambda^{-6} f_3(r_3) = \frac{15(\lambda-m)^3 + 90(\lambda-m)^2 + 175(\lambda-m) + 108}{(1-r)(\lambda-m+1)^2(\lambda-m+2)^3(\lambda-m+3)^4}. \]

Proof of Theorem 2. For convenience, let us write $I(m; f)$ instead of $\Pi_m(\lambda)$, where $f(z) = f_0(z) = (1-z)^{-1}$. As in [17, 41], our starting point is the formula
\[ I(m; f) = f(r_0)\,e^{-\lambda}\frac{\lambda^m}{m!} + \frac{e^{-\lambda}}{2\pi i}\oint_{|z|=r_0} z^{-m-1} e^{\lambda z}\bigl(f(z) - f(r_0)\bigr)\,dz. \]
By an integration by parts using (10), we have
\[ I(m; f) = f(r_0)\,e^{-\lambda}\frac{\lambda^m}{m!} - \frac{I(m-1; f_1)}{\lambda}, \tag{18} \]
for $m \ge 1$.
Formula (18) together with the initial condition $I(0; f) = f(0) = 1$ leads to (14) by induction. Thus, to establish the asymptotic nature of (16), it suffices to estimate the error term
\[ Y_\nu^* = \frac{(-1)^\nu}{\lambda^\nu}\,I(m-\nu; f_\nu) \qquad (1 \le \nu \le m). \]
To this end, we use the following representation (cf. [41, p. 238]) of $f_\nu$:
\[ f_\nu(z) = \int_0^1\!\cdots\!\int_0^1 t_\nu^{\nu-1}\cdots t_3^2\, t_2\; f^{(2\nu)}\bigl(r_0 + t_1(r_1 - r_0 + t_2(r_2 - r_1 + \cdots + t_\nu(z - r_{\nu-1})\cdots))\bigr)\,dt_\nu\,dt_{\nu-1}\cdots dt_1, \]
from which we deduce by induction
\[ \bigl|f_\nu(r_\nu e^{i\theta})\bigr| \le \frac{(2\nu)!}{2^\nu\,\nu!\,(1-r_0)^{2\nu+1}} \qquad (\nu \ge 1). \]
Consequently,
\[ |Y_\nu^*| \le \frac{(2\nu)!\,e^{-\lambda+m-\nu}\,r_\nu^{-m+\nu}}{2^{\nu+1}\pi\,\nu!\,\lambda^\nu\,(1-r_0)^{2\nu+1}}\int_{-\pi}^{\pi} e^{-2(m-\nu)t^2/\pi^2}\,dt. \]
Proceeding along the same lines as in the estimation of $\eta_0$, we deduce (17). Note that a better numerical bound for $M_\nu^*$ can be obtained by applying (12).

In view of the two error terms (8) and (17), it is obvious that the expansion (14) is more powerful than (6). Roughly, this is due to the fact that the interpolation point of each $f_j$ in (14) "adaptively" varies with $m - j$, while it remains fixed in (6). A major disadvantage of (14) is that the computation of its coefficients becomes involved as $j$ increases. We propose, as in [41], a modification of (14) in which successive interpolation points are the same.

Theorem 3. Let $r = m/\lambda$. The distribution function $\Pi_m(\lambda)$ satisfies the asymptotic expansion
\[ \Pi_m(\lambda) \sim e^{-\lambda}\frac{\lambda^m}{m!}\sum_{j \ge 0}\frac{(-1)^j \tilde f_j(r)}{\lambda^j}, \tag{19} \]
uniformly in $m$, $0 \le m \le \lambda - \phi(\lambda)$, $\phi(\lambda)/\sqrt{\lambda} \to \infty$, where $\tilde f_0(z) = (1-z)^{-1}$ and
\[ \tilde f_{j+1}(z) = z\,\frac{d}{dz}\,\frac{\tilde f_j(z) - \tilde f_j(r)}{z - r} \qquad (j \ge 0). \]

Despite the difficulty of constructing a general effective error bound for (19), this modification makes the expressions for the coefficients simpler. We have
\[ \tilde f_0(r) = \frac{1}{1-r}, \quad \tilde f_1(r) = \frac{r}{(1-r)^3}, \quad \tilde f_2(r) = \frac{r(r+2)}{(1-r)^5}, \]
\[ \tilde f_3(r) = \frac{r(r^2 + 8r + 6)}{(1-r)^7}, \quad \tilde f_4(r) = \frac{r(r^3 + 22r^2 + 58r + 24)}{(1-r)^9}, \]
\[ \tilde f_5(r) = \frac{r(r^4 + 52r^3 + 328r^2 + 444r + 120)}{(1-r)^{11}}. \]

Proof. Setting $J(f) = I(m; f)$, we can rewrite (18) in the form
\[ J(f) = \tilde f_0(r)\,e^{-\lambda}\frac{\lambda^m}{m!} - \frac{J(\tilde f_1)}{\lambda}, \]
from which (19) follows as above.

We can write
\[ \tilde f_j(z) = \sum_{h \ge 0}\frac{\varpi_{j,h}(r)}{(1-r)^{2j+h+1}}\,(z-r)^h, \]
where the $\varpi_{j,h}$'s are polynomials in $r$ of degree $j$ and satisfy
\[ \varpi_{j+1,h}(r) = r(h+1)\,\varpi_{j,h+2}(r) + h(1-r)\,\varpi_{j,h+1}(r) \qquad (j, h \ge 0), \]
with $\varpi_{0,h}(r) = 1$. Note that the first two terms in (19) are identical to those in (6). Thus the error bound there can be applied in this case ($\nu = 2$).

Remark. The preceding methods of proof apply mutatis mutandis to the case when $m \ge \lambda + A\sqrt{\lambda}$, $A > 0$. It suffices to apply Cauchy's residue theorem (or, equivalently, to consider the tail $e^{-\lambda}\sum_{j>m}\lambda^j/j!$):
\[ \Pi_m(\lambda) = 1 - \frac{e^{-\lambda}}{2\pi i}\oint_{|z|=\zeta}\frac{z^{-m-1} e^{\lambda z}}{z-1}\,dz \qquad (\zeta > 1). \]

Let us briefly summarize the asymptotic results for $\Pi_m(\lambda)$, as $\lambda \to \infty$:

1. For $m = \lambda + O(\sqrt{\lambda})$, $\Pi_m(\lambda)$ is approximated by the normal distribution; when $m = \lambda + o(\lambda)$, it is characterized by Cramér-type large deviations.

2. When $m \to \infty$, Temme's expansion (3) is useful.

3. In the ranges $0 \le m \le \lambda - A\sqrt{\lambda}$ or $m \ge \lambda + A\sqrt{\lambda}$, $A > 0$, our results can be used.

Many other types of normal approximation (usually of the form $\Pi_m(\lambda) \approx \Phi(g(\lambda, m))$) can be found in [31, 2, 13]. Concerning the case when $m \to \infty$ and $\lambda$ is bounded, we have by the saddlepoint method (cf. [50])
\[ \Pi_m(\lambda) = 1 - \frac{e^{-\lambda+m}\,r^{-m-1}}{\sqrt{2\pi m}}\Bigl(1 + \frac{12\lambda - 13}{12m} + \frac{288\lambda^2 - 888\lambda + 313}{288m^2} + O\bigl(m^{-3}\bigr)\Bigr), \]
where $r = m/\lambda > 1$. Note that this expansion can also be obtained from (3), but with more involved computations.

2.3 Poissonization

Poissonization is a widely used technique in stochastic processes, summability of divergent sequences, the analysis of algorithms, etc.; see, for example, [1, 6, 18, 35, 19]. The idea is roughly described as follows. Given a discrete probability distribution $\{a_k\}_{k \ge 0}$ (or, in general, a complex sequence), consider the Poisson generating function
\[ b(\lambda) = e^{-\lambda}\sum_{j \ge 0} a_j\,\frac{\lambda^j}{j!} \qquad (\lambda \in \mathbb{C}). \]
The usual "Poisson heuristic" reads: if the sequence $\{a_k\}_{k \ge 0}$ is "smooth" enough, then $a_n \sim b(n)$, as $n \to \infty$.
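As a toy illustration of the heuristic (our own example, not from the paper): for a sequence growing polynomially, the correction terms built from the polynomials $\tau_j$ of Theorem 1, computed via the recurrence (13), turn the heuristic into an exact identity. For $a_j = j^3$ the Poisson generating function is $b(\lambda) = \lambda^3 + 3\lambda^2 + \lambda$, and $a_n = b(n) + b''(n)\tau_2(n) + b'''(n)\tau_3(n)$ exactly, since all higher derivatives of $b$ vanish:

```python
from fractions import Fraction

def tau(j, m):
    """tau_j(m) via (j+1) tau_{j+1} = -j tau_j - m tau_{j-1},
    with tau_0 = 1, tau_1 = 0 (recurrence (13))."""
    t = [Fraction(1), Fraction(0)]
    for k in range(1, j + 1):
        t.append((-k * t[k] - m * t[k - 1]) / (k + 1))
    return t[j]

# a_j = j^3 has Poisson generating function b(z) = z^3 + 3z^2 + z,
# so b'' = 6z + 6 and b''' = 6.
def depoissonized(n):
    b = n**3 + 3 * n**2 + n
    return b + (6 * n + 6) * tau(2, n) + 6 * tau(3, n)
```

Here `depoissonized(n)` recovers $n^3$ exactly; exact rational arithmetic (`Fraction`) avoids any rounding in the $\tau_j$ values.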
Analytically, since $b(\lambda)$ is an entire function of $\lambda$ (the sequence $a_k$ being a probability distribution), we have the integral representation
\[ a_n = \frac{n!}{2\pi i}\oint_{|z|=r} b(z)\,z^{-n-1} e^z\,dz \qquad (r > 0;\ n \ge 0). \]
According to the preceding discussions, if the growth order of $b$, as $|z| \to \infty$ in a certain sector containing the positive real axis, is not too large, then $a_n$ is well approximated by $b(n)$, since $n$ is the saddlepoint of $e^z z^{-n}$. Thus the Poisson heuristic may be regarded as a "saddlepoint heuristic." To be more precise, let us consider the following "de-Poissonization" lemma from [35] (properly modified in a form suitable for our discussions). Let $S_\theta$ be a cone in the $z$-plane:
\[ S_\theta := \{z : |\arg z| \le \theta\} \qquad (0 < \theta < \pi/2), \]
and let $f(z) := e^{-z}\sum_{j \ge 0} a_j z^j/j!$ be an entire function for some given sequence $a_n$. If, for $z \in S_\theta$ and $|z| \to \infty$, the function $f(z)$ satisfies
\[ f(z) = O\bigl(|z|^\alpha\bigr) \qquad (\alpha \in \mathbb{R}), \]
and, for $z \notin S_\theta$,
\[ \bigl|e^z f(z)\bigr| = O\bigl(|z|^\alpha e^{\delta|z|}\bigr) \qquad (0 < \delta < 1), \]
then
\[ a_n = f(n) + O\bigl(n^{\alpha-1/2}\bigr) \qquad (n \to \infty). \tag{20} \]
Note that the assumptions on $f$ imply the estimate
\[ \bigl|f^{(j)}(z)\bigr| = O\bigl(|z|^{\alpha-j}\bigr) \qquad (|z| \to \infty,\ z \in S_\theta;\ j \ge 0), \tag{21} \]
by Ritt's theorem (cf. [33, pp. 9–11]). From this observation and the proof technique of Theorem 1, we can derive the asymptotic expansion
\[ a_n = f(n) + \sum_{2 \le j < \nu} f^{(j)}(n)\,\tau_j(n) + O\bigl(n^{\alpha-\nu/2}\bigr) \qquad (n \to \infty), \tag{22} \]
for $\nu \ge 2$. In particular, the error term in (20) is $O\bigl(n^{\alpha-1}\bigr)$. That (22) is an asymptotic expansion is easily seen from (21) and the fact that $\tau_j(n)$ is a polynomial in $n$ of degree $[j/2]$, so that $f^{(j)}(n)\tau_j(n) \ll n^{\alpha-[(j+1)/2]}$. On the other hand, the method of proof of Theorem 2 also applies, and we obtain
\[ a_n = f(n) + \sum_{1 \le j < \nu} (-1)^j f_j(n-j)\,(n)_j + O\bigl(n^{\alpha-\nu}\bigr) \qquad (\nu \ge 1), \]
the $f_j$ being defined as in (15) with $f_0 = f$ (and with interpolation points $n-j$ in place of $r_j$). Note that $f_j(n-j)(n)_j \ll n^{\alpha-j}$, $j \ge 0$. Thus the use of the second expansion is preferable.

3 Binomial distribution

The methods we used in the last section can be amended to treat the binomial distribution. Let $Y_n$ be a binomial random variable with parameters $p$ and $n$, $0 < p < 1$:
\[ P(Y_n = j) = \binom{n}{j} p^j q^{n-j} \qquad (0 \le j \le n), \]
where $q = 1 - p$. As in the last section, we first list some known results regarding the asymptotics of the distribution function of $Y_n$, and then present some new ones. Set, for $0 \le m \le n$,
\[ B_m(n) = \sum_{0 \le j \le m}\binom{n}{j} p^j q^{n-j}. \]
Note that by symmetry it suffices to consider $B_m(n)$ only for $0 \le m \le pn$.

3.1 Known results

A rather complete account of different approximations and bounds for $B_m(n)$ is given in [25, pp. 114–122], most of them being of normal type; cf. also [2, 13]. To this account, we may add the references [46, 5, 30, 43].

1. The classical de Moivre–Laplace theorem:
\[ B_m(n) = \Phi\Bigl(\frac{m - np}{\sqrt{npq}}\Bigr) + O\bigl(n^{-1/2}\bigr), \]
the result being asymptotic for $m = np + O(\sqrt{n})$. A precise error estimate was derived by Uspensky in [46, Chap. VII].

2. Cramér-type large deviations (cf. [34, Chap. VIII] [22, Chap. 3]):
\[ 1 - B_m(n) = (1 - \Phi(x))\exp\Bigl(-nH_1\Bigl(\frac{x}{\sqrt{npq}}\Bigr)\Bigr)\Bigl(1 + O\Bigl(\frac{x+1}{\sqrt{n}}\Bigr)\Bigr) \qquad (m = np + x\sqrt{npq}), \]
\[ B_m(n) = \Phi(-x)\exp\Bigl(-nH_1\Bigl(-\frac{x}{\sqrt{npq}}\Bigr)\Bigr)\Bigl(1 + O\Bigl(\frac{x+1}{\sqrt{n}}\Bigr)\Bigr) \qquad (m = np - x\sqrt{npq}), \]
uniformly for $x \ge 0$, $x = o(\sqrt{n})$, where $H_1(y) = qH(-py) + pH(qy)$, $H$ being defined by (1).

3. Bahadur's estimate: for $0 \le m \le pn$,
\[ B_m(n) = \frac{1}{1-r}\binom{n}{m} p^m q^{n-m}\,(1 + R), \tag{23} \]
where $r = qm/(p(n+1-m))$ and
\[ \frac{q^2 m(n+1)}{(pn-m+p)\bigl((n-m+2)^2 + qm\bigr)} \le -R \le \frac{q^2 m(n+1)}{(n+1-m)(pn-m+p)^2}. \]
Note that the result is asymptotic for $0 \le m \le np - \phi(n)$, $\phi(n)/\sqrt{n} \to \infty$. His method is based on a continued fraction representation of $B_m(n)$ and an approach by Markov (cf. [46, pp. 52–56]).

4. Littlewood [28] derived many asymptotic formulae for $B_m(n)$ in different (overlapping) ranges of the interval $0 \le m \le np$. The results are too complicated to be listed here. His results were later corrected and extended by McKay [30].

5. The binomial distribution $B_m(n)$ is related to the incomplete beta function by $B_m(n) = I_q(n-m, m+1)$, where
\[ I_q(x, y) = \frac{\Gamma(x+y)}{\Gamma(x)\Gamma(y)}\int_0^q t^{x-1}(1-t)^{y-1}\,dt. \]
The asymptotics of this function have been extensively studied by Temme [39, 43].
In particular, we quote the following result:
\[ B_m(n) = 1 - \Phi(\sqrt{2}\,\eta) + \sqrt{\frac{n-m}{2\pi(m+1)(n+1)}}\Bigl(\frac{q}{q_0}\Bigr)^{n-m}\Bigl(\frac{p}{p_0}\Bigr)^{m+1}\Bigl(\frac{p_0}{q_0 - q} - \frac{1}{\eta}\sqrt{\frac{m+1}{2q_0}}\Bigr)\bigl(1 + O\bigl(m^{-1}\bigr)\bigr), \]
uniformly for $m \to \infty$, where $q_0 = (n-m)/(n+1)$, $p_0 = 1 - q_0$, and
\[ \eta = \operatorname{sign}(pn - m + p)\sqrt{(n-m)\log\frac{q_0}{q} + (m+1)\log\frac{p_0}{p}}. \]
For uniform estimates of $B_m(n)$ in a wider range of $m$, and for error bounds, see [40, 43].

6. By definition,
\[ B_m(n) = \binom{n}{m} p^m q^{n-m}\Bigl(1 + \sum_{1 \le j \le m}\frac{q^j\,(m)_j}{p^j\,(n-m+j)_j}\Bigr), \tag{24} \]
which is also an asymptotic expansion for $m = o(n)$.

3.2 New results

As for the Poisson distribution, we may "guess" the more uniform estimate (23) from (24) as follows:
\[ B_m(n) \approx \binom{n}{m} p^m q^{n-m}\Bigl(1 + \sum_{1 \le j \le m}\frac{(qm)^j}{p^j(n-m+1)^j}\Bigr) \approx \binom{n}{m} p^m q^{n-m}\,\frac{1}{1-r}, \]
where $r = qm/(p(n+1-m))$ is the same as in (23). To derive asymptotic expansions for $B_m(n)$, we may start from the integral representation
\[ B_m(n) = \frac{1}{2\pi i}\oint_{|z|=\zeta}\frac{z^{-m-1}(q+pz)^n}{1-z}\,dz \qquad (0 < \zeta < 1), \tag{25} \]
expand the factor $(1-z)^{-1}$ at the saddlepoint $z = r = qm/(p(n-m+1))$, and then proceed as above; the error estimates obtained are less satisfactory, however. Hence we use instead the following representation:
\[ B_m(n) = \frac{p^{m+1}}{2\pi i}\oint_{|z|=\zeta}\frac{z^{-n+m}(1-qz)^{-m-1}}{z-1}\,dz \qquad (1 < \zeta < 1/q), \]
which follows either from (25) by a change of variables or from the well-known relation between binomial and negative binomial distributions (cf. [25, p. 210]). It will be more convenient to work with
\[ \beta_m(n) = B_m(n+1) = \frac{p^{m+1}}{2\pi i}\oint_{|z|=\zeta}\frac{z^{-n+m-1}(1-qz)^{-m-1}}{z-1}\,dz \qquad (1 < \zeta < 1/q). \tag{26} \]

Theorem 4. If $3 \le m \le np - A\sqrt{n}$, where $A > 0$, then $\beta_m(n)$ satisfies
\[ \beta_m(n) = \binom{n}{m}\frac{p^{m+1} q^{n-m}}{\rho - 1}\Bigl(1 + \sum_{2 \le j < \nu}\frac{(-1)^j j!\,\sigma_j(n,m)}{(n-1)_{j-1}\,(pn-m)^j} + E_\nu\Bigr), \tag{27} \]
where $\rho = (n-m)/(qn)$, $2 \le \nu < m$, and $\sigma_j$ satisfies the recurrence
\[ \sigma_{j+2}(n,m) = (j+1)(n-2m)\,\sigma_{j+1}(n,m) - (j+1)\,m(n-m)(n-j)\,\sigma_j(n,m), \tag{28} \]
for $j \ge 0$, with the initial conditions $\sigma_0(n,m) = n^{-1}$ and $\sigma_1(n,m) = 0$.
The absolute value of the error term is bounded above by
\[ |E_\nu| < C_\nu\,\frac{m^\nu\,(n-m)^{\nu/2}}{(pn-m)^\nu\,(m-\nu)^{\nu/2}\,n^{\nu/2}} \qquad (m \ge 1,\ 2 \le \nu < m), \tag{29} \]
with $C_\nu = 2^{-\nu/2}\,\pi^{\nu-3/2}\,e^{1/6}\,(\nu-1)\,\Gamma((\nu-1)/2)$.

The result is clearly stronger than (23). The first few terms of $\sigma_j$ are given by
\[ \sigma_2(n,m) = -m(n-m), \qquad \sigma_3(n,m) = -2m(n-m)(n-2m), \]
\[ \sigma_4(n,m) = 3m(n-m)\bigl(n^2(m-2) - mn(m-6) - 6m^2\bigr), \]
\[ \sigma_5(n,m) = 4m(n-m)(n-2m)\bigl(n^2(5m-6) - mn(5m-12) - 12m^2\bigr). \]
From the recurrence (28), we deduce the estimate
\[ |\sigma_j(n,m)| = O\bigl(n^{[j/2]}\,(n-1)_{j-1}\bigr) \qquad (j \ge 2), \]
uniformly for $0 \le m \le n$. Note that we restricted $\nu$ to be less than $m$, since otherwise a direct computation of $\beta_m(n)$ by its definition is preferable. This condition is also needed to justify the convergence of an integral.

Proof. Starting from (26), we expand the factor $(z-1)^{-1}$ at the saddlepoint $z = \rho$:
\[ \beta_m(n) = p^{m+1}\sum_{0 \le j < \nu}\frac{(-1)^j}{(\rho-1)^{j+1}}\,\frac{1}{2\pi i}\oint_{|z|=\rho}(z-\rho)^j z^{-n+m-1}(1-qz)^{-m-1}\,dz + Z_\nu, \tag{30} \]
where
\[ Z_\nu = \frac{p^{m+1}(-1)^\nu}{(\rho-1)^\nu}\,\frac{1}{2\pi i}\oint_{|z|=\rho}\frac{(z-\rho)^\nu}{z-1}\,z^{-n+m-1}(1-qz)^{-m-1}\,dz. \]
By an integration by parts using the relation
\[ z^{-n+m-1}(1-qz)^{-m-1} = \frac{1}{qn(z-\rho)}\,\frac{d}{dz}\,z^{-n+m}(1-qz)^{-m}, \]
we obtain, as in the derivation of $Y_\nu$,
\[ |Z_\nu| \le \frac{(\nu-1)\,p^{m+1}}{qn(\rho-1)^{\nu+1}}\,\rho^{-n+m+\nu-1}(1-q\rho)^{-m}\,\frac{1}{2\pi}\int_{-\pi}^{\pi}|t|^{\nu-2}\,\Bigl|\frac{1-q\rho e^{it}}{1-q\rho}\Bigr|^{-m}\,dt. \]
From the inequality
\[ \Bigl|\frac{1-q\rho e^{it}}{1-q\rho}\Bigr|^{-1} \le \Bigl(1 + \frac{4n(n-m)}{\pi^2 m^2}\,t^2\Bigr)^{-1/2} \qquad (|t| \le \pi), \]
it follows that
\[ \frac{1}{2\pi}\int_{-\pi}^{\pi}|t|^{\nu-2}\,\Bigl|\frac{1-q\rho e^{it}}{1-q\rho}\Bigr|^{-m}\,dt < \frac{1}{\pi}\int_0^\infty t^{\nu-2}\Bigl(1 + \frac{4n(n-m)}{\pi^2 m^2}\,t^2\Bigr)^{-m/2}\,dt = 2^{-\nu}\pi^{\nu-2}\,m^{\nu-1}\bigl(n(n-m)\bigr)^{-(\nu-1)/2}\int_0^\infty u^{(\nu-3)/2}(1+u)^{-m/2}\,du. \]
Note that the integral on the right-hand side (a beta function $B((\nu-1)/2, (m-\nu+1)/2)$) is convergent for $\nu > 1$ and $m > \nu - 1$. We next find an upper bound for this integral. We have, for $2 \le \nu < m$,
\[ \int_0^\infty u^{(\nu-3)/2}(1+u)^{-m/2}\,du < 2^{(\nu-1)/2}\,(m-\nu)^{-(\nu-1)/2}\,\Gamma\bigl((\nu-1)/2\bigr). \]
For,
\[ \int_0^\infty u^{(\nu-3)/2}(1+u)^{-m/2}\,du = \int_0^\infty (1-e^{-w})^{(\nu-3)/2}\,e^{-(m-\nu+1)w/2}\,dw < \begin{cases} \displaystyle\int_0^\infty w^{(\nu-3)/2}\,e^{-(m-\nu+1)w/2}\,dw, & \text{if } \nu \ge 3; \\[2mm] \displaystyle\int_0^\infty w^{-1/2}\,e^{-(m-2)w/2}\,dw, & \text{if } \nu = 2, \end{cases} \]
\[ = \begin{cases} 2^{(\nu-1)/2}\,(m-\nu+1)^{-(\nu-1)/2}\,\Gamma((\nu-1)/2), & \text{if } \nu \ge 3; \\[1mm] \sqrt{2\pi/(m-2)}, & \text{if } \nu = 2, \end{cases} \]
and the required inequality follows in both cases. Hence,
\[ |Z_\nu| < (\nu-1)\,2^{-(\nu+1)/2}\,\pi^{\nu-2}\,\Gamma\bigl((\nu-1)/2\bigr)\,\frac{p^{m+1}\,\rho^{-n+m+\nu-1}\,(1-q\rho)^{-m}\,m^{\nu-1}}{q(\rho-1)^{\nu+1}\,n^{(\nu+1)/2}\,(n-m)^{(\nu-1)/2}\,(m-\nu)^{(\nu-1)/2}}, \]
from which the bound (29) follows. As to the integrals on the right-hand side of (30), we have, by direct computation of residues,
\[ \frac{1}{2\pi i}\oint_{|z|=\rho}(z-\rho)^j z^{-n+m-1}(1-qz)^{-m-1}\,dz = \binom{n}{m}\,q^{n-m-j}\,n^{-j}\,j!\,[z^j]\bigl(e^{mz}\,{}_1F_1(-m; -n; -nz)\bigr), \]
where ${}_1F_1(a; c; z) = \Phi(a, c, z)$ is the confluent hypergeometric function (cf. [15, Chap. VI]). (The above relation may also be expressed in terms of other hypergeometric functions.) The final form of the coefficients in (27) and (28) follows from the differential equation satisfied by ${}_1F_1$ and straightforward computations.

In an analogous manner, we deduce the following results, whose proofs are omitted.

Theorem 5. Let $\rho_j = (n-m-j)/(q(n-2j))$, $j \ge 0$, and $m_0 = \min\{m, n-m\}$. For $0 \le m \le pn - 1$, $\beta_m(n)$ satisfies the identity
\[ \beta_m(n) = p^{m+1} q^{n-m}\Bigl(\binom{n}{m}\,g_0(\rho_0) + \sum_{1 \le j \le m_0}\frac{(-1)^j\,g_j(\rho_j)}{q^j\,n(n-2)\cdots(n-2j+2)}\binom{n-2j}{m-j}\Bigr), \]
where $g_0(z) = (z-1)^{-1}$ and
\[ g_{j+1}(z) = \frac{d}{dz}\,\frac{g_j(z) - g_j(\rho_j)}{z - \rho_j} \qquad (j \ge 0). \]
Moreover, if $2 \le m \le np - A\sqrt{n}$, where $A > 0$, then we have
\[ \beta_m(n) = \binom{n}{m} p^{m+1} q^{n-m}\Bigl(g_0(\rho_0) + \sum_{1 \le j < \nu}\frac{(-1)^j\,g_j(\rho_j)\,(m)_j\,(n-m)_j}{q^j\,(n)_{2j}\,n(n-2)\cdots(n-2j+2)}\Bigr) + E_\nu^*, \]
where $1 \le \nu < m$ and
\[ |E_\nu^*| < \frac{(2\nu)!\,\pi\,e^{1/6}}{2^{\nu+1}\,\nu!}\;p^{m+1}\,q^{n-m+\nu+1}\binom{n-2\nu}{m-\nu}\frac{m\,n^{2\nu}}{(pn-m)^{2\nu+1}\,(m-\nu-1)\,(n-m-\nu)^{2\nu}\,(n-2)\cdots(n-2\nu+2)}. \]

Theorem 6. For $0 \le m \le np - \phi(n)$, where $\phi(n)/\sqrt{n} \to \infty$, we have
\[ \beta_m(n) \sim \binom{n}{m} p^{m+1} q^{n-m}\sum_{j \ge 0} (-1)^j\,\tilde g_j(\rho)\,q^{-j} n^{-j}, \]
where $\rho = (n-m)/(qn)$, $\tilde g_0(z) = (z-1)^{-1}$, and
\[ \tilde g_{j+1}(z) = z(1-qz)\,\frac{d}{dz}\,\frac{\tilde g_j(z) - \tilde g_j(\rho)}{z - \rho} \qquad (j \ge 0). \]
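A numerical sanity check (our own sketch; the identifiers are illustrative, not from the paper) of the exact identity (24) and of the leading "geometric" approximation $\binom{n}{m}p^m q^{n-m}/(1-r)$ of Section 3.2, which dominates $B_m(n)$ term by term since the successive term ratios of $B_m(n)$ are all at most $r = qm/(p(n+1-m))$:

```python
import math

def binom_cdf(m, n, p):
    """B_m(n) by direct summation with iterated term ratios."""
    q = 1.0 - p
    term = total = q ** n
    for j in range(1, m + 1):
        term *= (n - j + 1) / j * (p / q)
        total += term
    return total

def cdf_via_24(m, n, p):
    """Identity (24): B_m(n) = C(n,m) p^m q^{n-m} (1 + sum_j (q/p)^j (m)_j / (n-m+j)_j)."""
    q = 1.0 - p
    lead = math.comb(n, m) * p**m * q**(n - m)
    s = factor = 1.0
    for j in range(1, m + 1):
        factor *= (q / p) * (m - j + 1) / (n - m + j)
        s += factor
    return lead * s

# Sample point well below the mean np.
n, p, m = 60, 0.3, 10
q = 1.0 - p
exact = binom_cdf(m, n, p)
r = q * m / (p * (n + 1 - m))
geometric = math.comb(n, m) * p**m * q**(n - m) / (1.0 - r)
```

Here `geometric` over-approximates `exact` by a few percent, in line with the error terms quoted in Section 3.1, while `cdf_via_24` reproduces `exact` up to rounding, the identity being exact.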
Since there are two variables ($n$ and $m$) in this case, other expansions are also possible. We list the first few terms of $\tilde g_j$ in the following:
\[ \tilde g_1(\rho) = \frac{\rho(1-q\rho)}{(\rho-1)^3}, \qquad \tilde g_2(\rho) = \frac{\rho(1-q\rho)}{(\rho-1)^5}\bigl(\rho(1-2q) + 2 - q\bigr), \]
\[ \tilde g_3(\rho) = \frac{\rho(1-q\rho)}{(\rho-1)^7}\bigl(\rho^2(6q^2 - 6q + 1) + 2\rho(4q^2 - 9q + 4) + q^2 - 6q + 6\bigr). \]

4 Remarks

We have discussed analytic methods for describing the asymptotic behaviour of integrals of the forms
\[ \frac{1}{2\pi i}\oint z^{-m-1} e^{\lambda z} f(z)\,dz \qquad\text{and}\qquad \frac{1}{2\pi i}\oint z^{-m-1}(q+pz)^n f(z)\,dz, \]
where $f(z) = (1-z)^{-1}$. The function $f$ being meromorphic, the expansions we derived are valid in a somewhat restricted range. In general, if $f$ is entire with moderate growth order at infinity (as in the de-Poissonization lemma in Section 2.3), our expansions hold in a wider range for the second parameter. This is so, for example, when $f(z) = 1/\Gamma(z)$ in the case of the Stirling numbers of the first kind (cf. [23, 14]). A great deal of related combinatorial and arithmetical problems can be found in [22]. Integrals of the form
\[ \frac{1}{2\pi i}\oint z^{-m-1} L(\lambda z)\, f(z)\,dz \qquad (\lambda \to \infty), \]
with
\[ L(z) = \sum_{j \ge 1}\frac{z^j}{j!\,(j-1)!}, \]
arising in many combinatorial and arithmetic instances (cf. [22, Chaps. 6, 10]), can also be dealt with along the lines of this article, using known analytic properties of the modified Bessel functions.

Acknowledgements

The author is indebted to Jim Pitman and W.-Q. Liang for many valuable comments and suggestions.

References

[1] D. Aldous, Probability approximations via the Poisson clumping heuristic, Springer-Verlag, New York, 1989.
[2] D. Alfers and H. Dinges, A normal approximation for beta and gamma tail probabilities, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 65:399–420 (1984).
[3] R. R. Bahadur, Some approximations to the binomial distribution function, The Annals of Mathematical Statistics, 31:43–54 (1960).
[4] M. Balazard, H. Delange, and J.-L.
Nicolas, Sur le nombre de facteurs premiers des entiers, Comptes Rendus de l'Académie des Sciences, Série I, Paris, 306:511–514 (1988).
[5] A. D. Barbour, L. Holst, and S. Janson, Poisson approximation, Oxford Science Publications, Clarendon Press, Oxford, 1992.
[6] B. C. Berndt, Ramanujan's notebooks, part I, Springer-Verlag, New York, 1985.
[7] N. Bleistein, Uniform asymptotic expansions of integrals with stationary point near algebraic singularity, Communications on Pure and Applied Mathematics, 19:353–370 (1966).
[8] Bo Rui and R. Wong, Uniform asymptotic expansion of Charlier polynomials, Methods and Applications of Analysis, 1:294–313 (1994).
[9] T.-T. Cheng, The normal approximation to the Poisson distribution and a proof of a conjecture of Ramanujan, Bulletin of the American Mathematical Society, 55:396–401 (1949).
[10] H. E. Daniels, Saddlepoint approximations in statistics, Annals of Mathematical Statistics, 25:614–649 (1954).
[11] H. E. Daniels, Tail probability approximations, International Statistical Review, 55:37–48 (1987).
[12] H. Delange, Sur le nombre des diviseurs premiers de n, Acta Arithmetica, 7:191–215 (1962).
[13] H. Dinges, Special cases of second order Wiener germ approximations, Probability Theory and Related Fields, 83:5–57 (1989).
[14] M. Drmota and M. Soria, Marking in combinatorial constructions: generating functions and limiting distributions, Theoretical Computer Science, 144:67–99 (1995).
[15] A. Erdélyi, Higher transcendental functions, volume I, Robert E. Krieger Publishing Company, Malabar, Florida, 1953.
[16] A. Erdélyi, Higher transcendental functions, volume II, Robert E. Krieger Publishing Company, Malabar, Florida, 1953.
[17] J. Franklin and B. Friedman, A convergent asymptotic representation for integrals, Proceedings of the Cambridge Philosophical Society, 53:612–619 (1957).
[18] G. H. Gonnet and J. I.
Munro, The analysis of linear probing sort by the use of a new mathematical transform, Journal of Algorithms, 5:451–470 (1984).
[19] P. Grabner, Searching for losers, Random Structures and Algorithms, 4:99–110 (1993).
[20] F. A. Haight, Handbook of the Poisson distribution, Wiley, New York, 1968.
[21] H.-K. Hwang, A Poisson ∗ geometric law for the number of components in unlabelled combinatorial structures, submitted.
[22] H.-K. Hwang, Théorèmes limites pour les structures combinatoires et les fonctions arithmétiques, Thèse, École polytechnique, 1994.
[23] H.-K. Hwang, Asymptotic expansions for the Stirling numbers of the first kind, Journal of Combinatorial Theory, Series A, 71:343–351 (1995).
[24] J. L. Jensen, Saddlepoint approximations, Oxford Science Publications, Oxford, 1995.
[25] N. L. Johnson, S. Kotz, and A. W. Kemp, Univariate discrete distributions, John Wiley & Sons, Inc., New York, second edition, 1992.
[26] J. E. Kolassa, Series approximation methods in statistics, Lecture Notes in Statistics, volume 88, Springer-Verlag, 1994.
[27] V. F. Kolchin, Random mappings, Optimization Software Inc., New York, 1986.
[28] J. E. Littlewood, On the probability in the tail of a binomial distribution, Advances in Applied Probability, 1:43–72 (1969).
[29] R. Lugannani and S. O. Rice, Saddlepoint approximation for the distribution of the sum of independent random variables, Advances in Applied Probability, 12:475–490 (1980).
[30] B. D. McKay, On Littlewood's estimate for the binomial distribution, Advances in Applied Probability, 21:475–478 (1989).
[31] W. Molenaar, Approximations to the Poisson, binomial and hypergeometric functions, Mathematical Centre Tracts 31, Amsterdam, 1970.
[32] K. K. Norton, Estimates for partial sums of the exponential series, Journal of Mathematical Analysis and Applications, 63:265–296 (1978).
[33] F. W. J. Olver, Asymptotics and special functions, Academic Press, New York, 1974.
[34] V. V.
Petrov, Sums of independent random variables, Springer-Verlag, Berlin-Heidelberg-New York, 1975. Translated from the Russian by A. A. Brown.
[35] B. Rais, P. Jacquet, and W. Szpankowski, Limiting distribution for the depth in Patricia tries, SIAM Journal on Discrete Mathematics, 6:197–213 (1993).
[36] S. O. Rice, Uniform asymptotic expansions for saddle point integrals—application to a probability distribution occurring in noise theory, The Bell System Technical Journal, 47:1971–2013 (1968).
[37] A. Selberg, Note on a paper by L. G. Sathe, Journal of the Indian Mathematical Society, 18:83–87 (1954).
[38] S. Ya. Shorgin, Approximation of a generalized binomial distribution, Theory of Probability and its Applications, 22:846–850 (1977).
[39] N. M. Temme, The asymptotic expansion of the incomplete gamma functions, SIAM Journal on Mathematical Analysis, 10:757–766 (1979).
[40] N. M. Temme, The uniform asymptotic expansions of a class of integrals related to cumulative distribution functions, SIAM Journal on Mathematical Analysis, 13:239–253 (1982).
[41] N. M. Temme, Uniform asymptotic expansions of Laplace integrals, Analysis, 3:221–249 (1983).
[42] N. M. Temme, A class of polynomials related to those of Laguerre, in Lecture Notes in Mathematics, 1171, Proceedings of the Laguerre Symposium held at Bar-le-Duc, pages 459–464, Springer-Verlag, Berlin, 1985.
[43] N. M. Temme, Incomplete Laplace integrals: uniform asymptotic expansion with application to the incomplete beta function, SIAM Journal on Mathematical Analysis, 18:1638–1663 (1987).
[44] G. Tenenbaum, Introduction à la théorie analytique et probabiliste des nombres, Institut Élie Cartan, Université de Nancy I, Nancy, France, 1990. English version by C. B. Thomas, Cambridge University Press, 1995.
[45] J. V. Uspensky, On Ch. Jordan's series for probability, Annals of Mathematics, 32:306–312 (1931).
[46] J. V. Uspensky, Introduction to mathematical probability, McGraw-Hill Book Company, Inc., New York, 1937.
[47] B. L. van der Waerden, On the method of saddle-points, Applied Scientific Research, B2:33–45 (1951).
[48] E. T. Whittaker and G. N. Watson, A course of modern analysis: an introduction to the general theory of infinite processes and of analytic functions, with an account of the principal transcendental functions, Cambridge University Press, Cambridge, 4th edition, 1927.
[49] R. Wong, On uniform asymptotic expansions of definite integrals, Journal of Approximation Theory, 7:76–86 (1973).
[50] R. Wong, Asymptotic approximations of integrals, Academic Press, Inc., Boston, 1989.

Hsien-Kuei Hwang
Institute of Statistical Science
Academia Sinica
Taipei 11529
Taiwan
e-mail: [email protected]