Journal of Statistical Planning and Inference 98 (2001) 1–14
www.elsevier.com/locate/jspi

Almost sure lim sup behavior of bootstrapped means with applications to pairwise i.i.d. sequences and stationary ergodic sequences

S.E. Ahmed^a, Deli Li^b, Andrew Rosalsky^{c,*}, Andrei I. Volodin^a

^a Department of Mathematics and Statistics, University of Regina, Regina, Sask., Canada S4S 0A2
^b Department of Mathematics and Statistics, Lakehead University, Thunder Bay, Ont., Canada P7B 5E1
^c Department of Statistics, University of Florida, 206 Griffin-Floyd Hall/Box 118545, Gainesville, FL 32611-8545, USA

* Corresponding author. Tel.: +1-352-392-1941 ext. 225; fax: +1-352-392-5175. E-mail address: [email protected] (A. Rosalsky).

Received 3 February 2000; accepted 1 December 2000
Abstract

For a sequence of random variables $\{X_n;\ n \ge 1\}$, the exact convergence rate (i.e., an iterated logarithm-type result) is obtained for bootstrapped means. No assumptions are made concerning either the marginal or joint distributions of the random variables $\{X_n;\ n \ge 1\}$. As special cases, new results follow for pairwise i.i.d. sequences and stationary ergodic sequences. © 2001 Elsevier Science B.V. All rights reserved.

MSC: primary 62E20; 62G05; secondary 60F15

Keywords: Bootstrapped means; Law of the iterated logarithm; Pairwise i.i.d. sequences; Stationary ergodic sequences; Strong law of large numbers
1. Introduction

The exact convergence rates in the form of iterated logarithm-type results are obtained for bootstrapped means from sequences of random variables which are not necessarily independent or identically distributed. We begin with a brief discussion of results in the literature pertaining to a sequence of independent and identically distributed (i.i.d.) random variables. Let $\{X, X_n;\ n \ge 1\}$ be a sequence of i.i.d. random variables defined on a probability space $(\Omega, \mathcal{F}, P)$ and write $S_n = \sum_{i=1}^{n} X_i$, $n \ge 1$. For $\omega \in \Omega$ and $n \ge 1$, let $P_n(\omega) = n^{-1} \sum_{i=1}^{n} \delta_{X_i(\omega)}$ denote the empirical measure and let $\{\hat{X}^{(\omega)}_{n,j};\ 1 \le j \le m(n)\}$ be i.i.d. random variables with law $P_n(\omega)$, where $\{m(n);\ n \ge 1\}$ is a sequence of positive integers. In other words, the random variables $\{\hat{X}^{(\omega)}_{n,j};\ 1 \le j \le m(n)\}$
result by sampling $m(n)$ times with replacement from the $n$ observations $X_1(\omega), \dots, X_n(\omega)$ such that for each of the $m(n)$ selections, each $X_i(\omega)$ has probability $n^{-1}$ of being chosen. For each $n \ge 1$, $\{\hat{X}^{(\omega)}_{n,j};\ 1 \le j \le m(n)\}$ is the so-called Efron (1979) bootstrap sample from $X_1, \dots, X_n$ with bootstrap sample size $m(n)$. Let $\bar{X}_n(\omega) = S_n(\omega)/n$ denote the sample mean of $\{X_i(\omega);\ 1 \le i \le n\}$, $n \ge 1$.
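The resampling scheme just described is easy to state algorithmically. The following minimal sketch (Python with NumPy; not part of the original paper, and the function name and the illustrative choices of data and of $m(n) = n$ are assumptions) draws one Efron bootstrap sample and evaluates the bootstrapped mean $m(n)^{-1}\sum_{j=1}^{m(n)}\hat{X}^{(\omega)}_{n,j}$.

```python
import numpy as np

def bootstrap_sample(x, m, rng):
    """Draw an Efron bootstrap sample of size m from the observed values x:
    sample m times with replacement, each X_i(omega) chosen with probability 1/n."""
    n = len(x)
    idx = rng.integers(0, n, size=m)  # m uniform indices in {0, ..., n-1}
    return x[idx]

# Illustrative usage: n = 1000 hypothetical observations, bootstrap size m(n) = n.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                   # X_1(omega), ..., X_n(omega)
xhat = bootstrap_sample(x, m=len(x), rng=rng)   # the bootstrap sample
print(xhat.mean(), x.mean())                    # bootstrapped mean vs. sample mean
```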
When $X$ is nondegenerate and $EX^2 < \infty$, Bickel and Freedman (1981) and Singh (1981) showed that for almost every $\omega \in \Omega$ the central limit theorem (CLT)
$$ n^{1/2}\left(\frac{\hat{S}^{(\omega)}_n}{n} - \bar{X}_n(\omega)\right) \xrightarrow{d} N(0, \sigma^2) \tag{1.1} $$
obtains. Here and below, $\hat{S}^{(\omega)}_n = \sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}$, $n \ge 1$ and $\sigma^2 = \operatorname{Var} X$. Note that by the Glivenko–Cantelli theorem $P_n(\omega)$ is close to $\mathcal{L}(X)$ for almost every $\omega \in \Omega$ and all large $n$, and by the classical Lévy CLT
$$ n^{1/2}\left(\frac{S_n}{n} - EX\right) \xrightarrow{d} N(0, \sigma^2). \tag{1.2} $$
Thus there is no major difference between the CLT (1.1) for bootstrapped means and the classical Lévy CLT (1.2) for i.i.d. random variables; for almost every $\omega \in \Omega$, the bootstrap statistic $n^{1/2}(n^{-1}\hat{S}^{(\omega)}_n - \bar{X}_n(\omega))$ is close in distribution to $n^{1/2}(n^{-1}S_n - EX)$ for all large $n$. This is, very roughly speaking, the idea behind the bootstrap. See the pioneering work of Efron (1979), where this nice idea is made explicit and where it is substantiated with several important examples. Giné and Zinn (1989) proved that in order for there to exist positive scalars $a_n \uparrow \infty$, centerings $c_n(\omega)$, and a random probability measure $\mu(\omega)$ nondegenerate with positive probability such that
$$ \mathcal{L}\left(\frac{\hat{S}^{(\omega)}_n}{a_n} - c_n(\omega)\right) \xrightarrow{w} \mu(\omega) \quad \text{for almost every } \omega \in \Omega, $$
it is necessary that $EX^2 < \infty$. The limit law (1.1) tells us just the right rate at which to magnify the difference $n^{-1}\hat{S}^{(\omega)}_n - \bar{X}_n(\omega)$, which is tending to 0 in probability for almost every $\omega \in \Omega$, in order to obtain convergence in distribution to a nondegenerate law for almost every $\omega \in \Omega$. We note from (1.1) that for almost every $\omega \in \Omega$,
$$ \frac{n^{1/2}}{x_n}\left(\frac{\hat{S}^{(\omega)}_n}{n} - \bar{X}_n(\omega)\right) \to 0 \quad \text{in probability} $$
for any sequence of constants $x_n \to \infty$.
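A quick Monte Carlo check of the closeness of (1.1) and (1.2) can be instructive. The sketch below (illustrative only; the Exponential(1) data, sample size, and number of replications are assumptions, not taken from the paper) fixes one realization $\omega$, repeatedly computes the bootstrap statistic $n^{1/2}(n^{-1}\hat{S}^{(\omega)}_n - \bar{X}_n(\omega))$, and compares its spread with that of the classical statistic $n^{1/2}(n^{-1}S_n - EX)$ over independent fresh samples; both should be close to $N(0, \sigma^2)$ with $\sigma^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 2000

# One fixed realization omega: the observations X_1(omega), ..., X_n(omega).
x = rng.exponential(scale=1.0, size=n)
xbar = x.mean()

# Bootstrap statistics n^{1/2}(S_n-hat/n - X_n-bar) for this fixed omega.
boot = np.empty(reps)
for r in range(reps):
    xhat = x[rng.integers(0, n, size=n)]   # resample with replacement
    boot[r] = np.sqrt(n) * (xhat.mean() - xbar)

# Classical statistics n^{1/2}(S_n/n - EX) from independent fresh samples.
classical = np.sqrt(n) * (rng.exponential(1.0, size=(reps, n)).mean(axis=1) - 1.0)

# Both empirical standard deviations should be near sigma = 1.
print(boot.std(), classical.std())
```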
On the other hand, strong laws of large numbers (SLLNs) were proved by Athreya (1983) and Csörgő (1992) for bootstrapped means. Arenal-Gutiérrez et al. (1996) analyzed the results of Athreya (1983) and Csörgő (1992). Then, by taking into account the different growth rates for the resampling size $m(n)$, they gave new and simple proofs of those results. They also provided examples that show that the sizes of resampling required by their results to ensure almost sure (a.s.) convergence are not far from optimal.
It is natural to ask about an exact convergence rate for bootstrapped means. The main finding of the current work is Theorem 1, wherein we establish the law of the iterated logarithm (LIL)-type result (3.5) for bootstrapped means from a sequence of random variables $\{X_n;\ n \ge 1\}$. An interesting and unusual feature of Theorem 1 is that no assumptions are made concerning either the marginal or joint distributions of the random variables $\{X_n;\ n \ge 1\}$; it is not assumed that these random variables are independent or that they are identically distributed. Furthermore, in general, no moment conditions are imposed on the $\{X_n;\ n \ge 1\}$. Pioneering work establishing a LIL-type result for bootstrapped means from a sequence of i.i.d. integrable random variables was carried out by Mikosch (1994); our method is substantially different from his.

The plan of the paper is as follows. The lemmas used in the proof of Theorem 1 will be presented in Section 2. Theorem 1 will be stated and proved in Section 3. In Section 4, special cases of Theorem 1 are presented yielding new results for pairwise i.i.d. sequences and stationary ergodic sequences. Some final comments are made in Section 5.
2. Preliminary lemmas

Two lemmas used to establish Theorem 1 are presented in this section. The first lemma may be called the transformation principle of the bootstrap.
Lemma 1. Let $\{X_i;\ 1 \le i \le n\}$ be random variables (which are not necessarily independent or identically distributed) and for $\omega \in \Omega$ let $\{\hat{X}^{(\omega)}_{n,j};\ 1 \le j \le m\}$ be i.i.d. random variables with law $P_n(\omega) = n^{-1}\sum_{i=1}^{n}\delta_{X_i(\omega)}$ and let $f_\omega: \mathbb{R} \to \mathbb{R}$ be a Borel function. Then $\{f_\omega(\hat{X}^{(\omega)}_{n,j});\ 1 \le j \le m\}$ are i.i.d. random variables with law $P_{n,f_\omega}(\omega) = n^{-1}\sum_{i=1}^{n}\delta_{f_\omega(X_i(\omega))}$.

Proof. The i.i.d. portion of the conclusion is clear. Next, let $A$ be a Borel set. Then
$$\begin{aligned}
\operatorname{Prob}\{f_\omega(\hat{X}^{(\omega)}_{n,1}) \in A\} &= \operatorname{Prob}\{\hat{X}^{(\omega)}_{n,1} \in f_\omega^{-1}(A)\}\\
&= P_n(\omega)\{f_\omega^{-1}(A)\}\\
&= n^{-1}\sum_{i=1}^{n}\delta_{X_i(\omega)}(f_\omega^{-1}(A))\\
&= n^{-1}\sum_{i=1}^{n}\delta_{f_\omega(X_i(\omega))}(A)\\
&= P_{n,f_\omega}(\omega)(A),
\end{aligned}$$
thereby proving Lemma 1.
The next lemma is a general result for arrays of independent random variables and
is the key lemma used in the proof of Theorem 1. Lemma 2 is presented in a form
somewhat more general than what is required for the proof of Theorem 1 and may be
of independent interest. It should be noted that Lemma 2 extends some of the work
of Sung (1996). Indeed, Lemma 2(ii) reduces to Corollary 4 of Sung (1996) in the special case where $m(n) = n$, $n \ge 1$.
Lemma 2. Let $\{Y_{n,j};\ 1 \le j \le m(n) < \infty,\ n \ge 1\}$ be an array of independent random variables such that $EY_{n,j} = 0$ and $|Y_{n,j}| \le c_n$, $1 \le j \le m(n)$, $n \ge 1$, where $\{c_n;\ n \ge 1\}$ is a sequence of constants in $(0, \infty)$. Set $s_n^2 = \sum_{j=1}^{m(n)} EY_{n,j}^2$, $n \ge 1$ and suppose that $s_n^2 > 0$, $n \ge 1$. Let $\{a_n;\ n \ge 1\}$ be a sequence of positive constants such that $\sum_{n=1}^{\infty} \exp\{-\lambda^2 a_n^2\} < \infty$ for some $0 < \lambda < \infty$. Then:

(i) If $c_n a_n = o(s_n)$,
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} = \sqrt{2}\, B_0 \quad \text{a.s.,} \tag{2.1} $$
where
$$ B_0 = \inf\left\{ B \in (0, \lambda]:\ \sum_{n=1}^{\infty} \exp\{-B^2 a_n^2\} < \infty \right\}. $$

(ii) If $c_n (\log n)^{1/2} = o(s_n)$,
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n (2\log n)^{1/2}} = 1 \quad \text{a.s.} \tag{2.2} $$
Proof. Part (ii) follows immediately from (i) by taking $a_1 = (\log 2)^{1/2}$, $a_n = (\log n)^{1/2}$, $n \ge 2$. To prove (i), note that for arbitrary $\varepsilon > 0$, there exists a positive integer $N(\varepsilon)$ such that for all $n \ge N(\varepsilon)$
$$ (B_0 + \varepsilon)^2 \left(1 - \frac{\sqrt{2}\,(B_0 + \varepsilon) a_n c_n}{2 s_n}\right) \ge \left(B_0 + \frac{\varepsilon}{2}\right)^2. $$
Employing a Kolmogorov exponential bounds inequality (see, e.g., Theorem 5.2.2(i) of Stout (1974), p. 262) we have for $n \ge N(\varepsilon)$
$$ P\left\{ \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 + \varepsilon) \right\} \le 2\exp\left\{ -(B_0 + \varepsilon)^2 a_n^2 \left(1 - \frac{\sqrt{2}\,(B_0 + \varepsilon) a_n c_n}{2 s_n}\right) \right\} \le 2\exp\left\{ -\left(B_0 + \frac{\varepsilon}{2}\right)^2 a_n^2 \right\}. $$
By definition of $B_0$, it follows that
$$ \sum_{n=1}^{\infty} P\left\{ \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 + \varepsilon) \right\} < \infty. $$
Then by the Borel–Cantelli lemma,
$$ P\left\{ \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 + \varepsilon)\ \text{i.o.}(n) \right\} = 0 $$
and, hence,
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \le \sqrt{2}\,(B_0 + \varepsilon) \quad \text{a.s.} $$
Since $\varepsilon$ is arbitrary,
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \le \sqrt{2}\, B_0 \quad \text{a.s.} \tag{2.3} $$
To prove the reverse inequality, note that we can assume $B_0 > 0$. (For if $B_0 = 0$, then (2.1) coincides with (2.3).) Note that $a_n \to \infty$ by $\sum_{n=1}^{\infty} \exp\{-\lambda^2 a_n^2\} < \infty$. Then by employing another Kolmogorov exponential bounds inequality (see, e.g., Stout, 1974, Theorem 5.2.2(iii), p. 262), it follows that for arbitrary $\varepsilon \in (0, B_0)$, there exists an integer $N(\varepsilon)$ such that for all $n \ge N(\varepsilon)$
$$ P\left\{ \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 - \varepsilon) \right\} \ge 2\exp\{-(B_0 - \varepsilon)^2 a_n^2 (1 + \delta)\}, $$
where
$$ \delta = \frac{(B_0 - \varepsilon/2)^2}{(B_0 - \varepsilon)^2} - 1. $$
Then
$$ \sum_{n=N(\varepsilon)}^{\infty} P\left\{ \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 - \varepsilon) \right\} \ge 2 \sum_{n=N(\varepsilon)}^{\infty} \exp\left\{ -\left(B_0 - \frac{\varepsilon}{2}\right)^2 a_n^2 \right\} = \infty $$
by definition of $B_0$. Since $\{\sum_{j=1}^{m(n)} Y_{n,j};\ n \ge 1\}$ is a sequence of independent random variables, by the Borel–Cantelli lemma,
$$ P\left\{ \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 - \varepsilon)\ \text{i.o.}(n) \right\} = 1 $$
and hence
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\,(B_0 - \varepsilon) \quad \text{a.s.} $$
Since $\varepsilon$ is arbitrary,
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \ge \sqrt{2}\, B_0 \quad \text{a.s.} $$
Remark 1. If the independence hypothesis of Lemma 2 is weakened to $\{Y_{n,j};\ 1 \le j \le m(n) < \infty,\ n \ge 1\}$ being an array of rowwise independent random variables, then Lemma 2 still holds with conclusions (2.1) and (2.2) weakened to
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n a_n} \le \sqrt{2}\, B_0 \quad \text{a.s.} $$
and
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y_{n,j}|}{s_n (2\log n)^{1/2}} \le 1 \quad \text{a.s.,} $$
respectively. This follows by noting that nothing beyond rowwise independence was used to prove (2.3).
3. Mainstream
With the preliminaries accounted for, the main result will now be established. For an arbitrary sequence of random variables $\{X_n;\ n \ge 1\}$ defined on a probability space $(\Omega, \mathcal{F}, P)$, let $\bar{X}_n$ and $\hat{X}^{(\omega)}_{n,j}$ be defined as in Section 1 (even though the $\{X_n;\ n \ge 1\}$ are not assumed to be i.i.d.) and set
$$ \tilde{\sigma}_n = \left( \frac{\sum_{i=1}^{n} X_i^2}{n} - \left( \frac{\sum_{i=1}^{n} X_i}{n} \right)^2 \right)^{1/2} = \left( \frac{\sum_{i=1}^{n} (X_i - \bar{X}_n)^2}{n} \right)^{1/2}, \quad n \ge 1. \tag{3.1} $$
It is assumed that the bootstrap samples
$$ \{\hat{X}^{(\omega)}_{n,j};\ 1 \le j \le m(n)\}, \quad n \ge 1 \quad \text{are independent for almost every } \omega \in \Omega. \tag{3.2} $$

Theorem 1. Let $\{X_n;\ n \ge 1\}$ be a sequence of random variables (which are not necessarily independent or identically distributed) and let $\{m(n);\ n \ge 1\}$ be a sequence of positive integers. Suppose that
$$ \lim_{n \to \infty} \frac{(\log n) \max_{1 \le i \le n} (X_i - \bar{X}_n)^2}{m(n)} = 0 \quad \text{a.s.} \tag{3.3} $$
and for almost every $\omega \in \Omega$ the limit
$$ \lim_{n \to \infty} \tilde{\sigma}_n(\omega) \equiv \tilde{\sigma}(\omega) > 0 \quad \text{exists.} \tag{3.4} $$
If the bootstrap samples $\{\hat{X}^{(\omega)}_{n,j};\ 1 \le j \le m(n)\}$, $n \ge 1$ satisfy (3.2), then for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) \right| = \tilde{\sigma}(\omega) \quad \text{a.s.} \tag{3.5} $$
Proof. Conclusion (3.5) is equivalent to: for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} (\hat{X}^{(\omega)}_{n,j} - \bar{X}_n(\omega))|}{(2 m(n) \log n)^{1/2}} = \tilde{\sigma}(\omega) \quad \text{a.s.} \tag{3.6} $$
To prove (3.6), set for $\omega \in \Omega$
$$ Y^{(\omega)}_{n,j} = \hat{X}^{(\omega)}_{n,j} - \bar{X}_n(\omega), \quad 1 \le j \le m(n),\ n \ge 1. $$
Note that for $\omega \in \Omega$,
$$ EY^{(\omega)}_{n,j} = 0, \quad |Y^{(\omega)}_{n,j}| \le \max_{1 \le i \le n} |X_i(\omega) - \bar{X}_n(\omega)|, \quad 1 \le j \le m(n),\ n \ge 1 $$
and by Lemma 1
$$ \sum_{j=1}^{m(n)} E(Y^{(\omega)}_{n,j})^2 = m(n) E(Y^{(\omega)}_{n,1})^2 = m(n) \tilde{\sigma}_n^2(\omega). \tag{3.7} $$
Now for almost every $\omega \in \Omega$, by (3.7), (3.3), and (3.4)
$$ \frac{(\max_{1 \le i \le n} |X_i(\omega) - \bar{X}_n(\omega)|)(\log n)^{1/2}}{(\sum_{j=1}^{m(n)} E(Y^{(\omega)}_{n,j})^2)^{1/2}} = \frac{(\max_{1 \le i \le n} |X_i(\omega) - \bar{X}_n(\omega)|)(\log n)^{1/2}}{(m(n))^{1/2} \tilde{\sigma}_n(\omega)} \to 0 $$
and so for almost every $\omega \in \Omega$, by (3.2) and Lemma 2(ii)
$$ \limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} Y^{(\omega)}_{n,j}|}{((2\log n) \sum_{j=1}^{m(n)} E(Y^{(\omega)}_{n,j})^2)^{1/2}} = 1 \quad \text{a.s.} \tag{3.8} $$
Then for almost every $\omega \in \Omega$
$$ \begin{aligned}
\limsup_{n \to \infty} \frac{|\sum_{j=1}^{m(n)} (\hat{X}^{(\omega)}_{n,j} - \bar{X}_n(\omega))|}{(2 m(n) \log n)^{1/2}}
&= \limsup_{n \to \infty} \left( \frac{|\sum_{j=1}^{m(n)} Y^{(\omega)}_{n,j}|}{((2\log n) \sum_{j=1}^{m(n)} E(Y^{(\omega)}_{n,j})^2)^{1/2}} \left( \frac{\sum_{j=1}^{m(n)} E(Y^{(\omega)}_{n,j})^2}{m(n)} \right)^{1/2} \right) \\
&= \limsup_{n \to \infty} \left( \frac{|\sum_{j=1}^{m(n)} Y^{(\omega)}_{n,j}|}{((2\log n) \sum_{j=1}^{m(n)} E(Y^{(\omega)}_{n,j})^2)^{1/2}}\, \tilde{\sigma}_n(\omega) \right) \quad \text{(by (3.7))} \\
&= \tilde{\sigma}(\omega) \quad \text{a.s. (by (3.8) and (3.4)).}
\end{aligned} $$
Remark 2. (i) If the sequence of random variables $\{X_n;\ n \ge 1\}$ satisfies $\sup_{n \ge 1} |X_n| < \infty$ a.s. (a fortiori if $\{X_n;\ n \ge 1\}$ is uniformly bounded), then (3.3) can be replaced by the structurally simpler condition $\log n = o(m(n))$.

(ii) Since
$$ \max_{1 \le i \le n} |X_i - \bar{X}_n| \le 2 \max_{1 \le i \le n} |X_i|, \quad n \ge 1, $$
(3.3) will hold if
$$ \lim_{n \to \infty} \frac{(\log n) \max_{1 \le i \le n} X_i^2}{m(n)} = 0 \quad \text{a.s.} \tag{3.9} $$

(iii) If
$$ \frac{m(n)}{\log n} \uparrow \infty, \tag{3.10} $$
then (3.9) is equivalent to the apparently weaker and structurally simpler condition
$$ \lim_{n \to \infty} \frac{(\log n) X_n^2}{m(n)} = 0 \quad \text{a.s.} \tag{3.11} $$
Proof. Assume (3.10) and (3.11). For arbitrary $n \ge k \ge 2$,
$$ \begin{aligned}
\frac{(\log n) \max_{1 \le i \le n} X_i^2}{m(n)}
&\le \frac{(\log n) \max_{1 \le i \le k-1} X_i^2}{m(n)} + \frac{\log n}{m(n)} \max_{k \le i \le n} X_i^2 \\
&\le \frac{(\log n) \max_{1 \le i \le k-1} X_i^2}{m(n)} + \max_{k \le i \le n} \frac{\log i}{m(i)} X_i^2 \quad \text{(by (3.10))} \\
&\le \frac{(\log n) \max_{1 \le i \le k-1} X_i^2}{m(n)} + \sup_{i \ge k} \frac{\log i}{m(i)} X_i^2 \\
&\to 0
\end{aligned} $$
as first $n \to \infty$ and then $k \to \infty$ by (3.10) and (3.11). The reverse implication is obvious.
(iv) Conclusion (3.5) still holds with $\tilde{\sigma}(\omega) = \lim_{n \to \infty} \tilde{\sigma}_n(\omega)$ when $P\{\tilde{\sigma} = 0\} > 0$ provided (3.3) is replaced by the condition
$$ \lim_{n \to \infty} \frac{(\log n) \max_{1 \le i \le n} (X_i - \bar{X}_n)^2}{m(n) \tilde{\sigma}_n^2} = 0 \quad \text{a.s.} $$
The details are left to the reader. Note that for $\omega \in \Omega$, $\tilde{\sigma}_n^2(\omega) = 0$ for all $n \ge 1$ if and only if $\max_{1 \le i \le n} (X_i(\omega) - \bar{X}_n(\omega))^2 = 0$ for all $n \ge 1$. For any such $\omega \in \Omega$, $\max_{1 \le i \le n} (X_i(\omega) - \bar{X}_n(\omega))^2 / \tilde{\sigma}_n^2(\omega)$ should be interpreted as 0 and conclusion (3.5) trivially holds.

(v) It follows as a special case of Theorem 2.1 of Li et al. (1999), by taking $a_n = (\log n / m(n))^{1/2}$, $n \ge 1$, that if (3.9) holds and if $\sum_{i=1}^{n} X_i^2 / n \to 0$ a.s., then for every real number $r$, every $\varepsilon > 0$, and almost every $\omega \in \Omega$ the complete convergence result
$$ \sum_{n=1}^{\infty} n^r P\left\{ \left( \frac{m(n)}{\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) \right| \ge \varepsilon \right\} < \infty $$
obtains. This of course implies by the Borel–Cantelli lemma that for almost every $\omega \in \Omega$, the SLLN
$$ \lim_{n \to \infty} \left( \frac{m(n)}{\log n} \right)^{1/2} \left( \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) \right) = 0 \quad \text{a.s.} $$
holds.

(vi) If either
(a) condition (3.4) is weakened to $\liminf_{n \to \infty} \tilde{\sigma}_n(\omega) > 0$ for almost every $\omega \in \Omega$, or
(b) condition (3.2) is dispensed with,
then setting $\sigma^*(\omega) \equiv \limsup_{n \to \infty} \tilde{\sigma}_n(\omega)$, $\omega \in \Omega$, a slight modification of the proof of Theorem 1 yields the following upper bound result: for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) \right| \le \sigma^*(\omega) \quad \text{a.s.} $$
The details are left to the reader. (For case (b), refer to Remark 1 and also note that under (3.4), $\sigma^*(\omega) = \tilde{\sigma}(\omega)$ for almost every $\omega \in \Omega$.)
The first example shows apropos of Theorem 1 that $\tilde{\sigma}(\omega)$, $\omega \in \Omega$ need not be a constant a.s.

Example 1. Let $X_n = \varepsilon_n X$, $n \ge 1$ where $X$ is a nondegenerate random variable with $P\{X = 0\} = 0$ and $\{\varepsilon_n;\ n \ge 1\}$ is a sequence of independent random variables with $P\{\varepsilon_n = 1\} = p$, $P\{\varepsilon_n = -1\} = 1 - p$, $n \ge 1$ where $0 < p < 1$. Now by the Kolmogorov SLLN $\sum_{i=1}^{n} \varepsilon_i / n \to 2p - 1$ a.s., whence $\{\tilde{\sigma}_n;\ n \ge 1\}$ (as defined by (3.1)) satisfies
$$ \tilde{\sigma}_n = \left( X^2 - \left( X\, \frac{\sum_{i=1}^{n} \varepsilon_i}{n} \right)^2 \right)^{1/2} \to 2(p(1-p))^{1/2} |X| > 0 \quad \text{a.s.} $$
For any sequence of positive integers $\{m(n);\ n \ge 1\}$ satisfying $\log n = o(m(n))$, condition (3.3) holds by Remark 2(i). Thus by Theorem 1, for almost every $\omega \in \Omega$, (3.5) holds with $\tilde{\sigma}(\omega) = 2(p(1-p))^{1/2} |X(\omega)|$, $\omega \in \Omega$. It is interesting to note that no moment condition has been imposed on $X$.
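A brief simulation can make the random limit in Example 1 concrete. The sketch below (illustrative only; the values $p = 0.3$, the Gaussian choice for $X$, and $m(n) = n$ are assumptions) fixes one realization $\omega$, checks that $\tilde{\sigma}_n(\omega)$ approaches $2(p(1-p))^{1/2}|X(\omega)|$, and evaluates the normalized bootstrap deviation appearing in (3.5); the lim sup of that deviation over all $n$ equals $\tilde{\sigma}(\omega)$ a.s., although any finite simulation horizon typically shows values below it.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.3
X = rng.standard_normal()                       # one draw of X (nonzero with probability 1)
target = 2.0 * np.sqrt(p * (1.0 - p)) * abs(X)  # the a.s. limit of sigma_tilde_n(omega)

N = 20000
eps = np.where(rng.random(N) < p, 1.0, -1.0)    # epsilon_1(omega), ..., epsilon_N(omega)
data = eps * X                                  # X_i(omega) = epsilon_i(omega) X(omega)

sigma_tilde_N = np.sqrt((data ** 2).mean() - data.mean() ** 2)
print(sigma_tilde_N, target)                    # these should be close

# Normalized bootstrap deviations (m(n)/(2 log n))^{1/2} |bootstrap mean - sample mean|
# along a grid of n, with m(n) = n and fresh resampling at every n (condition (3.2)).
running_max = 0.0
for n in range(100, N + 1, 100):
    x = data[:n]
    xhat = x[rng.integers(0, n, size=n)]
    stat = np.sqrt(n / (2.0 * np.log(n))) * abs(xhat.mean() - x.mean())
    running_max = max(running_max, stat)
print(running_max, target)                      # the running max stays on the scale of the target
```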
The following corollary is an application of Theorem 1 to finite population sampling.

Corollary 1. Let $\{x_n;\ n \ge 1\}$ be a sequence of real numbers and let $\{m(n);\ n \ge 1\}$ be a sequence of positive integers and suppose that
$$ (\log n) \max_{1 \le i \le n} (x_i - \bar{x}_n)^2 = o(m(n)) $$
and that the limit
$$ \lim_{n \to \infty} \left( \frac{\sum_{i=1}^{n} x_i^2}{n} - \bar{x}_n^2 \right)^{1/2} = \sigma > 0 \quad \text{exists,} $$
where $\bar{x}_n = \sum_{i=1}^{n} x_i / n$, $n \ge 1$. For each $n \ge 1$, let $X_{n,1}, \dots, X_{n,m(n)}$ be random variables each uniformly distributed on $\{x_1, \dots, x_n\}$ and suppose that the random variables $\{X_{n,j};\ 1 \le j \le m(n),\ n \ge 1\}$ are independent. Then
$$ \limsup_{n \to \infty} \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} X_{n,j}}{m(n)} - \bar{x}_n \right| = \sigma \quad \text{a.s.} $$
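For a concrete instance of Corollary 1 (an illustration only; the alternating sequence below and the choice $m(n) = n$ are assumptions, not an example from the paper), take $x_i = (-1)^i$, so that $\bar{x}_n \to 0$, $\sigma = 1$, and $(\log n) \max_{1 \le i \le n} (x_i - \bar{x}_n)^2 = O(\log n) = o(m(n))$. The sketch draws the independent uniform samples and evaluates the normalized deviation from the corollary.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50000
x = np.array([(-1.0) ** i for i in range(1, N + 1)])   # x_i = (-1)^i, so sigma = 1

for n in (1000, 10000, 50000):
    pop = x[:n]
    # X_{n,1}, ..., X_{n,m(n)}: independent draws, each uniform on {x_1, ..., x_n}, m(n) = n.
    sample = pop[rng.integers(0, n, size=n)]
    dev = np.sqrt(n / (2.0 * np.log(n))) * abs(sample.mean() - pop.mean())
    # Typical deviations fall below sigma = 1; by the corollary the lim sup over n equals 1 a.s.
    print(n, dev)
```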
4. Pairwise i.i.d. and stationary ergodic specializations

In this section, new results are obtained by specializing Theorem 1 to the cases where $\{X_n;\ n \ge 1\}$ is either a sequence of pairwise i.i.d. random variables or a stationary ergodic sequence of random variables. The next theorem is an extension of a result of Mikosch (1994) which was obtained for a sequence $\{X_n;\ n \ge 1\}$ of i.i.d. random variables. The overall argument in Theorem 2 is substantially different from, and considerably simpler than, that of Mikosch (1994).
Theorem 2. Let $\{X_n;\ n \ge 1\}$ be a sequence of nondegenerate random variables such that either
(i) $\{X_n;\ n \ge 1\}$ is a sequence of pairwise i.i.d. random variables
or
(ii) $\{X_n;\ n \ge 1\}$ is a stationary ergodic sequence of random variables.
Let $\{m(n);\ n \ge 1\}$ be a sequence of positive integers such that
$$ \frac{m(n)}{\log n} \uparrow. \tag{4.1} $$
Suppose that there exists a constant $\beta \ge 1$ such that
$$ n^{1/\beta} \log n = O(m(n)) \tag{4.2} $$
and
$$ E|X_1|^{2\beta} < \infty. \tag{4.3} $$
Set $\sigma^2 = \operatorname{Var} X_1$. Then for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) \right| = \sigma \quad \text{a.s.} \tag{4.4} $$
Proof. Under case (i), $\{X_n^2;\ n \ge 1\}$ is also a sequence of pairwise i.i.d. random variables. Now in view of (4.3), by a double application of the Etemadi (1981) SLLN
$$ \tilde{\sigma}_n = \left( \frac{\sum_{i=1}^{n} X_i^2}{n} - \left( \frac{\sum_{i=1}^{n} X_i}{n} \right)^2 \right)^{1/2} \to (EX_1^2 - (EX_1)^2)^{1/2} = \sigma > 0 \quad \text{a.s.} \tag{4.5} $$
Under case (ii), $\{X_n^2;\ n \ge 1\}$ is also a stationary ergodic sequence by Theorem 3.5.8 of Stout (1974), p. 182. Again in view of (4.3), by a double application of the pointwise ergodic theorem for stationary sequences (see, e.g., Stout, 1974, p. 181), (4.5) holds.

Next, by (4.2) there exists a constant $M < \infty$ such that $n^{1/\beta} \log n \le M m(n)$, $n \ge 1$. Then for arbitrary $\varepsilon > 0$
$$ \sum_{n=1}^{\infty} P\left\{ \frac{(\log n) X_n^2}{m(n)} \ge \varepsilon \right\} \le \sum_{n=1}^{\infty} P\left\{ |X_1| \ge \left( \frac{\varepsilon}{M} \right)^{1/2} n^{1/2\beta} \right\} < \infty \quad \text{(by (4.3)).} $$
Thus by the Borel–Cantelli lemma
$$ \lim_{n \to \infty} \frac{(\log n) X_n^2}{m(n)} = 0 \quad \text{a.s.} $$
Since (4.1) and (4.2) ensure that (3.10) holds, (3.3) then follows from Remarks 2(ii) and 2(iii). The conclusion (4.4) results directly from Theorem 1.
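To see Theorem 2(ii) on a concrete stationary ergodic sequence, the following sketch simulates a Gaussian AR(1) process (an illustrative choice not taken from the paper; with $|\phi| < 1$ the process is stationary ergodic with all moments finite, so (4.3) holds for $\beta = 2$, and $m(n) = n$ satisfies (4.2) and, for all large $n$, (4.1)). It checks that the empirical standard deviation $\tilde{\sigma}_n(\omega)$ is close to $\sigma = (\operatorname{Var} X_1)^{1/2}$, the quantity on the right-hand side of (4.4).

```python
import numpy as np

rng = np.random.default_rng(4)
phi = 0.5
n = 100000
sigma = np.sqrt(1.0 / (1.0 - phi ** 2))   # (Var X_1)^{1/2} for the stationary AR(1)

# Simulate a stationary Gaussian AR(1) path: X_k = phi * X_{k-1} + Z_k, Z_k ~ N(0, 1).
x = np.empty(n)
x[0] = rng.normal(scale=sigma)            # start in the stationary distribution
z = rng.standard_normal(n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + z[k]

# Empirical standard deviation sigma_tilde_n(omega), which converges to sigma a.s.
print(x.std(), sigma)

# One bootstrap sample with m(n) = n and the normalized deviation from (4.4).
xhat = x[rng.integers(0, n, size=n)]
dev = np.sqrt(n / (2.0 * np.log(n))) * abs(xhat.mean() - x.mean())
print(dev)   # a typical value is below sigma; the lim sup over n equals sigma a.s.
```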
Remark 3. (i) Theorem 2(i) gains added interest in light of the fact that the classical Hartman–Wintner (1941) LIL does not hold in general for sequences of pairwise i.i.d. random variables. (See Révész and Wschebor (1964) or, for a more thorough discussion, Cuesta and Matrán (1991).)

(ii) Theorem 2(ii) can fail for a stationary sequence $\{X_n;\ n \ge 1\}$ if the hypothesis that $\{X_n;\ n \ge 1\}$ is ergodic is dispensed with. To see this, let $X$ be a random variable with $0 < \sigma^2 \equiv \operatorname{Var} X < \infty$ and set $X_n = X$, $n \ge 1$. Then $\{X_n;\ n \ge 1\}$ is a stationary sequence which is not ergodic. It is clear for every sequence of positive integers $\{m(n);\ n \ge 1\}$ that for almost every $\omega \in \Omega$,
$$ \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) = X(\omega) - X(\omega) = 0 \quad \text{a.s.} $$
and so (4.4) fails.

(iii) The simple example in (ii) also shows that for a sequence of identically distributed random variables $\{X_n;\ n \ge 1\}$ with $0 < \operatorname{Var} X_1 < \infty$, the limit $\tilde{\sigma}(\omega)$ (in (3.4)) can be 0 for almost every $\omega \in \Omega$. Conclusion (3.5) of Theorem 1 trivially holds for this example.

(iv) Theorem 2 also holds for an $M$-dependent process $\{X_n;\ n \ge 1\}$ of nondegenerate identically distributed random variables. Verification is left to the reader.
When $\{X_n;\ n \ge 1\}$ is a sequence of i.i.d. random variables, the next corollary provides conditions so that $\bar{X}_n(\omega)$ can be replaced by $EX_1$ in the conclusion (4.4) of Theorem 2.

Corollary 2. Let $\{X_n;\ n \ge 1\}$ be a sequence of nondegenerate i.i.d. random variables and let $\{m(n);\ n \ge 1\}$ be a sequence of positive integers such that (4.1) and
$$ m(n) = o\left( \frac{n \log n}{\log \log n} \right) \tag{4.6} $$
hold. Suppose that there exists a constant $\beta > 1$ such that (4.2) and (4.3) hold. Set $\sigma^2 = \operatorname{Var} X_1$. Then for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - EX_1 \right| = \sigma \quad \text{a.s.} \tag{4.7} $$
Proof. By Theorem 2, (4.4) holds. Now for almost every $\omega \in \Omega$,
$$ \left| \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - EX_1 \right| - \left( \frac{m(n)}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{m(n)} \hat{X}^{(\omega)}_{n,j}}{m(n)} - \bar{X}_n(\omega) \right| \right| $$
$$ \le \left( \frac{m(n)}{2\log n} \right)^{1/2} |\bar{X}_n(\omega) - EX_1| = \left( \frac{m(n) \log \log n}{n \log n} \right)^{1/2} \frac{|\sum_{i=1}^{n} (X_i(\omega) - EX_1)|}{(2n \log \log n)^{1/2}} \to 0 \quad \text{a.s.} $$
by (4.6) and the Hartman–Wintner (1941) LIL. This combined with (4.4) yields conclusion (4.7).
5. Some final comments

To conclude, some final comments are in order concerning bootstrapped means (with $m(n) = n$, $n \ge 1$) formed from a sequence $\{X_n;\ n \ge 1\}$ of nondegenerate i.i.d. random variables. Set $L(x) = \log_e(e \vee x)$, $x \ge 0$.

(i) Suppose that $E|X_1|^p < \infty$ for some $p > 2$. Set $\sigma^2 = \operatorname{Var} X_1$. Now by the Hartman–Wintner (1941) LIL
$$ \limsup_{n \to \infty} \left( \frac{n}{2\log \log n} \right)^{1/2} \left| \frac{\sum_{i=1}^{n} X_i}{n} - EX_1 \right| = \sigma \quad \text{a.s.} \tag{5.1} $$
However, for bootstrapped means, it follows from Corollary 2 that for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \left( \frac{n}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - EX_1 \right| = \sigma \quad \text{a.s.} \tag{5.2} $$
Obviously (5.1) and (5.2) have different orders of convergence; an iterated logarithm appears in (5.1) whereas a single logarithm appears in (5.2).
(ii) Suppose that $E\{|X_1|^p (L(|X_1|))^p\} < \infty$ for some $p \in [1, 2)$. Now by the classical Marcinkiewicz–Zygmund SLLN
$$ \lim_{n \to \infty} n^{1 - p^{-1}} \left( \frac{\sum_{i=1}^{n} X_i}{n} - EX_1 \right) = 0 \quad \text{a.s.} \tag{5.3} $$
and by Proposition 3.3 of Mikosch (1994) for almost every $\omega \in \Omega$
$$ \lim_{n \to \infty} n^{1 - p^{-1}} \left( \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - \frac{\sum_{i=1}^{n} X_i(\omega)}{n} \right) = 0 \quad \text{a.s.,} $$
implying for almost every $\omega \in \Omega$
$$ \lim_{n \to \infty} n^{1 - p^{-1}} \left( \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - EX_1 \right) = 0 \quad \text{a.s.} \tag{5.4} $$
Thus, in contrast to the LIL situation discussed in (i) above, there is no difference between the order of convergence for the classical Marcinkiewicz–Zygmund SLLN (5.3) and the Marcinkiewicz–Zygmund-type SLLN (5.4) for bootstrapped means.
(iii) If $E|X_1|^p < \infty$ for some $p > 2$, then by Theorem 2 and Corollary 2 for almost every $\omega \in \Omega$
$$ \limsup_{n \to \infty} \left( \frac{n}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - \bar{X}_n(\omega) \right| = \limsup_{n \to \infty} \left( \frac{n}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - EX_1 \right| = \sigma \quad \text{a.s.,} $$
where $\sigma^2 = \operatorname{Var} X_1$. It will now be shown that
$$ \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - \bar{X}_n(\omega) \quad \text{and} \quad \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - EX_1, \quad n \ge 1 $$
do not exhibit similar behavior to each other in the CLT sense. When $EX_1^2 < \infty$, recalling (1.1), for almost every $\omega \in \Omega$
$$ \frac{n^{1/2}}{\sigma} \left( \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - \bar{X}_n(\omega) \right) \xrightarrow{d} N(0, 1). $$
However, for almost every $\omega \in \Omega$
$$ n^{1/2} \left( \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - EX_1 \right) \quad \text{does not converge in distribution.} \tag{5.5} $$
To see this, note that
$$ n^{1/2} \left( \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - EX_1 \right) = n^{1/2} \left( \frac{\sum_{j=1}^{n} \hat{X}^{(\omega)}_{n,j}}{n} - \bar{X}_n(\omega) \right) + n^{1/2} (\bar{X}_n(\omega) - EX_1). $$
Since either (1.2) or (5.1) ensures that for almost every $\omega \in \Omega$,
$$ \limsup_{n \to \infty} n^{1/2} (\bar{X}_n(\omega) - EX_1) = \infty, $$
the assertion (5.5) follows.
(iv) Let $\{X_{n,j};\ 1 \le j \le n,\ n \ge 1\}$ be a triangular array of nondegenerate i.i.d. random variables with $EX_{1,1}^2 < \infty$. Set $\sigma^2 = \operatorname{Var} X_{1,1}$. Li et al. (1995) showed that
$$ \limsup_{n \to \infty} \left( \frac{n}{2\log n} \right)^{1/2} \left| \frac{\sum_{j=1}^{n} X_{n,j}}{n} - EX_{1,1} \right| = \sigma \quad \text{a.s.} \tag{5.6} $$
if and only if
$$ E\left\{ \frac{X_{1,1}^4}{(L(|X_{1,1}|))^2} \right\} < \infty. $$
Note that (5.2) and (5.6) have the same order of convergence. We thus pose the question: Do conditions exist which are both necessary and sufficient for (5.2)? The authors hope that this problem will be investigated by the interested reader.
References

Arenal-Gutiérrez, E., Matrán, C., Cuesta-Albertos, J.A., 1996. On the unconditional strong law of large numbers for the bootstrap mean. Statist. Probab. Lett. 27, 49–60.
Athreya, K.B., 1983. Strong law for the bootstrap. Statist. Probab. Lett. 1, 147–150.
Bickel, P.J., Freedman, D.A., 1981. Some asymptotic theory for the bootstrap. Ann. Statist. 9, 1196–1217.
Csörgő, S., 1992. On the law of large numbers for the bootstrap mean. Statist. Probab. Lett. 14, 1–7.
Cuesta, J.A., Matrán, C., 1991. On the asymptotic behavior of sums of pairwise independent random variables. Statist. Probab. Lett. 11, 201–210 (Erratum: Statist. Probab. Lett. 12 (1991) 183).
Efron, B., 1979. Bootstrap methods: Another look at the jackknife. Ann. Statist. 7, 1–26.
Etemadi, N., 1981. An elementary proof of the strong law of large numbers. Z. Wahrsch. Verw. Gebiete 55, 119–122.
Hartman, P., Wintner, A., 1941. On the law of the iterated logarithm. Amer. J. Math. 63, 169–176.
Giné, E., Zinn, J., 1989. Necessary conditions for the bootstrap of the mean. Ann. Statist. 17, 684–691.
Li, D., Rao, M.B., Tomkins, R.J., 1995. A strong law for B-valued arrays. Proc. Amer. Math. Soc. 123, 3205–3212.
Li, D., Rosalsky, A., Ahmed, S.E., 1999. Complete convergence of bootstrapped means and moments of the supremum of normed bootstrapped sums. Stochastic Anal. Appl. 17, 799–814.
Mikosch, T., 1994. Almost sure convergence of bootstrapped means and U-statistics. J. Statist. Plann. Inference 41, 1–19.
Révész, P., Wschebor, W., 1964. On the statistical properties of the Walsch functions. Magyar Tud. Akad. Mat. Kutató Int. Közl. (Publ. Math. Inst. Hung. Acad. Sci.) 9, Ser. A, 543–554.
Singh, K., 1981. On the asymptotic accuracy of Efron's bootstrap. Ann. Statist. 9, 1187–1195.
Stout, W.F., 1974. Almost Sure Convergence. Academic Press, New York.
Sung, S.H., 1996. An analogue of Kolmogorov's law of the iterated logarithm for arrays. Bull. Austral. Math. Soc. 54, 177–182.