ARTICLE IN PRESS
Statistics & Probability Letters 77 (2007) 952–963
www.elsevier.com/locate/stapro

On complete convergence of triangular arrays of independent random variables

István Berkes^{a,*,1}, Michel Weber^{b}

a Department of Statistics, Technical University Graz, Steyrergasse 17/IV, A-8010 Graz, Austria
b Mathématique (IRMA), Université Louis-Pasteur et C.N.R.S., 7 rue René Descartes, 67084 Strasbourg Cedex, France

Received 23 March 2006; received in revised form 11 September 2006; accepted 19 December 2006
Available online 19 January 2007
Abstract

Given a triangular array $a = \{a_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ of positive reals, we study the complete convergence property of $T_n = \sum_{k=1}^{k_n} a_{n,k} X_{n,k}$ for triangular arrays $X = \{X_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ of independent random variables. In the Gaussian case we obtain a simple characterization of density type. Using Skorohod representation and Gaussian randomization, we then derive sufficient criteria for the case when the $X_{n,k}$ are in $L^p$, and establish a link between the $L^p$-case and the $L^{2p}$-case in terms of densities. We finally obtain a density type condition in the case of uniformly bounded random variables.
© 2007 Elsevier B.V. All rights reserved.

MSC: primary 60G50; 60F15; secondary 60G15

Keywords: Complete convergence; Triangular arrays; Independent random variables
1. Introduction and results

Throughout this paper, we let $X = \{X_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ denote a triangular array of real centered independent random variables, and $a = \{a_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$, with $\{k_n,\ n \ge 1\}$ non-decreasing, a triangular array of positive reals. When the random variables are symmetric (resp. identically distributed), we will say that the triangular array $X$ is symmetric (resp. iid). Set, for every $n \ge 1$,

$$T_n = \sum_{k=1}^{k_n} a_{n,k} X_{n,k}, \qquad A_n = \sum_{k=1}^{k_n} a_{n,k}, \qquad B_n^2 = \sum_{k=1}^{k_n} a_{n,k}^2, \qquad C_n = A_n / B_n. \tag{1}$$

Let $(\Omega, \mathcal{A}, P)$ be the basic probability space on which $X$ is defined. Note that $C_n \ge 1$. We investigate under what conditions the sequence $T_n / A_n$ converges completely to 0: $T_n / A_n \xrightarrow{c.c.} 0$, which means, as is well known,
* Corresponding author.
E-mail addresses: [email protected] (I. Berkes), [email protected] (M. Weber).
1 Research supported by Hungarian National Foundation for Scientific Research, Grants T 043037 and K 61052.
0167-7152/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.spl.2006.12.011
that for any $\varepsilon > 0$,

$$\sum_n P\{|T_n|/A_n > \varepsilon\} < \infty.$$

The study of this property originates from a well-known paper by Hsu and Robbins (1947) who proved, in the case of a single iid sequence $\xi = \{x, x_n,\ n \ge 1\}$ with partial sums $S_n = \sum_{k=1}^{n} x_k$, $n = 1, 2, \ldots$, that $Ex = 0$ and $Ex^2 < \infty$ imply $S_n / n \xrightarrow{c.c.} 0$. Shortly afterward, Erdős (1949) proved the validity of the converse implication. Since then, the study of various possible generalizations of this result (the subsequence case, the theorems of Baum and Katz (1965), extensions to triangular arrays of independent random variables, Banach space valued random variables) has received a lot of attention. See, for example, the works of Pruitt (1966), Rohatgi (1971), Fazekas (1985, 1992), Hu et al. (1989), Kuczmaszewska and Szynal (1988, 1990, 1994), Gut (1992), Li et al. (1992), Rao et al. (1993), Sung (1997), Adler et al. (1999), Hu et al. (1999), and Ahmed et al. (2002). The purpose of the present paper is to present new necessary as well as sufficient criteria for the complete convergence of triangular arrays of independent random variables, and to discuss their relations with known results in the literature.
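As a numerical aside of ours (not part of the original argument): for iid standard Gaussian summands the Hsu–Robbins series can be evaluated in closed form, since $S_n/n \sim N(0, 1/n)$ gives $P\{|S_n/n| > \varepsilon\} = \mathrm{erfc}(\varepsilon\sqrt{n/2})$. The sketch below (the helper name `tail_prob` and the truncation point are our choices) checks the summability directly:

```python
import math

def tail_prob(n: int, eps: float) -> float:
    # For iid standard normal summands, S_n/n ~ N(0, 1/n), hence
    # P{|S_n/n| > eps} = P{|N(0,1)| > eps*sqrt(n)} = erfc(eps*sqrt(n/2)).
    return math.erfc(eps * math.sqrt(n / 2.0))

eps = 0.5
terms = [tail_prob(n, eps) for n in range(1, 2001)]
partial = sum(terms)
# The terms decay like exp(-eps^2 * n / 2), so the partial sums stabilize
# quickly: the series converges, as the Hsu-Robbins theorem asserts.
```

Truncating at $n = 2000$ loses only a super-exponentially small tail, so `partial` is effectively the full sum.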
We start our investigations with the Gaussian case, because of the classical Gaussian randomization procedure for sums of independent random variables, and also because this case is in general very informative. If $X$ is Gaussian, the problem can be simply settled. Put

$$L(a) = \limsup_{x \to \infty} \frac{\log \#\{n : C_n \le x\}}{x^2}.$$

Then we have the following characterization.

Theorem 1. Assume that the $X_{n,k}$ are iid standard Gaussian variables. Then we have

$$T_n / A_n \xrightarrow{c.c.} 0 \iff L(a) = 0.$$
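To make the criterion concrete, here is a small numerical sketch of ours (the weight choices are illustrative assumptions, not from the paper). For constant weights $a_{n,k} = 1$ with $k_n = n$ one has $A_n = n$, $B_n^2 = n$, so $C_n = \sqrt{n}$ and $\sum_n e^{-\delta C_n^2}$ converges for every $\delta > 0$; hence $L(a) = 0$ and Theorem 1 yields complete convergence. By contrast, with $k_n \equiv 1$ and $a_{n,1} = 1$ one has $C_n \equiv 1$, the count $\#\{n : C_n \le x\}$ is infinite for $x \ge 1$, and $L(a) = \infty$.

```python
import math

def C_constant_weights(n: int) -> float:
    # a_{n,k} = 1 for 1 <= k <= k_n = n: A_n = n, B_n^2 = n, so C_n = sqrt(n).
    A_n = float(n)
    B_n = math.sqrt(n)
    return A_n / B_n

delta = 0.01
# sum_n exp(-delta * C_n^2) = sum_n exp(-delta * n), a convergent geometric
# series with value 1 / (exp(delta) - 1); truncation at 10000 is negligible.
s = sum(math.exp(-delta * C_constant_weights(n) ** 2) for n in range(1, 10001))
```

The convergence of this series for every $\delta > 0$ is exactly the equivalent form of $L(a) = 0$ used later in the paper.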
In view of this complete result, it is natural to attack the general iid case using invariance principles. Applying Skorohod embedding for the row sums of the triangular array $X$ leads, under natural conditions on the stopping times in the Skorohod representation, to a necessary and sufficient criterion for $T_n / A_n \xrightarrow{c.c.} 0$; see Proposition 10. This condition, in turn, leads to sufficient criteria under the existence of higher moments. In particular, we will prove:

Theorem 2. Assume that $E X_{n,k}^2 = 1$ and $X_{n,k} \in L^{2p}$ for some $p \ge 2$. Then the relation

$$\sum_n \frac{\left( \sum_{k=1}^{k_n} a_{n,k}^4 \right)^{p/2}}{\left( \sum_{k=1}^{k_n} a_{n,k}^2 \right)^{p}} < \infty$$

implies $T_n / A_n \xrightarrow{c.c.} 0$.
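A quick sanity check of ours on Theorem 2's condition (the constant-weight array is an illustrative assumption): for $a_{n,k} = 1$, $k_n = n$, the $n$th term is $n^{p/2}/n^p = n^{-p/2}$, so the series converges for every $p > 2$ but reduces to the divergent harmonic series at $p = 2$.

```python
def theorem2_term(n: int, p: float) -> float:
    # Constant weights a_{n,k} = 1, k_n = n:
    # (sum_k a_{n,k}^4)^{p/2} / (sum_k a_{n,k}^2)^p = n^{p/2} / n^p = n^{-p/2}.
    return n ** (p / 2.0) / n ** p

# p = 3: sum_n n^{-3/2} converges (to zeta(3/2) ~ 2.612), so Theorem 2
# applies to this array whenever the X_{n,k} have finite 6th moments.
s3 = sum(theorem2_term(n, 3.0) for n in range(1, 100001))
# p = 2: the terms are n^{-1}, the harmonic series diverges, and the
# criterion is silent at the boundary moment assumption.
```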
To compare this result with the Gaussian case, note that $L(a) = 0$ is equivalent to

$$\sum_n \exp\left\{ -\delta\, \frac{\left( \sum_{k=1}^{k_n} a_{n,k} \right)^2}{\sum_{k=1}^{k_n} a_{n,k}^2} \right\} < \infty \quad \text{for all } \delta > 0.$$
In the case when $X$ is also symmetric, the condition in Theorem 2 can be weakened.

Theorem 3. Assume that $X$ is symmetric, $E X_{n,k}^2 = 1$ and $X_{n,k} \in L^{2p}$ for some $p \ge 2$. Then the relation

$$\sum_n \frac{\left( \sum_{k=1}^{k_n} a_{n,k}^4 \right)^{p/2} \log^p n}{\left( \sum_{k=1}^{k_n} a_{n,k} \right)^{2p}} < \infty$$

implies $T_n / A_n \xrightarrow{c.c.} 0$.
Recall that the array $X$ is stochastically bounded by a random variable $X$ if there is a constant $D$ such that $P\{|X_{n,k}| > x\} \le D\, P\{|DX| > x\}$ for all $x > 0$ and for all $n \ge 1$, $1 \le k \le k_n$. We will prove the following result.
Theorem 4. Let $X$ be a symmetric triangular array stochastically bounded by a square integrable random variable $X$. Assume that for any $\varepsilon > 0$:

(a) $\sum_{1 \le k \le l < \infty} P\{|X| \ge \varepsilon A_l / a_{l,k}\} < \infty$.

Further assume that for some integer $r \ge 2$ and any $\varepsilon > 0$,

(b) $\sum_{n \ge 1} P\{|T_n| > \varepsilon A_n\}^r < \infty$.

Then

(c) $T_n / A_n \xrightarrow{c.c.} 0$.

Conversely, if the triangular array $X$ is iid symmetric, then (c) implies (a).
The next result concerns the uniformly bounded case. We show that a condition similar to that assumed in the Gaussian case suffices for complete convergence. Put, for any positive integer $n$,

$$V_n^2 = \sum_{k=1}^{k_n} a_{n,k}^2 X_{n,k}^2.$$

Theorem 5. Let $X$ be a triangular array of real centered, uniformly bounded independent random variables. Assume that for any $\varepsilon > 0$

$$E \sup_{m \ge 1} \frac{\#\{n : m < A_n / V_n \le m + 1\}}{\exp\{\varepsilon m^2\}} < \infty.$$

Then $T_n / A_n \xrightarrow{c.c.} 0$.
Our final result establishes a link between the complete convergence of arrays in the $L^p$- and the $L^{2p}$-case. Remarkably, the link is provided by the density condition appearing in the Gaussian case in Theorem 1. We need a preliminary definition.

Definition. Let $p \ge 2$. We say that $a$ is $p$-regular if any triangular array $X$ of real centered iid random variables with finite $p$th moments satisfies $T_n / A_n \xrightarrow{c.c.} 0$.

Let $a$ be a triangular array of positive reals. Define

$$a^2 := \{a_{n,k}^2,\ 1 \le k \le k_n,\ n \ge 1\}.$$

Then we have:

Theorem 6. Let $p \ge 2$ and assume that $a^2$ is $p$-regular. Then $a$ is $2p$-regular iff $L(a) = 0$.
2. Proofs

Proof of Theorem 1. Before giving the proof, recall for the reader's convenience an elementary estimate for Gaussian random variables due to Komatsu–Pollak (see Mitrinović, 1970, p. 178).

Lemma 7. The Mills ratio $R(x) = e^{x^2/2} \int_x^{\infty} e^{-t^2/2}\, dt$ satisfies

$$\frac{2}{\sqrt{x^2 + 4} + x} \le R(x) \le \frac{2}{\sqrt{x^2 + 8/\pi} + x} \quad \text{for all } x > 0.$$
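A numerical sanity check of ours for these bounds, using the identity $R(x) = \sqrt{\pi/2}\, e^{x^2/2}\, \mathrm{erfc}(x/\sqrt{2})$ (the function names are our choices):

```python
import math

def mills(x: float) -> float:
    # R(x) = e^{x^2/2} * integral_x^infty e^{-t^2/2} dt
    #      = sqrt(pi/2) * e^{x^2/2} * erfc(x / sqrt(2))
    return math.sqrt(math.pi / 2.0) * math.exp(x * x / 2.0) * math.erfc(x / math.sqrt(2.0))

def lower_bound(x: float) -> float:
    # Komatsu's lower estimate: 2 / (sqrt(x^2 + 4) + x)
    return 2.0 / (math.sqrt(x * x + 4.0) + x)

def upper_bound(x: float) -> float:
    # Pollak's upper estimate: 2 / (sqrt(x^2 + 8/pi) + x)
    return 2.0 / (math.sqrt(x * x + 8.0 / math.pi) + x)

for x in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    assert lower_bound(x) <= mills(x) <= upper_bound(x)
```

Both bounds behave like $1/x$ as $x \to \infty$, which is what makes them sharp enough for the tail comparisons below.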
Note that

$$P\{|T_n|/A_n > \varepsilon\} = P\{|N(0,1)| > \varepsilon C_n\} \asymp \frac{1}{1 + \varepsilon C_n}\, e^{-(\varepsilon C_n)^2/2} \quad \text{as } n \to \infty,$$
where the symbol $\asymp$ means that the ratio of the two sides is between positive constants. Thus it follows that $T_n / A_n \xrightarrow{c.c.} 0$ if and only if the series

$$\sum_n e^{-\delta C_n^2}$$

converges for any $\delta > 0$. And this is equivalent to $L(a) = 0$ (for a proof, see e.g. Weber, 1995, pp. 402–403). □
Proof of Theorem 6. The proof relies upon several intermediate results. Let $\xi = \{\xi_k,\ k \ge 1\}$ be a sequence of real centered independent square integrable random variables defined on the probability space $(\Omega, \mathcal{A}, P)$, and let $w = \{w_k,\ k \ge 1\}$ be a sequence of positive reals. Put, for any positive integer $m$,

$$S_m = \sum_{k=1}^{m} w_k \xi_k, \qquad W_m = \sum_{k=1}^{m} w_k, \qquad M_m = \sum_{k=1}^{m} w_k^2.$$

Recall the Skorohod embedding scheme (see e.g. Breiman, 1968): there exists, after suitably enlarging the probability space, a linear Brownian motion $B = \{B(t),\ 0 \le t < \infty\}$ starting at 0, and a sequence $\tau_1, \tau_2, \ldots$ of independent non-negative random variables with $E\tau_k = w_k^2 E\xi_k^2$, $k \ge 1$, such that, with $\tau_0 = 0$ a.s.,

$$\left\{ B\!\left( \sum_{j=0}^{k} \tau_j \right) - B\!\left( \sum_{j=0}^{k-1} \tau_j \right),\ k \ge 1 \right\} \stackrel{D}{=} \{w_k \xi_k,\ k \ge 1\}.$$

Put, for any real $x$,

$$\Psi(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-u^2/2}\, du.$$
Lemma 8. Let $\varepsilon, h, \delta$ be positive numbers with $\varepsilon > h > \sqrt{2\delta}$, and put

$$\Delta_m = \Delta_m(\delta) = P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\}.$$

Then, for any positive integer $m$, we have

$$\Psi\!\left( (\varepsilon + h) \frac{W_m}{\sqrt{M_m}} \right) - 4\Psi\!\left( \frac{h W_m}{\sqrt{2\delta M_m}} \right) - \Delta_m \le P\{|S_m| > \varepsilon W_m\} \le \Psi\!\left( (\varepsilon - h) \frac{W_m}{\sqrt{M_m}} \right) + 4\Psi\!\left( \frac{h W_m}{\sqrt{2\delta M_m}} \right) + \Delta_m.$$
Proof. We observe that

$$P\{|S_m| > \varepsilon W_m\} = P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| > \varepsilon W_m \right\}$$
$$\le P\{|B(M_m)| > (\varepsilon - h) W_m\} + P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| > \varepsilon W_m,\ |B(M_m)| \le (\varepsilon - h) W_m \right\}$$
$$\le \Psi\!\left( \frac{(\varepsilon - h) W_m}{\sqrt{M_m}} \right) + P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) - B(M_m) \Big| \ge h W_m \right\}$$
$$\le \Psi\!\left( \frac{(\varepsilon - h) W_m}{\sqrt{M_m}} \right) + P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\} + P\left\{ \sup_{|y-1| \le \delta} |B(y M_m) - B(M_m)| \ge h W_m \right\}$$
$$= \Psi\!\left( \frac{(\varepsilon - h) W_m}{\sqrt{M_m}} \right) + P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\} + P\left\{ \sup_{|y-1| \le \delta} |B(y) - B(1)| \ge h \frac{W_m}{\sqrt{M_m}} \right\}.$$
Conversely,

$$\Psi\!\left( (\varepsilon + h) \frac{W_m}{\sqrt{M_m}} \right) = P\{|B(M_m)| > (\varepsilon + h) W_m\}$$
$$\le P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| > \varepsilon W_m \right\} + P\left\{ |B(M_m)| > (\varepsilon + h) W_m,\ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| \le \varepsilon W_m \right\}$$
$$\le P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| > \varepsilon W_m \right\} + P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) - B(M_m) \Big| \ge h W_m \right\}$$
$$\le P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| > \varepsilon W_m \right\} + P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\} + P\left\{ \sup_{|y-1| \le \delta} |B(y M_m) - B(M_m)| \ge h W_m \right\}$$
$$= P\left\{ \Big| B\!\left( \sum_{j=0}^{m} \tau_j \right) \Big| > \varepsilon W_m \right\} + P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\} + P\left\{ \sup_{|y-1| \le \delta} |B(y) - B(1)| \ge h \frac{W_m}{\sqrt{M_m}} \right\}.$$

Since $B$ has stationary increments, we get by using scale invariance, the symmetry of the law of $B$, and Eq. (1.5.1) in Csörgő and Révész (1981, p. 43),

$$P\left\{ \sup_{|y-1| \le \delta} |B(y) - B(1)| \ge h \frac{W_m}{\sqrt{M_m}} \right\} = P\left\{ \sup_{u \in [0, 2\delta]} |B(u)| \ge \frac{h W_m}{\sqrt{M_m}} \right\}$$
$$= P\left\{ \sup_{0 \le u \le 1} |B(u)| \ge \frac{h W_m}{\sqrt{2\delta M_m}} \right\}$$
$$= P\left\{ \max\left( \sup_{0 \le u \le 1} B(u),\ \sup_{0 \le u \le 1} (-B(u)) \right) \ge \frac{h W_m}{\sqrt{2\delta M_m}} \right\}$$
$$\le 2 P\left\{ \sup_{0 \le u \le 1} B(u) \ge \frac{h W_m}{\sqrt{2\delta M_m}} \right\} = 4\Psi\!\left( \frac{h W_m}{\sqrt{2\delta M_m}} \right).$$

Consequently,

$$\Psi\!\left( (\varepsilon + h) \frac{W_m}{\sqrt{M_m}} \right) - 4\Psi\!\left( \frac{h W_m}{\sqrt{2\delta M_m}} \right) - \Delta_m \le P\{|S_m| > \varepsilon W_m\} \le \Psi\!\left( (\varepsilon - h) \frac{W_m}{\sqrt{M_m}} \right) + 4\Psi\!\left( \frac{h W_m}{\sqrt{2\delta M_m}} \right) + \Delta_m.$$

This completes the proof. □
We shall apply Lemma 8 to triangular arrays. Let again $X = \{X_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ be a triangular array of real centered independent random variables and $a = \{a_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ a triangular array of positive reals. By considering, if necessary, a larger probability space, we can always assume that there exists a sequence $\xi^1, \xi^2, \ldots$ such that for each positive integer $n$,

$$\xi^n = \{\xi_{n,k},\ k \ge 1\}, \quad \text{with } \xi_{n,k} = X_{n,k},\ 1 \le k \le k_n,$$

and $\xi^n$ is a sequence of independent random variables. Further, the sequences $\xi^1, \xi^2, \ldots$ are mutually independent. By suitably enlarging the probability space, there exists for each integer $n$ a linear Brownian motion $B_n = \{B_n(t),\ 0 \le t < \infty\}$ starting at 0 and a sequence $\tau_1^n, \tau_2^n, \ldots$ of independent non-negative random variables with $E\tau_k^n = a_{n,k}^2 E\xi_{n,k}^2$, $k \ge 1$, such that, with $\tau_0^n = 0$ a.s.,

$$\left\{ B_n\!\left( \sum_{j=0}^{k} \tau_j^n \right) - B_n\!\left( \sum_{j=0}^{k-1} \tau_j^n \right),\ k \ge 1 \right\} \stackrel{D}{=} \{a_{n,k} \xi_{n,k},\ k \ge 1\}.$$

In fact, in each step, it would be enough to let $k$ run between 1 and $k_n$. By applying Lemma 8 with the choice $\xi = \xi^n$, $m = k_n$, we now easily deduce the following corollary.
Corollary 9. Let $\varepsilon, h, \delta$ be positive reals with $\varepsilon > h > \sqrt{2\delta}$. Then, with notation (1), for $n = 1, 2, \ldots$

$$\Psi((\varepsilon + h) C_n) - 4\Psi\!\left( \frac{h}{\sqrt{2\delta}} C_n \right) - \Delta_n(\delta) \le P\{|T_n| > \varepsilon A_n\} \le \Psi((\varepsilon - h) C_n) + 4\Psi\!\left( \frac{h}{\sqrt{2\delta}} C_n \right) + \Delta_n(\delta),$$

where

$$\Delta_n(\delta) = P\left\{ \Big| \sum_{j=0}^{k_n} \tau_j^n - B_n^2 \Big| \ge \delta B_n^2 \right\}.$$
This result will allow us to establish the following statement.

Proposition 10. Assume that $X$ and $a$ satisfy

$$\sum_m \Delta_m(\delta) < \infty \quad \text{for all } \delta > 0. \tag{2}$$

Then

$$T_n / A_n \xrightarrow{c.c.} 0 \iff L(a) = 0.$$

This proposition can be viewed as an extension of Theorem 1, since in the Gaussian case $\tau_j^n \stackrel{a.s.}{=} a_{n,j}^2$.
Proof. The key lies in the comparison between $\Psi((\varepsilon + h) C_n)$ and $\Psi((h/\sqrt{2\delta}) C_n)$, which is achieved by using Lemma 7. The implication $L(a) = 0 \Rightarrow T_n / A_n \xrightarrow{c.c.} 0$ is easy. Indeed, if $L(a) = 0$, then for any $\rho > 0$ the series $\sum_n e^{-\rho C_n^2}$ converges, or equivalently,

$$\sum_n \Psi(\rho C_n) < \infty \quad \text{for all } \rho > 0. \tag{3}$$

Let $\varepsilon > 0$, and choose $h, \delta$ in Corollary 9 such that $h = \varepsilon/2 > \sqrt{2\delta}$. By Corollary 9 and the assumption made, the series $\sum_n P\{|T_n| > \varepsilon A_n\}$ converges provided

$$\sum_n \Psi((\varepsilon - h) C_n) < \infty, \qquad \sum_n \Psi\!\left( \frac{h}{\sqrt{2\delta}} C_n \right) < \infty.$$

And this holds true if $\sum_n \Psi((\varepsilon/2) C_n) < \infty$, which is satisfied by assumption. Hence the first part of Proposition 10 is proved.
Conversely, if $T_n / A_n \xrightarrow{c.c.} 0$, then the series $\sum_n P\{|T_n| > \varepsilon A_n\}$ converges for any $\varepsilon > 0$. We shall prove that (3) holds true. We distinguish two cases.

Case 1: $\liminf_{n \to \infty} C_n = \infty$. Let $\rho > 0$ be fixed; we choose $\varepsilon, h, \delta$ such that $\varepsilon = h = \rho/2$ and $\delta = 1/32$, so that $h/\sqrt{2\delta} = 2\rho$. Then $\Psi((\varepsilon + h) C_n) = \Psi(\rho C_n)$ and $\Psi((h/\sqrt{2\delta}) C_n) = \Psi(2\rho C_n)$. By Lemma 7,

$$\Psi(\rho C_n) \asymp \frac{1}{1 + \rho C_n}\, e^{-(\rho C_n)^2/2}, \qquad \Psi(2\rho C_n) \asymp \frac{1}{1 + 2\rho C_n}\, e^{-2(\rho C_n)^2},$$

so that, for any $\rho < \rho_1 < \rho_2 < 2\rho$, if $n$ is sufficiently large,

$$\Psi(\rho C_n) \ge e^{-(\rho_1 C_n)^2/2}, \qquad 4\Psi(2\rho C_n) \le e^{-(\rho_2 C_n)^2/2}.$$

Therefore,

$$\Psi(\rho C_n) - 4\Psi(2\rho C_n) \ge e^{-(\rho_1 C_n)^2/2} \left( 1 - e^{-(\rho_2^2 - \rho_1^2) C_n^2/2} \right) \ge \tfrac{1}{2}\, e^{-(\rho_1 C_n)^2/2}$$

for $n$ sufficiently large. In view of Corollary 9 and assumption (2), this implies that the series $\sum_n e^{-(\rho_1 C_n)^2/2}$ converges. This being true for any $\rho > 0$ and any $\rho_1 > \rho$, it follows that (3) is satisfied, as claimed.

Case 2: $\liminf_{n \to \infty} C_n < \infty$. In this case there exist a sequence of indices $\{n_j,\ j \ge 1\}$ and a real $t$ such that $\lim_{j \to \infty} C_{n_j} = t$. Choose $\rho > 0$ such that $\Psi(\rho t) > 4\Psi(2\rho t)$, and let again $\varepsilon, h, \delta$ be such that $\varepsilon = h = \rho/2$, $\delta = 1/32$. Applying Corollary 9 for $n = n_j$, $j = 1, 2, \ldots$ gives

$$\Psi(\rho C_{n_j}) - 4\Psi(2\rho C_{n_j}) \le P\{|T_{n_j}| > \varepsilon A_{n_j}\} + \Delta_{n_j}(\delta).$$
Letting now $j$ tend to infinity implies

$$0 < \Psi(\rho t) - 4\Psi(2\rho t) \le \liminf_{j \to \infty} \left( P\{|T_{n_j}| > \varepsilon A_{n_j}\} + \Delta_{n_j}(\delta) \right),$$

which contradicts the fact that both series $\sum_n P\{|T_n| > \varepsilon A_n\}$ and $\sum_n \Delta_n(\delta)$ converge. The proof is now complete. □
We can now pass to the proof of Theorem 6. Let $p \ge 2$ and let $a = \{a_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ be a triangular array of positive reals such that $b = a^2$ is $p$-regular. Let $X = \{X_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ be a triangular array of real centered iid random variables with finite $2p$th moment. We shall make use of the fact (Fisher (1992), Theorem 2.1) that for each $n$ we can assume that $\{\tau_k^n,\ 1 \le k \le k_n\} \stackrel{D}{=} \{a_{n,k}^2 \theta_k^n,\ 1 \le k \le k_n\}$, where $\{\theta_k^n,\ 1 \le k \le k_n\}$ is an iid sequence with finite $p$th moments. As $b$ is $p$-regular, (2) is satisfied. Using Proposition 10, we get the desired conclusion. □

Remark. Although the characterization given in Theorem 6 is simple, it is rather abstract. Usually condition (2) is as difficult to check as the fact that $a$ is $2p$-regular. Thus the interest in a statement like Theorem 6 is the link established between $p$-regularity and $2p$-regularity, via the arrays $a$ and $b$.
It is possible to check condition (2) directly, by imposing conditions on the weights, which, however, appear to be stronger than the condition $L(a) = 0$. To see this, we shall use some arguments from Weber (2006). In order to avoid unnecessarily heavy notation, we simply return to the setting considered in Lemma 8, and will bound the quantity

$$\Delta_m = \Delta_m(\delta) = P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\}.$$

Using inequality (1.2) in Davis (1976) we see that if $E|\xi_i|^{2+e} < \infty$ for some $e > 0$, the sequence of stopping times $\tau_i$ satisfies

$$E \tau_i^{1 + e/2} \le C w_i^{2+e} E|\xi_i|^{2+e}, \tag{4}$$

where the constant $C$ depends on $e$ only. Let $p \ge 2$. Assume that for any positive integer $i$, $\xi_i \in L^{2p}$, and moreover

$$Q_p(\xi) := \sup_{i \ge 1} E|\xi_i|^{2p} < \infty.$$
Put, for any positive integer $l$,

$$\eta_l = \tau_l - E\tau_l = \tau_l - w_l^2.$$

Then using (4) with $2(p - 1) = e$ gives

$$E|\eta_l|^p \le 2^p \left( E|\tau_l|^p + w_l^{2p} \right) \le C_p' (1 + Q_p(\xi))\, w_l^{2p},$$

where $C_p'$ depends on $p$ only, and may vary in the next lines. Further note that in the case $\xi_l \in L^4$, $l \ge 1$, we have

$$0 \le E\eta_l^2 = E\tau_l^2 - (E\tau_l)^2 \le E\tau_l^2 \le C_2' w_l^4 E|\xi_l|^4.$$

Apply now Rosenthal's inequality (see e.g. Petrov, 1995, p. 59). In view of the centering and independence of the $\eta_l$'s, we get

$$E\left| \sum_{l=1}^{m} (\tau_l - w_l^2) \right|^p \le C_p' \left( \sum_{l=1}^{m} w_l^{2p} + \left( \sum_{l=1}^{m} E\eta_l^2 \right)^{p/2} \right) \le C_p' (1 + Q_p(\xi)) \left( \sum_{l=1}^{m} w_l^{2p} + \left( \sum_{l=1}^{m} w_l^4 \right)^{p/2} \right) \le C_p' (1 + Q_p(\xi)) \left( \sum_{l=1}^{m} w_l^4 \right)^{p/2}.$$
Consequently, by using Chebyshev's inequality,

$$\Delta_m(\delta) = P\left\{ \Big| \sum_{j=0}^{m} \tau_j - M_m \Big| \ge \delta M_m \right\} \le C_p' (1 + Q_p(\xi))\, \frac{\left( \sum_{l=1}^{m} w_l^4 \right)^{p/2}}{(\delta M_m)^p}.$$

We thus see that condition (2) holds provided

$$\sum_m \left( \frac{\left[ \sum_{l=1}^{m} w_l^4 \right]^{1/2}}{M_m} \right)^p < \infty.$$

For triangular arrays, this means that

$$\sum_n \left( \frac{\left[ \sum_{k=1}^{k_n} a_{n,k}^4 \right]^{1/2}}{\sum_{k=1}^{k_n} a_{n,k}^2} \right)^p < \infty,$$

establishing Theorem 2. As we noted earlier, $L(a) = 0$ is equivalent to

$$\sum_n \exp\left\{ -\delta\, \frac{\left( \sum_{k=1}^{k_n} a_{n,k} \right)^2}{\sum_{k=1}^{k_n} a_{n,k}^2} \right\} < \infty \quad \text{for all } \delta > 0.$$
Proof of Theorem 3. Since $X$ is symmetric, it has the same law as $\{\varepsilon_{n,k} X_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$, where $\varepsilon = \{\varepsilon_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ is a Rademacher sequence defined on a joint probability space $(\Omega_\varepsilon, \mathcal{A}_\varepsilon, P_\varepsilon)$ (with corresponding expectation symbol $E_\varepsilon$). Put

$$Y_n = \sum_{k=1}^{k_n} a_{n,k} \varepsilon_{n,k} X_{n,k}, \qquad Q_n = \frac{\sum_{k=1}^{k_n} a_{n,k}^2 X_{n,k}^2}{B_n^2}.$$

Let $\{O_n,\ n \ge 1\}$ be a sequence of positive reals. Write

$$P\left\{ \frac{|T_n|}{A_n} > \varepsilon \right\} = E\, P_\varepsilon\left\{ \frac{|Y_n|}{A_n} > \varepsilon \right\} \le P\{Q_n > O_n\} + E\, \mathbf{1}_{\{Q_n \le O_n\}}\, P_\varepsilon\left\{ \frac{|Y_n|}{A_n} > \varepsilon \right\}.$$

Further, there exists an absolute constant $C$ such that

$$P_\varepsilon\left\{ \frac{|Y_n|}{A_n} > \varepsilon \right\} \le \exp\left\{ -C\, \frac{\varepsilon^2 A_n^2}{\sum_{k=1}^{k_n} a_{n,k}^2 X_{n,k}^2} \right\} = \exp\left\{ -C\, \frac{\varepsilon^2 A_n^2}{Q_n B_n^2} \right\} = \exp\left\{ -C\, \frac{\varepsilon^2 C_n^2}{Q_n} \right\}.$$

We deduce that

$$P\left\{ \frac{|T_n|}{A_n} > \varepsilon \right\} \le P\{Q_n > O_n\} + \exp\left\{ -C\, \frac{\varepsilon^2 C_n^2}{O_n} \right\}.$$

It follows that if

(a) $\sum_{n=1}^{\infty} P\{Q_n > O_n\} < \infty$ and (b) $\sum_{n=1}^{\infty} \exp\left\{ -C\, \frac{\varepsilon^2 C_n^2}{O_n} \right\} < \infty$,

then $T_n / A_n \xrightarrow{c.c.} 0$. Choosing in particular $O_n = C_n^2 / (L \log n)$ (with $L > 1$) shows that $T_n / A_n \xrightarrow{c.c.} 0$, provided that

$$\sum_{n=1}^{\infty} P\{Q_n > \lambda C_n^2 / \log n\} < \infty$$
for any $\lambda > 0$. To connect the last sum with the sum in Theorem 3, we use Rosenthal's inequality. Recall that we assumed, for $1 \le k \le k_n$, $n \ge 1$, that $E X_{n,k}^2 = 1$ and, for some $p \ge 2$, $X_{n,k} \in L^{2p}$. Put

$$Y_{n,k} = a_{n,k}^2 (X_{n,k}^2 - 1), \qquad 1 \le k \le k_n,\ n \ge 1;$$

then for sufficiently large $n$ we have

$$P\{Q_n > \lambda C_n^2 / \log n\} = P\left\{ \sum_{k=1}^{k_n} a_{n,k}^2 X_{n,k}^2 > \lambda\, \frac{A_n^2}{\log n} \right\} \le P\left\{ \sum_{k=1}^{k_n} a_{n,k}^2 (X_{n,k}^2 - 1) > \frac{\lambda}{2}\, \frac{A_n^2}{\log n} \right\} \le \frac{E\left| \sum_{k=1}^{k_n} Y_{n,k} \right|^p}{\left( \frac{\lambda}{2} A_n^2 / \log n \right)^p}.$$

Now, by Rosenthal's inequality,

$$E\left| \sum_{k=1}^{k_n} Y_{n,k} \right|^p \le C' \left\{ \left( \sum_{k=1}^{k_n} E Y_{n,k}^2 \right)^{p/2} + \sum_{k=1}^{k_n} E|Y_{n,k}|^p \right\},$$

where $C'$ is an absolute constant. But

$$\sum_{k=1}^{k_n} E Y_{n,k}^2 = \sum_{k=1}^{k_n} a_{n,k}^4 E(X_{n,k}^2 - 1)^2 \le C \|X\|_4^4 \sum_{k=1}^{k_n} a_{n,k}^4,$$

so that

$$E\left| \sum_{k=1}^{k_n} Y_{n,k} \right|^p \le C' \left\{ \left( C \|X\|_4^4 \sum_{k=1}^{k_n} a_{n,k}^4 \right)^{p/2} + \|X\|_{2p}^{2p} \sum_{k=1}^{k_n} a_{n,k}^{2p} \right\} \le C_p \max\left( \|X\|_4^4,\ \|X\|_{2p}^{2p} \right) \left( \sum_{k=1}^{k_n} a_{n,k}^4 \right)^{p/2}.$$

Therefore,

$$P\{Q_n > \lambda C_n^2 / \log n\} \le C_p \max\left( \|X\|_4^4,\ \|X\|_{2p}^{2p} \right) \frac{\left( \sum_{k=1}^{k_n} a_{n,k}^4 \right)^{p/2}}{\left( \frac{\lambda}{2} A_n^2 / \log n \right)^p}.$$

This completes the proof of Theorem 3. □
Proof of Theorem 4. Let $Y_1, \ldots, Y_n$ be independent symmetric random variables, $S_n = Y_1 + \cdots + Y_n$. One part of the Hoffmann-Jørgensen (1974) inequality states that

$$P\{|S_n| > 3^p t\} \le C_p\, P\left\{ \max_{1 \le k \le n} |Y_k| > t \right\} + C_p \left( P\{|S_n| > t\} \right)^{2^p} \tag{5}$$

for any integer $p \ge 1$, where $C_p$ is a constant depending on $p$. By (5) we have

$$P\{|T_n| > 3^p \varepsilon A_n\} \le D C_p \sum_{k=1}^{k_n} P\{|D a_{n,k} X| > \varepsilon A_n\} + C_p \left( P\{|T_n| > \varepsilon A_n\} \right)^{2^p}. \tag{6}$$

Choosing $p$ large enough and summing (6) for $n = 1, 2, \ldots$ we get

$$\sum_{n=1}^{\infty} P\{|T_n| > 3^p \varepsilon A_n\} \le D C_p \sum_{\substack{1 \le k \le k_n \\ n \ge 1}} P\{|X| > \varepsilon A_n / (D a_{n,k})\} + C_p \sum_{n=1}^{\infty} \left( P\{|T_n| > \varepsilon A_n\} \right)^{2^p}.$$
Assumptions (a) and (b) therefore imply (c). Conversely, if (c) is true, then

$$P\{|T_n| > \varepsilon A_n\} \ge \frac{1}{2}\, P\left\{ \max_{1 \le k \le k_n} |a_{n,k} X_k| \ge \varepsilon A_n \right\} = \frac{1}{2}\left( 1 - P\left\{ \max_{1 \le k \le k_n} |a_{n,k} X_k| < \varepsilon A_n \right\} \right)$$
$$= \frac{1}{2}\left( 1 - \prod_{k=1}^{k_n} \left( 1 - P\{|a_{n,k} X_k| \ge \varepsilon A_n\} \right) \right) \ge \frac{1}{2}\left( 1 - \prod_{k=1}^{k_n} e^{-P\{|a_{n,k} X_k| \ge \varepsilon A_n\}} \right)$$
$$= \frac{1}{2}\left( 1 - e^{-\sum_{k=1}^{k_n} P\{|a_{n,k} X_k| \ge \varepsilon A_n\}} \right) =: \frac{1}{2}\left[ 1 - e^{-\lambda_n} \right].$$

From this estimate and (c) it follows that $\lambda_n$ tends to 0, and then the chain of estimates can be continued as

$$\tfrac{1}{2}\left[ 1 - e^{-\lambda_n} \right] = \tfrac{1}{2}\left[ \lambda_n + O(\lambda_n^2) \right] \ge \tfrac{1}{4} \lambda_n$$

for any integer $n$ sufficiently large. Therefore, for $n$ large,

$$P\{|T_n| > \varepsilon A_n\} \ge \tfrac{1}{4} \lambda_n.$$

Consequently (c) implies $\sum_n \lambda_n < \infty$, which is exactly (a). □
Proof of Theorem 5. The proof is based on a convexity argument enabling us to use the Gaussian randomization technique. First of all, there is no loss of generality in assuming that for any $n \ge 1$ and $1 \le k \le k_n$ we have $|X_{n,k}| \le 1$ a.s.

Let $X'$ be an independent copy of $X$ defined on a joint probability space $(\Omega', \mathcal{A}', P')$ with corresponding expectation symbol $E'$. Write $T_n' = \sum_{k=1}^{k_n} a_{n,k} X_{n,k}'$. Let $\varepsilon = \{\varepsilon_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ be a triangular array of independent Rademacher random variables defined on a joint probability space $(\Omega_\varepsilon, \mathcal{A}_\varepsilon, P_\varepsilon)$, with corresponding expectation symbol $E_\varepsilon$. Similarly, let $g = \{g_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$ be a triangular array of independent $N(0,1)$ distributed random variables defined on a joint probability space $(\Omega_g, \mathcal{A}_g, P_g)$, with corresponding expectation symbol $E_g$. Let $A$ be any real number and consider the convex non-decreasing function $\varphi_A(x) = (x - A)^+$. If $X$ is any random variable, then for any positive real $a$, $a P\{X > A + a\} \le E \varphi_A(X)$. Applying this with $A = a = \varepsilon A_n$ and $X = T_n$, and then using Jensen's inequality, leads to

$$(\varepsilon A_n)\, P\{T_n > 2\varepsilon A_n\} \le E \varphi_{\varepsilon A_n}(T_n) = E \varphi_{\varepsilon A_n}(T_n - E' T_n') \le E E' \varphi_{\varepsilon A_n}(T_n - T_n')$$
$$= E E_\varepsilon\, \varphi_{\varepsilon A_n}\!\left( \sum_{k=1}^{k_n} a_{n,k} \varepsilon_{n,k} X_{n,k} \right) = E E_\varepsilon\, \varphi_{\varepsilon A_n}\!\left( \frac{\sum_{k=1}^{k_n} a_{n,k} \varepsilon_{n,k} (E_g |g_{n,k}|) X_{n,k}}{(2/\pi)^{1/2}} \right)$$
$$\le E E_\varepsilon E_g\, \varphi_{\varepsilon A_n}\!\left( \frac{\sum_{k=1}^{k_n} a_{n,k} \varepsilon_{n,k} |g_{n,k}| X_{n,k}}{(2/\pi)^{1/2}} \right) = E E_g\, \varphi_{\varepsilon A_n}\!\left( \frac{\sum_{k=1}^{k_n} a_{n,k} g_{n,k} X_{n,k}}{(2/\pi)^{1/2}} \right). \tag{7}$$

In the last equality we used the fact that $\{\varepsilon_{n,k} |g_{n,k}|,\ 1 \le k \le k_n,\ n \ge 1\} \stackrel{D}{=} \{g_{n,k},\ 1 \le k \le k_n,\ n \ge 1\}$. Applying the same argument to $A = a = \varepsilon A_n$ and $X = -T_n$ also gives

$$(\varepsilon A_n)\, P\{-T_n > 2\varepsilon A_n\} \le E E_g\, \varphi_{\varepsilon A_n}\!\left( \frac{-\sum_{k=1}^{k_n} a_{n,k} g_{n,k} X_{n,k}}{(2/\pi)^{1/2}} \right). \tag{8}$$
As $P\{|T_n| > 2\varepsilon A_n\} \le P\{T_n > 2\varepsilon A_n\} + P\{-T_n > 2\varepsilon A_n\}$, we obtain from (7) and (8)

$$(\varepsilon A_n)\, P\{|T_n| > 2\varepsilon A_n\} \le 2\, E E_g\, \varphi_{\varepsilon A_n}\!\left( \frac{\sum_{k=1}^{k_n} a_{n,k} g_{n,k} X_{n,k}}{(2/\pi)^{1/2}} \right).$$

But, conditionally on $X$, the sum $\sum_{k=1}^{k_n} a_{n,k} g_{n,k} X_{n,k}$ is centered Gaussian with variance $V_n^2$, so that, writing $x_0 = (2/\pi)^{1/2} \varepsilon A_n / V_n$,

$$E_g\, \varphi_{\varepsilon A_n}\!\left( \frac{\sum_{k=1}^{k_n} a_{n,k} g_{n,k} X_{n,k}}{(2/\pi)^{1/2}} \right) = \int_{\varepsilon A_n}^{\infty} P\left\{ N(0,1) > \frac{(2/\pi)^{1/2} u}{V_n} \right\} du = \frac{V_n}{(2/\pi)^{1/2}} \int_{x_0}^{\infty} P\{N(0,1) > v\}\, dv$$
$$= \frac{V_n}{(2/\pi)^{1/2}} \int_{x_0}^{\infty} \int_{v}^{\infty} e^{-w^2/2}\, \frac{dw}{\sqrt{2\pi}}\, dv = \frac{V_n}{2} \int_{x_0}^{\infty} R(v)\, e^{-v^2/2}\, dv \le \frac{V_n}{2}\, R(x_0)^2\, e^{-x_0^2/2} \le \frac{\pi V_n}{4}\, e^{-\varepsilon^2 A_n^2 / \pi V_n^2},$$

using that $R$ is non-increasing and $R(x_0) \le R(0) = \sqrt{\pi/2}$. Therefore,

$$P\{|T_n| > 2\varepsilon A_n\} \le E\, \frac{\pi V_n}{4 \varepsilon A_n}\, e^{-\varepsilon^2 A_n^2 / \pi V_n^2}.$$

We now make use of the boundedness assumption on the array $X$. The above inequality becomes in this case

$$P\{|T_n| > 2\varepsilon A_n\} \le \frac{\pi}{4\varepsilon}\, E\, e^{-\varepsilon^2 A_n^2 / \pi V_n^2},$$

since $V_n^2 = \sum_{k=1}^{k_n} a_{n,k}^2 X_{n,k}^2 \le \sum_{k=1}^{k_n} a_{n,k}^2 \le A_n^2$ a.s. Put, for $m = 1, 2, \ldots$:

$$J_m = \{n : m \le A_n / V_n < m + 1\}.$$

Then

$$\sum_{n=1}^{\infty} P\{|T_n| > 2\varepsilon A_n\} \le \frac{\pi}{4\varepsilon} \sum_{m=1}^{\infty} E \sum_{n \in J_m} e^{-\varepsilon^2 A_n^2 / \pi V_n^2} \le \frac{\pi}{4\varepsilon} \sum_{m=1}^{\infty} E\left[ \#\{J_m\}\, e^{-\varepsilon^2 m^2 / 2\pi} \right] e^{-\varepsilon^2 m^2 / 2\pi}$$
$$\le \frac{\pi}{4\varepsilon}\, E\left[ \sup_{m \ge 1} \#\{J_m\}\, e^{-\varepsilon^2 m^2 / 2\pi} \right] \sum_{m=1}^{\infty} e^{-\varepsilon^2 m^2 / 2\pi} \le C_\varepsilon\, E \sup_{m \ge 1} \frac{\#\{n : m < A_n / V_n \le m + 1\}}{\exp\{\varepsilon^2 m^2 / 2\pi\}}.$$

This completes the proof of Theorem 5. □
Acknowledgment
We thank Anna Kuczmaszewska for several useful remarks concerning the first draft of the paper.
References
Adler, A., Cabrera, M., Rosalsky, A., Volodin, A., 1999. Degenerate weak convergence of row sums for arrays of random elements in
stable type p Banach spaces. Bull. Inst. Math. Acad. Sinica 27, 187–212.
Ahmed, S., Antonini, R.G., Volodin, A., 2002. On the rate of complete convergence for weighted sums of arrays of Banach space valued
random elements with application to moving average processes. Statist. Probab. Lett. 58, 185–194.
Baum, L.E., Katz, M., 1965. Convergence rates in the law of large numbers. Trans. Amer. Math. Soc. 120, 108–123.
Breiman, L., 1968. Probability. Addison-Wesley, Reading, MA.
Csörgő, M., Révész, P., 1981. Strong Approximations in Probability and Statistics. Academic Press, New York.
Davis, B., 1976. On the Lp norms of stochastic integrals and other martingales. Duke Math. J. 43, 697–704.
Erdős, P., 1949. On a theorem of Hsu and Robbins. Ann. Math. Statist. 20, 286–291.
Fazekas, I., 1985. Convergence rates in the Marcinkiewicz strong law of large numbers for Banach space valued random variables with multidimensional indices. Publ. Math. Debrecen 32, 203–209.
Fazekas, I., 1992. Convergence rates in the law of large numbers for arrays. Publ. Math. Debrecen 41, 53–71.
Fisher, E., 1992. A Skorohod representation and an invariance principle for sums of weighted iid random variables. Rocky Mount. J.
Math. 22, 169–179.
Gut, A., 1992. Complete convergence for arrays. Period. Math. Hungar. 25, 51–75.
Hoffmann-Jørgensen, J., 1974. Sums of independent Banach space valued random variables. Studia Math. 52, 159–186.
Hu, T.-C., Móricz, F., Taylor, R.L., 1989. Strong laws of large numbers for arrays of rowwise independent random variables. Acta Math.
Hungar. 54, 153–162.
Hu, T.-C., Rosalsky, A., Szynal, D., Volodin, A., 1999. On complete convergence for arrays of rowwise independent random elements in
Banach spaces. Stochastic Anal. Appl. 17, 963–992.
Hsu, P.L., Robbins, H., 1947. Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25–31.
Kuczmaszewska, A., Szynal, D., 1988. On the Hsu–Robbins law of large numbers for subsequences. Bull. Polish Acad. Sci. Math. 36,
69–79.
Kuczmaszewska, A., Szynal, D., 1990. On complete convergence for partial sums of independent identically distributed random variables.
Probab. Math. Statist. 11, 223–235.
Kuczmaszewska, A., Szynal, D., 1994. On complete convergence in a Banach space. Internat. J. Math. Math. Sci. 17, 1–14.
Li, D., Rao, M.B., Wang, X., 1992. Complete convergence of moving average processes. Statist. Probab. Lett. 14, 111–114.
Mitrinović, D.S., 1970. Analytic Inequalities. Die Grundlehren der Mathematischen Wissenschaften, Band 165. Springer, Berlin.
Petrov, V.V., 1995. Limit Theorems of Probability Theory. Sequences of Independent Random Variables. Clarendon Press, Oxford.
Pruitt, W., 1966. Summability of independent random variables. J. Math. Mech. 15, 769–776.
Rao, M.B., Wang, X., Yang, X., 1993. Convergence rates on strong laws of large numbers for arrays of rowwise independent elements.
Stochastic Anal. Appl. 11, 115–132.
Rohatgi, V.K., 1971. Convergence of weighted sums of independent random variables. Proc. Cambridge Philos. Soc. 69, 305–307.
Sung, S.H., 1997. Complete convergence for weighted sums of arrays of rowwise independent B-valued random variables. Stochastic Anal.
Appl. 15, 255–267.
Weber, M., 1995. Borel matrix. Comment. Math. Univ. Carolin. 36, 401–415.
Weber, M., 2006. A weighted CLT. Statist. Probab. Lett. 76, 1482–1487.