Statistics and Probability Letters 78 (2008) 1999–2005
www.elsevier.com/locate/stapro
On convergence properties of sums of dependent random variables
under second moment and covariance restrictions
Tien-Chung Hu a,∗ , Andrew Rosalsky b , Andrei Volodin c
a Department of Mathematics, National Tsing Hua University, Hsinchu 30043, Taiwan, ROC
b Department of Statistics, University of Florida, Gainesville, FL 32611, USA
c Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan S4S 0A2, Canada
Received 23 April 2007; accepted 9 January 2008
Available online 3 February 2008
Abstract
For a sequence of dependent square-integrable random variables {X_n, n ≥ 1} and a sequence of positive constants {b_n, n ≥ 1}, conditions are provided under which the series $\sum_{i=1}^{n}(X_i - EX_i)/b_i$ converges almost surely as $n \to \infty$ and {X_n, n ≥ 1} obeys the strong law of large numbers $\lim_{n\to\infty}\sum_{i=1}^{n}(X_i - EX_i)/b_n = 0$ almost surely. The hypotheses stipulate that two series converge, where the convergence of the first series involves the growth rates of {Var X_n, n ≥ 1} and {b_n, n ≥ 1} and the convergence of the second series involves the growth rate of {sup_{n≥1} |Cov(X_n, X_{n+k})|, k ≥ 1}.
© 2008 Elsevier B.V. All rights reserved.
MSC: 60F15
1. Introduction
Let {X n , n ≥ 1} be a sequence of square-integrable random variables defined on a probability space (Ω , F, P)
and let {bn , n ≥ 1} be a sequence of positive constants. The random variables {X n , n ≥ 1} are not assumed to be
independent. We assume that there exists a sequence of constants {ρ_k, k ≥ 1} such that
$$\sup_{n\ge 1} |\mathrm{Cov}(X_n, X_{n+k})| \le \rho_k, \quad k \ge 1, \tag{1.1}$$
and we provide conditions on the growth rates of {Var X_n, n ≥ 1}, {b_n, n ≥ 1} and {ρ_k, k ≥ 1} under which (i) the series
$$\sum_{i=1}^{n} \frac{X_i - EX_i}{b_i} \quad \text{converges almost surely (a.s.) as } n \to \infty \tag{1.2}$$
and (ii) {X_n, n ≥ 1} obeys the strong law of large numbers (SLLN)
$$\lim_{n\to\infty} \sum_{i=1}^{n} \frac{X_i - EX_i}{b_n} = 0 \quad \text{a.s.} \tag{1.3}$$
∗ Corresponding author. Tel.: +886 3 5742642; fax: +886 3 5723888. E-mail address: [email protected] (T.-C. Hu).
0167-7152/$ - see front matter. doi:10.1016/j.spl.2008.01.073
The history and literature on the SLLN problem for dependent summands is not nearly as extensive and complete
as it is for the case of independent summands. It appears that the best result establishing a SLLN for a sequence
of correlated random variables satisfying (1.1) and Var X n = O(1) is that of Lyons (1988) wherein bn ≡ n. We
point out that Lyons (1988) did not provide conditions for a series of correlated random variables to converge a.s.; he
only treated the SLLN problem. The only SLLN result for correlated summands that we are aware of satisfying (1.1)
without assuming that Var X n = O(1) is due to Hu et al. (2005) where again bn ≡ n. This result established a link
between the Golden Ratio ϕ and the SLLN. Other results on the SLLN problem for a sequence of correlated random
variables are those of Chandra (1991), Gapoškin (1975), Móricz (1977, 1985), and Serfling (1970b, 1980).
In the current work, the main result, Theorem 1, provides conditions under which (1.2) and (1.3) hold with
{bn , n ≥ 1} being more general than only bn ≡ n in that n = O(bn ) where again it is not assumed that Var X n = O(1).
The proof of Theorem 1 is classical in nature and is based on the general “method of subsequences”. This method was
apparently developed initially by Rajchman (1932) (see Chung (1974), p. 103) and has since been used by numerous
other authors. However, the key inequality used in our proof is a much more recent result due to Serfling (1970a).
The plan of the paper is as follows. In Section 2, a lemma which is used in the proof of Theorem 1 is presented.
Theorem 1 is stated and proved in Section 3. In Section 4, Theorem 1 is compared with a well-known result.
2. Preliminaries
Throughout this paper, the symbol C denotes a generic constant (0 < C < ∞) which is not necessarily the same
one in each appearance. For x ≥ 1, the natural (base e) logarithm of x and the logarithm of x to the base 2 will be
denoted, respectively, by log x and Log x. We note that for all x ≥ 1, Log x = C log x where C = 1/ log 2.
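As a one-line numerical check of this notational remark (a sketch added here for the reader, not part of the paper):

```python
import math

# Log x (base 2) equals C log x (natural log) with C = 1/log 2
C = 1 / math.log(2)
for x in [1.0, 2.0, 10.0, 1e6]:
    assert abs(math.log2(x) - C * math.log(x)) < 1e-12
```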
The following lemma is used in the proof of Theorem 1.
Lemma 1. Let {X_n, n ≥ 1} be a sequence of square-integrable random variables and suppose that there exists a sequence of constants {ρ_k, k ≥ 1} such that
$$\sup_{n\ge 1} |\mathrm{Cov}(X_n, X_{n+k})| \le \rho_k, \quad k \ge 1. \tag{2.1}$$
Let {b_n, n ≥ 1} be a sequence of positive constants such that
$$n = O(b_n). \tag{2.2}$$
Then for all n ≥ 0, m ≥ n + 2, and 0 ≤ q < 1,
$$E\left(\sum_{i=n+1}^{m} \frac{X_i - EX_i}{b_i}\right)^{2} \le \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{n^{1-q}} \sum_{k=1}^{m-n-1} \frac{\rho_k}{k^{q}}$$
where C is a constant independent of n and m.
Proof. For all n ≥ 0, m ≥ n + 2, and 0 ≤ q < 1,
$$
\begin{aligned}
E\left(\sum_{i=n+1}^{m} \frac{X_i - EX_i}{b_i}\right)^{2}
&= \sum_{i=n+1}^{m} \frac{\mathrm{Var}(X_i)}{b_i^{2}} + 2\sum_{i=n+1}^{m-1}\sum_{j=i+1}^{m} \frac{\mathrm{Cov}(X_i, X_j)}{b_i b_j} \\
&\le \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + C\sum_{i=n+1}^{m-1}\sum_{j=i+1}^{m} \frac{\rho_{j-i}}{ij} \quad \text{(by (2.1) and (2.2))} \\
&= \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + C\sum_{i=n+1}^{m-1}\sum_{j=i+1}^{m} \frac{\rho_{j-i}}{j-i}\left(\frac{1}{i} - \frac{1}{j}\right) \\
&= \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + C\sum_{k=1}^{m-n-1} \frac{\rho_k}{k}\left(\sum_{l=n+1}^{m-k} \frac{1}{l} - \sum_{l=n+k+1}^{m} \frac{1}{l}\right) \quad \text{(substituting } k = j - i) \\
&\le \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + C\sum_{k=1}^{m-n-1} \frac{\rho_k}{k}\sum_{l=n+1}^{n+k} \frac{1}{l} \\
&\le \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + C\sum_{k=1}^{m-n-1} \frac{\rho_k}{k}\log\frac{n+k}{n} \\
&\le \sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{n^{1-q}}\sum_{k=1}^{m-n-1} \frac{\rho_k}{k^{q}}
\end{aligned} \tag{2.3}
$$
since log(1 + x) ≤ x^{1−q}/(1 − q) for all x ≥ 0. □
3. The main result
With the preliminaries accounted for, the main result may be stated and proved. We note that the condition (3.2) is indeed stronger than the condition $\sum_{k=1}^{\infty} \rho_k/k < \infty$ of Lyons (1988). However, as we remarked in Section 1, Lyons (1988) treated the case Var X_n = O(1) and b_n ≡ n to prove a SLLN and he did not establish a result along the lines of (3.3). We also note that the condition (3.1) is automatic if
$$\frac{\mathrm{Var}\, X_n}{b_n^{2}} = O\left(\frac{1}{n(\log n)^{3}(\log\log n)^{1+\varepsilon}}\right) \quad \text{for some } \varepsilon > 0.$$
For example, suppose that Var X_n ∼ n/(log n)^2. If b_n ∼ n log n, then (2.2) and (3.1) hold whereas if b_n ∼ n, then (2.2) holds but (3.1) fails.
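The dichotomy in this example can be seen numerically. In the sketch below (added for illustration, not part of the paper), we take Var X_n = n/(log n)^2 exactly; with b_n = n log n the terms of (3.1) reduce to 1/(n (log n)^2), a convergent series, while with b_n = n they reduce to 1/n, the divergent harmonic series:

```python
import math

def term(n, bn):
    # (Var X_n)(log n)^2 / b_n^2 with Var X_n = n/(log n)^2, as in the example
    var_xn = n / math.log(n) ** 2
    return var_xn * math.log(n) ** 2 / bn ** 2

def partial_sum(b, N):
    # partial sum of the series in (3.1) up to N - 1 (starting at n = 2)
    return sum(term(n, b(n)) for n in range(2, N))

# b_n = n log n: the stretch from 10^4 to 10^5 contributes almost nothing
tail = partial_sum(lambda n: n * math.log(n), 10 ** 5) - \
       partial_sum(lambda n: n * math.log(n), 10 ** 4)

# b_n = n: the same stretch still adds about log 10, as for the harmonic series
tail_div = partial_sum(lambda n: n, 10 ** 5) - partial_sum(lambda n: n, 10 ** 4)

assert tail < 0.05                           # convergent case: tiny tail
assert abs(tail_div - math.log(10)) < 0.01   # divergent case: grows like log N
```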
Theorem 1. Let {X_n, n ≥ 1} be a sequence of square-integrable random variables and suppose that there exists a sequence of constants {ρ_k, k ≥ 1} such that (2.1) holds. Let {b_n, n ≥ 1} be a sequence of positive constants satisfying (2.2). Suppose that
$$\sum_{n=1}^{\infty} \frac{(\mathrm{Var}\, X_n)(\log n)^{2}}{b_n^{2}} < \infty \tag{3.1}$$
and
$$\sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}} < \infty \quad \text{for some } 0 \le q < 1. \tag{3.2}$$
Then
$$\sum_{i=1}^{n} \frac{X_i - EX_i}{b_i} \quad \text{converges a.s. as } n \to \infty \tag{3.3}$$
and if b_n ↑, the SLLN
$$\lim_{n\to\infty} \frac{\sum_{i=1}^{n} (X_i - EX_i)}{b_n} = 0 \quad \text{a.s.} \tag{3.4}$$
obtains.
Proof. Note at the outset that (2.2) ensures that lim_{n→∞} b_n = ∞. Thus if b_n ↑, then the conclusion (3.4) follows immediately from (3.3) and the Kronecker lemma. To prove (3.3), note that for all n ≥ 1,
$$
\begin{aligned}
\sup_{m>n} E\left(\sum_{i=1}^{m} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{n} \frac{X_i - EX_i}{b_i}\right)^{2}
&= \sup_{m>n} E\left(\sum_{i=n+1}^{m} \frac{X_i - EX_i}{b_i}\right)^{2} \\
&\le \sup_{m>n}\left(\sum_{i=n+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{n^{1-q}}\sum_{k=1}^{m-n-1} \frac{\rho_k}{k^{q}}\right) \quad \text{(by Lemma 1)} \\
&= \sum_{i=n+1}^{\infty} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{n^{1-q}}\sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}} \\
&= o(1) \quad \text{as } n \to \infty \text{ (by (3.1) and (3.2), and } q < 1\text{)}.
\end{aligned}
$$
Then by the Cauchy Convergence Criterion (see, e.g., Chow and Teicher (1997, p. 99)), there exists a random variable S on (Ω, F, P) with E S^2 < ∞ such that
$$\sum_{i=1}^{n} \frac{X_i - EX_i}{b_i} \xrightarrow{L_2} S.$$
Thus for all n ≥ 1,
$$\sum_{i=1}^{m} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i} \xrightarrow{L_2} S - \sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i} \quad \text{as } m \to \infty,$$
whence (see, e.g., Chow and Teicher (1997, p. 101))
$$
\begin{aligned}
E\left(S - \sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i}\right)^{2}
&= \lim_{m\to\infty} E\left(\sum_{i=1}^{m} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i}\right)^{2} \\
&= \lim_{m\to\infty} E\left(\sum_{i=2^{n}+1}^{m} \frac{X_i - EX_i}{b_i}\right)^{2} \\
&\le \lim_{m\to\infty}\left(\sum_{i=2^{n}+1}^{m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{2^{n(1-q)}}\sum_{k=1}^{m-2^{n}-1} \frac{\rho_k}{k^{q}}\right) \quad \text{(by Lemma 1)} \\
&\le \sum_{i=2^{n}}^{\infty} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{2^{n(1-q)}}\sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}}.
\end{aligned} \tag{3.5}
$$
Next, it will be shown that
$$\sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i} \to S \quad \text{a.s.} \tag{3.6}$$
For arbitrary ε > 0,
$$
\begin{aligned}
\sum_{n=1}^{\infty} P\left\{\left|\sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i} - S\right| > \varepsilon\right\}
&\le \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty} E\left(\sum_{i=1}^{2^{n}} \frac{X_i - EX_i}{b_i} - S\right)^{2} \quad \text{(by the Markov inequality)} \\
&\le \frac{1}{\varepsilon^{2}} \sum_{n=1}^{\infty}\left(\sum_{i=2^{n}}^{\infty} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + \frac{C}{2^{n(1-q)}}\sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}}\right) \quad \text{(by (3.5))} \\
&= C \sum_{i=2}^{\infty} \frac{[\mathrm{Log}\, i]\,\mathrm{Var}\, X_i}{b_i^{2}} + C\left(\sum_{n=1}^{\infty} \frac{1}{2^{n(1-q)}}\right)\left(\sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}}\right) \quad \text{(since } q < 1\text{)} \\
&\le C \sum_{i=2}^{\infty} \frac{(\mathrm{Var}\, X_i)\log i}{b_i^{2}} + C \sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}} \\
&< \infty \quad \text{(by (3.1) and (3.2))}.
\end{aligned}
$$
Then by the Borel–Cantelli lemma and the arbitrariness of ε > 0, (3.6) follows.
We will now verify that
$$\max_{2^{n-1} < k \le 2^{n}} \left|\sum_{i=1}^{k} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i}\right| \to 0 \quad \text{a.s. as } n \to \infty. \tag{3.7}$$
For a ≥ 0 and n ≥ 1, let the joint distribution function of X_{a+1}, ..., X_{a+n} be denoted by F_{a,n}. Define a functional g on {F_{a,n} : a ≥ 0, n ≥ 1} by
$$g(F_{a,n}) = \sum_{i=a+1}^{a+n} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + 2\sum_{i=a+1}^{a+n-1}\sum_{j=i+1}^{a+n} \frac{|\mathrm{Cov}(X_i, X_j)|}{b_i b_j}, \quad a \ge 0,\ n \ge 1,$$
where the second term is interpreted as 0 if n = 1. Then for a ≥ 0, k ≥ 1, and m ≥ 1,
$$
\begin{aligned}
g(F_{a,k}) + g(F_{a+k,m})
&= \sum_{i=a+1}^{a+k} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + 2\sum_{i=a+1}^{a+k-1}\sum_{j=i+1}^{a+k} \frac{|\mathrm{Cov}(X_i, X_j)|}{b_i b_j} \\
&\quad + \sum_{i=a+k+1}^{a+k+m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + 2\sum_{i=a+k+1}^{a+k+m-1}\sum_{j=i+1}^{a+k+m} \frac{|\mathrm{Cov}(X_i, X_j)|}{b_i b_j} \\
&\le \sum_{i=a+1}^{a+k+m} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + 2\sum_{i=a+1}^{a+k+m-1}\sum_{j=i+1}^{a+k+m} \frac{|\mathrm{Cov}(X_i, X_j)|}{b_i b_j} = g(F_{a,k+m}).
\end{aligned}
$$
Moreover, it follows from (2.3) that for all a ≥ 0 and n ≥ 1,
$$E\left(\sum_{i=a+1}^{a+n} \frac{X_i - EX_i}{b_i}\right)^{2} \le g(F_{a,n}).$$
Then by Serfling's (1970a) generalization of the Rademacher–Menchoff fundamental maximal inequality for the partial sums of orthogonal random variables (see also Stout (1974), Sections 2.3 and 2.4), for all a ≥ 0 and n ≥ 1,
$$E\left(\max_{1\le k\le n} \left|\sum_{i=a+1}^{a+k} \frac{X_i - EX_i}{b_i}\right|\right)^{2} \le (\mathrm{Log}\, 2n)^{2}\, g(F_{a,n}). \tag{3.8}$$
Thus for arbitrary ε > 0,
$$
\begin{aligned}
&\sum_{n=1}^{\infty} P\left\{\max_{2^{n-1} < k \le 2^{n}} \left|\sum_{i=1}^{k} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i}\right| > \varepsilon\right\} \\
&\quad\le 1 + \frac{1}{\varepsilon^{2}} \sum_{n=2}^{\infty} E\left(\max_{2^{n-1} < k \le 2^{n}} \left|\sum_{i=1}^{k} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i}\right|\right)^{2} \quad \text{(by the Markov inequality)} \\
&\quad= 1 + \frac{1}{\varepsilon^{2}} \sum_{n=2}^{\infty} E\left(\max_{2^{n-1} < k \le 2^{n}} \left|\sum_{i=2^{n-1}+1}^{k} \frac{X_i - EX_i}{b_i}\right|\right)^{2} \\
&\quad= 1 + \frac{1}{\varepsilon^{2}} \sum_{n=2}^{\infty} E\left(\max_{1 \le k \le 2^{n-1}} \left|\sum_{i=2^{n-1}+1}^{2^{n-1}+k} \frac{X_i - EX_i}{b_i}\right|\right)^{2} \\
&\quad\le 1 + \frac{1}{\varepsilon^{2}} \sum_{n=2}^{\infty} \left(\mathrm{Log}(2 \cdot 2^{n-1})\right)^{2} g(F_{2^{n-1},\,2^{n-1}}) \quad \text{(by (3.8))} \\
&\quad\le 1 + C \sum_{n=2}^{\infty} n^{2}\left(\sum_{i=2^{n-1}+1}^{2^{n}} \frac{\mathrm{Var}\, X_i}{b_i^{2}} + 2\sum_{i=2^{n-1}+1}^{2^{n}-1}\sum_{j=i+1}^{2^{n}} \frac{|\mathrm{Cov}(X_i, X_j)|}{b_i b_j}\right) \\
&\quad\le 1 + C \sum_{n=2}^{\infty} \sum_{i=2^{n-1}+1}^{2^{n}} \frac{(\mathrm{Var}\, X_i)(\mathrm{Log}\, i)^{2}}{b_i^{2}} + C \sum_{n=2}^{\infty} n^{2} \sum_{i=2^{n-1}+1}^{2^{n}-1}\sum_{j=i+1}^{2^{n}} \frac{\rho_{j-i}}{ij} \quad \text{(by (2.1) and (2.2))} \\
&\quad\le 1 + C \sum_{i=1}^{\infty} \frac{(\mathrm{Var}\, X_i)(\log i)^{2}}{b_i^{2}} + C \sum_{n=2}^{\infty} \frac{n^{2}}{(2^{n-1})^{1-q}} \sum_{k=1}^{2^{n-1}-1} \frac{\rho_k}{k^{q}} \quad \text{(by arguing as in the proof of Lemma 1)} \\
&\quad\le 1 + C \sum_{i=1}^{\infty} \frac{(\mathrm{Var}\, X_i)(\log i)^{2}}{b_i^{2}} + C\left(\sum_{n=2}^{\infty} \frac{n^{2}}{(2^{n-1})^{1-q}}\right)\left(\sum_{k=1}^{\infty} \frac{\rho_k}{k^{q}}\right) \\
&\quad< \infty
\end{aligned}
$$
by (3.1) and (3.2), q < 1, and $n^{2}/(2^{n-1})^{1-q} = O\left(\left(1/2^{(1-q)/2}\right)^{n}\right)$.
Then by the Borel–Cantelli lemma and the arbitrariness of ε > 0, (3.7) follows.
Next, for k ≥ 2, let n ≥ 1 be such that 2^{n−1} < k ≤ 2^n. Then
$$
\begin{aligned}
\left|\sum_{i=1}^{k} \frac{X_i - EX_i}{b_i} - S\right|
&\le \left|\sum_{i=1}^{k} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i}\right| + \left|\sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i} - S\right| \\
&\le \max_{2^{n-1} < j \le 2^{n}} \left|\sum_{i=1}^{j} \frac{X_i - EX_i}{b_i} - \sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i}\right| + \left|\sum_{i=1}^{2^{n-1}} \frac{X_i - EX_i}{b_i} - S\right| \\
&\to 0 \quad \text{a.s. as } k \to \infty
\end{aligned}
$$
by (3.6) and (3.7), thereby proving that
$$\lim_{k\to\infty} \sum_{i=1}^{k} \frac{X_i - EX_i}{b_i} = S \quad \text{a.s.},$$
and hence (3.3) is established. □
4. Concluding comments
To conclude, we compare Theorem 1 with a well-known result. The following proposition is essentially Corollary
2.4.1 of Stout (1974, p. 28).
Proposition 1. Let {X_n, n ≥ 1} be a sequence of square-integrable random variables and suppose that there exists a sequence of constants {r_k, k ≥ 1} such that
$$\sup_{n\ge 1} \frac{|\mathrm{Cov}(X_n, X_{n+k})|}{(\mathrm{Var}\, X_n \cdot \mathrm{Var}\, X_{n+k})^{1/2}} \le r_k, \quad k \ge 1.$$
Let {b_n, n ≥ 1} be a sequence of positive constants. If
$$\sum_{n=1}^{\infty} \frac{(\mathrm{Var}\, X_n)(\log n)^{2}}{b_n^{2}} < \infty$$
and
$$\sum_{k=1}^{\infty} r_k < \infty, \tag{4.1}$$
then
$$\sum_{i=1}^{n} \frac{X_i - EX_i}{b_i} \quad \text{converges a.s. as } n \to \infty.$$
To compare Theorem 1 with Proposition 1, observe that condition (2.2) is not needed in Proposition 1. In general,
the conditions (3.2) and (4.1) are not comparable; however, if Var X n = O(1), then (3.2) is weaker than (4.1).
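As an illustrative simulation sketch (added for the reader, not part of the paper), consider the 1-dependent moving average X_n = Z_n + Z_{n+1}/2 with i.i.d. standard normal innovations Z_n, so that E X_n = 0, Var X_n = 5/4, ρ_1 = 1/2, and ρ_k = 0 for k ≥ 2; with b_n = n, conditions (2.1), (2.2), (3.1) and (3.2) all hold, and the normed sums in (3.4) should shrink toward 0:

```python
import random

random.seed(0)

# X_n = Z_n + 0.5 * Z_{n+1}: a 1-dependent sequence with E X_n = 0,
# Var X_n = 5/4, rho_1 = 1/2, and rho_k = 0 for k >= 2
N = 200_000
Z = [random.gauss(0.0, 1.0) for _ in range(N + 1)]
X = [Z[n] + 0.5 * Z[n + 1] for n in range(N)]

def normed_sum(n):
    # (1/b_n) * sum_{i=1}^{n} (X_i - E X_i) with b_n = n and E X_i = 0
    return sum(X[:n]) / n

early, late = normed_sum(1_000), normed_sum(N)
assert abs(early) < 0.5   # already small at n = 10^3
assert abs(late) < 0.02   # much smaller at n = 2 * 10^5, as the SLLN predicts
```

Since Var X_n = O(1) and b_n = n here, this particular example also falls within the scope of Lyons (1988); the point of Theorem 1 is that it additionally covers unbounded variances and more general norming sequences.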
References
Chandra, T.K., 1991. Extensions of Rajchman’s strong law of large numbers. Sankhyā Ser. A 53, 118–121.
Chow, Y.S., Teicher, H., 1997. Probability Theory: Independence, Interchangeability, Martingales, third ed. Springer-Verlag, New York.
Chung, K.L., 1974. A Course in Probability Theory, second ed. Academic Press, New York.
Gapoškin, V.F., 1975. Criteria for the strong law of large numbers for classes of stationary processes and homogeneous random fields. Dokl. Akad.
Nauk SSSR 223, 1044–1047 (in Russian). English translation in Soviet Math. Dokl. 16, 1009–1013.
Hu, T.-C., Rosalsky, A., Volodin, A.I., 2005. On the golden ratio, strong law, and first passage problem. Math. Scientist 30, 77–86.
Lyons, R., 1988. Strong laws of large numbers for weakly correlated random variables. Michigan Math. J. 35, 353–359.
Móricz, F., 1977. The strong laws of large numbers for quasi-stationary sequences. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 38, 223–236.
Móricz, F., 1985. SLLN and convergence rates for nearly orthogonal sequences of random variables. Proc. Amer. Math. Soc. 95, 287–294.
Rajchman, A., 1932. Zaostrzone prawo wielkich liczb. Mathesis Polska 6, 145 (in Polish).
Serfling, R.J., 1970a. Moment inequalities for the maximum cumulative sum. Ann. Math. Statist. 41, 1227–1234.
Serfling, R.J., 1970b. Convergence properties of Sn under moment restrictions. Ann. Math. Statist. 41, 1235–1248.
Serfling, R.J., 1980. On the strong law of large numbers and related results for quasi-stationary sequences. Teor. Veroyatnost. i Primenen. 25,
190–194. (in English with Russian summary). Theory Probab. Appl., 25, 187–191.
Stout, W.F., 1974. Almost Sure Convergence. Academic Press, New York.