Almost sure limit theorems for the maximum of stationary Gaussian sequences

Endre Csáki^{a,1}, Khurelbaatar Gonchigdanzan^{b,2}

^{a} A. Rényi Institute of Mathematics, Hungarian Academy of Sciences, P.O. Box 127, H-1364 Budapest, Hungary
^{b} Department of Mathematical Sciences, University of Cincinnati, Cincinnati, OH 45221-0025, USA
Abstract. We prove an almost sure limit theorem for the maxima of stationary Gaussian sequences with covariance $r_n$ under the condition $r_n \log n\,(\log\log n)^{1+\varepsilon} = O(1)$.

Key words: almost sure central limit theorem, logarithmic average, stationary Gaussian sequence.
Introduction. The early results on the almost sure central limit theorem (ASCLT) dealt mostly with partial sums of random variables. A general pattern of these investigations is that if $X_1, X_2, \ldots$ is a sequence of random variables with partial sums $S_n = \sum_{k=1}^{n} X_k$ satisfying $a_n(S_n - b_n) \xrightarrow{D} G$ for some numerical sequences $(a_n)$, $(b_n)$ and distribution function $G$, then under some additional mild conditions we have
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\bigl(a_k(S_k - b_k) < x\bigr) = G(x) \quad \text{a.s.}
$$
for any continuity point $x$ of $G$, where $I$ is the indicator function.
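As an illustration (not part of the paper), this pattern can be simulated for i.i.d. standard normal summands, where one may take $a_k = k^{-1/2}$, $b_k = 0$ and $G = \Phi$; the helper names below are our own.

```python
import math
import random

# Monte Carlo sketch (illustrative only): for i.i.d. standard normals,
# a_k = 1/sqrt(k), b_k = 0 and G = Phi, so the logarithmic average
# (1/log n) sum_{k<=n} (1/k) I(S_k/sqrt(k) < x) should tend a.s. to Phi(x).
random.seed(0)

def log_average(n, x):
    """Logarithmic average of the indicators along one trajectory."""
    s, total = 0.0, 0.0
    for k in range(1, n + 1):
        s += random.gauss(0.0, 1.0)          # partial sum S_k
        if s / math.sqrt(k) < x:             # a_k (S_k - b_k) < x
            total += 1.0 / k
    return total / math.log(n)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

val = log_average(100_000, 0.0)
print("log average:", round(val, 3), "| Phi(0) =", Phi(0.0))
```

Convergence in $n$ is only logarithmic, so even long trajectories fluctuate noticeably around $\Phi(x)$.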
For more discussion of the ASCLT we refer to the survey papers by Berkes (1998) and Atlagh and Weber (2000). Recently, Fahrner and Stadtmüller (1998) and Cheng et al. (1998) extended this principle by proving an ASCLT for the maxima of independent random variables.
^{1} Supported by the Hungarian National Foundation for Scientific Research, Grant No. T 029621.
^{2} Supported by a Taft Fellowship at the University of Cincinnati.
THEOREM A. Let $X_1, X_2, \ldots$ be i.i.d. random variables and $M_k = \max_{i\le k} X_i$. If $a_k(M_k - b_k) \xrightarrow{D} G$ for a nondegenerate distribution $G$ and some numerical sequences $(a_k)$ and $(b_k)$, then we have
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\bigl(a_k(M_k - b_k) < x\bigr) = G(x) \quad \text{a.s.}
$$
for any continuity point $x$ of $G$.
Berkes and Csáki (2001) extended the ASCLT to general nonlinear functionals of independent random variables. For strong invariance principles improving THEOREM A, see Berkes and Horváth (2001) and Fahrner (2001).
Throughout this paper $Z_1, Z_2, \ldots$ is a stationary Gaussian sequence and we denote its covariance function by $r_n = \operatorname{Cov}(Z_1, Z_{n+1})$; moreover, $M_n = \max_{1\le i\le n} Z_i$ and $M_{k,n} = \max_{k+1\le i\le n} Z_i$. Here $a \ll b$ and $a \sim b$ stand for $a = O(b)$ and $a/b \to 1$, respectively. $\Phi(x)$ is the standard normal distribution function and $\phi(x)$ is its density function.

For notational convenience let $R(n) = r_n \log n\,(\log\log n)^{1+\varepsilon}$.
1. Main Result. The main result is an almost sure central limit theorem for the maximum of stationary Gaussian sequences.
THEOREM 1.1. Let $Z_1, Z_2, \ldots$ be a standardized stationary Gaussian sequence with $R(n) = O(1)$ as $n\to\infty$. Then
(i) if $n(1 - \Phi(u_n)) \to \tau$ for $0\le\tau<\infty$, then
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I(M_k \le u_k) = e^{-\tau} \quad \text{a.s.};
$$
(ii) if $a_n = (2\log n)^{1/2}$ and $b_n = (2\log n)^{1/2} - \frac{1}{2}(2\log n)^{-1/2}(\log\log n + \log 4\pi)$, then
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\bigl(a_k(M_k - b_k) \le x\bigr) = \exp(-e^{-x}) \quad \text{a.s.}
$$
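As an illustrative sketch (not part of the paper), part (ii) can be simulated in the simplest admissible case of i.i.d. standard normals, where $r_n = 0$ for all $n\ge1$ and hence $R(n) = O(1)$ holds trivially; the function names are our own.

```python
import math
import random

# Monte Carlo sketch (illustrative only): logarithmic average of
# I(a_k (M_k - b_k) <= x) for i.i.d. standard normals, which by
# Theorem 1.1(ii) should tend a.s. to the Gumbel CDF exp(-e^{-x}).
random.seed(1)

def norming(k):
    """a_k and b_k as in Theorem 1.1(ii)."""
    a = math.sqrt(2.0 * math.log(k))
    b = a - (math.log(math.log(k)) + math.log(4.0 * math.pi)) / (2.0 * a)
    return a, b

def log_avg_max(n, x):
    running_max = -float("inf")
    total = 0.0
    for k in range(1, n + 1):
        running_max = max(running_max, random.gauss(0.0, 1.0))  # M_k
        if k >= 3:  # start once log log k is safely defined
            a, b = norming(k)
            if a * (running_max - b) <= x:
                total += 1.0 / k
    return total / math.log(n)

x = 1.0
val = log_avg_max(100_000, x)
print("log average:", round(val, 3), "| Gumbel:", round(math.exp(-math.exp(-x)), 3))
```

As with all logarithmic averages, the convergence is slow, so the printed value is only a rough match to the Gumbel limit.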
2. Auxiliary Results. The main weak convergence result for the maximum of a stationary Gaussian sequence is summarized in the following theorem.
THEOREM 2.1. (Theorem 4.3.3 in Leadbetter et al. (1983)). Let $Z_1, Z_2, \ldots$ be a standardized stationary Gaussian sequence with $r_n \log n \to 0$. Then
(i) for $0\le\tau<\infty$, $P(M_n \le u_n) \to e^{-\tau}$ if and only if $n(1 - \Phi(u_n)) \to \tau$;
(ii) $P\bigl(a_n(M_n - b_n) \le x\bigr) \to \exp(-e^{-x})$,
where $a_n = (2\log n)^{1/2}$ and $b_n = (2\log n)^{1/2} - \frac{1}{2}(2\log n)^{-1/2}(\log\log n + \log 4\pi)$.
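As a numerical sanity check (our illustration, not from the paper), the levels $u_n = x/a_n + b_n$ built from these norming constants indeed satisfy $n(1-\Phi(u_n)) \to e^{-x}$, though the convergence is slow:

```python
import math

# With a_n, b_n as in Theorem 2.1 and u_n = x/a_n + b_n, the quantity
# n(1 - Phi(u_n)) should approach e^{-x} as n grows.
def tail_scaled(n, x):
    a = math.sqrt(2.0 * math.log(n))
    b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)
    u = x / a + b
    upper_tail = 0.5 * math.erfc(u / math.sqrt(2.0))  # 1 - Phi(u)
    return n * upper_tail

x = 0.5
for n in (10**4, 10**6, 10**8):
    print(n, round(tail_scaled(n, x), 4))
print("limit e^{-x} =", round(math.exp(-x), 4))
```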
We need the following lemmas for the proof of our main result.
LEMMA 2.1. Let $Z_1, Z_2, \ldots$ be a standardized stationary Gaussian sequence. Assume that $R(n) = O(1)$ and $n(1 - \Phi(u_n))$ is bounded. Then
$$
\sup_{1\le k\le n} k \sum_{j=1}^{n} |r_j| \exp\Bigl(-\frac{u_k^2 + u_n^2}{2(1 + |r_j|)}\Bigr) \ll (\log\log n)^{-(1+\varepsilon)}.
$$
PROOF OF LEMMA 2.1: Under the condition $r_n \to 0$ we have $\sup_{n\ge1} |r_n| = \sigma < 1$ (cf. Leadbetter et al., 1983). By assumption, $n(1 - \Phi(u_n)) \le K$. Let the sequence $(v_n)$ be defined by $v_n = u_n$ if $n \le K$, and by $n(1 - \Phi(v_n)) = K$ if $n > K$. Then clearly $u_n \ge v_n$ and hence
$$
k \sum_{j=1}^{n} |r_j| \exp\Bigl(-\frac{u_k^2 + u_n^2}{2(1 + |r_j|)}\Bigr) \le k \sum_{j=1}^{n} |r_j| \exp\Bigl(-\frac{v_k^2 + v_n^2}{2(1 + |r_j|)}\Bigr).
$$
Thus it would be enough to prove the lemma for the sequence $(v_n)$. By the well-known fact
$$
1 - \Phi(x) \sim \frac{\phi(x)}{x}, \qquad x\to\infty,
$$
we can see that
$$
(2.1)\qquad \exp\Bigl(-\frac{v_n^2}{2}\Bigr) \sim \frac{K\sqrt{2\pi}\,v_n}{n}, \qquad v_n \sim (2\log n)^{1/2}.
$$
Choose $\alpha$ such that $0 < \alpha < (1-\sigma)/(1+\sigma)$. Note that
$$
k \sum_{j=1}^{n} |r_j| \exp\Bigl(-\frac{v_k^2+v_n^2}{2(1+|r_j|)}\Bigr)
= k \sum_{1\le j\le n^{\alpha}} |r_j| \exp\Bigl(-\frac{v_k^2+v_n^2}{2(1+|r_j|)}\Bigr)
+ k \sum_{n^{\alpha} < j\le n} |r_j| \exp\Bigl(-\frac{v_k^2+v_n^2}{2(1+|r_j|)}\Bigr)
=: T_1 + T_2.
$$
Using (2.1),
$$
T_1 \le k n^{\alpha} \exp\Bigl(-\frac{v_k^2+v_n^2}{2(1+\sigma)}\Bigr)
= k n^{\alpha} \Bigl(\exp\Bigl(-\frac{v_k^2+v_n^2}{2}\Bigr)\Bigr)^{1/(1+\sigma)}
\ll k n^{\alpha} \Bigl(\frac{v_k v_n}{kn}\Bigr)^{1/(1+\sigma)}
\ll k^{1-1/(1+\sigma)}\, n^{\alpha-1/(1+\sigma)}\, (\log k \log n)^{1/(2(1+\sigma))}
\le n^{1+\alpha-2/(1+\sigma)} (\log n)^{1/(1+\sigma)}.
$$
Since $1+\alpha-2/(1+\sigma) < 0$, we get $T_1 \ll n^{-\delta}$ for some $\delta > 0$, uniformly for $1\le k\le n$. Now we estimate the second term $T_2$. Setting $\sigma_n = \sup_{j\ge n} |r_j|$ and using $R(n) = O(1)$ as $n\to\infty$, we get
$$
(2.2)\qquad \sigma_n \log n\,(\log\log n)^{1+\varepsilon} \le \sup_{j\ge n} |r_j| \log j\,(\log\log j)^{1+\varepsilon} = O(1), \qquad n\to\infty.
$$
Set $p = [n^{\alpha}]$. By (2.1) and (2.2) we have
$$
(2.3)\qquad \sigma_p v_k v_n \ll \sigma_{[n^{\alpha}]} (\log k \log n)^{1/2} \ll \sigma_{[n^{\alpha}]} \log n^{\alpha} \ll (\log\log n^{\alpha})^{-(1+\varepsilon)} \sim (\log\log n)^{-(1+\varepsilon)}
$$
and similarly, for $1\le k\le n$,
$$
(2.4)\qquad \sigma_p v_k^2 \ll (\log\log n)^{-(1+\varepsilon)}.
$$
Hence, using (2.1), (2.3) and (2.4),
$$
T_2 \le k \sigma_p \exp\Bigl(-\frac{v_k^2+v_n^2}{2}\Bigr) \sum_{p\le j\le n} \exp\Bigl(\frac{(v_k^2+v_n^2)\,|r_j|}{2(1+|r_j|)}\Bigr)
\le k n \sigma_p \exp\Bigl(-\frac{v_k^2+v_n^2}{2}\Bigr) \exp\Bigl(\frac{(v_k^2+v_n^2)\,\sigma_p}{2}\Bigr)
\ll (\log\log n)^{-(1+\varepsilon)}.
$$
The proof is complete.
LEMMA 2.2. Let $Z_1, Z_2, \ldots$ be a standardized stationary Gaussian sequence. Suppose that $\sup_{n\ge1} |r_n| < 1$. Then for $k < n$
$$
\bigl|P(M_k \le u_k,\, M_{k,n} \le u_n) - P(M_k \le u_k)\,P(M_{k,n} \le u_n)\bigr|
\ll k \sum_{j=1}^{n} |r_j| \exp\Bigl(-\frac{u_k^2+u_n^2}{2(1+|r_j|)}\Bigr).
$$
PROOF OF LEMMA 2.2: We use the following theorem.

THEOREM 2.2. (Theorem 4.2.1, Normal Comparison Lemma, in Leadbetter et al. (1983)). Suppose $\xi_1, \ldots, \xi_n$ are standard normal variables with covariance matrix $\Lambda^1 = (\Lambda^1_{ij})$, and $\eta_1, \ldots, \eta_n$ with covariance matrix $\Lambda^0 = (\Lambda^0_{ij})$, and let $\rho_{ij} = \max(|\Lambda^1_{ij}|, |\Lambda^0_{ij}|)$. Further, let $u_1, \ldots, u_n$ be real numbers. Then
$$
\bigl|P(\xi_j \le u_j,\ j=1,\ldots,n) - P(\eta_j \le u_j,\ j=1,\ldots,n)\bigr|
\le K \sum_{1\le i<j\le n} |\Lambda^1_{ij} - \Lambda^0_{ij}| \exp\Bigl(-\frac{u_i^2+u_j^2}{2(1+\rho_{ij})}\Bigr).
$$
Apply this theorem with $\xi_i = Z_i$, $i = 1,\ldots,n$, and $\eta_j = Z_j$, $j = 1,\ldots,k$; $\eta_j = \tilde Z_j$, $j = k+1,\ldots,n$, where $(\tilde Z_{k+1}, \ldots, \tilde Z_n)$ has the same distribution as $(Z_{k+1}, \ldots, Z_n)$, but is independent of $(Z_1, \ldots, Z_k)$. Further, let $u_i = u_k$, $i = 1,\ldots,k$, and $u_i = u_n$, $i = k+1,\ldots,n$. Then $\Lambda^1_{ij} = \Lambda^0_{ij} = r_{j-i}$ if either $1\le i<j\le k$ or $k+1\le i<j\le n$; otherwise $\Lambda^1_{ij} = r_{j-i}$, $\Lambda^0_{ij} = 0$. Hence we have
$$
\bigl|P(M_k \le u_k,\, M_{k,n} \le u_n) - P(M_k \le u_k)\,P(M_{k,n} \le u_n)\bigr|
\ll \sum_{i=1}^{k} \sum_{j=k+1}^{n} |r_{j-i}| \exp\Bigl(-\frac{u_k^2+u_n^2}{2(1+|r_{j-i}|)}\Bigr)
\le k \sum_{m=1}^{n} |r_m| \exp\Bigl(-\frac{u_k^2+u_n^2}{2(1+|r_m|)}\Bigr).
$$
This completes the proof of LEMMA 2.2.
LEMMA 2.3. Let $Z_1, Z_2, \ldots$ be a standardized stationary Gaussian sequence. Assume that $R(n) = O(1)$ and $n(1 - \Phi(u_n))$ is bounded. Then for $1\le k<n$
$$
\bigl|\operatorname{Cov}\bigl(I(M_k \le u_k),\, I(M_{k,n} \le u_n)\bigr)\bigr| \ll (\log\log n)^{-(1+\varepsilon)}.
$$

PROOF OF LEMMA 2.3: It follows simply from LEMMA 2.1 and LEMMA 2.2.
LEMMA 2.4. Let $Z_1, Z_2, \ldots$ be a standardized stationary Gaussian sequence. Assume that $R(n) = O(1)$ and $n(1 - \Phi(u_n))$ is bounded. Then
$$
E\bigl|I(M_n \le u_n) - I(M_{k,n} \le u_n)\bigr| \ll \frac{k}{n} + (\log\log n)^{-(1+\varepsilon)}.
$$
PROOF OF LEMMA 2.4: Note that
$$
E\bigl|I(M_n \le u_n) - I(M_{k,n} \le u_n)\bigr| = P(M_{k,n} \le u_n) - P(M_n \le u_n)
\le \bigl|P(M_{k,n} \le u_n) - \Phi^{n-k}(u_n)\bigr| + \bigl|P(M_n \le u_n) - \Phi^{n}(u_n)\bigr| + \Phi^{n-k}(u_n) - \Phi^{n}(u_n)
=: D_1 + D_2 + D_3.
$$
From the elementary fact that
$$
x^{n-k} - x^{n} \le \frac{k}{n}, \qquad 0\le x\le 1,
$$
we have $D_3 \le k/n$. By Corollary 4.2.4 in Leadbetter et al. (1983), p. 84,
$$
D_i \ll n \sum_{j=1}^{n} |r_j| \exp\Bigl(-\frac{u_n^2}{1+|r_j|}\Bigr), \qquad i = 1, 2.
$$
Thus by LEMMA 2.1 we have $D_i \ll (\log\log n)^{-(1+\varepsilon)}$, $i = 1, 2$.
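For completeness, the elementary inequality $x^{n-k} - x^{n} \le k/n$ used in the proof of LEMMA 2.4 can be verified by a single calculus step (our addition, not in the original text):

```latex
% Maximize f(x) = x^{n-k} - x^{n} on [0,1]; f(0) = f(1) = 0, and
f'(x) = (n-k)\,x^{n-k-1} - n\,x^{n-1} = 0
\quad\Longleftrightarrow\quad x^{k} = \frac{n-k}{n},
\qquad\text{so}\qquad
\max_{0\le x\le 1} f(x) = x^{n-k}\bigl(1 - x^{k}\bigr)
= x^{n-k}\cdot\frac{k}{n} \le \frac{k}{n}.
```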
3. Proof of Main Result. We now give the proof of THEOREM 1.1. We need the following lemma for the proof.

LEMMA 3.1. Let $\eta_1, \eta_2, \ldots$ be a sequence of bounded random variables. If
$$
\operatorname{Var}\Bigl(\sum_{k=1}^{n} \frac{1}{k}\,\eta_k\Bigr) \ll \log^2 n\,(\log\log n)^{-(1+\varepsilon)} \quad \text{for some } \varepsilon > 0,
$$
then
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\,(\eta_k - E\eta_k) = 0 \quad \text{a.s.}
$$
PROOF OF LEMMA 3.1: Setting
$$
\mu_n = \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\,(\eta_k - E\eta_k)
$$
and $n_k = \exp(\exp(k^{\nu}))$ for some $\frac{1}{1+\varepsilon} < \nu < 1$, we have
$$
\sum_{k=3}^{\infty} E\mu_{n_k}^2 \ll \sum_{k=3}^{\infty} (\log\log n_k)^{-(1+\varepsilon)} \ll \sum_{k=3}^{\infty} k^{-\nu(1+\varepsilon)} < \infty,
$$
implying $\sum_{k=3}^{\infty} \mu_{n_k}^2 < \infty$ a.s. Thus $\mu_{n_k} \to 0$ a.s. Since
$$
(k+1)^{\nu} - k^{\nu} \to 0 \quad \text{as } k\to\infty \text{ if } \nu < 1,
$$
we have
$$
\frac{\log n_{k+1}}{\log n_k} = e^{(k+1)^{\nu} - k^{\nu}} \to 1 \quad \text{as } k\to\infty.
$$
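The subsequence trick can be seen numerically (our illustration; here $\nu = 0.5$, which is admissible e.g. when $\varepsilon = 1.1$, since $1/(1+\varepsilon) \approx 0.476 < 0.5 < 1$):

```python
import math

# With n_k = exp(exp(k^nu)), the ratio log n_{k+1} / log n_k equals
# exp((k+1)^nu - k^nu), which tends to 1 because (k+1)^nu - k^nu -> 0.
nu = 0.5

def log_ratio(k):
    return math.exp((k + 1) ** nu - k ** nu)

for k in (10, 1_000, 1_000_000):
    print(k, round(log_ratio(k), 6))
```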
Obviously, for any given $n$ there is an integer $k$ such that $n_k < n \le n_{k+1}$. Therefore
$$
|\mu_n| \le \frac{1}{\log n} \Bigl|\sum_{j=1}^{n} \frac{1}{j}\,(\eta_j - E\eta_j)\Bigr|
\le \frac{1}{\log n_k} \Bigl|\sum_{j=1}^{n_k} \frac{1}{j}\,(\eta_j - E\eta_j)\Bigr|
+ \frac{1}{\log n_k} \sum_{j=n_k+1}^{n_{k+1}} \frac{1}{j}\,|\eta_j - E\eta_j|
\ll |\mu_{n_k}| + \frac{1}{\log n_k}\,(\log n_{k+1} - \log n_k)
\ll |\mu_{n_k}| + \Bigl(\frac{\log n_{k+1}}{\log n_k} - 1\Bigr)
$$
and thus
$$
\lim_{n\to\infty} \mu_n = 0 \quad \text{a.s.}
$$
PROOF OF THEOREM 1.1: First, we claim that under the assumptions that $R(n) = O(1)$ and $n(1 - \Phi(u_n))$ is bounded, we have
$$
(3.1)\qquad \lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\,\bigl(I(M_k \le u_k) - P(M_k \le u_k)\bigr) = 0 \quad \text{a.s.}
$$
In order to show this, by LEMMA 3.1 it is sufficient to show that
$$
(3.2)\qquad \operatorname{Var}\Bigl(\sum_{k=1}^{n} \frac{1}{k}\, I(M_k \le u_k)\Bigr) \ll (\log\log n)^{-(1+\varepsilon)} \log^2 n \quad \text{for some } \varepsilon > 0.
$$
Let $\eta_k = I(M_k \le u_k) - P(M_k \le u_k)$. Then
$$
(3.3)\qquad \operatorname{Var}\Bigl(\sum_{k=1}^{n} \frac{1}{k}\, I(M_k \le u_k)\Bigr)
= E\Bigl(\sum_{k=1}^{n} \frac{1}{k}\,\eta_k\Bigr)^2
= \sum_{k=1}^{n} \frac{1}{k^2}\, E|\eta_k|^2 + 2 \sum_{1\le k<l\le n} \frac{E(\eta_k \eta_l)}{kl}
=: L_1 + L_2.
$$
Since $|\eta_k| \le 2$, it follows that
$$
(3.4)\qquad L_1 \ll \sum_{k=1}^{\infty} \frac{1}{k^2} < \infty.
$$
To estimate $L_2$, note that for $l > k$
$$
(3.5)\qquad |E(\eta_k \eta_l)| = \bigl|\operatorname{Cov}\bigl(I(M_k \le u_k),\, I(M_l \le u_l)\bigr)\bigr|
\le \bigl|\operatorname{Cov}\bigl(I(M_k \le u_k),\, I(M_l \le u_l) - I(M_{k,l} \le u_l)\bigr)\bigr|
+ \bigl|\operatorname{Cov}\bigl(I(M_k \le u_k),\, I(M_{k,l} \le u_l)\bigr)\bigr|
\ll E\bigl|I(M_l \le u_l) - I(M_{k,l} \le u_l)\bigr| + \bigl|\operatorname{Cov}\bigl(I(M_k \le u_k),\, I(M_{k,l} \le u_l)\bigr)\bigr|.
$$
By LEMMA 2.3 and LEMMA 2.4 we get
$$
\bigl|\operatorname{Cov}\bigl(I(M_k \le u_k),\, I(M_{k,l} \le u_l)\bigr)\bigr| \ll (\log\log l)^{-(1+\varepsilon)}
$$
and
$$
E\bigl|I(M_l \le u_l) - I(M_{k,l} \le u_l)\bigr| \ll \frac{k}{l} + (\log\log l)^{-(1+\varepsilon)}.
$$
Hence for $l > k$
$$
(3.6)\qquad |E(\eta_k \eta_l)| \ll \frac{k}{l} + (\log\log l)^{-(1+\varepsilon)}
$$
and consequently
$$
(3.7)\qquad L_2 \ll \sum_{1\le k<l\le n} \frac{1}{kl}\cdot\frac{k}{l} + \sum_{1\le k<l\le n} \frac{1}{kl\,(\log\log l)^{1+\varepsilon}} =: L_{21} + L_{22}.
$$
For $L_{21}$ and $L_{22}$ we have the following estimates:
$$
(3.8)\qquad L_{22} \ll \sum_{l=3}^{n} \frac{1}{l\,(\log\log l)^{1+\varepsilon}} \sum_{k=1}^{l-1} \frac{1}{k}
\ll \sum_{l=3}^{n} \frac{\log l}{l\,(\log\log l)^{1+\varepsilon}}
\ll \log n \sum_{l=3}^{n} \frac{1}{l\,(\log\log l)^{1+\varepsilon}}
\ll \log^2 n\,(\log\log n)^{-(1+\varepsilon)}
$$
and
$$
(3.9)\qquad L_{21} \le \sum_{1\le k<l\le n} \frac{1}{kl}\cdot\frac{k}{l} \ll \log n.
$$
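The elementary estimate (3.9) can be checked numerically (our illustration, not part of the paper): for each fixed $l$ the inner sum over $k$ contributes $(l-1)/l^2$, so the double sum collapses to $\sum_{l=2}^{n} (l-1)/l^2$, which grows like $\log n$.

```python
import math

# L21 = sum_{1<=k<l<=n} (1/(kl)) * (k/l) = sum_{l=2}^{n} (l-1)/l^2,
# since each of the l-1 values of k contributes 1/l^2.
def L21(n):
    return sum((l - 1) / l**2 for l in range(2, n + 1))

for n in (100, 10_000):
    print(n, round(L21(n), 3), "log n =", round(math.log(n), 3))
```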
Thus (3.3)–(3.9) together establish (3.1).
PROOF OF (i): Note that $R(n) = O(1)$ implies $r_n \log n \to 0$. By Theorem 4.3.3(i) in Leadbetter et al. (1983), we have $P(M_n \le u_n) \to e^{-\tau}$. Clearly this implies
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, P(M_k \le u_k) = e^{-\tau},
$$
which is, by (3.1), equivalent to
$$
\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I(M_k \le u_k) = e^{-\tau} \quad \text{a.s.}
$$
PROOF OF (ii): By THEOREM 2.1 we have $n(1 - \Phi(u_n)) \to e^{-x}$ for $u_n = x/a_n + b_n$. Thus the statement of (ii) is a special case of (i).
Acknowledgement
We thank a referee for some useful comments.
REFERENCES
1. Atlagh, M. and Weber, M. (2000), Le théorème central limite presque sûr [The almost sure central limit theorem]. Expositiones Mathematicae 18, 97–126.
2. Berkes, I. (1998), Results and problems related to the pointwise central limit theorem. In: Asymptotic Results in Probability and Statistics (a volume in honour of Miklós Csörgő), 59–60, Elsevier, Amsterdam.
3. Berkes, I. and Csáki, E. (2001), A universal result in almost sure central limit theory. Stoch. Process. Appl. 94, 105–134.
4. Berkes, I. and Horváth, L. (2001), The logarithmic average of sample extremes is asymptotically normal. Stoch. Process. Appl. 91, 77–98.
5. Cheng, S., Peng, L. and Qi, Y. (1998), Almost sure convergence in extreme value theory. Math. Nachr. 190, 43–50.
6. Fahrner, I. and Stadtmüller, U. (1998), On almost sure max-limit theorems. Stat. Prob. Letters 37, 229–236.
7. Fahrner, I. (2001), A strong invariance principle for the logarithmic average of sample maxima. Stoch. Process. Appl. 93, 317–337.
8. Hurelbaatar, G. (1997), Almost sure limit theorems for dependent random variables. Studia Sci. Math. Hung. 33, 167–175.
9. Leadbetter, M.R., Lindgren, G. and Rootzén, H. (1983), Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York.