CREATES Research Paper 2008-61

Limit theorems for moving averages of discretized processes plus noise

Jean Jacod (University of Paris VI)
Mark Podolskij (ETH Zurich and CREATES)
Mathias Vetter (Ruhr-University Bochum)

School of Economics and Management, University of Aarhus,
Building 1322, DK-8000 Aarhus C, Denmark

November 19, 2008
Abstract

This paper presents some limit theorems for certain functionals of moving averages of semimartingales plus noise, which are observed at high frequency. Our method generalizes the pre-averaging approach (see [13], [11]) and provides consistent estimates for various characteristics of general semimartingales. Furthermore, we prove the associated multidimensional (stable) central limit theorems. As expected, we find central limit theorems with a convergence rate $n^{-1/4}$, if $n$ is the number of observations.

Keywords: central limit theorem, high frequency observations, microstructure noise, quadratic variation, semimartingale, stable convergence.

JEL Classification: C10, C13, C14.
1 Introduction

The last years have witnessed a considerable development of the statistics of processes observed at very high frequency, due to the recent availability of such data. This is particularly the case for market prices of stocks, currencies, and other financial instruments. Correlatively, the technology for the analysis of such data has grown rapidly. The emblematic problem is the question of how to estimate the daily volatility for financial prices (in stochastic process terms, the quadratic variation of log prices).
* Mark Podolskij gratefully acknowledges financial support from CREATES funded by the Danish National Research Foundation.
† Institut de Mathématiques de Jussieu, 175 rue du Chevaleret, 75013 Paris, France (CNRS – UMR 7586, and Université Pierre et Marie Curie – P6), Email: jean.jacod@upmc.fr
‡ Department of Mathematics, ETH Zurich, HG G37.1, 8092 Zurich, Switzerland, Email: mark.podolskij@math.ethz.ch
§ Ruhr-Universität Bochum, Fakultät für Mathematik, 44780 Bochum, Germany, Email: mathias.vetter@rub.de
However, those high frequency data are almost always corrupted by some noise. This may be recording or measurement errors, a situation which can be modeled by an additive white noise. For financial data we also have a different sort of "noise", due to the fact that prices are recorded as multiples of the basic currency unit, so that some rounding is necessarily performed, and the level of rounding is far from being negligible for very high frequency data in comparison with the intrinsic variability of the underlying process. For these reasons, it is commonly acknowledged that the underlying process of interest, such as the price semimartingale, is latent rather than observed.
A large amount of work has already been devoted to the subject, especially for additive white noise, but also for some other types of noise like rounding effects. A comprehensive discussion of the noise models and the effect of noise on the inference for the underlying process may be found in [12]. And various statistical procedures for getting rid of the noise have been proposed, see for example [1], [2], [3], [15], [16] and, more closely related to the present work, [5], [13], [14], [11].
As a matter of fact, most of the aforementioned papers are concerned with the estimation of the integrated volatility, that is the quadratic variation, for a continuous semimartingale. Only a few consider discontinuous semimartingales, and mostly study again the quadratic variation or its continuous part. So there is a lack of more general results, allowing for example to estimate other powers of the volatility (like the "quarticity") or the sum of some powers of the jumps, for a general Itô semimartingale. These quantities have proved extremely useful for a number of estimation or testing problems in the context of high frequency data, but they have been studied so far when the process is observed without noise.
The aim of this paper is to (partly) fill this gap. This is a probabilistic paper, with no explicit statistical application, but of course the interest and motivation of the forthcoming results lie essentially in potential applications. It is done in the context of the "pre-averaging method" developed in [11] and [13] for the estimation of the integrated volatility for a continuous semimartingale.
Let us be more specific: we consider an Itô semimartingale $X$ which is corrupted by noise. The observed process $Z = (Z_t)_{t \ge 0}$ is given as

    $Z_t = X_t + \chi_t, \qquad t \ge 0,$

where $(\chi_t)_{t \ge 0}$ are errors which are, conditionally on the process $X$, centered and independent. The process $Z$ is assumed to be observed at the equidistant time points $i\Delta_n$, $i = 0, 1, \ldots, [t/\Delta_n]$, with $\Delta_n \to 0$ as $n \to \infty$. This structure of noise allows for an additive white noise, but also for noise involving rounding effects, since $\chi_t$ may depend on $X_t$, or even on the whole past of $X$ before time $t$. It rules out, though, some other interesting types of noise, like an additive colored noise. Note however that the $\chi_t$ are not necessarily independent (the independence is only "conditional on $X$").
In the no-noise case (i.e. $\chi \equiv 0$) an extensive theory has been developed in various papers, which allows for estimating quantities like $\sum_{s \le t} |\Delta X_s|^p$, where $\Delta X_s$ denotes the jump size of $X$ at time $s$, or $\int_0^t |\sigma_s|^p \, ds$, where $\sigma$ is the volatility. See, for instance, [4] or [10] among others. Typically, these quantities are estimated by sums of powers of the successive increments of $X$, that is, they are limits of such sums. When noise is present, these estimators are inadequate because they converge toward some characteristics of the noise rather than toward the characteristics of the process $X$ in which we are interested.
There are currently three main approaches to overcome this difficulty, mainly for the estimation of the quadratic variation in the continuous case: the subsampling method ([16]), the realized kernel method ([5]) and the pre-averaging method ([13], [11]) (see also [6] for a comprehensive theory in the parametric setting). All these approaches achieve the optimal rate $\Delta_n^{1/4}$. In this paper we use the pre-averaging method to derive rather general estimators.
More precisely, we choose a (smooth) weight function $g$ on $[0,1]$ and an appropriate sequence $k_n$, with which we associate the (observed) variables

    $\bar{Z}(g)^n_i = \sum_{j=1}^{k_n-1} g(j/k_n) \, (Z_{(i+j)\Delta_n} - Z_{(i+j-1)\Delta_n}),$

    $\hat{Z}(g)^n_i = \sum_{j=1}^{k_n} \big( g(j/k_n) - g((j-1)/k_n) \big)^2 \, (Z_{(i+j)\Delta_n} - Z_{(i+j-1)\Delta_n})^2.$
Our aim is to study the asymptotic behavior of the following functionals:

    $V(Z,g,p,r)^n_t = \sum_{i=0}^{[t/\Delta_n]-k_n} |\bar{Z}(g)^n_i|^p \, |\hat{Z}(g)^n_i|^r$

for suitable powers $p, r \ge 0$. The role of $\bar{Z}(g)^n_i$ is the reduction of the influence of the noise process $\chi$, whereas $\hat{Z}(g)^n_i$ is used for bias corrections. The asymptotic theory for the functionals $V(Z,g,p,0)^n_t$ in the absence of jumps is (partially) derived in [11] and [14], but here we extend these results to the case of general semimartingales.
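As a concrete illustration of these definitions, the following sketch computes $\bar{Z}(g)^n_i$, $\hat{Z}(g)^n_i$ and $V(Z,g,p,r)^n_t$ from a sample path. It is not part of the paper: the triangular weight $g(x) = x \wedge (1-x)$ and the toy input data are illustrative choices only.

```python
import numpy as np

def pre_averaged_stats(Z, kn, g):
    """Zbar(g)^n_i = sum_{j=1}^{kn-1} g(j/kn) * (Z_{(i+j)D} - Z_{(i+j-1)D}),
    Zhat(g)^n_i = sum_{j=1}^{kn} (g(j/kn) - g((j-1)/kn))^2 * (same increment)^2."""
    dZ = np.diff(Z)                                # increments of the observed path
    w = g(np.arange(1, kn) / kn)                   # weights g(j/kn), j = 1, ..., kn-1
    w2 = np.diff(g(np.arange(kn + 1) / kn)) ** 2   # squared weight increments, j = 1, ..., kn
    m = len(dZ) - kn + 1                           # available windows, i = 0, ..., m-1
    Zbar = np.array([w @ dZ[i:i + kn - 1] for i in range(m)])
    Zhat = np.array([w2 @ (dZ[i:i + kn] ** 2) for i in range(m)])
    return Zbar, Zhat

def V(Z, kn, g, p, r):
    """The functional V(Z, g, p, r)^n_t, summed over all available windows."""
    Zbar, Zhat = pre_averaged_stats(Z, kn, g)
    return np.sum(np.abs(Zbar) ** p * np.abs(Zhat) ** r)

g = lambda x: np.minimum(x, 1 - x)   # a common weight satisfying the conditions on g
Z = np.array([0.0, 1.0, 3.0])        # toy observations at times 0, D, 2D
print(pre_averaged_stats(Z, 2, g))   # Zbar = [0.5], Zhat = [1.25]
print(V(Z, 2, g, 2, 0))              # 0.25
```

With $k_n = 2$ only $g(1/2) = 1/2$ enters $\bar{Z}$, so the single window gives $\bar{Z} = 0.5 \cdot 1 = 0.5$ and $\hat{Z} = 0.25 \cdot 1^2 + 0.25 \cdot 2^2 = 1.25$.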
Quite naturally, the asymptotic behavior of $V(Z,g,p,r)^n_t$ is different according to whether the process $X$ is continuous or not. In particular, different scaling is required to obtain non-trivial limits for $V(Z,g,p,r)^n_t$. More precisely, we show the following ($\stackrel{P}{\to}$ means convergence in probability, and $\stackrel{u.c.p.}{\to}$ means convergence in probability uniformly over all finite time intervals):
(i) For all semimartingales $X$ it holds that $\frac{1}{k_n} V(Z,g,p,0)^n_t \stackrel{P}{\to} \bar{g}(p) \sum_{s \le t} |\Delta X_s|^p$ for $p > 2$, and $\frac{1}{k_n} V(Z,g,2,0)^n_t - \frac{1}{2k_n} V(Z,g,0,1)^n_t \stackrel{P}{\to} \bar{g}(2) \, [X,X]_t$, where the $\bar{g}(p)$'s are known constants (which depend on $g$) and $[X,X]$ is the quadratic variation of $X$.
(ii) When $X$ is a continuous Itô semimartingale it holds that $\Delta_n^{1-p/4} V(Z,g,p,0)^n_t \stackrel{u.c.p.}{\to} m_p \int_0^t \big( \theta \bar{g}(2) \sigma^2_s + \frac{\bar{g}'(2)}{\theta} \alpha^2_s \big)^{p/2} \, ds$, where $m_p$, $\theta$ are certain constants, $(\sigma^2_s)$ is the volatility process and $(\alpha^2_s)$ is the local conditional variance of the noise process $\chi$. Furthermore, a proper linear combination of the $V(Z,g,p,r)^n_t$ for integers $p, r$ with $p + 2r = l$ converges in probability to $\int_0^t |\sigma_s|^l \, ds$, when $l$ is an even integer.
For each of the aforementioned cases we prove a joint stable central limit theorem for a given family of weight functions $(g_i)_{1 \le i \le d}$ (for the first functional in (i) we additionally have to assume that $p > 3$). The corresponding convergence rate is $\Delta_n^{1/4}$.
We end this introduction by emphasizing that only the 1-dimensional case for $X$ is studied here. The extension to multidimensional semimartingales is possible, and even mathematically rather straightforward, but extremely cumbersome, and this paper is already quite complicated as it is.

This paper is organized as follows: in Section 2 we introduce the setting and the assumptions. Sections 3 and 4 are devoted to stating the results, first the various convergences in probability, and second the associated central limit theorems. The proofs are gathered in Section 5.
2 The setting

We have a 1-dimensional underlying process $X = (X_t)_{t \ge 0}$, and observation times $i\Delta_n$ for all $i = 0, 1, \ldots, k, \ldots$, with $\Delta_n \to 0$. We suppose that $X$ is a semimartingale, which can thus be written as

    $X = X_0 + B + X^c + (x 1_{\{|x| \le 1\}}) \star (\mu - \nu) + (x 1_{\{|x| > 1\}}) \star \mu.$    (2.1)
Here $\mu$ is the jump measure of $X$ and $\nu$ is its predictable compensator, $X^c$ is the continuous (local) martingale part of $X$, and $B$ is the drift. All these are defined on some filtered probability space $(\Omega^{(0)}, \mathcal{F}^{(0)}, (\mathcal{F}^{(0)}_t)_{t \ge 0}, P^{(0)})$. We use here the traditional notation of stochastic calculus, and for any unexplained (but standard) notation we refer to [9]; for example $\delta \star (\mu - \nu)_t = \int_0^t \int_{\mathbb{R}} \delta(s,x) \, (\mu - \nu)(ds,dx)$ is the stochastic integral of the predictable function $\delta(\omega,t,x)$ with respect to the martingale measure $\mu - \nu$, when it exists.
The process $X$ is observed with an error: that is, at stage $n$ and instead of the values $X^n_i = X_{i\Delta_n}$ for $i \ge 0$, we observe $X^n_i + \chi^n_i$, where the $\chi^n_i$'s are "errors" which are, conditionally on the process $X$, centered and independent (this allows for errors which depend on $X$ and thus may be unconditionally dependent). It is convenient to define the noise $\chi_t$ for any time $t$, although at stage $n$ only the values $\chi_{i\Delta_n}$ are really used.
Mathematically speaking, this can be formalized as follows: for each $t \ge 0$, we have a transition probability $Q_t(\omega^{(0)}, dz)$ from $(\Omega^{(0)}, \mathcal{F}^{(0)}_t)$ into $\mathbb{R}$. We endow the space $\Omega^{(1)} = \mathbb{R}^{[0,\infty)}$ with the product Borel $\sigma$-field $\mathcal{F}^{(1)}$ and the "canonical process" $(\chi_t : t \ge 0)$, and with the probability $Q(\omega^{(0)}, d\omega^{(1)})$ which is the product $\otimes_{t \ge 0} Q_t(\omega^{(0)}, \cdot)$. We introduce the filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$ and the filtration $(\mathcal{G}_t)$ as follows:

    $\Omega = \Omega^{(0)} \times \Omega^{(1)}, \qquad \mathcal{F} = \mathcal{F}^{(0)} \otimes \mathcal{F}^{(1)},$
    $\mathcal{F}_t = \mathcal{F}^{(0)}_t \otimes \sigma(\chi_s : s \in [0,t)), \qquad \mathcal{G}_t = \mathcal{F}^{(0)} \otimes \sigma(\chi_s : s \in [0,t)),$
    $P(d\omega^{(0)}, d\omega^{(1)}) = P^{(0)}(d\omega^{(0)}) \, Q(\omega^{(0)}, d\omega^{(1)}).$    (2.2)
Any variable or process which is defined on either $\Omega^{(0)}$ or $\Omega^{(1)}$ can be considered in the usual way as a variable or a process on $\Omega$. Note that $X$ is still a semimartingale, with the same decomposition (2.1), on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$, despite the fact that the filtration $(\mathcal{F}_t)$ is not right-continuous. On the other hand, the "process" $\chi$ typically has no measurability property in time, since under $Q(\omega^{(0)}, \cdot)$ it is constituted of independent variables; as mentioned before, only the values of $\chi$ at the observation times are relevant, and the extension as a process indexed by $\mathbb{R}_+$ is for notational convenience only.
At time $t$, instead of $X_t$ we observe the variable

    $Z_t = X_t + \chi_t.$    (2.3)
We make the following crucial assumption on the noise, for some $q \ge 2$:

Hypothesis (N-q): There is a sequence of $(\mathcal{F}^{(0)}_t)$-stopping times $(T_n)$ increasing to $\infty$, such that $\int Q_t(\omega^{(0)}, dz) \, |z|^q \le n$ whenever $t < T_n(\omega^{(0)})$. We write for any integer $r \le q$:

    $\alpha(r)_t(\omega^{(0)}) = \int Q_t(\omega^{(0)}, dz) \, z^r, \qquad \alpha_t = \sqrt{\alpha(2)_t},$    (2.4)

and we also assume that

    $\alpha(1) \equiv 0.$    (2.5)    □
In most applications, the local boundedness of the $q$-th moment of the noise, even for all $q > 0$, is not a genuine restriction. The condition (2.5), on the other hand, is quite a serious restriction, and for instance it rules out the case where $Z_t$ is a pure rounding of $X_t$: see [11] for a discussion of the implications of this assumption, and some examples.
We choose a sequence of integers $k_n$ satisfying for some $\theta > 0$:

    $k_n \sqrt{\Delta_n} = \theta + o(\Delta_n^{1/4});$ we write $u_n = k_n \Delta_n.$    (2.6)
We will also consider weight functions $g$ on $[0,1]$, satisfying

    $g$ is continuous, piecewise $C^1$ with a piecewise Lipschitz derivative $g'$,
    $g(0) = g(1) = 0, \qquad \int_0^1 g(s)^2 \, ds > 0.$    (2.7)
It is convenient to extend such a $g$ to the whole of $\mathbb{R}$ by setting $g(s) = 0$ if $s \notin [0,1]$. We associate with $g$ the following numbers (where $p \in (0,\infty)$ and $i \in \mathbb{Z}$):

    $g^n_i = g(i/k_n), \qquad g'^n_i = g^n_i - g^n_{i-1},$
    $\bar{g}(p)_n = \sum_{i=1}^{k_n} |g^n_i|^p, \qquad \bar{g}'(p)_n = \sum_{i=1}^{k_n} |g'^n_i|^p.$    (2.8)

If $g, h$ are bounded functions with support in $[0,1]$, and $p > 0$ and $t \in \mathbb{R}$, we set

    $\bar{g}(p) = \int |g(s)|^p \, ds, \qquad \overline{(gh)}(t) = \int g(s) h(s-t) \, ds.$    (2.9)
For example $\bar{g}'(p)$ is associated with $g'$ by the first definition above, and $\bar{g}(2) = \overline{(gg)}(0)$. Note that, as $n \to \infty$,

    $\bar{g}(p)_n = k_n \, \bar{g}(p) + O(1), \qquad \bar{g}'(p)_n = k_n^{1-p} \, \bar{g}'(p) + O(k_n^{-p}).$    (2.10)
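The asymptotics (2.10) are easy to check numerically. The sketch below is illustrative only, using the triangular weight $g(x) = x \wedge (1-x)$, for which $\bar{g}(2) = 1/12$ and $\bar{g}'(2) = 1$; it compares the Riemann-type sums of (2.8) with their limits.

```python
import numpy as np

def gbar_n(g, p, kn):
    # gbar(p)_n = sum_{i=1}^{kn} |g(i/kn)|^p, from (2.8)
    return np.sum(np.abs(g(np.arange(1, kn + 1) / kn)) ** p)

def gpbar_n(g, p, kn):
    # gbar'(p)_n = sum_{i=1}^{kn} |g(i/kn) - g((i-1)/kn)|^p, from (2.8)
    return np.sum(np.abs(np.diff(g(np.arange(kn + 1) / kn))) ** p)

g = lambda x: np.minimum(x, 1 - x)
kn = 1000
# (2.10): gbar(p)_n = kn*gbar(p) + O(1), gbar'(p)_n = kn^{1-p}*gbar'(p) + O(kn^{-p})
print(gbar_n(g, 2, kn) / kn)    # ≈ 1/12 = gbar(2)
print(gpbar_n(g, 2, kn) * kn)   # ≈ 1 = gbar'(2), since kn^{1-2} = 1/kn
```

For this piecewise linear $g$ the second quantity is even exact, since every increment equals $\pm 1/k_n$.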
With any process $Y = (Y_t)_{t \ge 0}$ we associate the following random variables:

    $Y^n_i = Y_{i\Delta_n}, \qquad \Delta^n_i Y = Y_{i\Delta_n} - Y_{(i-1)\Delta_n},$
    $\bar{Y}(g)^n_i = \sum_{j=1}^{k_n-1} g^n_j \, \Delta^n_{i+j} Y = -\sum_{j=1}^{k_n} g'^n_j \, Y^n_{i+j-1},$
    $\hat{Y}(g)^n_i = \sum_{j=1}^{k_n} (g'^n_j \, \Delta^n_{i+j} Y)^2,$    (2.11)

and we define the $\sigma$-fields $\mathcal{F}^n_i = \mathcal{F}_{i\Delta_n}$ and $\mathcal{G}^n_i = \mathcal{G}_{i\Delta_n}$.
Now we can define the processes of interest for this paper. Below, $p$ and $r$ are nonnegative reals, and typically the process $Y$ will be $X$ or $Z$:

    $V(Y,g,p,r)^n_t = \sum_{i=0}^{[t/\Delta_n]-k_n} |\bar{Y}(g)^n_i|^p \, |\hat{Y}(g)^n_i|^r.$    (2.12)
We end this section by stating a number of assumptions on $X$, which are needed for some of the results below.

One of these assumptions is that $X$ is an Itô semimartingale. This means that its characteristics are absolutely continuous with respect to Lebesgue measure, or equivalently that it can be written as

    $X_t = X_0 + \int_0^t b_s \, ds + \int_0^t \sigma_s \, dW_s + (\delta 1_{\{|\delta| \le 1\}}) \star (\underline{\mu} - \underline{\nu})_t + (\delta 1_{\{|\delta| > 1\}}) \star \underline{\mu}_t,$    (2.13)

where $W$ is a Brownian motion, and $\underline{\mu}$ and $\underline{\nu}$ are a Poisson random measure on $\mathbb{R}_+ \times E$ and its compensator $\underline{\nu}(dt,dz) = dt \otimes \lambda(dz)$ (where $(E,\mathcal{E})$ is an auxiliary space and $\lambda$ a $\sigma$-finite measure). The required regularity and boundedness conditions on the coefficients $b, \sigma, \delta$ are gathered in the following:
Hypothesis (H): The process $X$ has the form (2.13) (on $(\Omega^{(0)}, \mathcal{F}^{(0)}, (\mathcal{F}^{(0)}_t), P^{(0)})$), and further:

a) the process $(b_t)$ is optional and locally bounded;

b) the process $(\sigma_t)$ is càdlàg (= right-continuous with left limits) and adapted;

c) the function $\delta$ is predictable, and there is a bounded function $\gamma$ in $L^2(E, \mathcal{E}, \lambda)$ such that the process $\sup_{z \in E} (|\delta(\omega^{(0)},t,z)| \wedge 1)/\gamma(z)$ is locally bounded.    □
In particular, a continuous Itô semimartingale is of the form

    $X_t = X_0 + \int_0^t b_s \, ds + \int_0^t \sigma_s \, dW_s,$    (2.14)

where the processes $b$ and $\sigma$ are optional (relative to $(\mathcal{F}^{(0)}_t)$) and such that the integrals above make sense. When this is the case, we sometimes need the process $\sigma$ itself to be an Itô semimartingale: it can then be written as in (2.13), but another way of expressing this property is as follows (we are again on the space $(\Omega^{(0)}, \mathcal{F}^{(0)}, (\mathcal{F}^{(0)}_t), P^{(0)})$):

    $\sigma_t = \sigma_0 + \int_0^t \tilde{b}_s \, ds + \int_0^t \tilde{\sigma}_s \, dW_s + M_t + \sum_{s \le t} \Delta\sigma_s \, 1_{\{|\Delta\sigma_s| > v\}},$    (2.15)
where $M$ is a local martingale orthogonal to $W$ and with bounded jumps, $\langle M, M \rangle_t = \int_0^t a_s \, ds$, the compensator of $\sum_{s \le t} 1_{\{|\Delta\sigma_s| > v\}}$ is $\int_0^t a'_s \, ds$, and where $\tilde{b}_t$, $a_t$, $a'_t$ and $\tilde{\sigma}_t$ are optional processes, the first three being locally integrable and the fourth being locally square-integrable. Then we set:

Hypothesis (K): We have (2.14) and (2.15), and the processes $\tilde{b}_t$, $a_t$, $a'_t$ are locally bounded, whereas the processes $b_t$ and $\tilde{\sigma}_t$ are left-continuous with right limits.    □
Remark 2.1 The intuition behind the quantities $\bar{Z}(g)^n_i$ and $\hat{Z}(g)^n_i$ can be explained as follows. Assume for simplicity that $X$ is a continuous Itô semimartingale of the form (2.14) and the noise process $\chi$ is independent of $X$. Now, conditionally on $\mathcal{F}^n_i$, it holds that

    $\Delta_n^{-1/4} \, \bar{Z}(g)^n_i \stackrel{asy}{\sim} N\Big( 0, \; \theta \bar{g}(2) \, \sigma^2_{i\Delta_n} + \frac{\bar{g}'(2)}{\theta} \, \alpha^2_{i\Delta_n} \Big)$

when the processes $\sigma$ and $\alpha$ are continuous on the interval $(i\Delta_n, (i+k_n)\Delta_n]$. Thus, $\Delta_n^{-1/4} \, \bar{Z}(g)^n_i$ contains a "biased information" about $\sigma^2_{i\Delta_n}$ (which is usually the main object of interest). On the other hand, we have that

    $\hat{Z}(g)^n_i \approx \frac{2 \bar{g}'(2)}{k_n} \, \alpha^2_{i\Delta_n}$

when the process $\alpha$ is continuous on the interval $(i\Delta_n, (i+k_n)\Delta_n]$ (this approximation holds even for all semimartingales $X$). It is now intuitively clear that a certain combination of the quantities $\bar{Z}(g)^n_i$ and $\hat{Z}(g)^n_i$ can be used to recover some functions of $\sigma_{i\Delta_n}$. In particular, a proper linear combination of the $V(Y,g,p-2l,l)^n_t$, $l = 0, \ldots, p/2$, for an even number $p$, converges in probability to $\int_0^t |\sigma_s|^p \, ds$. This intuition is formalized in Theorems 3.3 and 3.4.
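The two approximations of this remark can be checked by simulation. The sketch below is not from the paper: it assumes constant volatility, i.i.d. Gaussian noise and the triangular weight $g(x) = x \wedge (1-x)$ (so $\bar{g}(2) = 1/12$ and $\bar{g}'(2) = 1$), and it averages over independent windows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): constant volatility sigma,
# i.i.d. Gaussian noise with standard deviation alpha, triangular weight g.
sigma, alpha = 0.5, 0.1
n_blocks, kn, dt = 20000, 100, 1e-4
theta = kn * np.sqrt(dt)                      # = 1, as in (2.6)
g = lambda x: np.minimum(x, 1 - x)
w = g(np.arange(1, kn) / kn)                  # g(j/kn), j = 1, ..., kn-1
w2 = np.diff(g(np.arange(kn + 1) / kn)) ** 2  # (g'^n_j)^2, j = 1, ..., kn
gbar2, gpbar2 = 1 / 12, 1.0                   # gbar(2) and gbar'(2) for this g

# Independent blocks of kn noisy increments dZ_j = sigma*dW_j + chi_j - chi_{j-1}
dW = rng.standard_normal((n_blocks, kn)) * np.sqrt(dt)
chi = rng.standard_normal((n_blocks, kn + 1)) * alpha
dZ = sigma * dW + np.diff(chi, axis=1)

Zbar = dZ[:, : kn - 1] @ w                    # pre-averaged statistics
Zhat = (dZ ** 2) @ w2                         # bias-correction statistics

# Var(dt^{-1/4} Zbar) should be close to theta*gbar(2)*sigma^2 + gbar'(2)*alpha^2/theta
print(Zbar.var() / np.sqrt(dt), theta * gbar2 * sigma ** 2 + gpbar2 * alpha ** 2 / theta)
# and E[Zhat] should be close to 2*gbar'(2)*alpha^2/kn
print(Zhat.mean(), 2 * gpbar2 * alpha ** 2 / kn)
```

Both printed pairs agree to within Monte Carlo error (about one percent here).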
3 Results: the Laws of Large Numbers

3.1 LLN for all semimartingales.

We consider here an LLN which holds for all semimartingales, and we start with the version without noise, that is $Z = X$. For the sake of comparison, we recall the following classical result:

    $\sum_{i=1}^{[t/\Delta_n]} |\Delta^n_i X|^p \stackrel{P}{\to} \begin{cases} \sum_{s \le t} |\Delta X_s|^p & \text{if } p > 2, \\ [X,X]_t & \text{if } p = 2. \end{cases}$    (3.1)
Below, and throughout the paper, $g$ always denotes a weight function satisfying (2.7).

Theorem 3.1 For any $t \ge 0$ which is not a fixed time of discontinuity of $X$ we have

    $\frac{1}{k_n} \, V(X,g,p,0)^n_t \stackrel{P}{\to} \begin{cases} \bar{g}(p) \sum_{s \le t} |\Delta X_s|^p & \text{if } p > 2, \\ \bar{g}(2) \, [X,X]_t & \text{if } p = 2. \end{cases}$    (3.2)
This convergence also holds for any $t$ such that $t/\Delta_n$ is an integer for all $n$, if this happens, but it never holds in the Skorokhod sense, except of course when $X$ is continuous. Taking in (2.12) test functions of the form $f(x) = |x|^p$ is essential. For this we do not need the full force of (2.6), but only that $u_n \to 0$ and $k_n \to \infty$.
Next we have the version with noise, again for an arbitrary semimartingale $X$. The reader will have noticed in the previous theorem that nothing is said about $V(X,g,p,r)^n_t$ when $r \ge 1$, and in fact those functionals are of little interest. However, when noise is present, we need those processes to remove an intrinsic bias, as in (b) below, and so we provide their behavior, or at least some (rough) estimates on them.
Theorem 3.2 a) For any $t \ge 0$ which is not a fixed time of discontinuity of $X$ we have

    $p > 2$ and (N-p) holds $\;\Longrightarrow\; \frac{1}{k_n} \, V(Z,g,p,0)^n_t \stackrel{P}{\to} \bar{g}(p) \sum_{s \le t} |\Delta X_s|^p.$    (3.3)

Moreover, if $r > 0$ and $p + 2r > 2$ and if (N-(p+2r)) holds, then

    the sequence $k_n^{\,r - \frac{p+4r}{p+2r}} \, V(Z,g,p,r)^n_t$ is tight.    (3.4)

b) Under (N-2) we have for all $t$ as above:

    $\frac{1}{k_n} \, V(Z,g,2,0)^n_t - \frac{1}{2k_n} \, V(Z,g,0,1)^n_t \stackrel{P}{\to} \bar{g}(2) \, [X,X]_t.$    (3.5)
It is worth emphasizing that the behaviors of $V(Z,g,p,0)^n$ and of $V(X,g,p,0)^n$ are basically the same when $p > 2$, at least for the convergence in probability. That is, by using the pre-averaging procedure we wipe out the noise completely in this case. On the contrary, when $p = 2$ the two processes $V(Z,g,2,0)^n$ and $V(X,g,2,0)^n$ behave differently, even at the level of convergence in probability.
3.2 LLN for continuous Itô semimartingales - 1.

When $X$ is continuous, Theorem 3.2 gives a vanishing limit when $p > 2$, so it is natural in this case to look for a normalization which provides a non-trivial limit. This is possible only when $X$ is a continuous Itô semimartingale, of the form (2.14).

Theorem 3.3 Assume (N-q) for some $q > 2$ and that $X$ is given by (2.14). Assume also that $b$ is locally bounded and that $\sigma$ and $\alpha$ are càdlàg. Then if $0 < p \le q/2$ we have

    $\Delta_n^{1-p/4} \, V(Z,g,p,0)^n_t \stackrel{u.c.p.}{\to} m_p \int_0^t \Big( \theta \bar{g}(2) \, \sigma^2_s + \frac{\bar{g}'(2)}{\theta} \, \alpha^2_s \Big)^{p/2} ds,$    (3.6)

where $m_p$ denotes the $p$-th absolute moment of $N(0,1)$.
(The assumption $p \le q/2$ could be replaced by $p < q$, with some more work.) This result should be compared to the well known result which states that, under the same assumptions on $X$, the processes $\Delta_n^{1-p/2} \sum_{i=1}^{[t/\Delta_n]} |\Delta^n_i X|^p$ converge to the limiting process $m_p \int_0^t |\sigma_s|^p \, ds$.

This theorem is not really satisfactory, since contrary to what happens in Theorem 3.2(a) the limit depends on the noise process, through $\alpha_s$, and further we do not know how to prove a CLT associated with it, because of an intrinsic bias due again to the noise, see Remark 2.1. However, at least when $p$ is an even integer, we can prove a useful substitute. That is, by an application of the binomial formula and the estimation of the terms that involve the process $\alpha_s$, we obtain (up to a constant factor) the process $\int_0^t |\sigma_s|^p \, ds$ in the limit. This result, which we explain below, is much more useful for practical applications.
For any even integer $p \ge 2$ we introduce the numbers $\lambda_{p,l}$ for $l = 0, \ldots, p/2$, which are the solutions of the following triangular system of linear equations ($C^p_q = \frac{q!}{p! \, (q-p)!}$ denote the binomial coefficients):

    $\lambda_{p,0} = 1, \qquad \sum_{l=0}^{j} 2^l \, m_{2j-2l} \, C^{2j-2l}_{p-2l} \, \lambda_{p,l} = 0, \quad j = 1, 2, \ldots, p/2.$    (3.7)

These could of course be explicitly computed, and for example we have

    $\lambda_{p,1} = -\frac{1}{2} \, C^2_p, \qquad \lambda_{p,2} = \frac{3}{4} \, C^4_p, \qquad \lambda_{p,3} = -\frac{15}{8} \, C^6_p.$    (3.8)
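Since the system (3.7) is triangular, the $\lambda_{p,l}$ can be computed by forward substitution. The following sketch (illustrative, not from the paper) does so, using $m_{2k} = (2k-1)!!$ for the even moments of $N(0,1)$, and reproduces the closed forms in (3.8).

```python
from math import comb

def m_even(p):
    # m_p = (p-1)!! for an even integer p >= 0: the even absolute moments of N(0,1)
    out = 1
    for x in range(p - 1, 0, -2):
        out *= x
    return out

def lambdas(p):
    """Forward substitution in the triangular system (3.7), p an even integer."""
    lam = [1.0]                                   # lambda_{p,0} = 1
    for j in range(1, p // 2 + 1):
        s = sum(2 ** l * m_even(2 * j - 2 * l) * comb(p - 2 * l, 2 * j - 2 * l) * lam[l]
                for l in range(j))
        lam.append(-s / 2 ** j)  # the l = j term has coefficient 2^j * m_0 * C^0_{p-2j} = 2^j
    return lam

print(lambdas(2))   # [1.0, -0.5]
print(lambdas(4))   # [1.0, -3.0, 0.75]
print(lambdas(6))   # [1.0, -7.5, 11.25, -1.875]
```

The values agree with (3.8): for instance $\lambda_{p,1} = -\frac{1}{2} C^2_p$ gives $-3$ for $p = 4$ and $-7.5$ for $p = 6$.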
Then for any process $Y$ and for $p \ge 2$ an even integer we set

    $\bar{V}(Y,g,p)^n_t = \sum_{l=0}^{p/2} \lambda_{p,l} \, V(Y,g,p-2l,l)^n_t.$    (3.9)
Theorem 3.4 a) Let $X$ be an arbitrary semimartingale, and assume (N-p) for some even integer $p \ge 2$. Then for all $t \ge 0$ we have

    $\frac{1}{k_n} \, \bar{V}(Z,g,p)^n_t \stackrel{P}{\to} \begin{cases} \bar{g}(p) \sum_{s \le t} |\Delta X_s|^p & \text{if } p \ge 4, \\ \bar{g}(2) \, [X,X]_t & \text{if } p = 2. \end{cases}$    (3.10)

b) Let $X$ satisfy (2.14), and assume (N-2p) for some even integer $p \ge 2$. Assume also that $b$ is locally bounded and that $\sigma$ and $\alpha$ are càdlàg. Then we have

    $\Delta_n^{1-p/4} \, \bar{V}(Z,g,p)^n_t \stackrel{u.c.p.}{\to} m_p \, (\theta \bar{g}(2))^{p/2} \int_0^t |\sigma_s|^p \, ds.$    (3.11)

The first part of (3.10) is an obvious consequence of (a) of Theorem 3.2, whereas the second part of (3.10) is nothing else than (3.5), because $\lambda_{2,1} = -1/2$.
3.3 LLN for continuous Itô semimartingales - 2.

For statistical applications we need to have "estimates" for the conditional variance which will appear in the CLTs associated with some of the previous LLNs. In other words, we need to provide some other laws of large numbers, which a priori seem artificial but are motivated by potential applications.

To this end we need a few, somewhat complicated, pieces of notation. Some of them will be of use for the CLTs below. First, we consider two independent Brownian motions $W^1$ and $W^2$, given on another auxiliary filtered probability space $(\Omega', \mathcal{F}', (\mathcal{F}'_t)_{t \ge 0}, P')$. With any function $g$ satisfying (2.7), and extended as before to $\mathbb{R}$ by setting it to be 0 outside $[0,1]$, we define the following Wiener integral processes:

    $L(g)_t = \int g(s-t) \, dW^1_s, \qquad L'(g)_t = \int g'(s-t) \, dW^2_s.$    (3.12)
If $h$ is another function satisfying (2.7), we define $L(h)$ and $L'(h)$ likewise, with the same $W^1$ and $W^2$. The four-dimensional process $U := (L(g), L'(g), L(h), L'(h))$ is continuous in time, centered, Gaussian and stationary. Clearly $(L(g), L(h))$ is independent of $(L'(g), L'(h))$, and the variables $U_t$ and $U_{t+s}$ are independent if $s \ge 1$.
We set

    $m_p(g; \eta, \alpha) = E'\big( (\eta L(g)_0 + \alpha L'(g)_0)^p \big),$
    $m_{p,q}(g,h; \eta, \alpha) = \int_0^2 E'\big( (\eta L(g)_1 + \alpha L'(g)_1)^p \, (\eta L(h)_t + \alpha L'(h)_t)^q \big) \, dt.$    (3.13)
These could of course be expressed by means of expectations with respect to the joint law of $U$ above and, considered as functions of $(\eta, \alpha)$, they are $C^\infty$. In particular, since $L(g)_0$ and $L'(g)_0$ are independent centered Gaussian variables with respective variances $\bar{g}(2)$ and $\bar{g}'(2)$, when $p$ is an integer we have

    $m_p(g; \eta, \alpha) = \begin{cases} \sum_{v=0}^{p/2} C^{2v}_p \, (\eta^2 \bar{g}(2))^v \, (\alpha^2 \bar{g}'(2))^{p/2-v} \, m_{2v} \, m_{p-2v} & \text{if } p \text{ is even,} \\ 0 & \text{if } p \text{ is odd.} \end{cases}$    (3.14)
Next, recalling (3.7), we set for $p \ge 2$ an even integer:

    $\rho_p(g; \eta, \alpha) = \sum_{r=0}^{p/2} \lambda_{p,r} \, (2\alpha^2 \bar{g}'(2))^r \, m_{p-2r}(g; \eta, \alpha),$
    $\rho_{2p}(g,h; \eta, \alpha) = \sum_{r,r'=0}^{p/2} \lambda_{p,r} \, \lambda_{p,r'} \, (2\alpha^2 \bar{g}'(2))^r \, (2\alpha^2 \bar{h}'(2))^{r'} \, m_{p-2r,p-2r'}(g,h; \eta, \alpha),$
    $\bar{\rho}_{2p}(g,h; \eta, \alpha) = \rho_{2p}(g,h; \eta, \alpha) - 2 \, \rho_p(g; \eta, \alpha) \, \rho_p(h; \eta, \alpha).$    (3.15)
The following lemma will be useful in the sequel:

Lemma 3.5 We have

    $\rho_p(g; \eta, \alpha) = m_p \, \eta^p \, \bar{g}(2)^{p/2}.$    (3.16)

Moreover, if $(g_i)$ is a finite family of functions satisfying (2.7), for any $(\eta, \alpha)$ the matrix with entries $\bar{\rho}_{2p}(g_i, g_j; \eta, \alpha)$ is symmetric nonnegative.
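The identity (3.16) can be sanity-checked numerically from (3.14) and (3.15) alone, without simulating the Gaussian processes. The sketch below is illustrative (the values of $\bar{g}(2)$, $\bar{g}'(2)$, $\eta$, $\alpha$ are arbitrary test inputs): the $\lambda$-weighted combination $\rho_p$ indeed loses all dependence on the noise parameter $\alpha$.

```python
from math import comb

def m_even(p):
    # m_p = (p-1)!! for even p: the even absolute moments of N(0,1)
    out = 1
    for x in range(p - 1, 0, -2):
        out *= x
    return out

def lambdas(p):
    # forward substitution in the triangular system (3.7)
    lam = [1.0]
    for j in range(1, p // 2 + 1):
        s = sum(2 ** l * m_even(2 * j - 2 * l) * comb(p - 2 * l, 2 * j - 2 * l) * lam[l]
                for l in range(j))
        lam.append(-s / 2 ** j)
    return lam

def m_p(p, A, B, eta, alpha):
    # (3.14) with A = gbar(2), B = gbar'(2): the p-th moment of a centered
    # Gaussian variable with variance eta^2*A + alpha^2*B
    if p % 2:
        return 0.0
    return sum(comb(p, 2 * v) * (eta ** 2 * A) ** v * (alpha ** 2 * B) ** (p // 2 - v)
               * m_even(2 * v) * m_even(p - 2 * v) for v in range(p // 2 + 1))

def rho_p(p, A, B, eta, alpha):
    # first line of (3.15)
    lam = lambdas(p)
    return sum(lam[r] * (2 * alpha ** 2 * B) ** r * m_p(p - 2 * r, A, B, eta, alpha)
               for r in range(p // 2 + 1))

A, B, eta, alpha = 1 / 12, 1.0, 0.7, 0.3   # arbitrary test values
for p in (2, 4, 6):
    print(rho_p(p, A, B, eta, alpha), m_even(p) * eta ** p * A ** (p // 2)) # (3.16): equal
```

Both columns coincide for every even $p$, and changing $\alpha$ leaves the left column unchanged, which is exactly the point of the lemma.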
We need a final notation, associated with any process $Y$ and any even integer $p$:

    $M(Y,g,h,p)^n_t = \sum_{r,r'=0}^{p/2} \lambda_{p,r} \, \lambda_{p,r'} \sum_{i=0}^{[t/\Delta_n]-3k_n} (\hat{Y}(g)^n_i)^r \, (\hat{Y}(h)^n_i)^{r'} \Big( |\bar{Y}(g)^n_{i+k_n}|^{p-2r} \, \frac{1}{k_n} \sum_{j=1}^{2k_n} |\bar{Y}(h)^n_{i+j}|^{p-2r'} - 2 \, |\bar{Y}(g)^n_i|^{p-2r} \, |\bar{Y}(h)^n_{i+k_n}|^{p-2r'} \Big).$    (3.17)
Then our last LLN goes as follows:

Theorem 3.6 Let $X$ satisfy (2.14), and let $p \ge 2$ be an even integer. Assume (N-2p), that $b$ is locally bounded and that $\sigma$ and $\alpha$ are càdlàg. Then if $g$ and $h$ are two functions satisfying (2.7), we have

    $\Delta_n^{1-p/2} \, M(Z,g,h,p)^n_t \stackrel{u.c.p.}{\to} \theta^{-p/2} \int_0^t \bar{\rho}_{2p}(g,h; \sigma_s, \alpha_s) \, ds.$    (3.18)

The reader will observe that the limit in (3.18) is symmetric in $g$ and $h$, although $M(Y,g,h,p)^n_t$ is not.
4 Results: the Central Limit Theorems

4.1 CLT for continuous Itô semimartingales

As mentioned before, we do not know whether a CLT associated with the convergence (3.6) exists. But there is one associated with (3.11) when $p$ is an even integer. Below we give a joint CLT for several weight functions $g$ at the same time. We use the notation

    $\widetilde{V}(g,p)^n_t = \frac{1}{\Delta_n^{1/4}} \Big( \Delta_n^{1-p/4} \, \bar{V}(Z,g,p)^n_t - m_p \, (\theta \bar{g}(2))^{p/2} \int_0^t |\sigma_s|^p \, ds \Big).$    (4.1)

In view of Lemma 3.5, the square-root matrix $\gamma$ referred to below exists, and by a standard selection theorem one can find a measurable version for it. For the stable convergence in law used below, we refer for example to [9].
Theorem 4.1 Assume (K) and (N-4p), where $p$ is an even integer, and also that the processes $\alpha$ and $\alpha(3)$ are càdlàg. If $(g_i)_{1 \le i \le d}$ is a family of functions satisfying (2.7), for each $t \ge 0$ the variables $(\widetilde{V}(g_i,p)^n_t)_{1 \le i \le d}$ converge stably in law to a $d$-dimensional variable of the form

    $\Big( \theta^{1/2-p/2} \sum_{j=1}^{d} \int_0^t \gamma^{ij}(\sigma_s, \alpha_s) \, dB^j_s \Big)_{1 \le i \le d},$    (4.2)

where $B$ is a $d$-dimensional Brownian motion independent of $\mathcal{F}$ (and defined on an extension of the space), and $\gamma$ is a measurable $d \times d$ matrix-valued function such that $(\gamma \gamma^\star)(\eta, \alpha)$ is the matrix with entries $\bar{\rho}_{2p}(g_i, g_j; \eta, \alpha)$, as defined by (3.15).
Observe that, up to the multiplicative constant $\theta^{1-p/2}$, the covariance of the $j$-th and $k$-th components of the limit above, conditionally on the $\sigma$-field $\mathcal{F}$, is exactly the right side of (3.18) for $g = g_j$ and $h = g_k$.

Remark 4.2 An application of Theorem 3.6 and the properties of stable convergence now gives a feasible version of Theorem 4.1. We obtain, for example, that the quantity

    $\frac{\widetilde{V}(g,p)^n_t}{\sqrt{\theta^{1-p/2} \, \Delta_n^{1-p/2} \, M(Z,g,g,p)^n_t}}$

converges stably in law (for any fixed $t$) to a variable $U \sim N(0,1)$ that is independent of $\mathcal{F}$. The latter can be used to construct confidence regions for the quantity $\int_0^t |\sigma_s|^p \, ds$ for even $p$'s.

Remark 4.3 We only have above the stable convergence in law for a given (arbitrary) time $t$. Obviously this can be extended to the convergence along any finite family of times, but we do not know whether a functional stable convergence in law holds, although it is quite likely.
4.2 CLT for discontinuous Itô semimartingales

Now we turn to the case when $X$ jumps. There is a CLT for Theorem 3.2, at least when $p = 2$ and $p > 3$, exactly as in [10] for the processes of type (3.1). The CLT for Theorem 3.4, when $p$ is an even integer, takes exactly the same form. In this subsection we are interested in the case $p > 3$, whereas the case $p = 2$ is dealt with in the next subsection.

In view of statistical applications, and as in the previous subsection, we need to consider a family $(g_i)_{1 \le i \le d}$ of weight functions. We use the notation

    $\widetilde{V}^\star(g,p)^n_t = \frac{1}{\Delta_n^{1/4}} \Big( \frac{1}{k_n} \, V(Z,g,p,0)^n_t - \bar{g}(p) \sum_{s \le t} |\Delta X_s|^p \Big)$    (4.3)

and, when further $p \ge 4$ is an even integer,

    $\bar{V}^\star(g,p)^n_t = \frac{1}{\Delta_n^{1/4}} \Big( \frac{1}{k_n} \, \bar{V}(Z,g,p)^n_t - \bar{g}(p) \sum_{s \le t} |\Delta X_s|^p \Big).$    (4.4)
These are the processes whose asymptotic behavior is studied, but to describe the limit we need some rather cumbersome notation, which involves the $d$ weight functions $g_j$ satisfying (2.7), in which we are interested. For any real $x$ and any $p > 0$ we write $\{x\}^p = |x|^p \, \mathrm{sign}(x)$. Then we introduce four $d \times d$ symmetric matrices $\Lambda_{p-}$, $\Lambda_{p+}$, $\Lambda'_{p-}$ and $\Lambda'_{p+}$ with entries:

    $\Lambda^{ij}_{p-} = \int_0^1 \Big( \int_t^1 \{g_i(s)\}^{p-1} \, g_i(s-t) \, ds \Big) \Big( \int_t^1 \{g_j(s)\}^{p-1} \, g_j(s-t) \, ds \Big) dt,$
    $\Lambda^{ij}_{p+} = \int_0^1 \Big( \int_0^{1-t} \{g_i(s)\}^{p-1} \, g_i(s+t) \, ds \Big) \Big( \int_0^{1-t} \{g_j(s)\}^{p-1} \, g_j(s+t) \, ds \Big) dt,$
    $\Lambda'^{ij}_{p-} = \int_0^1 \Big( \int_t^1 \{g_i(s)\}^{p-1} \, g'_i(s-t) \, ds \Big) \Big( \int_t^1 \{g_j(s)\}^{p-1} \, g'_j(s-t) \, ds \Big) dt,$
    $\Lambda'^{ij}_{p+} = \int_0^1 \Big( \int_0^{1-t} \{g_i(s)\}^{p-1} \, g'_i(s+t) \, ds \Big) \Big( \int_0^{1-t} \{g_j(s)\}^{p-1} \, g'_j(s+t) \, ds \Big) dt.$    (4.5)
These matrices are positive semidefinite, and we can thus consider four independent sequences of i.i.d. $d$-dimensional variables $(U_{m-})_{m \ge 1}$, $(U_{m+})_{m \ge 1}$, $(U'_{m-})_{m \ge 1}$ and $(U'_{m+})_{m \ge 1}$, defined on an extension of the space, independent of $\mathcal{F}$, and such that for each $m$ the $d$-dimensional variables $U_{m-}$, $U_{m+}$, $U'_{m-}$ and $U'_{m+}$ are centered Gaussian vectors with respective covariances $\Lambda_{p-}$, $\Lambda_{p+}$, $\Lambda'_{p-}$ and $\Lambda'_{p+}$. Note that these variables also depend on $p$ and on the family $(g_j)$, although this does not show in the notation.
Now let $(T_m)_{m \ge 1}$ be a sequence of stopping times with pairwise disjoint graphs, such that $\Delta X_t \ne 0$ implies that $t = T_m$ for some $m$. As is well known (see [10]), the following $d$-dimensional processes are well-defined when $p > 3$ and $\alpha$ is càdlàg, and are $\mathcal{F}$-conditional martingales:

    $U(p)_t = p \sum_{m \ge 1} \{\Delta X_{T_m}\}^{p-1} \Big( \sqrt{\theta} \, \sigma_{T_m-} \, U_{m-} + \frac{\alpha_{T_m-}}{\sqrt{\theta}} \, U'_{m-} + \sqrt{\theta} \, \sigma_{T_m} \, U_{m+} + \frac{\alpha_{T_m}}{\sqrt{\theta}} \, U'_{m+} \Big) \, 1_{\{T_m \le t\}}.$    (4.6)

Moreover, although these processes obviously depend on the choice of the times $T_m$, their $\mathcal{F}$-conditional laws do not; so if the stable convergence in law below holds for a particular "version" of $U(p)_t$, it also holds for all other versions.
Theorem 4.4 Assume (H) and let $p > 3$. Assume also (N-2p) and that the process $\alpha$ is càdlàg. If $(g_i)_{1 \le i \le d}$ is a family of functions satisfying (2.7), for each $t \ge 0$ the variables $(\widetilde{V}^\star(g_i,p)^n_t)_{1 \le i \le d}$ converge stably in law to the $d$-dimensional variable $U(p)_t$.

The same holds for the sequence $(\bar{V}^\star(g_i,p)^n_t)_{1 \le i \le d}$ if further $p$ is an even integer.
4.3 CLT for the quadratic variation

Finally we give a CLT for the quadratic variation, associated with (3.5) when $p = 2$, or equivalently with (3.10), which is exactly the same in this case. In contrast to the preceding results the function $g$ is kept fixed, thus we will only show a one-dimensional result. So the processes of interest are simply

    $\bar{V}^n_t = \frac{1}{\Delta_n^{1/4}} \Big( \frac{1}{k_n} \, \bar{V}(Z,g,2)^n_t - \bar{g}(2) \, [X,X]_t \Big).$    (4.7)
In order to describe the limit, we introduce an extension of the space on which are defined a Brownian motion $B$ and variables $U_{m-}, U'_{m-}, U_{m+}, U'_{m+}$ indexed by $m \ge 1$, all these being independent of one another and independent of $\mathcal{F}$, and such that the variables $U_{m-}$, $U_{m+}$, $U'_{m-}$ and $U'_{m+}$ are centered Gaussian variables with respective variances $\Lambda^{11}_{2-}$, $\Lambda^{11}_{2+}$, $\Lambda'^{11}_{2-}$ and $\Lambda'^{11}_{2+}$, as defined in (4.5).
As in the previous section, $(T_m)_{m \ge 1}$ is a sequence of stopping times with pairwise disjoint graphs, such that $\Delta X_t \ne 0$ implies that $t = T_m$ for some $m$. Then we associate with these data the process $U(2)$, as defined by (4.6). The result goes as follows:
Theorem 4.5 Assume (H). Assume also (N-4) and that the process $\alpha$ is càdlàg. Then for each $t$ the variables $\bar{V}^n_t$ converge stably in law to the variable

    $\bar{U}_t = \theta^{-1/2} \int_0^t \sqrt{\bar{\rho}_4(g,g; \sigma_s, \alpha_s)} \, dB_s + U(2)_t,$    (4.8)

where $\bar{\rho}_4(g,g; \eta, \alpha)$ is defined by (3.15), which here takes the form

    $\bar{\rho}_4(g,g; \eta, \alpha) = 4 \int_0^1 \Big( \eta^2 \int_s^1 g(u) \, g(u-s) \, du + \alpha^2 \int_s^1 g'(u) \, g'(u-s) \, du \Big)^2 ds.$    (4.9)

When further $X$ is continuous, the processes $\bar{V}^n$ converge stably (in the functional sense) to the process (4.8), with $U(2) = 0$ in this case.

When $X$ is continuous, we exactly recover Theorem 4.1 when $d = 1$ and $g_1 = g$, for $p = 2$. Note that we do not need Hypothesis (K) here, because of the special feature of the case $p = 2$. When $X$ has jumps, though, the functional convergence does not hold.
5 The proofs

Throughout the proofs, we denote by $K$ a constant which may change from line to line. This constant may depend on the characteristics of the process $X$ and the law of the noise $\chi$, on $\theta$ and on the two sequences $(k_n)_{n \ge 1}$ and $(\Delta_n)_{n \ge 1}$ in (2.6), but it depends neither on $n$ itself, nor on the index $i$ of the increments $\Delta^n_i X$ or $\Delta^n_i Z$ under consideration. If it depends on an additional parameter $q$, we write it $K_q$.

For the proof of all the results we can use a localization procedure, described in detail in [10] for instance, which allows us to systematically replace the hypotheses (N-q), (H) or (K), according to the case, by the following strengthened versions:
Hypothesis (SN-q): We have (N-q), and further $\int Q_t(\omega^{(0)}, dz) \, |z|^q \le K$.    □

Hypothesis (SH): We have (H), and the processes $b_t$, $\sigma_t$, $\sup_{z \in E} |\delta(t,z)|/\gamma(z)$ and $X$ itself are bounded.    □

Hypothesis (SK): We have (K), and the processes $b_t$, $\sigma_t$, $\tilde{b}_t$, $a_t$, $a'_t$, $\tilde{\sigma}_t$ and $X$ itself are bounded.    □
Observe that under (SK), and upon taking $v$ large enough in (2.15) (changing $v$ changes the coefficients $\tilde{b}_t$ and $a_t$ without altering their boundedness), we can also suppose that the last term in (2.15) vanishes identically, that is

    $\sigma_t = \sigma_0 + \int_0^t \tilde{b}_s \, ds + \int_0^t \tilde{\sigma}_s \, dW_s + M_t.$    (5.1)
Recall that $|g'^n_j| \le K/k_n$. Then the fact that, conditionally on $\mathcal{F}^{(0)}$, the $\chi_t$'s are independent and centered, plus the Hölder inequality, give us that under (SN-q) we have (the $\sigma$-fields $\mathcal{F}^n_i$ and $\mathcal{G}^n_i$ have been defined after (2.11)):

    $p \le q \;\Longrightarrow\; E(|\bar{\chi}(g)^n_i|^p \mid \mathcal{G}^n_i) \le K_p \, k_n^{-p/2},$
    $2r \le q \;\Longrightarrow\; E(|\hat{\chi}(g)^n_i|^r \mid \mathcal{G}^n_i) \le K_r \, k_n^{-r}.$    (5.2)
We will also often use the following property, valid for all semimartingales $Y$:
\[
\bar Y(g)^n_i = \int_{i\Delta_n}^{i\Delta_n+u_n} g_n(s-i\Delta_n)\,dY_s, \quad\text{where } g_n(s) = \sum_{j=1}^{k_n-1} g^n_j\,1_{((j-1)\Delta_n,\,j\Delta_n]}(s). \qquad (5.3)
\]
5.1 Proof of Theorem 3.1.

We start with an arbitrary semimartingale $X$, written as (2.1). The proof follows several steps.

Step 1) Denote by $B'$ the variation process of $B$, and let $C = \langle X^c, X^c\rangle$. The process $B' + C + (x^2\wedge 1)\star\nu$ is predictable, increasing and finite-valued, hence locally bounded. Then by an obvious localization procedure it is enough to prove the result under the assumption that
\[
B'_\infty + C_\infty + (x^2\wedge 1)\star\nu_\infty \le K \qquad (5.4)
\]
for some constant $K$.

For each $\varepsilon\in(0,1]$ we set
\[
\begin{aligned}
&X(\varepsilon) = (x\,1_{\{|x|>\varepsilon\}})\star\mu, && M(\varepsilon) = (x\,1_{\{|x|\le\varepsilon\}})\star(\mu-\nu),\\
&A(\varepsilon) = \langle M(\varepsilon),M(\varepsilon)\rangle, && B(\varepsilon) = B - (x\,1_{\{\varepsilon<|x|\le1\}})\star\nu,\\
&A'(\varepsilon) = (x^2\,1_{\{|x|\le\varepsilon\}})\star\nu, && B'(\varepsilon) = \text{variation process of } B(\varepsilon),
\end{aligned} \qquad (5.5)
\]
so that we have
\[
X = X_0 + B(\varepsilon) + X^c + M(\varepsilon) + X(\varepsilon). \qquad (5.6)
\]
We also denote by $T_n(\varepsilon)$ the successive jump times of $X(\varepsilon)$, with the convention $T_0(\varepsilon)=0$ (which of course is not a jump time). If $0<\varepsilon<\eta\le1$ we have
\[
A(\varepsilon) \le A'(\varepsilon), \quad \Delta B'(\varepsilon) \le \varepsilon, \quad |\Delta M(\varepsilon)| \le 2\varepsilon, \quad
B'(\varepsilon) \le B' + \tfrac1\varepsilon\,A'(\eta) + \tfrac1\eta\,(x^2\wedge1)\star\nu. \qquad (5.7)
\]
Finally, we write $V(Y,p)^n = V(Y,g,p,0)^n$ and $\bar Y^n_i = \bar Y(g)^n_i$ in this proof. We also set $\Xi(Y,u,t) = \sup_{s\le r\le s+u,\,r\le t}|Y_r - Y_s|$. Observe that $\bar Y^n_i = -\sum_{j=1}^{k_n}\bigl(g((j+1)/k_n) - g(j/k_n)\bigr)\,(Y_{(i+j)\Delta_n} - Y_{i\Delta_n})$. Hence, since the derivative $g'$ is bounded, we obtain
\[
i \le [t/\Delta_n] - k_n + 1 \;\Rightarrow\; |\bar Y^n_i| \le K\,\Xi(Y,u_n,t). \qquad (5.8)
\]
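The summation-by-parts identity used to obtain (5.8) relies only on $g(0)=g(1)=0$ (and $g=0$ outside $[0,1]$) and can be checked numerically; the following sketch (our own illustration, on an arbitrary discrete path $Y$) verifies it.

```python
import numpy as np

def g(x):
    # any weight with g(0) = g(1) = 0 and g = 0 outside [0, 1]
    return np.where((x > 0) & (x < 1), np.minimum(x, 1 - x), 0.0)

rng = np.random.default_rng(1)
kn, i = 50, 7
Y = np.cumsum(rng.normal(size=200))     # arbitrary discrete path Y_{m Delta_n}

j = np.arange(1, kn)                    # j = 1, ..., kn - 1
lhs = np.sum(g(j / kn) * (Y[i + j] - Y[i + j - 1]))   # Ybar(g)^n_i

j2 = np.arange(1, kn + 1)               # j = 1, ..., kn (last term vanishes)
rhs = -np.sum((g((j2 + 1) / kn) - g(j2 / kn)) * (Y[i + j2] - Y[i]))

assert np.isclose(lhs, rhs)             # Abel summation: both sides agree
```

Since the rewritten sum only involves differences $g((j+1)/k_n)-g(j/k_n)$, the bound $|\bar Y^n_i| \le K\,\Xi(Y,u_n,t)$ follows from the boundedness of $g'$.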
Step 2) Here we study $B(\varepsilon)$. By (5.8) and $\Xi(B(\varepsilon),u,t) \le \Xi(B'(\varepsilon),u,t)$ we obtain for $p>1$:
\[
V(B(\varepsilon),p)^n_t \le K\,k_n\,B'(\varepsilon)_t\,\Xi(B'(\varepsilon),u_n,t)^{p-1}.
\]
Since $\Delta B'(\varepsilon)\le\varepsilon$ we have $\limsup_{n\to\infty}\Xi(B'(\varepsilon),u_n,t)\le\varepsilon$, so by (5.4) and (5.7) we have
\[
\limsup_n\,\frac1{k_n}\,V(B(\varepsilon),p)^n_t \le K\,\varepsilon^{p-1}\Bigl(1+\frac1\varepsilon\,A'(\eta)_t\Bigr)
\]
for all $0<\varepsilon<\eta\le1$. Since $A'(\eta)_t\to0$ as $\eta\to0$, we deduce (choose first $\eta$ small, then $\varepsilon$ smaller) that for $p\ge2$:
\[
\lim_{\varepsilon\to0}\,\limsup_n\,\frac1{k_n}\,V(B(\varepsilon),p)^n_t = 0. \qquad (5.9)
\]
Step 3) In this step, we consider a square-integrable martingale $Y$ with $D = \langle Y,Y\rangle$ bounded. In view of (5.3),
\[
E\bigl((\bar Y^n_i)^2\bigr) = E\Bigl(\int_{i\Delta_n}^{i\Delta_n+u_n} g_n(s-i\Delta_n)^2\,dD_s\Bigr) \le K\,E\bigl(D_{i\Delta_n+u_n} - D_{i\Delta_n}\bigr).
\]
On the other hand, $E(\bar Y^n_i\,\bar Y^n_{i+j}) = 0$ whenever $j\ge k_n$. Therefore
\[
E\bigl((V(Y,2)^n_t)^2\bigr) \le K\,k_n\sum_{i=0}^{[t/\Delta_n]-k_n} E\bigl((\bar Y^n_i)^2\bigr) \le K\,k_n^2\,E(D_t). \qquad (5.10)
\]
We first apply this with $Y = M(\varepsilon)$, hence $D = A(\varepsilon)$. In view of (5.10) and since $A'(\varepsilon)_t\to0$ as $\varepsilon\to0$ and $A'(\varepsilon)_t\le K$, we deduce
\[
\lim_{\varepsilon\to0}\,\sup_n\,E\Bigl(\Bigl(\frac1{k_n}V(M(\varepsilon),2)^n_t\Bigr)^2\Bigr) = 0.
\]
Since by (5.8) we have $V(M(\varepsilon),p)^n_t \le K\,V(M(\varepsilon),2)^n_t\,\Xi(M(\varepsilon),u_n,t)^{p-2}$ when $p>2$, and since $\limsup_n\Xi(M(\varepsilon),u_n,t)\le2\varepsilon$, we get for $p\ge2$:
\[
p\ge2,\ \eta>0 \;\Rightarrow\; \lim_{\varepsilon\to0}\,\limsup_{n\to\infty}\,P\Bigl(\frac1{k_n}V(M(\varepsilon),p)^n_t > \eta\Bigr) = 0. \qquad (5.11)
\]
Next, (5.10) with $Y = X^c$ yields that the sequence $\frac1{k_n}V(X^c,2)^n_t$ is bounded in $L^2$. Exactly the same argument as above, where now $\Xi(X^c,u_n,t)\to0$, yields
\[
p>2,\ \eta>0 \;\Rightarrow\; \limsup_{n\to\infty}\,P\Bigl(\frac1{k_n}V(X^c,p)^n_t > \eta\Bigr) = 0. \qquad (5.12)
\]
Step 4) In this step we study $V(X(\varepsilon),p)^n_t$. We fix $t>0$ such that $P(\Delta X_t\neq0)=0$. For any $m\ge1$ we set
\[
I(m,n,\varepsilon) = \inf\bigl(i : i\Delta_n \ge T_m(\varepsilon)\bigr).
\]
We consider the set $\Omega_n(t,\varepsilon)$ on which all intervals between two successive jumps of $X(\varepsilon)$ in $[0,t]$ are of length bigger than $u_n$, and also $[0,u_n)$ and $[t-u_n,t]$ contain no jump. Then $u_n\to0$ and $P(\Delta X_t\neq0)=0$ yield $\Omega_n(t,\varepsilon)\to\Omega$ a.s. as $n\to\infty$. On the set $\Omega_n(t,\varepsilon)$ we have, for $i\le[t/\Delta_n]-k_n+1$:
\[
\bar X(\varepsilon)^n_i = \begin{cases} g^n_{I(m,n,\varepsilon)-i}\,\Delta X_{T_m(\varepsilon)} & \text{if } I(m,n,\varepsilon)-k_n+1 \le i \le I(m,n,\varepsilon)-1 \text{ for some } m,\\ 0 & \text{otherwise.}\end{cases} \qquad (5.13)
\]
Therefore on the set $\Omega_n(t,\varepsilon)$ we have
\[
V(X(\varepsilon),p)^n_t = \bar g(p)_n\sum_{s\le t}|\Delta X_s|^p\,1_{\{|\Delta X_s|>\varepsilon\}},
\]
and (2.10) yields
\[
\frac1{k_n}V(X(\varepsilon),p)^n_t \;\to\; \bar g(p)\sum_{s\le t}|\Delta X_s|^p\,1_{\{|\Delta X_s|>\varepsilon\}}. \qquad (5.14)
\]
Step 5) In this step we study $V(X^c,2)^n_t$. For easier notation we write $Y = X^c$ and $Y(n,i)_s = \int_{i\Delta_n}^s g_n(r-i\Delta_n)\,dY_r$ when $s>i\Delta_n$. Using (5.3) and Itô's formula, we get $(\bar Y^n_i)^2 = \zeta^n_i + \zeta'^n_i$, where
\[
\zeta^n_i = \int_{i\Delta_n}^{i\Delta_n+u_n} g_n(s-i\Delta_n)^2\,dC_s, \qquad
\zeta'^n_i = 2\int_{i\Delta_n}^{i\Delta_n+u_n} Y(n,i)_s\,dY_s.
\]
On the one hand, $\sum_{i=0}^{[t/\Delta_n]-k_n}\zeta^n_i$ is equal to $\bar g(2)_n\,C_t$, plus a term smaller in absolute value than $K\,C_{u_n}$ and another term smaller than $K(C_t - C_{t-u_n})$. Then obviously
\[
\frac1{k_n}\sum_{i=0}^{[t/\Delta_n]-k_n}\zeta^n_i \;\to\; \bar g(2)\,C_t. \qquad (5.15)
\]
On the other hand, we have $E(\zeta'^n_i\,\zeta'^n_{i+j}) = 0$ when $j\ge k_n$, and
\[
E\bigl((\zeta'^n_i)^2\bigr) \le 4\,E\Bigl((C_{i\Delta_n+u_n}-C_{i\Delta_n})\sup_{s\in[i\Delta_n,\,i\Delta_n+u_n]} Y(n,i)_s^2\Bigr).
\]
Now, by Doob's inequality $E\bigl(\sup_{s\in[i\Delta_n,\,i\Delta_n+u_n]} Y(n,i)_s^4\bigr) \le K\,E\bigl((C_{i\Delta_n+u_n}-C_{i\Delta_n})^2\bigr)$, hence the Cauchy–Schwarz inequality yields
\[
E\bigl((\zeta'^n_i)^2\bigr) \le K\,E\bigl((C_{i\Delta_n+u_n}-C_{i\Delta_n})^2\bigr) \le K\,E\bigl((C_{i\Delta_n+u_n}-C_{i\Delta_n})\,\Xi(C,u_n,t)\bigr)
\]
whenever $i\le[t/\Delta_n]-k_n+1$. At this point, the same argument as for (5.10) gives
\[
E\Bigl(\Bigl(\sum_{i=0}^{[t/\Delta_n]-k_n}\zeta'^n_i\Bigr)^2\Bigr) \le K\,k_n^2\,E\bigl(C_t\,\Xi(C,u_n,t)\bigr) \le K\,k_n^2\,E\bigl(\Xi(C,u_n,t)\bigr).
\]
But $\Xi(C,u_n,t)$ tends to $0$ and is smaller, uniformly in $n$, than a square-integrable variable. We then deduce that $\frac1{k_n}\sum_{i=0}^{[t/\Delta_n]-k_n}\zeta'^n_i \stackrel{P}{\longrightarrow} 0$, and this combined with (5.15) yields
\[
\frac1{k_n}V(X^c,2)^n_t \stackrel{P}{\longrightarrow} \bar g(2)\,C_t. \qquad (5.16)
\]
Step 6) It remains to put all the previous partial results together. For this we use the following obvious property: for any $p\ge2$ and $\eta>0$ there is a constant $K_{p,\eta}$ such that
\[
x,y\in\mathbb R \;\Rightarrow\; \bigl||x+y|^p - |x|^p\bigr| \le K_{p,\eta}\,|y|^p + \eta\,|x|^p. \qquad (5.17)
\]
Suppose first that $p>2$. Applying (5.17) and (5.6), we get
\[
\bigl|V(X,p)^n_t - V(X(\varepsilon),p)^n_t\bigr| \le \eta\,V(X(\varepsilon),p)^n_t + K_{p,\eta}\bigl(V(B(\varepsilon),p)^n_t + V(X^c,p)^n_t + V(M(\varepsilon),p)^n_t\bigr).
\]
Then by (5.9), (5.11), (5.12) and (5.14), plus $\sum_{s\le t}|\Delta X_s|^p 1_{\{|\Delta X_s|>\varepsilon\}} \to \sum_{s\le t}|\Delta X_s|^p$ as $\varepsilon\to0$, and by taking $\eta$ arbitrarily small in the above, we obtain the first part of (3.2).
Next suppose that $p=2$. The same argument shows that it is enough to prove that
\[
\frac1{k_n}V(X^c+X(\varepsilon),2)^n_t \stackrel{P}{\longrightarrow} \bar g(2)\Bigl(C_t + \sum_{s\le t}|\Delta X_s|^2\,1_{\{|\Delta X_s|>\varepsilon\}}\Bigr). \qquad (5.18)
\]
On the set $\Omega_n(t,\varepsilon)$, one easily sees that
\[
V(X^c+X(\varepsilon),2)^n_t = V(X^c,2)^n_t + V(X(\varepsilon),2)^n_t + \sum_{m\ge1:\,T_m(\varepsilon)\le t}\zeta^n_m,
\]
where $\zeta^n_m = \sum_{i=I(m,n,\varepsilon)-k_n+1}^{I(m,n,\varepsilon)-1}\rho(m,n,i)$ and (with again $Y=X^c$)
\[
\rho(m,n,i) = \bigl(g^n_{I(m,n,\varepsilon)-i}\,\Delta X_{T_m(\varepsilon)} + \bar Y^n_i\bigr)^2 - \bigl(g^n_{I(m,n,\varepsilon)-i}\,\Delta X_{T_m(\varepsilon)}\bigr)^2 - |\bar Y^n_i|^2.
\]
In view of (5.8), we deduce from (5.17) that for all $\eta>0$,
\[
|\rho(m,n,i)| \le K_\eta\,\Xi(X^c,u_n,t)^2 + K\eta\,|\Delta X_{T_m(\varepsilon)}|^2
\]
if $I(m,n,\varepsilon)-k_n < i < I(m,n,\varepsilon)$ and $T_m(\varepsilon)\le t$. Then obviously (since $\eta$ is arbitrarily small) we have $\zeta^n_m/k_n\to0$ for all $m$ with $T_m(\varepsilon)\le t$. Hence (5.18) follows from (5.16) and (5.14), and we are finished.
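The structure (5.13) can also be verified directly: for a path which is constant except for a single jump away from the boundary, every nonzero $\bar X(\varepsilon)^n_i$ is a weight $g^n_j$ times the jump size, so $V(X(\varepsilon),p)^n_t$ reduces exactly to $\bar g(p)_n\,|\Delta X|^p$ as in the display before (5.14). A short check (our own illustration, with the triangular weight; all constants are arbitrary):

```python
import numpy as np

g = lambda x: np.minimum(x, 1 - x)      # weight function, g(0) = g(1) = 0
kn, n, p = 40, 500, 2
jump_index, jump_size = 250, 1.7        # one jump at time jump_index * Delta_n

X = np.zeros(n + 1)
X[jump_index:] = jump_size              # pure-jump path: constant except one jump

dX = np.diff(X)                         # dX[m-1] = Delta^n_m X
j = np.arange(1, kn)
w = g(j / kn)                           # g^n_j = g(j / kn)
Xbar = np.array([w @ dX[i:i + kn - 1] for i in range(n - kn + 2)])

V = np.sum(np.abs(Xbar) ** p)           # V(X, g, p, 0)^n at the terminal time
gbar_p_n = np.sum(w ** p)               # \bar g(p)_n = sum_j |g^n_j|^p

assert np.isclose(V, gbar_p_n * abs(jump_size) ** p)
```

Dividing by $k_n$ and using $\bar g(p)_n/k_n \to \bar g(p)$ recovers the limit in (5.14) for this one-jump path.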
5.2 Proof of Theorem 3.2.

Now we turn to the case where noise is present. $X$ is still an arbitrary semimartingale, and as in the previous theorem we can assume by localization that (5.4) holds.

We first prove (a), and we assume (Nq) with $q=p$ for proving (3.3) and $q=p+2r$ for proving (3.4). Another localization allows us to assume (SNq), in which case (5.2) implies
\[
E\bigl(V(\chi,g,q,0)^n_t\bigr) + E\bigl(V(\chi,g,0,q/2)^n_t\bigr) \le K\,\frac{t}{\Delta_n}\,k_n^{-q/2} \le K\,t\,k_n^{2-q/2}. \qquad (5.19)
\]
We deduce from (5.17) that, for all $\eta>0$,
\[
\bigl|V(Z,g,q,0)^n_t - V(X,g,q,0)^n_t\bigr| \le \eta\,V(X,g,q,0)^n_t + K_{q,\eta}\,V(\chi,g,q,0)^n_t, \qquad (5.20)
\]
and thus (3.3) follows from (3.2) and (5.19).
Next, Hölder's inequality yields, when $p,r>0$ with $p+2r=q>2$:
\[
V(Z,g,p,r)^n_t \le \bigl(V(Z,g,q,0)^n_t\bigr)^{p/q}\,\bigl(V(Z,g,0,q/2)^n_t\bigr)^{2r/q}.
\]
By (3.3) applied with $q$ instead of $p$ we see that the sequence $k_n^{-1}V(Z,g,q,0)^n_t$ is tight, so for (3.4) it is enough to show that the sequence $k_n^{q/2-2}V(Z,g,0,q/2)^n_t$ is also tight.

To see this we first deduce from $|g'^n_j|\le K/k_n$ that
\[
\hat X(g)^n_i \le \frac{K}{k_n^2}\sum_{j=i}^{i+k_n-1}(\Delta^n_j X)^2, \qquad (5.21)
\]
implying by the Hölder inequality (recall $q>2$) that $(\hat X(g)^n_i)^{q/2} \le \frac{K}{k_n^{1+q/2}}\sum_{j=i}^{i+k_n-1}|\Delta^n_j X|^q$, hence by (3.1) the sequence $k_n^{q/2}V(X,g,0,q/2)^n_t$ is tight. Second, (5.19) yields that the sequence $k_n^{q/2-2}V(\chi,g,0,q/2)^n_t$ is tight, and (3.4) follows because $V(Z,g,0,q/2)^n_t \le K_q\bigl(V(X,g,0,q/2)^n_t + V(\chi,g,0,q/2)^n_t\bigr)$.
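Regarding (5.21): for the triangular weight $g(x)=x\wedge(1-x)$ (with $k_n$ even) all increments $g'^n_j$ have modulus exactly $1/k_n$, so over the $k_n$ increments entering $\hat X(g)^n_i$ the bound even becomes an equality with $K=1$. A quick numerical confirmation (our own illustration; the indexing follows the definition of $\hat X(g)^n_i$, and the path is an arbitrary toy example):

```python
import numpy as np

rng = np.random.default_rng(2)
kn, i = 64, 10
X = np.cumsum(rng.normal(size=200))            # arbitrary discrete path
dX = np.diff(X)                                # dX[m-1] = Delta^n_m X

j = np.arange(1, kn + 1)
gn = np.minimum(j, kn - j) / kn                # g^n_j = g(j/kn)
gprime = gn - np.minimum(j - 1, kn - (j - 1)) / kn   # g'^n_j = g^n_j - g^n_{j-1}

Xhat = np.sum((gprime * dX[i:i + kn]) ** 2)    # \hat X(g)^n_i
bound = np.sum(dX[i:i + kn] ** 2) / kn ** 2    # right side of (5.21) with K = 1

assert np.isclose(Xhat, bound)                 # equality: |g'^n_j| = 1/kn here
```

For a general weight satisfying (2.7) one only gets the inequality, with $K$ the squared Lipschitz constant of $g$.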
Now we turn to (b), and by localization we can assume (SN2). The left side of (3.5) can be written as
\[
\frac1{k_n}V(X,g,2,0)^n_t + \frac1{k_n}\sum_{l=1}^4 U(l)^n_t,
\]
where
\[
U(l)^n_t = \begin{cases}
2\sum_{i=0}^{[t/\Delta_n]-k_n}\bar X(g)^n_i\,\bar\chi(g)^n_i & \text{if } l=1,\\[2pt]
-\sum_{i=0}^{[t/\Delta_n]-k_n}\sum_{j=1}^{k_n}(g'^n_j)^2\,\Delta^n_{i+j}X\,\Delta^n_{i+j}\chi & \text{if } l=2,\\[2pt]
-\tfrac12\,V(X,g,0,1)^n_t & \text{if } l=3,\\[2pt]
V(\chi,g,2,0)^n_t - \tfrac12\,V(\chi,g,0,1)^n_t & \text{if } l=4,
\end{cases}
\]
and by (3.2) it is enough to prove that for $l=1,2,3,4$,
\[
\frac1{k_n}U(l)^n_t \stackrel{P}{\longrightarrow} 0. \qquad (5.22)
\]
First, (5.21) yields $|U(3)^n_t| \le \frac{K}{k_n}\sum_{i=1}^{[t/\Delta_n]}(\Delta^n_i X)^2$, so (5.22) for $l=3$ follows from (3.1). Next, (2.5) implies $E(U(l)^n_t\mid\mathcal F^{(0)})=0$ for $l=1,2$, hence (5.22) for $l=1,2$ will be implied by
\[
E\Bigl(\Bigl(\frac1{k_n}U(l)^n_t\Bigr)^2\,\Big|\,\mathcal F^{(0)}\Bigr) \stackrel{P}{\longrightarrow} 0. \qquad (5.23)
\]
By (2.5), (2.11) and (5.2), the variables $|E(\bar\chi(g)^n_i\,\bar\chi(g)^n_j\mid\mathcal F^{(0)})|$ vanish if $j\ge k_n$ and are smaller than $K/k_n$ otherwise, whereas the variables $|E(\Delta^n_i\chi\,\Delta^n_{i+j}\chi\mid\mathcal F^{(0)})|$ are bounded, and vanish if $j\ge2$. Then we get
\[
E\bigl((U(1)^n_t)^2\mid\mathcal F^{(0)}\bigr) \le \frac{K}{k_n}\sum_{i=0}^{[t/\Delta_n]-k_n}\sum_{j=1}^{k_n}\bigl|\bar X(g)^n_i\,\bar X(g)^n_{i+j}\bigr| \le K\,V(X,g,2,0)^n_t,
\]
\[
E\bigl((U(2)^n_t)^2\mid\mathcal F^{(0)}\bigr) \le \frac{K}{k_n^4}\sum_{i,i'=0}^{[t/\Delta_n]-k_n}\ \sum_{j,j'=0}^{k_n-1}\bigl|\Delta^n_{i+j}X\,\Delta^n_{i'+j'}X\bigr|\,1_{\{|i'+j'-i-j|\le2\}} \le \frac{K}{k_n^2}\sum_{i=1}^{[t/\Delta_n]}(\Delta^n_i X)^2,
\]
and (5.23) follows from (3.2) when $l=1$ and from (3.1) when $l=2$.
Finally, an easy calculation shows that $U(4)^n_t = U(5)^n_t + U(6)^n_t$, where
\[
U(5)^n_t = \sum_{i=0}^{[t/\Delta_n]}\Delta^n_i\chi\sum_{j=1}^{k_n}a^n_{ij}\,\Delta^n_{i+j}\chi, \qquad
U(6)^n_t = \sum_{i=0}^{k_n}\Bigl(a'^n_i\,(\Delta^n_i\chi)^2 + a''^n_i\,(\Delta^n_{i+[t/\Delta_n]-k_n}\chi)^2\Bigr)
\]
for some coefficients $a^n_{ij}$, $a'^n_i$, $a''^n_i$, all smaller than $K/k_n$. Then obviously $E(|U(6)^n_t|)\le K$ and $E(U(5)^n_t)=0$ and, since $E(\Delta^n_i\chi\,\Delta^n_{i+j}\chi\,\Delta^n_{i'}\chi\,\Delta^n_{i'+j'}\chi)$ vanishes unless $i=i'$ and $j=j'$ when $j,j'\ge1$, we also have $E\bigl((U(5)^n_t)^2\bigr) \le K\,t/(k_n\Delta_n) \le K\,t\,k_n$. Then (5.22) and (5.23) hold for $l=6$ and $l=5$ respectively, and thus (5.22) finally holds for $l=4$. $\Box$
5.3 A key lemma.

In this section we prove a key result, useful for deriving the other LLNs when the process $X$ is continuous, and for all CLTs. Before that, we prove Lemma 3.5.

Proof of Lemma 3.5. By virtue of (3.14) we have
\[
\mu_p(g;\sigma,\alpha) = \sum_{v=0}^{p/2} m_{2v}\,\bigl(\sigma^2\,\bar g(2)\bigr)^v\bigl(\alpha^2\,\bar{g'}(2)\bigr)^{p/2-v}\sum_{r=0}^{p/2-v} C^{2v}_{p-2r}\,\lambda_{p,r}\,2^r\,m_{p-2r-2v}.
\]
By (3.7) the last sum above vanishes if $v<p/2$ and equals $1$ when $v=p/2$, hence (3.16).
Next, we put $a_i = \mu_p(g_i;\sigma,\alpha)$ and $U^i_t = L(g_i)_t + L'(g_i)_t$ and, for $T\ge2$,
\[
V^i_T = \sum_{r=0}^{p/2}\lambda_{p,r}\,\bigl(2\alpha^2\,\bar{g_i'}(2)\bigr)^r\int_0^T |U^i_t|^{p-2r}\,dt.
\]
The process $(L(g_i),L'(g_i))$ is stationary, hence $E'(V^i_T) = T\,a_i$. Moreover, if
\[
f_{ij}(s,t) = \sum_{r,r'=0}^{p/2}\lambda_{p,r}\,\lambda_{p,r'}\,\bigl(2\alpha^2\,\bar{g_i'}(2)\bigr)^r\bigl(2\alpha^2\,\bar{g_j'}(2)\bigr)^{r'} E'\bigl(|U^i_s|^{p-2r}\,|U^j_t|^{p-2r'}\bigr) - a_i\,a_j,
\]
then $f_{ij}$ satisfies $f_{ij}(s,t) = f_{ij}(s+u,t+u)$ and $f_{ij}(s,t)=0$ if $|s-t|>1$. Thus if $T>2$,
\[
\mathrm{Cov}(V^i_T,V^j_T) = \int_{[0,T]^2} f_{ij}(s,t)\,ds\,dt
= \int_0^1 ds\int_0^{s+1} f_{ij}(s,t)\,dt + \int_{T-1}^T ds\int_{s-1}^T f_{ij}(s,t)\,dt + \int_1^{T-1} ds\int_{s-1}^{s+1} f_{ij}(s,t)\,dt.
\]
Therefore $\frac1T\,\mathrm{Cov}(V^i_T,V^j_T)$ converges to $\int_0^2 f_{ij}(1,u)\,du$ as $T\to\infty$, and this limit equals $\Phi_{2p}(g_i,g_j;\sigma,\alpha)$. Since the limit of a sequence of covariance matrices is symmetric nonnegative definite, we have the result. $\Box$
Now, we fix a sequence $i_n$ of integers, and we associate the following processes, with $g$ an arbitrary function satisfying (2.7):
\[
L(g)^n_t = \sqrt{k_n}\,\bar W(g)^n_{i_n+[k_n t]}, \qquad
L'(g)^n_t = \sqrt{k_n}\,\bar\chi(g)^n_{i_n+[k_n t]}, \qquad
\hat L'(g)^n_t = k_n\,\hat\chi(g)^n_{i_n+[k_n t]}. \qquad (5.24)
\]
We do not mention the sequence $i_n$ in this notation, but those processes obviously depend on it.
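The normalization in (5.24) can be checked by a direct variance computation: for i.i.d. noise with variance $\alpha^2$, summation by parts gives $\mathrm{Var}(\sqrt{k_n}\,\bar\chi(g)^n_i) = \alpha^2\,k_n\sum_{j=0}^{k_n-1}(g^n_{j+1}-g^n_j)^2 \to \alpha^2\,\bar{g'}(2)$, which is why $L'(g)^n$ is of order one. For the triangular weight $g(x)=x\wedge(1-x)$ one has $\bar{g'}(2)=\int_0^1 g'(s)^2\,ds = 1$, and (for even $k_n$) the discrete sum is already exact (our own illustration):

```python
import numpy as np

kn = 1000                                # number of pre-averaging steps (even)
j = np.arange(kn + 1)
gn = np.minimum(j, kn - j) / kn          # g^n_j = g(j/kn) with g(x) = min(x, 1-x)

# scaled variance of sqrt(kn) * chibar(g)^n_i for unit-variance i.i.d. noise:
# kn * sum_j (g^n_{j+1} - g^n_j)^2, which should equal \bar{g'}(2) = 1 here
var_scaled = kn * np.sum(np.diff(gn) ** 2)
assert np.isclose(var_scaled, 1.0)
```

The same computation with $\hat\chi$ in place of $\bar\chi$ explains the factor $k_n$ in the third definition of (5.24).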
Below, we fix a family $(g_l)_{1\le l\le d}$ of weight functions satisfying (2.7). We denote by $L^n_t$, $L'^n_t$ and $\hat L'^n_t$ the $d$-dimensional processes with respective components $L(g_l)^n_t$, $L'(g_l)^n_t$ and $\hat L'(g_l)^n_t$. These processes can be considered as variables with values in the Skorokhod space $\mathbb D^d$ of all càdlàg functions from $\mathbb R_+$ into $\mathbb R^d$. The processes $L_t$ and $L'_t$ with components $L(g_l)_t$ and $L'(g_l)_t$, defined by (3.12) with the same Wiener processes $W_1$ and $W_2$ for all components, are also $\mathbb D^d$-valued variables, and the probability on $\mathbb D^{2d} = \mathbb D^d\times\mathbb D^d$ which is the law of the pair $(L,L')$ is denoted by $R = R_{(g_v)} = R(dx,dy)$.
We also have a sequence $(f_n)$ of functions on $\mathbb D^{3d}$, which all depend on $w\in\mathbb D^{3d}$ only through its restriction to $[0,m+1]$ for some $m\ge0$, and which satisfy the following property for some $q'\ge2$ (below, $x,y,z\in\mathbb D^d$, so $v=(x,y)\in\mathbb D^{2d}$ and $(x,y,z)=(v,z)\in\mathbb D^{3d}$, and the same for $x',y',z'$ and $v'$; moreover for any multidimensional function $u$ on $\mathbb R_+$ we put $u^\star_m = \sup_{s\in[0,m+1]}\|u(s)\|$):
\[
\begin{aligned}
&|f_n(v,z)| \le K\bigl(1 + (v^\star_m)^{q'} + (z^\star_m)^{q'/2}\bigr),\\
&|f_n(v,z)-f_n(v',z')| \le K\bigl((v-v')^\star_m + (z-z')^\star_m\bigr)\bigl(1 + (v^\star_m)^{q'-1} + (v'^\star_m)^{q'-1} + (z^\star_m)^{q'/2-1} + (z'^\star_m)^{q'/2-1}\bigr).
\end{aligned} \qquad (5.25)
\]
We can now state the main result of this subsection:

Lemma 5.1 Assume (SNq) for some $q>4$ and that the process $\sigma$ is bounded. Let $\Lambda$ be the set of all times $s\ge0$ such that both $\sigma$ and $\alpha$ are almost surely continuous at time $s$. Take any sequence $(i_n)$ of integers such that $s_n = i_n\Delta_n$ converges to some $s\in\Lambda$. If the sequence $(f_n)$ satisfies (5.25) for some $q'<q$ and converges pointwise to a limit $f$, we have the almost sure convergence:
\[
E\bigl(f_n(\sigma_{s_n}L^n,\,L'^n,\,\hat L'^n)\mid\mathcal F_{s_n}\bigr) \;\to\; \int f\bigl(\sigma_s x,\,\alpha_s y,\,2(\alpha_s)^2 z_0\bigr)\,R(dx,dy), \qquad (5.26)
\]
where $z_0$ is the constant function with components $(\bar{g_l'}(2))_{1\le l\le d}$.
Proof. 1) We first prove an auxiliary result. Let $\Omega^{(0)}_s$ be the set of all $\omega^{(0)}$ such that both $\sigma(\omega^{(0)})$ and $\alpha(\omega^{(0)})$ are continuous at time $s$. We have $P^{(0)}(\Omega^{(0)}_s)=1$ because $s\in\Lambda$, and we fix $\omega^{(0)}\in\Omega^{(0)}_s$. We consider the probability space $(\Omega^{(1)},\mathcal F^{(1)},Q)$, where $Q = Q(\omega^{(0)},\cdot)$, and our aim is to show that under $Q$,
\[
L'^n \stackrel{\mathcal L}{\longrightarrow} \alpha_s(\omega^{(0)})\,L' \qquad (5.27)
\]
(functional convergence in law in $\mathbb D^d$), with $L'=(L'_t)$ the process introduced after (5.24).
We first prove the finite-dimensional convergence. Let $0<t_1<\cdots<t_r$. By (5.24) and (2.11) the $rd$-dimensional variable $Z^n = (L'^{n,l}_{t_i} : 1\le l\le d,\,1\le i\le r)$ is $Z^n = \sum_{j=1}^\infty z^n_j$, where $z^n_j = \eta^n_j\,a^n_j$, $\eta^n_j = \frac1{\sqrt{k_n}}\,\chi_{(i_n+j-1)\Delta_n}$ and
\[
a^{n,l,i}_j = \begin{cases} k_n\,(g_l)'^n_{j-[k_n t_i]} & \text{if } 1+[k_n t_i] \le j \le k_n+[k_n t_i],\\ 0 & \text{otherwise.}\end{cases} \qquad (5.28)
\]
Under $Q$ the variables $\eta^n_j$ are independent and centered with $E_Q(|\eta^n_j|^4)\le K\,k_n^{-2}$ by (SNq), recall $q>4$. The numbers $a^{n,l,i}_j$ being uniformly bounded and equal to $0$ when $j > k_n+[k_n t_r]$, we deduce that under $Q$ again the variables $z^n_j$ are independent, with
\[
E_Q(z^n_j) = 0, \qquad E_Q(\|z^n_j\|^4) \le K\,k_n^{-2}, \qquad \sum_{j=1}^\infty E_Q(\|z^n_j\|^4) \to 0. \qquad (5.29)
\]
Next,
\[
\sum_{j=1}^\infty E_Q\bigl(z^{n,l,i}_j\,z^{n,l',i'}_j\bigr) = \frac1{k_n}\sum_{j=1}^\infty \alpha_{(i_n+j-1)\Delta_n}(\omega^{(0)})^2\,a^{n,l,i}_j\,a^{n,l',i'}_j.
\]
On the one hand $\alpha_{(i_n+j-1)\Delta_n}(\omega^{(0)})^2$ converges, uniformly in $j\le k_n+[t_r k_n]$, to $\alpha_s(\omega^{(0)})^2$, because $s\mapsto\alpha_s(\omega^{(0)})$ is continuous at $s$. On the other hand (recall $g_l=0$ outside $[0,1]$),
\[
\frac1{k_n}\sum_{j=1}^\infty a^{n,l,i}_j\,a^{n,l',i'}_j = k_n\sum_{j=1}^\infty \int_{(j-1)/k_n}^{j/k_n} g_l'\Bigl(u-\frac{[k_n t_i]}{k_n}\Bigr)du\,\int_{(j-1)/k_n}^{j/k_n} g_{l'}'\Bigl(u-\frac{[k_n t_{i'}]}{k_n}\Bigr)du,
\]
which clearly converges to $c^{l,i,l',i'} = \int g_l'(v-t_i)\,g_{l'}'(v-t_{i'})\,dv$ by the mean value theorem, the piecewise continuity of each $g_l'$, and Riemann approximation. Hence
\[
\sum_{j=1}^\infty E_Q\bigl(z^{n,l,i}_j\,z^{n,l',i'}_j\bigr) \;\to\; c^{l,i,l',i'}\,\alpha_s(\omega^{(0)})^2. \qquad (5.30)
\]
Then a standard limit theorem on rowwise independent triangular arrays of infinitesimal variables yields that $Z^n$ converges in law under $Q$ to a centered Gaussian variable with covariance matrix $(\alpha_s(\omega^{(0)})^2\,c^{l,i,l',i'})$, see e.g. Theorem VII-2-36 of [9]. Now, in view of (3.12), the matrix $(c^{l,i,l',i'})$ is the covariance of the centered Gaussian vector $(L'^l_{t_i} : 1\le l\le d,\,1\le i\le r)$, and the finite-dimensional convergence in (5.27) is proved.
To obtain the functional convergence in (5.27) it remains to prove that for each component the processes $L'(g_l)^n$ are C-tight. For this we use a criterion given in [7] for example. Namely, since $q>2$, the tightness of the sequence $L'(g_l)^n$ is implied by
\[
0<v\le1 \;\Rightarrow\; E_Q\bigl(|L'(g_l)^n_{t+v} - L'(g_l)^n_t|^q\bigr) \le K\,v^{q/2}. \qquad (5.31)
\]
A simple computation shows $L'(g_l)^n_{t+v} - L'(g_l)^n_t = \sum_j \beta^n_j\,\chi^n_j$ for suitable coefficients $\beta^n_j$, such that at most $2[k_n v]$ of them are smaller than $K_1/\sqrt{k_n}$, at most $k_n$ of them are smaller than $K_2 v/\sqrt{k_n}$, and all others vanish. Then the Burkholder–Davis–Gundy inequality yields
\[
E_Q\bigl(|L'(g_l)^n_{t+v} - L'(g_l)^n_t|^q\bigr) \le K\,E_Q\Bigl(\Bigl(\sum_j(\beta^n_j\,\chi^n_j)^2\Bigr)^{q/2}\Bigr) \le K(q)(\omega^{(0)})\,\bigl(K_1^q\,(2v)^{q/2} + K_2^q\,v^q\bigr),
\]
and (5.31) follows. Note also that the same argument implies
\[
E_Q\Bigl(\sup_{v\le t}|L'(g_l)^n_v|^q\Bigr) \le K_t. \qquad (5.32)
\]
2) In exactly the same setting as in the previous step, we want to prove here that
\[
\hat L'(g_l)^n_t \stackrel{\text{u.c.p.}}{\longrightarrow} 2\bigl(\alpha_s(\omega^{(0)})\bigr)^2\,\bar{g_l'}(2), \qquad
E_Q\Bigl(\sup_{v\le t}|\hat L'(g_l)^n_v|^{q/2}\Bigr) \le K_t \qquad (5.33)
\]
(under $Q$ again). Under $Q$ the variable $\zeta^n_{t,j} = k_n\bigl((g_l)'^n_j\,\Delta^n_{i_n+[k_n t]+j}\chi\bigr)^2$ satisfies
\[
a^n_{t,j} := E_Q(\zeta^n_{t,j}) = k_n\bigl((g_l)'^n_j\bigr)^2\Bigl(\alpha_{(i_n+[k_n t]+j)\Delta_n}(\omega^{(0)})^2 + \alpha_{(i_n+[k_n t]+j-1)\Delta_n}(\omega^{(0)})^2\Bigr), \qquad
E_Q\bigl(|\zeta^n_{t,j}|^{q/2}\bigr) \le K/k_n^{q/2}.
\]
In view of the continuity of $\alpha(\omega^{(0)})$ at time $s$ and of (2.10), and since $\hat L'(g_l)^n_t = \sum_{j=1}^{k_n}\zeta^n_{t,j}$, we see that $B^n_t = E_Q(\hat L'(g_l)^n_t) = \sum_{j=1}^{k_n}a^n_{t,j}$ converges locally uniformly to the "constant" $2(\alpha_s(\omega^{(0)}))^2\,\bar{g_l'}(2)$, and also $B^n_t\le K$. Hence it is enough to prove that $V^n_t = \hat L'(g_l)^n_t - B^n_t \stackrel{\text{u.c.p.}}{\longrightarrow} 0$ and that the second part of (5.33) holds when $\hat L'(g_l)^n_t$ is substituted with $V^n_t$.

Now, $V^n_t$ is the sum of the $k_n$ centered variables $\zeta^n_{t,j} - a^n_{t,j}$, with $(q/2)$-th absolute moment smaller than $K/k_n^{q/2}$, and $\zeta^n_{t,j}$ is independent of $(\zeta^n_{t,l} : |l-j|\ge2)$. Then obviously $E_Q((V^n_t)^2)\le K/k_n\to0$. Moreover if $v\in(0,1]$, then $\hat L'(g_l)^n_{t+v} - \hat L'(g_l)^n_t = \sum_i \beta^n_i\,(\chi^n_i)^2$ for suitable coefficients $\beta^n_i$, such that at most $2[k_n v]$ of them are smaller than $K_1/k_n$, at most $k_n$ of them are smaller than $K_2 v/k_n$, and all others vanish. Then by the Burkholder–Davis–Gundy inequality (applied separately to the sum over even indices and the sum over odd indices, to ensure the independence of the summands), we have
\[
E_Q\bigl(|V^n_{t+v} - V^n_t|^{q/2}\bigr) \le K\,E_Q\Bigl(\Bigl(\sum_j\bigl(\beta^n_j\,(\chi^n_j)^2\bigr)^2\Bigr)^{q/4}\Bigr) \le K\,v^{q/4}.
\]
The second part of (5.33) for $V^n_t$ follows and, together with the property $q>4$ and the fact that $V^n_t\stackrel{P}{\to}0$ for all $t$, it also implies $V^n_t\stackrel{\text{u.c.p.}}{\longrightarrow}0$. Therefore we have (5.33).
3) Now we draw some consequences of the previous facts. We set, for $y,z\in\mathbb D^d$, and with $z_0$ the constant function with components $\bar{g_l'}(2)$:
\[
f^n_{\omega^{(0)}}(y,z) = f_n\bigl(\sigma_{s_n}(\omega^{(0)})\,L^n(\omega^{(0)}),\,y,\,z\bigr),
\]
\[
A^n_j(\omega^{(0)}) = \begin{cases} \displaystyle\int Q(\omega^{(0)},d\omega^{(1)})\,f^n_{\omega^{(0)}}\bigl(L'^n(\omega^{(1)}),\,\hat L'^n(\omega^{(1)})\bigr) & j=1,\\[6pt] \displaystyle\int f^n_{\omega^{(0)}}\bigl(\alpha_{s_n}(\omega^{(0)})\,y,\,2\alpha_{s_n}^2\,z_0\bigr)\,R(dx,dy) & j=2.\end{cases}
\]
The $\mathcal F^{(0)}$-measurable variables
\[
\Gamma_n = 1 + \sup_{v\in[0,(m+1)u_n]}\sqrt{k_n}\,|W_{s_n+v} - W_{s_n}|
\]
satisfy $E(\Gamma_n^u)\le K_u$ for any $u>0$, by scaling of the Brownian motion $W$, whereas $|L(g_l)^n_t|\le K\,\Gamma_n$ if $t\le m$. Then we deduce from (5.25) and from the boundedness of $\sigma$ and $\alpha$ that if $y,y',z,z'$ are in $\mathbb D^d$ and $u=(y,z)$ and $u'=(y',z')$:
\[
\begin{aligned}
&|f^n_{\omega^{(0)}}(u)| \le K\,\Gamma_n(\omega^{(0)})^{q'}\bigl(1 + (y^\star_m)^{q'} + (z^\star_m)^{q'/2}\bigr),\\
&|f^n_{\omega^{(0)}}(u) - f^n_{\omega^{(0)}}(u')| \le K\,\Gamma_n(\omega^{(0)})^{q'}\,(u-u')^\star_m\bigl(1 + (y^\star_m)^{q'-1} + (y'^\star_m)^{q'-1} + (z^\star_m)^{q'/2-1} + (z'^\star_m)^{q'/2-1}\bigr).
\end{aligned}
\]
Moreover $\sigma_{s_n}(\omega^{(0)})\to\sigma_s(\omega^{(0)})$, so by the Skorokhod representation theorem (according to which, in case of convergence in law, one can replace the original variables by variables having the same laws and converging pointwise), one deduces from (5.27), (5.32) and (5.33) (these imply that the variables $f^n_{\omega^{(0)}}(L'^n,\hat L'^n)$ are uniformly integrable, since $q'<q$) that
\[
\omega^{(0)}\in\Omega^{(0)}_s \;\Rightarrow\; A^n_1(\omega^{(0)}) - A^n_2(\omega^{(0)})\to0, \qquad E\bigl(|A^n_j|^{q/q'}\bigr)\le K. \qquad (5.34)
\]
Next, we make the following observation: due to the $\mathcal F^{(0)}$-conditional independence of the $\chi_t$'s, a version of the conditional expectation in (5.26) is $E(A^n_1\mid\mathcal F_{s_n})$. Therefore, in view of (5.34) (which ensures the uniform integrability and the a.s. convergence to $0$ of the sequence $A^n_1 - A^n_2$), (5.26) is implied by
\[
E(A^n_2\mid\mathcal F_{s_n}) \to F(\sigma_s,\alpha_s) \quad\text{a.s.}, \qquad (5.35)
\]
where
\[
F(\sigma,\alpha) = \int f\bigl(\sigma x,\,\alpha y,\,2\alpha^2 z_0\bigr)\,R(dx,dy).
\]
4) For proving (5.35) we start again with an auxiliary result, namely
\[
L^n \stackrel{\mathcal L}{\longrightarrow} L. \qquad (5.36)
\]
For this, we see that $Z^n = (L^{n,l}_{t_i} : 1\le l\le d,\,1\le i\le r)$ is given by (5.28), except that
\[
\eta^n_j = \sqrt{k_n}\,\Delta^n_{i_n+j}W, \qquad
a^{n,l,i}_j = \begin{cases} (g_l)^n_{j-[k_n t_i]} & \text{if } 1+[k_n t_i]\le j\le k_n+[k_n t_i],\\ 0 & \text{otherwise.}\end{cases}
\]
Then the proof of (5.36), both for the finite-dimensional convergence and the C-tightness, is exactly the same as for (5.27) (note that the right side of (5.30) is now $\theta^2\int g_l(v-t_i)\,g_{l'}(v-t_{i'})\,dv$, which is the covariance matrix of $(L^l_{t_i} : 1\le l\le d,\,1\le i\le r)$). Further, an elementary calculation yields
\[
E\Bigl(\sup_{v\le t}|L(g_l)^n_v|^q\Bigr) \le K_t. \qquad (5.37)
\]
5) Now we introduce some functions on $\mathbb R^2$:
\[
F_n(\sigma,\alpha) = \int E\bigl(f_n(\sigma L^n,\,\alpha y,\,2\alpha^2 z_0)\bigr)\,R(dx,dy), \qquad
F'_n(\sigma,\alpha) = \int E\bigl(f_n(\sigma L,\,\alpha y,\,2\alpha^2 z_0)\bigr)\,R(dx,dy).
\]
Under $R$ the canonical process is locally in time bounded in each $L^r$. Then in view of (5.25) we deduce from (5.36) and (5.37), and exactly as for (5.34), that $F_n - F'_n\to0$ locally uniformly on $\mathbb R^2$. We also deduce from (5.25) that $F'_n(\sigma_n,\alpha_n) - F'_n(\sigma,\alpha)\to0$ whenever $(\sigma_n,\alpha_n)\to(\sigma,\alpha)$, and also that $F'_n\to F$ pointwise because $f_n\to f$ pointwise; hence we have $F_n(\sigma_n,\alpha_n)\to F(\sigma,\alpha)$.

At this point it remains to observe that, because $(W_{s_n+t}-W_{s_n})_{t\ge0}$ is independent of $\mathcal F_{s_n}$, we have $E(A^n_2\mid\mathcal F_{s_n}) = F_n(\sigma_{s_n},\alpha_{s_n})$. Since $(\sigma_{s_n},\alpha_{s_n})\to(\sigma_s,\alpha_s)$ a.s., we readily deduce (5.26), and we are done. $\Box$
Remark 5.2 In the previous lemma, suppose that all $f_n$ (hence $f$ as well) only depend on $(x,y)$ and not on $z$; that is, the processes $\hat L'^n$ do not enter the picture. Then it is easily seen from the previous proof that we do not need $q>4$, but only $q>2$. $\Box$
5.4 Asymptotically negligible arrays.

An array $(\delta^n_i)$ of nonnegative variables is called AN (for "asymptotically negligible") if
\[
\sqrt{\Delta_n}\,\sup_{0\le j\le k_n} E\Bigl(\sum_{i=0}^{[t/u_n]}\delta^n_{ik_n+j}\Bigr) \to 0, \qquad |\delta^n_i|\le K \qquad (5.38)
\]
for all $t>0$. With any process $\psi$ and any integer $m$ we associate the variables
\[
\Gamma(\psi;m)^n_i = \sup_{t\in[i\Delta_n,\,i\Delta_n+(m+1)u_n]}|\psi_t - \psi_{i\Delta_n}|, \qquad
\Gamma'(\psi;m)^n_i = E\bigl(\Gamma(\psi;m)^n_i\mid\mathcal F^n_i\bigr). \qquad (5.39)
\]

Lemma 5.3 a) If $(\delta^n_i)$ is an AN array, we have
\[
\Delta_n\,E\Bigl(\sum_{i=1}^{[t/\Delta_n]}\delta^n_i\Bigr) \to 0 \qquad (5.40)
\]
for all $t>0$, and the array $((\delta^n_i)^q)$ is also AN for each $q>0$.

b) If $\psi$ is a càdlàg bounded process, then for all $m\ge1$ the two arrays $(\Gamma(\psi;m)^n_i)$ and $(\Gamma'(\psi;m)^n_i)$ are AN.
Proof. a) The left side of (5.40) is smaller than a constant times the left side of (5.38), hence the first claim. The second claim follows from the Hölder inequality if $q<1$, and from $\sum_{i\in I}(\delta^n_i)^q \le K\sum_{i\in I}\delta^n_i$ if $q>1$ (recall that $|\delta^n_i|\le K$).

b) Let $\delta^n_i = \Gamma(\psi;m)^n_i$. If $\varepsilon>0$, denote by $N(\varepsilon)_t$ the number of jumps of $\psi$ with size bigger than $\varepsilon$ on the interval $[0,t]$, and by $v(\varepsilon,t,\zeta)$ the supremum of $|\psi_s-\psi_r|$ over all pairs $(r,s)$ with $r\le s\le r+\zeta$ and $s\le t$ and such that $N(\varepsilon)_s - N(\varepsilon)_r = 0$. Since $\psi$ is bounded,
\[
u_n\,\sup_{0\le j\le k_n} E\Bigl(\sum_{i=0}^{[t/u_n]}\delta^n_{ik_n+j}\Bigr) \le E\Bigl(t\,v\bigl(\varepsilon,t+1,(m+1)u_n\bigr) + (Kt)\wedge\bigl(K\,u_n\,N(\varepsilon)_{t+1}\bigr)\Bigr)
\]
as soon as $(m+2)u_n\le1$. Since $\limsup_{n\to\infty} v(\varepsilon,t+1,(m+1)u_n)\le\varepsilon$, Fatou's lemma implies that the limsup of the left side above is smaller than $Kt\varepsilon$, so we have (5.38) because $\varepsilon$ is arbitrarily small. Since $E(\Gamma'(\psi;m)^n_i) = E(\Gamma(\psi;m)^n_i)$, the second claim follows. $\Box$
5.5 Some estimates.

In this subsection we provide a (somewhat tedious) list of estimates, under the following assumption for some $q>2$:
\[
\text{we have (2.14) and (SNq), and } b \text{ and } \sigma \text{ are bounded, and } \sigma \text{ and } \alpha \text{ are càdlàg.} \qquad (5.41)
\]
We first introduce some notation, where $i$ and $j$ are integers, $Y$ is an arbitrary process, $\lambda_{p,l}$ is given by (3.7), $i+j\ge1$ in the first line below, and $p$ is an even integer in (5.43):
\[
\begin{aligned}
&\beta^n_{i,j} = \sigma_{i\Delta_n}\,\Delta^n_{i+j}W + \Delta^n_{i+j}\chi, \qquad
\gamma^n_{i,j} = \Delta^n_{i+j}Z - \beta^n_{i,j} = \Delta^n_{i+j}X - \sigma_{i\Delta_n}\,\Delta^n_{i+j}W,\\
&\bar\beta(g)^n_{i,j} = \sum_{l=1}^{k_n-1} g^n_l\,\beta^n_{i,j+l}, \qquad
\bar\gamma(g)^n_{i,j} = \sum_{l=1}^{k_n-1} g^n_l\,\gamma^n_{i,j+l}, \qquad
\hat\beta(g)^n_{i,j} = \sum_{l=1}^{k_n}\bigl(g'^n_l\,\beta^n_{i,j+l}\bigr)^2,
\end{aligned} \qquad (5.42)
\]
\[
\Psi(Y,g,p)^n_i = \sum_{l=0}^{p/2}\lambda_{p,l}\,(\bar Y(g)^n_i)^{p-2l}\,(\hat Y(g)^n_i)^l, \qquad
\Psi(g,p)^n_{i,j} = \sum_{l=0}^{p/2}\lambda_{p,l}\,(\bar\beta(g)^n_{i,j})^{p-2l}\,(\hat\beta(g)^n_{i,j})^l. \qquad (5.43)
\]
Note that, by (3.9),
\[
V(Y,g,p)^n_t = \sum_{i=0}^{[t/\Delta_n]-k_n}\Psi(Y,g,p)^n_i. \qquad (5.44)
\]
In the forthcoming inequalities, we have $0\le j\le m k_n$, where $m$ is a fixed integer. First, if we use (5.3) and the boundedness of $\sigma$ and $g$, and also (5.2), we obtain for $u>0$:
\[
E\bigl(|\bar X(g)^n_i|^u + |\bar W(g)^n_i|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{u/4}, \qquad
u\le q \;\Rightarrow\; E\bigl(|\bar Z(g)^n_i|^u + |\bar\chi(g)^n_i|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{u/4}. \qquad (5.45)
\]
Next,
\[
\gamma^n_{i,j} = \int_{(i+j-1)\Delta_n}^{(i+j)\Delta_n}\bigl(b_s\,ds + (\sigma_s-\sigma_{i\Delta_n})\,dW_s\bigr), \qquad
\bar\gamma(g)^n_{i,j} = \int_{(i+j)\Delta_n}^{(i+j+k_n)\Delta_n} g_n\bigl(s-(i+j)\Delta_n\bigr)\bigl(b_s\,ds + (\sigma_s-\sigma_{i\Delta_n})\,dW_s\bigr). \qquad (5.46)
\]
Hence we obtain for $u\ge1$, and recalling that $\Gamma(\sigma;m)^n_i\le K$:
\[
E\bigl(|\gamma^n_{i,j}|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{u/2}\bigl(\Delta_n^{u/2} + \Gamma'(\sigma;m)^n_i\bigr), \qquad
E\bigl(|\bar\gamma(g)^n_{i,j}|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{u/4}\bigl(\Delta_n^{u/4} + \Gamma'(\sigma;m)^n_i\bigr). \qquad (5.47)
\]
If $u$ is an odd integer, (5.45), (5.47) and an expansion of $\bigl(\sigma_{i\Delta_n}\,\bar W(g)^n_i + \bar\gamma(g)^n_{i,0}\bigr)^u$ yield
\[
E\bigl((\bar W(g)^n_i)^u \mid \mathcal F^n_i\bigr) = 0, \qquad
\bigl|E\bigl((\bar X(g)^n_i)^u \mid \mathcal F^n_i\bigr)\bigr| \le K_u\,\Delta_n^{u/4}\bigl(\Delta_n^{1/4} + \Gamma'(\sigma;m)^n_i\bigr). \qquad (5.48)
\]
Next, using $|g'^n_i|\le K/k_n$ and (5.3) and the first part of (5.47), plus the Hölder inequality and the definition of $\hat Y(g)^n_i$, plus the obvious fact that $E(|\beta^n_{i,j}|^u\mid\mathcal F^n_i)\le K_u$ if $u\le q$, and after some calculations, we get for $u\ge1$:
\[
\begin{aligned}
&E\bigl(|\hat X(g)^n_i|^u + |\hat W(g)^n_i|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{3u/2},\\
&u\le q/2 \;\Rightarrow\; E\bigl(|\hat Z(g)^n_{i+j}|^u + |\hat\beta(g)^n_{i,j}|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{u/2},\\
&u\le q \;\Rightarrow\; E\bigl(|\hat Z(g)^n_{i+j} - \hat\beta(g)^n_{i,j}|^u \mid \mathcal F^n_i\bigr) \le K_u\,\Delta_n^{u}.
\end{aligned} \qquad (5.49)
\]
Then, if we combine (5.45), (5.47) and (5.49), and use again the Hölder inequality, we obtain for all reals $l,u\ge1$ and $r\ge0$:
\[
\begin{aligned}
(l+2r)u\le q \;\Rightarrow\;& E\bigl(\bigl|(\bar Z(g)^n_{i+j})^l\,(\hat Z(g)^n_{i+j})^r - (\bar\beta(g)^n_{i,j})^l\,(\hat\beta(g)^n_{i,j})^r\bigr|^u \mid \mathcal F^n_i\bigr)\\
&\qquad \le K_{u,l,r}\,\Delta_n^{ul/4+ur/2}\bigl(\Delta_n^{u/4} + (\Gamma'(\sigma;m)^n_i)^{1-u(l+2r-1)/q}\bigr),\\
2ru\le q \;\Rightarrow\;& E\bigl(\bigl|(\hat Z(g)^n_{i+j})^r - (\hat\beta(g)^n_{i,j})^r\bigr|^u \mid \mathcal F^n_i\bigr) \le K_{u,r}\,\Delta_n^{ru/2+u/2}.
\end{aligned} \qquad (5.50)
\]
Finally, by (5.43), this readily gives, for $p\ge2$ an even integer and $u\ge1$ a real such that $pu\le q$:
\[
\begin{aligned}
&E\bigl(|\Psi(Z,g,p)^n_{i+j}|^u + |\Psi(g,p)^n_{i,j}|^u \mid \mathcal F^n_i\bigr) \le K_{u,p}\,\Delta_n^{pu/4},\\
&E\bigl(|\Psi(Z,g,p)^n_{i+j} - \Psi(g,p)^n_{i,j}|^u \mid \mathcal F^n_i\bigr) \le K_{u,p}\,\Delta_n^{pu/4}\bigl(\Delta_n^{u/4} + (\Gamma'(\sigma;m)^n_i)^{1-u(p-1)/q}\bigr).
\end{aligned} \qquad (5.51)
\]
5.6 Proof of Theorem 3.3.

By localization we can and will assume (5.41). We set
\[
\eta^n_i = \Delta_n^{-p/4}\,|\bar Z(g)^n_i|^p, \qquad
\eta'^n_i = \Delta_n^{-p/4}\,|\bar\beta(g)^n_{i,0}|^p, \qquad
\rho_t = m_p\bigl(\bar g(2)\,\sigma_t^2 + \bar{g'}(2)\,\alpha_t^2\bigr)^{p/2}.
\]
We deduce from (5.50) with $r=0$ and Lemma 5.3 that $\Delta_n\sum_{i=0}^{[t/\Delta_n]-k_n}|\eta^n_i - \eta'^n_i| \stackrel{\text{u.c.p.}}{\longrightarrow} 0$. Then it remains to prove
\[
\Delta_n\sum_{i=0}^{[t/\Delta_n]-k_n}\;\cdots
\]