Transition Probabilities for Symmetric Jump Processes

Richard F. Bass^1 and David A. Levin^2

Department of Mathematics
The University of Connecticut
Storrs, CT 06269
Abstract. We consider symmetric Markov chains on the integer lattice in $d$ dimensions, where $\alpha \in (0,2)$ and the conductance between $x$ and $y$ is comparable to $|x-y|^{-(d+\alpha)}$. We establish upper and lower bounds for the transition probabilities that are sharp up to constants.

Keywords. Harnack inequality, jump processes, stable processes, Markov chains, transition probabilities

AMS Subject classification. Primary 60J05; Secondary 60J35
^1 Research partially supported by NSF grant DMS-9988496.
^2 Current mailing address: P.O. Box 368, Annapolis Junction, Maryland 20701-0368. E-mail: levin@member.ams.org.
1. Introduction.

There is a huge literature on the subject of transition probabilities of random walks on graphs. For a recent and comprehensive account, see the book [Wo]. The vast majority of the work, however, has been for nearest neighbor Markov chains. The purpose of this paper is to obtain good transition probability estimates for Markov chains on the integer lattice $\mathbb{Z}^d$ in $d$ dimensions in the case when the probability of a jump from a point $x$ to a point $y$ is comparable to that of a symmetric stable process of index $\alpha \in (0,2)$.
To be more precise, for $x, y \in \mathbb{Z}^d$ with $x \neq y$, let $C_{xy}$ be positive finite numbers such that $\sum_z C_{xz} < \infty$ for all $x$. Set $C_{xx} = 0$ for all $x$. We call $C_{xy}$ the conductance between $x$ and $y$. Define a symmetric Markov chain by

$$P(X_1 = y \mid X_0 = x) = \frac{C_{xy}}{\sum_z C_{xz}}, \qquad x, y \in \mathbb{Z}^d. \tag{1.1}$$

In this paper we will assume that $\alpha \in (0,2)$ and there exists $\kappa > 1$ such that for all $x \neq y$

$$\frac{\kappa^{-1}}{|x-y|^{d+\alpha}} \leq C_{xy} \leq \frac{\kappa}{|x-y|^{d+\alpha}}. \tag{1.2}$$

Write $p(n;x,y)$ for $P^x(X_n = y)$. The main result of this paper is

Theorem 1.1. There exist positive finite constants $c_1$ and $c_2$ such that

$$p(n;x,y) \leq c_1 \Big( n^{-d/\alpha} \wedge \frac{n}{|x-y|^{d+\alpha}} \Big), \tag{1.3}$$

and for $n \geq 2$

$$p(n;x,y) \geq c_2 \Big( n^{-d/\alpha} \wedge \frac{n}{|x-y|^{d+\alpha}} \Big). \tag{1.4}$$

If $n = 1$ and $x \neq y$, (1.4) also holds.
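To make (1.1) and (1.2) concrete, here is a minimal computational sketch (our illustration, not part of the paper): it takes the simplest admissible conductances $C_{xy} = |x-y|^{-(d+\alpha)}$ on a finite box, an assumed truncation, and forms the one-step transition matrix.

```python
# Illustrative sketch only: the simplest conductances allowed by (1.2),
# C_xy = |x - y|^{-(d+alpha)}, on a finite box of Z^d (a truncation assumed
# for computability), with one-step probabilities p(1; x, y) = C_xy / C_x.
import itertools
import numpy as np

d, alpha, L = 2, 1.0, 8             # dimension, index in (0, 2), box radius
sites = list(itertools.product(range(-L, L + 1), repeat=d))
n = len(sites)

C = np.zeros((n, n))
for i, x in enumerate(sites):
    for j, y in enumerate(sites):
        if i != j:
            C[i, j] = sum((a - b) ** 2 for a, b in zip(x, y)) ** (-(d + alpha) / 2)

Cx = C.sum(axis=1)                  # C_x = sum_z C_xz
P = C / Cx[:, None]                 # p(1; x, y) as in (1.1); p(1; x, x) = 0

# C_x p(1; x, y) = C_xy is symmetric, so C_x is a reversible measure:
assert np.allclose(Cx[:, None] * P, (Cx[:, None] * P).T)
```

Any other choice of $C_{xy}$ within the bands (1.2) defines a chain of the same class.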
The Markov chain $X_n$ is discrete in time and in space. Closely related to $X_n$ is the continuous time process $Y_t$, which is the process that waits at a point in $\mathbb{Z}^d$ a length of time that is exponential with parameter 1, jumps according to the jump probabilities of $X$, then waits at the new point a length of time that is exponential with parameter 1 and independent of what has gone before, and so on. A continuous-time continuous state space process related to both $X_n$ and $Y_t$ is the process $U_t$ on $\mathbb{R}^d$ whose Dirichlet form is

$$\mathcal{E}(f,f) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} (f(y) - f(x))^2\, C(x,y)\, dx\, dy,$$

where $C(x,y)$ is a measurable function with

$$\frac{\kappa^{-1}}{|x-y|^{d+\alpha}} \leq C(x,y) \leq \frac{\kappa}{|x-y|^{d+\alpha}}.$$
The process $U_t$ stands in the same relationship to $X_n$ as the diffusion process corresponding to a uniformly elliptic operator in divergence form does to a nearest neighbor Markov chain.

The methods of this paper allow one to obtain bounds for the transition probabilities of $Y_t$ and the transition densities of $U_t$. In fact, these are considerably easier than the bounds for $X_n$, so we concentrate in this paper only on the estimates for $X_n$. Some results for $Y_t$ are needed, however, along the way.
Our methods are quite different from those used for diffusions or nearest neighbor chains. Recall that for a nearest neighbor Markov chain on $\mathbb{Z}^d$, the transition probabilities are bounded above and below by expressions of the form

$$c_1 n^{-d/2} \exp(-c_2 |x-y|^2/n)$$

as long as $|x-y|$ is not larger than $n$; see [SZ]. One way of obtaining these results is to use a method of Davies as developed in [CKS]. The lack of a suitably fast decay in the conductances in (1.2) makes the powerful theorem of [CKS] only partially successful. We use that theorem to handle the small jumps and use a perturbation argument to handle the large jumps. Another difficulty that shows up is that, unlike the diffusion case, $P^x(|X_n - y| < 1)$ is not comparable to $P^x(\max_{k \leq n} |X_k - x| > |x-y|)$ when $|x-y|$ is relatively large. We circumvent this by proving a parabolic Harnack inequality and using another perturbation argument.
Previous work related to this paper includes [Kl] and [Km]. In both these works partial results were obtained for estimates for the process $U_t$ mentioned above. [SZ] studies nearest neighbor chains on $\mathbb{Z}^d$. In [HS-C] upper bounds of Gaussian type were obtained for Markov chains whose jumps had bounded range or where the conductances decayed at a Gaussian rate.

After some preliminaries, we obtain in Section 2 a tightness (or large deviations) estimate for our Markov chain $X_n$. This is followed in Section 3 by a parabolic Harnack inequality. In Section 4 we obtain the upper bound in Theorem 1.1, and in Section 5 we prove the lower bound.
2. Tightness.

We denote the ball of radius $r$ centered at $x$ by $B(x,r)$; throughout we use the Euclidean metric. $T_A$ will denote the first hit of a set $A$ by whichever process is under consideration, while $\tau_A$ will denote the first exit. The letter $c$ with subscripts will denote positive finite constants whose exact value is unimportant and may change from occurrence to occurrence.

We assume we are given reals $C_{xy}$ satisfying (1.2) and we define the transition probabilities for the Markov chain $X_n$ by

$$p(1;x,y) = P^x(X_1 = y) = \frac{C_{xy}}{C_x}, \qquad x \neq y, \tag{2.1}$$

where $C_x = \sum_z C_{xz}$, and $p(1;x,x) = 0$ for every $x$. The process $X_n$ is symmetric (or reversible): $C_x$ is an invariant measure for which the kernel $C_x p(1;x,y)$ is symmetric in $x, y$. Note that $c_1^{-1} \leq C_x/C_y \leq c_1$ for some positive and finite constant $c_1$.

Our main goal in this section is to get a tightness, or large deviations, estimate for $X_n$. See Theorem 2.8 for the exact statement.
We will need $Y_t$, the continuous time version of $X_n$, which we construct as follows: Let $U_1, U_2, \ldots$ be an i.i.d. sequence of exponential random variables with parameter 1 that is independent of the chain $X_n$. Let $T_0 = 0$ and $T_k = \sum_{i=1}^k U_i$. Define $Y_t = X_n$ if $T_n \leq t < T_{n+1}$. If we define $A(x,y) = |x-y|^{d+\alpha}\, C_{xy}/C_x$, then by (1.2), $\kappa^{-1} \leq A(x,y) \leq \kappa$, and the infinitesimal generator of $Y_t$ is

$$\sum_{y \neq x} [f(y) - f(x)] \frac{A(x,y)}{|x-y|^{d+\alpha}}.$$
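The following sketch mirrors this construction in simulation (the 1-d translation-invariant jump law and the truncation at range $L$ are our assumptions, chosen only so the jump distribution is samplable):

```python
# Sketch of Y_t = X_n for T_n <= t < T_{n+1}: an illustrative 1-d chain with
# p(1; x, x+j) proportional to |j|^{-(1+alpha)} (truncated at |j| <= L, an
# assumption for sampling), time-changed by i.i.d. Exp(1) holding times U_i.
import numpy as np

rng = np.random.default_rng(0)
alpha, L = 1.0, 1000

offsets = np.concatenate([np.arange(-L, 0), np.arange(1, L + 1)])
weights = np.abs(offsets) ** (-(1 + alpha))
prob = weights / weights.sum()

def sample_Y(t):
    """Sample Y_t by running X_n until the exponential clock passes t."""
    pos, clock = 0, 0.0
    while True:
        clock += rng.exponential(1.0)       # T_{n+1} = T_n + U_{n+1}
        if clock > t:                       # then Y_t = X_n on [T_n, T_{n+1})
            return pos
        pos += rng.choice(offsets, p=prob)  # next step of the discrete chain

print([sample_Y(5.0) for _ in range(10)])
```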
We introduce now several processes related to $Y_t$, needed in what follows. The rescaled process $V_t = D^{-1} Y_{D^\alpha t}$ takes values in $S = D^{-1}\mathbb{Z}^d$ and has infinitesimal generator

$$\sum_{y \in S,\, y \neq x} [f(y) - f(x)] \frac{A_D(x,y)}{D^d |x-y|^{d+\alpha}},$$

where $A_D(x,y) = A(Dx, Dy)$ for $x, y \in S$. If the large jumps of $V_t$ are removed, we obtain the process $W_t$ with infinitesimal generator

$$\mathcal{A}f(x) = \sum_{\substack{y \in S,\, y \neq x \\ |x-y| \leq 1}} [f(y) - f(x)] \frac{A_D(x,y)}{D^d |x-y|^{d+\alpha}}.$$

To analyze $W_t$, we compare it to a Lévy process with a comparable transition kernel: Let $Z_t$ be the Lévy process which has no drift and no Gaussian component and whose Lévy measure is

$$n_Z(dh) = \sum_{\substack{y \in S,\, y \neq 0 \\ |y| \leq 1}} \frac{1}{D^d |y|^{d+\alpha}}\, \delta_y(dh).$$

Write $q_Z(t;x,y)$ for the transition density of $Z_t$.
Proposition 2.1. There exist $c_1, c_2$ such that the transition density $q_Z(t;x,y)$ satisfies

$$q_Z(t;x,y) \leq \begin{cases} c_1 D^{-d} t^{-d/\alpha}, & t \leq 1, \\ c_2 D^{-d} t^{-d/2}, & t > 1. \end{cases}$$

Proof. The characteristic function $\varphi_t(u)$ of $Z_t$ is periodic with period $2\pi D$ since $Z_t$ is supported on $S = D^{-1}\mathbb{Z}^d$. By the Lévy–Khintchine formula and the symmetry of $n_Z$,

$$\varphi_t(u) = \exp\Big( -2t \sum_{x \in S,\, |x| \leq 1} [1 - \cos(u \cdot x)] \frac{1}{D^d |x|^{d+\alpha}} \Big). \tag{2.2}$$
Let

$$Q(a) = \{(u_1, \ldots, u_d) : -\pi a < u_i \leq \pi a,\ i = 1, \ldots, d\}. \tag{2.3}$$

We estimate $\varphi_t$ as follows.
Case 1: $|u| \leq \tfrac12$.

Since $|x| \leq 1$, we have $1 - \cos(u \cdot x) \geq c_3 (u \cdot x)^2 = c_3 |u|^2 |x|^2 h_u(x)$, where $h_u(x) = (u \cdot x)^2/(|u|^2 |x|^2)$. Thus

$$\sum_{|x| \leq 1} [1 - \cos(u \cdot x)] \frac{1}{D^d |x|^{d+\alpha}} \geq c_3 |u|^2 \sum_{|x| \leq 1} h_u(x)\, |x|^{2-d-\alpha} D^{-d} \geq c_4 D^{\alpha-2} |u|^2 \int_{B(0,D)} |x|^{2-d-\alpha} h_u(x)\, dx = c_4 D^{\alpha-2} |u|^2 \int_0^D r^{1-\alpha} \Big[ \int_{S(r)} h_u(s)\, \sigma_r(ds) \Big] dr,$$

where $S(r)$ is the $(d-1)$-dimensional sphere of radius $r$ centered at 0, and $\sigma_r(ds)$ is normalized surface measure on $S(r)$. Since $h_u(x)$ depends on $x$ only through $x/|x|$, the inner integral does not depend on $r$. Furthermore, by rotational invariance, it does not depend on $u$. Thus

$$\sum_{|x| \leq 1} [1 - \cos(u \cdot x)] \frac{1}{D^d |x|^{d+\alpha}} \geq c_5 |u|^2.$$
Case 2: $\tfrac12 \leq |u| \leq \pi D/32$.

Let $A = \{x \in S : \tfrac{1}{4|u|} \leq |x| \leq \tfrac{4}{|u|} \wedge 1,\ 1 \leq u \cdot x \leq \tfrac{17}{16}\}$. If $x \in A$, then $1 - \cos(u \cdot x) \geq c_6$, the minimum value of $|x|^{-d-\alpha}$ is $c_7 |u|^{d+\alpha}$, and a bit of geometry shows that there are at least $c_8 |u|^{-d} D^d$ points in $A$. (Notice that $|u| \leq \pi D/32$ is required to prevent $A$ from being empty.) We then have

$$\sum_{|x| \leq 1} [1 - \cos(u \cdot x)] \frac{1}{D^d |x|^{d+\alpha}} \geq \sum_{x \in A} [1 - \cos(u \cdot x)] \frac{1}{D^d |x|^{d+\alpha}} \geq c_6\, c_7 |u|^{d+\alpha}\, c_8 |u|^{-d} = c_9 |u|^\alpha.$$
Case 3: $\pi D/32 < |u|$, $u \in Q(D)$.

At least one component of $u$ must be larger than $c_{10} D$ where $c_{10} = \pi/(32\sqrt{d})$; without loss of generality we may assume it is the first component. Let $y_0 = (D^{-1}, 0, \ldots, 0)$. Since $|u_1| \leq \pi D$ and $u \cdot y_0 \geq c_{10}$, then $1 - \cos(u \cdot y_0) \geq c_{11}$. Hence

$$\sum_{|x| \leq 1} [1 - \cos(u \cdot x)] \frac{1}{D^d |x|^{d+\alpha}} \geq c_{11}\, D^{-d} |y_0|^{-d-\alpha} \geq c_{12} D^\alpha \geq c_{13} |u|^\alpha,$$

since $u \in Q(D)$.
For u 2 Q(D),we then have that'
t
(u) is real and
0 <'
t
(u)  e
−c
14
tjuj
2
+e
−c
15
tjuj

:
Since Z
t
is supported on S,
q
Z
(t;x;y) =
1
jQ(D)j
Z
Q(D)
e
iu(x−y)
'
t
(u)du

1
jQ(D)j
Z
Q(D)
'
t
(u)du

c
16
D
d
Z
R
d
(e
−c
14
tjuj
2
+e
−c
15
tjuj

)du;
where jQ(D)j denotes the Lebesgue measure of Q(D).Our result follows from applying
a change of variables to each of the integrals on the right hand side.
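As a numerical illustration of the final step (our experiment, in $d = 1$ with arbitrary parameters), one can evaluate $\varphi_t$ from (2.2) on the cell $Q(D)$ and invert; the normalized value $q_Z(t;0,0)\, D\, t^{1/\alpha}$ should then remain bounded for $D^{-\alpha} \lesssim t \leq 1$:

```python
# Numerical sketch (d = 1, parameters arbitrary) of the inversion step:
# q_Z(t; 0, 0) = |Q(D)|^{-1} \int_{Q(D)} phi_t(u) du, with phi_t from (2.2).
import numpy as np

alpha, D = 1.0, 20
x = np.arange(1, D + 1) / D                    # positive points of S, |x| <= 1
u = np.linspace(-np.pi * D, np.pi * D, 20001)  # the cell Q(D)

def phi(t):
    # the +-x pairs of the symmetric Levy measure give the factor 2 in (2.2)
    s = ((1 - np.cos(np.outer(u, x))) / (D * x ** (1 + alpha))).sum(axis=1)
    return np.exp(-2 * t * s)

for t in [0.05, 0.1, 0.5, 1.0]:
    f = phi(t)
    qZ = ((f[:-1] + f[1:]) / 2).sum() * (u[1] - u[0]) / (2 * np.pi * D)
    print(t, qZ, qZ * D * t ** (1 / alpha))    # last column should stay O(1)
```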
We now obtain bounds for the transition probabilities of $W_t$:

Proposition 2.2. If $q_W(t;x,y)$ is the transition density for $W$, then

$$q_W(t;x,y) \leq \begin{cases} c_1 D^{-d} t^{-d/\alpha}, & t \leq 1, \\ c_2 D^{-d} t^{-d/2}, & t > 1. \end{cases}$$

The proof of Proposition 2.2 is almost identical with that of Theorem 1.2 in [BBG], and is omitted here.
To obtain off-diagonal bounds for $q_W$ we again proceed as in [BBG]. Let

$$\Gamma(f,f)(x) = \sum_{\substack{y \in S \\ 0 < |x-y| \leq 1}} (f(x) - f(y))^2 \frac{A_D(x,y)}{D^d |x-y|^{d+\alpha}},$$

$$\Lambda(\psi)^2 = \| e^{-2\psi}\, \Gamma(e^\psi, e^\psi) \|_\infty \vee \| e^{2\psi}\, \Gamma(e^{-\psi}, e^{-\psi}) \|_\infty,$$

$$E(t;x,y) = \sup\{ |\psi(x) - \psi(y)| - t \Lambda(\psi)^2 : \Lambda(\psi) < \infty \}.$$

Proposition 2.3. For $t \leq 1$ and $x, y \in S$,

$$q_W(t;x,y) \leq c_1 D^{-d} t^{-d/\alpha}\, e^{-E(2t;x,y)}.$$
Proof. Allowing for slight differences in notation, the proof is very similar to the proof of Lemma 1.4 in [BBG]. The principal difference is the following. Let $K$ be an integer larger than $\tfrac12 + \tfrac1\alpha$. Let $M$ be a sufficiently regular manifold with volume growth given by $V(x,r) \asymp r^{2Kd}$, $r > 1$, and $V(x,r) \asymp r^d$, $r < 1$, where $V(x,r)$ is the volume of the ball in $M$ of radius $r$ centered at $x$. We can then find a symmetric Markov process $\widetilde V_t$ on $M$ independent of $W$ whose transition density with respect to a measure $m$ on $M$ satisfies

$$q_{\widetilde V}(t;x,y) \leq c_2 t^{-d/2}, \quad 0 < t \leq 1, \qquad q_{\widetilde V}(t;x,y) \leq c_2 t^{-dK}, \quad 1 < t < \infty,$$
$$q_{\widetilde V}(t;x,x) \geq c_3 t^{-d/2}, \quad 0 < t \leq 1, \qquad q_{\widetilde V}(t;x,x) \geq c_3 t^{-dK}, \quad 1 < t < \infty.$$

Then $q_W(t;x,y)\, q_{\widetilde V}(t;x',y') \leq c_4 D^{-d} t^{-d(\frac12+\frac1\alpha)}$ for all $t$, while $q_W(t;x,x)\, q_{\widetilde V}(t;0,0) \geq c_5 D^{-d} t^{-d(\frac12+\frac1\alpha)}$ for $t \leq 1$. With these changes, the proof is now as in [BBG].
The next step is to estimate $E(t;x,y)$ and use this in Proposition 2.3.

Proposition 2.4. Suppose $t \leq 1$. Then

$$q_W(t;x,y) \leq c_1 D^{-d} t^{-d/\alpha}\, e^{-|x-y|}.$$

In particular, for $\tfrac14 \leq t \leq 1$,

$$q_W(t;x,y) \leq c_1 D^{-d} e^{-|x-y|}.$$
Proof. Let $\psi(\xi) = B \cdot \xi$, where $B = (y-x)/|y-x|$. Note that if $|\eta - \xi| \leq 1$, then $(e^{\psi(\eta)-\psi(\xi)} - 1)^2 = (e^{B \cdot (\eta-\xi)} - 1)^2$ is bounded by $c_2 |B|^2 |\eta-\xi|^2 = c_2 |\eta-\xi|^2$. Hence

$$e^{-2\psi(\xi)}\, \Gamma(e^\psi, e^\psi)(\xi) = \sum_{\substack{\eta \in S \\ 0 < |\eta-\xi| \leq 1}} (e^{\psi(\eta)-\psi(\xi)} - 1)^2 \frac{A_D(\xi,\eta)}{D^d |\eta-\xi|^{d+\alpha}}$$

is bounded by

$$c_3 \sum_{\substack{\eta \in S \\ 0 < |\eta-\xi| \leq 1}} D^{-d} |\eta-\xi|^{2-d-\alpha}.$$

Since the sum is over $\eta \in S$ that are within a distance 1 from $\xi$, this in turn is bounded by $c_4$. We have the same bound when $\psi$ is replaced by $-\psi$, so $\Lambda(\psi)^2 \leq c_4^2$. Moreover the bound does not depend on $x$ or $y$. On the other hand,

$$\psi(y) - \psi(x) = (y-x) \cdot (y-x)/|y-x| = |y-x|.$$

Using this in Proposition 2.3 and recalling $t \leq 1$, we have our result.

From the above estimate we can obtain a tightness estimate for $W_t$.
Proposition 2.5. There exists $c_1$ such that if $t \leq 1$ and $\lambda > 0$, then

$$P^x\big( \sup_{s \leq t} |W_s - x| > \lambda \big) \leq c_1 e^{-\lambda/8}.$$
Proof. From Proposition 2.4 and summing, if $t \in [\tfrac14, 1]$ and $\lambda > 0$,

$$P^x(|W_t - x| \geq \lambda) \leq \sum_{\substack{y \in S \\ |y-x| \geq \lambda}} c_2 t^{-d/\alpha} D^{-d} e^{-|y-x|} \leq c_3 e^{-\lambda/2}. \tag{2.4}$$

Let $S_\lambda = \inf\{t : |W_t - W_0| \geq \lambda\}$. Then using (2.4),

$$P^x\big( \sup_{s \leq 1/2} |W_s - x| \geq \lambda \big) = P^x(S_\lambda \leq 1/2) \leq P^x(|W_1 - x| > \lambda/2) + P^x(S_\lambda \leq 1/2,\ |W_1 - x| \leq \lambda/2) \leq c_3 e^{-\lambda/4} + \int_0^{1/2} P^x(|W_1 - W_s| > \lambda/2,\ S_\lambda \in ds).$$

By the Markov property, the last term on the right is bounded by

$$\int_0^{1/2} E^x\big[ P^{W_s}(|W_{1-s} - W_0| > \lambda/2);\ S_\lambda \in ds \big] \leq c_3 e^{-\lambda/4} \int_0^{1/2} P^x(S_\lambda \in ds) \leq c_3 e^{-\lambda/4},$$

using (2.4) again.

Adding gives

$$P^x\big( \sup_{s \leq t} |W_s - x| > \lambda \big) \leq c_4 e^{-\lambda/4} \tag{2.5}$$

as long as $t \leq \tfrac12$. For $t \in (\tfrac12, 1]$, note that if $\sup_{s \leq t} |W_s - x| > \lambda$, then $\sup_{s \leq 1/2} |W_s - x| > \lambda/2$ or $\sup_{1/2 < s \leq 1} |W_s - W_{1/2}| > \lambda/2$. The probability of the first event is bounded using (2.5), while the probability of the second event is bounded using the Markov property at time $\tfrac12$ and (2.5).
Dene B to be the innitesimal generator of V
t
without small jumps:
Bf(x) =
X
y2S
jy−xj>1
[f(y) −f(x)]
A
D
(x;y)
D
d
jx −yj
d+
:
Our next goal is to obtain tightness estimates for the process V
t
= D
−1
Y
D

t
,whose
generator is A+B.
8
Proposition 2.6. Let $V_t$ be the process whose generator is $\mathcal{A} + \mathcal{B}$. There exist $c_1, c_2$ and $\delta_0$ such that if $\delta \leq \delta_0$ and $\lambda \geq 1$, then

$$P^x\big( \sup_{s \leq \delta} |V_s - x| > \lambda \big) \leq c_1 e^{-c_2 \lambda} + c_1 \delta.$$
Proof. Since summing $D^{-d} |y-x|^{-d-\alpha}$ over $y \in S$ with $|y-x| > 1$ is bounded by a constant,

$$|\mathcal{B}f(x)| \leq c_3 \|f\|_\infty, \tag{2.6}$$

and hence $\mathcal{B}$ is a bounded operator on $L^\infty$.

Define $Q^W_t f(x) = \sum_y q_W(t;x,y) f(y)$. Let $Q^V_t$ be the corresponding transition semigroup for $V_t$. Let $S_0(t) = Q^W_t$ and for $n \geq 1$, let $S_n(t) = \int_0^t S_{n-1}(s)\, \mathcal{B}\, Q^W_{t-s}\, ds$. Then

$$Q^V_t = \sum_{n=0}^\infty S_n(t);$$

see [Le], Theorem 2.2, for example. Obviously $Q^W_t$ is a bounded operator on $L^\infty$ of norm 1, so for $t < \delta_0 = 1/(2c_3)$ the sum converges by (2.6). In particular, for $t \leq \delta \leq \delta_0$, we have

$$|Q^W_t f(x) - Q^V_t f(x)| \leq c_4\, \delta\, \|f\|_\infty.$$

Fix $x$ and apply this to $f(y) = 1_{B(x,\lambda)^c}(y)$. We obtain

$$P^x(|V_t - x| > \lambda) = Q^V_t f(x) \leq Q^W_t f(x) + c_4 \delta = P^x(|W_t - x| > \lambda) + c_4 \delta \leq c_5 e^{-\lambda/8} + c_4 \delta.$$

We now obtain our result by applying the method of proof of Proposition 2.5.
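The series $\sum_n S_n(t)$ is easy to check on toy matrices. The sketch below (generic random generators standing in for $\mathcal{A}$ and $\mathcal{B}$; nothing here is the paper's actual operator) approximates each $S_n$ by quadrature and compares the partial sum with $e^{t(\mathcal{A}+\mathcal{B})}$:

```python
# Toy check of the perturbation series Q^V_t = sum_n S_n(t), where
# S_0(t) = e^{tA} and S_n(t) = \int_0^t S_{n-1}(s) B e^{(t-s)A} ds (cf. [Le]).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def markov_generator(m):
    G = rng.random((m, m))
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))      # zero row sums
    return G

m, t, terms, quad = 6, 0.7, 12, 150
A, B = markov_generator(m), 0.3 * markov_generator(m)

s = np.linspace(0.0, t, quad + 1)
QW = [expm(si * A) for si in s]              # e^{sA} on the quadrature grid
prev = QW                                    # S_0 evaluated on the grid
total = prev[-1].copy()
for _ in range(1, terms):
    cur = [np.zeros((m, m))]
    for k in range(1, quad + 1):             # trapezoid rule for S_n(s_k)
        vals = np.array([prev[j] @ B @ QW[k - j] for j in range(k + 1)])
        cur.append(((vals[:-1] + vals[1:]) * (s[1] / 2)).sum(axis=0))
    prev, total = cur, total + cur[-1]

print(np.abs(total - expm(t * (A + B))).max())  # small (quadrature-limited)
```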
Now notice that $Y_t = D V_{t/D^\alpha}$. Translating Proposition 2.6 in terms of $Y_t$, we have

Corollary 2.7. If $\lambda \geq 1$ and $\delta \leq \delta_0$, then

$$P^x\big( \sup_{s \leq \delta D^\alpha} |Y_s - x| > \lambda D \big) \leq c_1 e^{-c_2 \lambda} + c_1 \delta \tag{2.7}$$

for every $D$.
We can now obtain the tightness result for $X_n$.

Theorem 2.8. Given $C > 1$ and $\rho \in (0,1)$, there exists $\gamma$ such that

$$P^x\big( \max_{k \leq [\gamma S^\alpha]} |X_k - x| > CS \big) \leq \rho \tag{2.8}$$

for all $S > 0$.
Proof. Let $\rho \in (0,1)$. By Corollary 2.7 we may choose $\lambda$ and $\delta \leq \delta_0/2$ so that

$$P^x\big( \sup_{s \leq 2\delta D^\alpha} |Y_s - x| > \lambda D \big) \leq \rho/2$$

for every $D$. Define $D = CS/\lambda$ and $\gamma = \delta (C/\lambda)^\alpha$, so that $\gamma S^\alpha = \delta D^\alpha$. We may suppose $Y$ is constructed as in Section 2. Then

$$P^x\big( \max_{k \leq [\delta D^\alpha]} |X_k - x| > CS \big) \leq P^x\big( \sup_{s \leq 2\delta D^\alpha} |Y_s - x| > CS \big) + P^x\big( |T_{[\delta D^\alpha]} - [\delta D^\alpha]| > [\delta D^\alpha] \big) \leq \frac{\rho}{2} + \frac{c_3}{\delta D^\alpha}.$$

We used Chebyshev's inequality and the fact that $T_{[\delta D^\alpha]}$ is the sum of i.i.d. exponentials to bound the second probability on the right hand side. Choose $S_0$ large so that $c_3/(\delta D^\alpha) < \rho/2$ if $S \geq S_0$. We thus have the desired result for $S \geq S_0$.

Finally, choose $\gamma$ smaller if necessary so that $\gamma S_0^\alpha < 1$. If $S < S_0$, then $\gamma S^\alpha < 1$. But $X_k$ needs at least one unit of time to make a step; hence the left hand side of (2.8) is 0 if $S < S_0$.
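A Monte Carlo sketch of (2.8) (an illustrative 1-d chain with assumed parameters, not a verification of the theorem): for a fixed $\gamma$ the exceedance probability should stay small across several scales $S$.

```python
# Monte Carlo look at (2.8) for an illustrative 1-d stable-like chain:
# estimate P(max_{k <= [gamma S^alpha]} |X_k| > C S) for several scales S.
import numpy as np

rng = np.random.default_rng(2)
alpha, gamma, Cbig, trials, L = 1.2, 0.1, 5.0, 2000, 10**5

offsets = np.concatenate([np.arange(-L, 0), np.arange(1, L + 1)])
weights = np.abs(offsets) ** (-(1 + alpha))
prob = weights / weights.sum()

for S in [10, 30, 100]:
    n = max(int(gamma * S ** alpha), 1)
    steps = rng.choice(offsets, p=prob, size=(trials, n))
    running_max = np.abs(np.cumsum(steps, axis=1)).max(axis=1)
    print(S, (running_max > Cbig * S).mean())   # should stay uniformly small
```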
Remark 2.9. Given the above tightness estimate, one could formulate a central limit theorem. Under a suitable normalization a sequence of Markov chains whose jump structure is similar to that of a symmetric stable process should converge weakly to a process such as the $U_t$ described in Section 1.

Remark 2.10. We expect that our techniques could also give tightness for Markov chains where the conductances decay more rapidly than the rates given in this paper. In this case one might have a central limit theorem where the limiting distributions are those of processes corresponding to elliptic operators in divergence form. It would be quite interesting to formulate a central limit theorem for Markov chains where the limit processes are diffusions but the Markov chains do not have bounded range.
3. Harnack inequality.

It is fairly straightforward at this point to follow the argument of [BL] and obtain a Harnack inequality of Moser type for functions that are harmonic with respect to $X_n$. In this paper, however, we are primarily interested in transition probability estimates. As a tool for obtaining these, we turn to a parabolic Harnack inequality.
Let $\mathcal{T} = \{0, 1, 2, \ldots\} \times \mathbb{Z}^d$. We will study here the $\mathcal{T}$-valued Markov chain $(V_k, X_k)$, where $V_k = V_0 + k$. We write $P^{(j,x)}$ for the law of $(V_k, X_k)$ started at $(j,x)$. Let $\mathcal{F}_j = \sigma((V_k, X_k) : k \leq j)$. A bounded function $q(k,x)$ on $\mathcal{T}$ will be said to be parabolic on $D \subset \mathcal{T}$ if $q(V_{k \wedge \tau_D}, X_{k \wedge \tau_D})$ is a martingale.

Define

$$Q(k,x,r) = \{k, k+1, \ldots, k + [\gamma r^\alpha]\} \times B(x,r). \tag{3.0}$$

Our goal in this section is the following result:

Theorem 3.1. There exists $c_1$ such that if $q$ is bounded and nonnegative on $\mathcal{T}$ and parabolic on $Q(0,z,R)$, then

$$\max_{(k,y) \in Q([\gamma R^\alpha],\, z,\, R/3)} q(k,y) \leq c_1 \min_{y \in B(z, R/3)} q(0,y).$$
We prove this after first establishing a few intermediate results.

From Theorem 2.8 there exists $\gamma$ such that for all $r > 0$

$$P^x\big( \max_{k \leq [\gamma r^\alpha]} |X_k - x| > r/2 \big) \leq \frac14. \tag{3.1}$$

Without loss of generality we may assume $\gamma \in (0, \tfrac13)$.

We will often write $\tau_r$ for $\tau_{Q(0,x,r)}$. For $A \subset Q(0,x,r)$ set $A(k) = \{y : (k,y) \in A\}$. Define $N(k,x)$ to be $P^{(k,x)}(X_1 \in A(k+1))$ if $(k,x) \notin A$ and 0 otherwise.
Lemma 3.2. Let

$$J_n = 1_A(V_n, X_n) - 1_A(V_0, X_0) - \sum_{k=0}^{n-1} N(V_k, X_k).$$

Then $J_{n \wedge T_A}$ is a martingale.

Proof. We have

$$E[J_{(k+1) \wedge T_A} - J_{k \wedge T_A} \mid \mathcal{F}_k] = E\big[ 1_A(V_{(k+1) \wedge T_A}, X_{(k+1) \wedge T_A}) - 1_A(V_{k \wedge T_A}, X_{k \wedge T_A}) - N(V_{k \wedge T_A}, X_{k \wedge T_A}) \mid \mathcal{F}_k \big].$$

On the event $\{T_A \leq k\}$, this is 0. If $T_A > k$, this is equal to

$$P^{(V_k, X_k)}\big( (V_1, X_1) \in A \big) - N(V_k, X_k) = P^{X_k}\big( X_1 \in A(V_k + 1) \big) - N(V_k, X_k) = 0.$$

Given a set $A \subset \mathcal{T}$, we let $|A|$ denote the cardinality of $A$.
Proposition 3.3. There exists $\theta_1$ such that if $A \subset Q(0,x,r/2)$ and $A(0) = \emptyset$, then

$$P^{(0,x)}(T_A < \tau_r) \geq \theta_1 \frac{|A|}{r^{d+\alpha}}.$$

Proof. Observe that $T_A$ cannot equal $\tau_r$. If $P^{(0,x)}(T_A \leq \tau_r) \geq \frac14$ we are done, so assume without loss of generality that $P^{(0,x)}(T_A \leq \tau_r) < \frac14$. Let $S = T_A \wedge \tau_r$. From Lemma 3.2 and optional stopping we have

$$E^{(0,x)} 1_A(S, X_S) = E^{(0,x)} \sum_{k=0}^{S-1} N(k, X_k).$$

Note that if $(k,x') \in Q(0,x,r)$,

$$N(k,x') = P^{(k,x')}\big( X_1 \in A(k+1) \big) \geq \sum_{y \in A(k+1)} \frac{c_1}{|x'-y|^{d+\alpha}} \geq \frac{c_2}{r^{d+\alpha}}\, |A(k+1)|.$$

So on the set $\{S \geq [\gamma r^\alpha]\}$ we have $\sum_{k=0}^{S-1} N(k, X_k) \geq c_3 |A|/r^{d+\alpha}$. Therefore, since $\tau_r \leq [\gamma r^\alpha]$,

$$E^{(0,x)} 1_A(S, X_S) \geq c_4 \frac{|A|}{r^{d+\alpha}}\, P^x\big( S \geq [\gamma r^\alpha] \big) \geq c_4 \frac{|A|}{r^{d+\alpha}} \big[ 1 - P^x(T_A \leq \tau_r) - P^x(\tau_r < [\gamma r^\alpha]) \big].$$

Now $P^x(\tau_r < [\gamma r^\alpha]) \leq \frac14$ by (3.1). Therefore $E^{(0,x)} 1_A(S, X_S) \geq c_5 |A|/r^{d+\alpha}$. Since $A \subset Q(0,x,r/2)$, the proposition follows.
With $Q(k,x,r)$ defined as in (3.0), let $U(k,x,r) = \{k\} \times B(x,r)$.

Lemma 3.4. There exists $\theta_2$ such that if $(k,x) \in Q(0,z,R/2)$, $r \leq R/4$, and $k \geq [\gamma r^\alpha] + 2$, then

$$P^{(0,z)}\big( T_{U(k,x,r)} < \tau_{Q(0,z,R)} \big) \geq \theta_2\, r^{d+\alpha}/R^{d+\alpha}.$$
Proof. Let $Q' = \{k, k-1, \ldots, k - [\gamma r^\alpha]\} \times B(x, r/2)$. By Proposition 3.3,

$$P^{(0,z)}\big( T_{Q'} < \tau_{Q(0,z,R)} \big) \geq c_1\, r^{d+\alpha}/R^{d+\alpha}.$$

Starting at a point in $Q'$, by (3.1) there is probability at least $\frac34$ that the chain stays in $B(x,r)$ for at least time $\gamma r^\alpha$. So by the strong Markov property, there is probability at least $\frac34 c_1 r^{d+\alpha}/R^{d+\alpha}$ that the chain hits $Q'$ before exiting $Q(0,z,R)$ and stays within $B(x,r)$ for an additional time $[\gamma r^\alpha]$, hence hits $U(k,x,r)$ before exiting $Q(0,z,R)$.
Lemma 3.5. Suppose $H(k,w)$ is nonnegative and 0 if $w \in B(x,2r)$. There exists $\theta_3$ (not depending on $x$, $r$, or $H$) such that

$$E^{(0,x)}[H(V_{\tau_r}, X_{\tau_r})] \leq \theta_3\, E^{(0,y)}[H(V_{\tau_r}, X_{\tau_r})], \qquad y \in B(x, r/3).$$
Proof. Fix $x$ and $r$ and suppose $k \leq [\gamma r^\alpha]$ and $w \notin B(x,2r)$. Assume for now that $[\gamma r^\alpha] \geq 4$. We claim there exists $c_1$ such that

$$M_j = 1_{(k,w)}(V_{j \wedge \tau_r}, X_{j \wedge \tau_r}) - \sum_{i=0}^{j-1} \frac{c_1}{|w-x|^{d+\alpha}}\, 1_{(i < \tau_r)}\, 1_{\{k-1\}}(V_i)$$

is a submartingale. To see this we observe that

$$E\big[ 1_{(k,w)}(V_{(i+1) \wedge \tau_r}, X_{(i+1) \wedge \tau_r}) - 1_{(k,w)}(V_{i \wedge \tau_r}, X_{i \wedge \tau_r}) \mid \mathcal{F}_i \big]$$

is 0 if $i \geq \tau_r$ and otherwise it equals

$$E^{(V_i, X_i)}\, 1_{(k,w)}(V_{1 \wedge \tau_r}, X_{1 \wedge \tau_r}).$$

This is 0 unless $k = V_i + 1$. When $k = V_i + 1$ and $i < \tau_r$ this quantity is equal to

$$P^{X_i}(X_1 = w) \geq \frac{c_2}{|X_i - w|^{d+\alpha}} \geq \frac{c_3}{|x-w|^{d+\alpha}}.$$

Thus $E[M_{i+1} - M_i \mid \mathcal{F}_i]$ is 0 if $i \geq \tau_r$ or $k \neq V_i + 1$ and greater than or equal to 0 otherwise if $c_1$ is less than $c_3$, which proves the claim.

Since $P^y\big( \max_{i \leq [\gamma r^\alpha]} |X_i - X_0| > r/2 \big) \leq \frac14$, then

$$E^{(0,y)} \tau_r \geq [\gamma r^\alpha]\, P^{(0,y)}\big( \tau_r \geq [\gamma r^\alpha] \big) \geq [\gamma r^\alpha]/2. \tag{3.3}$$

The random variable $\tau_r$ is obviously bounded by $[\gamma r^\alpha]$, so by optional stopping,

$$P^{(0,y)}\big( (V_{\tau_r}, X_{\tau_r}) = (k,w) \big) \geq \big( E^{(0,y)} \tau_r - 1 \big) \frac{c_4}{|x-w|^{d+\alpha}} \geq \frac{c_4\, r^\alpha}{|x-w|^{d+\alpha}}.$$

Similarly, there exists $c_5$ such that

$$1_{(k,w)}(V_{j \wedge \tau_r}, X_{j \wedge \tau_r}) - \sum_{i=1}^{j-1} \frac{c_5}{|w-x|^{d+\alpha}}\, 1_{(i < \tau_r)}\, 1_{\{k-1\}}(V_i)$$

is a supermartingale, and so

$$P^{(0,x)}\big( (V_{\tau_r}, X_{\tau_r}) = (k,w) \big) \leq \big( E^{(0,x)} \tau_r \big) \frac{c_6}{|x-w|^{d+\alpha}} \leq \frac{c_6\, r^\alpha}{|x-w|^{d+\alpha}}.$$

Letting $\theta_3 = c_6/c_4$, we have

$$E^{(0,x)}\big[ 1_{(k,w)}(V_{\tau_r}, X_{\tau_r}) \big] \leq \theta_3\, E^{(0,y)}\big[ 1_{(k,w)}(V_{\tau_r}, X_{\tau_r}) \big].$$

It is easy to check that $\theta_3$ can be chosen so that this inequality also holds when $[\gamma r^\alpha] < 4$. Multiplying by $H(k,w)$ and summing over $k$ and $w$ proves our lemma.
Proposition 3.6. For each $n_0$ and $x_0$, the function $q(k,x) = p(n_0 - k;\, x, x_0)$ is parabolic on $\{0, 1, \ldots, n_0\} \times \mathbb{Z}^d$.

Proof. We have

$$E[q(V_{k+1}, X_{k+1}) \mid \mathcal{F}_k] = E[p(n_0 - V_{k+1};\, X_{k+1}, x_0) \mid \mathcal{F}_k] = E^{(V_k, X_k)}[p(n_0 - V_1;\, X_1, x_0)] = \sum_z p(1; X_k, z)\, p(n_0 - V_k - 1;\, z, x_0).$$

By the semigroup property this is

$$p(n_0 - V_k;\, X_k, x_0) = q(V_k, X_k).$$
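In matrix language the proposition is the semigroup identity $q(k,\cdot) = P\, q(k+1,\cdot)$, which the following toy check confirms (for an arbitrary stochastic matrix $P$; nothing specific to our chain is used):

```python
# Toy verification that q(k, x) = p(n0 - k; x, x0) satisfies
# q(k, x) = sum_z p(1; x, z) q(k+1, z), the identity behind the
# martingale property.
import numpy as np

rng = np.random.default_rng(3)
m, n0, x0 = 15, 6, 4
P = rng.random((m, m))
P /= P.sum(axis=1, keepdims=True)          # an arbitrary stochastic matrix

pk = [np.linalg.matrix_power(P, k) for k in range(n0 + 1)]   # p(k; ., .)
q = [pk[n0 - k][:, x0] for k in range(n0 + 1)]               # q(k, .)

for k in range(n0):
    assert np.allclose(q[k], P @ q[k + 1])
print("parabolic identity verified")
```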
Proof of Theorem 3.1. By multiplying by a constant, we may suppose

$$\min_{y \in B(z,R/3)} q(0,y) = 1.$$

Let $v$ be a point in $B(z,R/3)$ where $q(0,v)$ takes the value one. Suppose $(k,x) \in Q([\gamma R^\alpha], z, R/3)$ with $q(k,x) = K$. By Proposition 3.3 there exists $c_2 \leq 1$ such that if $r < R/3$, $C \subset Q(k+1, x, r/3)$, and $|C|/|Q(k+1,x,r/3)| \geq \frac13$, then

$$P^{(k,x)}(T_C < \tau_r) \geq c_2. \tag{3.4}$$

Set

$$\eta = \frac{c_2}{3}, \qquad \zeta = \frac13 \wedge \frac{\eta}{\theta_3}. \tag{3.5}$$

Define $r$ to be the smallest number such that

$$\frac{|Q(0,x,r/3)|}{R^{d+\alpha}} \geq \frac{3}{\theta_1 \zeta K} \tag{3.6}$$

and

$$\frac{r^{d+\alpha}}{R^{d+\alpha}} \geq \frac{2}{\theta_2 \zeta K}. \tag{3.7}$$

This implies

$$r/R \leq c_3 K^{-1/(d+\alpha)}. \tag{3.8}$$

Let

$$A = \{ (i,y) \in Q(k+1, x, r/3) : q(i,y) \geq \zeta K \}.$$

Let $U = \{k\} \times B(x, r/3)$. If $q \geq \zeta K$ on $U$, we would then have by Lemma 3.4 that

$$1 = q(0,v) = E^{(0,v)}\, q\big( V_{T_U \wedge \tau_{Q(0,z,R)}}, X_{T_U \wedge \tau_{Q(0,z,R)}} \big) \geq \zeta K\, P^{(0,v)}\big( T_U < \tau_{Q(0,z,R)} \big) \geq \frac{\theta_2\, r^{d+\alpha}\, \zeta K}{R^{d+\alpha}},$$

a contradiction to our choice of $r$. So there must exist at least one point in $U$ for which $q$ takes a value less than $\zeta K$.

If $E^{(k,x)}\big[ q(V_{\tau_r}, X_{\tau_r});\ X_{\tau_r} \notin B(x,2r) \big] \geq \eta K$, then by Lemma 3.5 we would have

$$q(k,y) \geq E^{(k,y)}\big[ q(V_{\tau_r}, X_{\tau_r});\ X_{\tau_r} \notin B(x,2r) \big] \geq \theta_3^{-1}\, E^{(k,x)}\big[ q(V_{\tau_r}, X_{\tau_r});\ X_{\tau_r} \notin B(x,2r) \big] \geq \theta_3^{-1} \eta K \geq \zeta K$$

for $y \in B(x, r/3)$, a contradiction to the preceding paragraph. Therefore

$$E^{(k,x)}\big[ q(V_{\tau_r}, X_{\tau_r});\ X_{\tau_r} \notin B(x,2r) \big] \leq \eta K. \tag{3.9}$$

By Proposition 3.3,

$$1 = q(0,v) \geq E^{(0,v)}\big[ q(V_{T_A}, X_{T_A});\ T_A < \tau_{Q(0,z,R)} \big] \geq \zeta K\, P^{(0,v)}\big( T_A < \tau_{Q(0,z,R)} \big) \geq \frac{\theta_1 |A| \zeta K}{R^{d+\alpha}},$$

hence

$$\frac{|A|}{|Q(k+1,x,r/3)|} \leq \frac{R^{d+\alpha}}{\theta_1 |Q(k+1,x,r/3)| \zeta K} \leq \frac13.$$
Let $C = Q(k+1, x, r/3) - A$. Let $M = \max_{Q(k+1,x,2r)} q$. We write

$$q(k,x) = E^{(k,x)}\big[ q(V_{T_C}, X_{T_C});\ T_C < \tau_r \big] + E^{(k,x)}\big[ q(V_{\tau_r}, X_{\tau_r});\ \tau_r < T_C,\ X_{\tau_r} \notin B(x,2r) \big] + E^{(k,x)}\big[ q(V_{\tau_r}, X_{\tau_r});\ \tau_r < T_C,\ X_{\tau_r} \in B(x,2r) \big].$$

The first term on the right is bounded by $\zeta K\, P^{(k,x)}(T_C < \tau_r)$. The second term on the right is bounded by $\eta K$. The third term is bounded by $M\, P^{(k,x)}(\tau_r < T_C)$. Therefore

$$K \leq \zeta K\, P^{(k,x)}(T_C < \tau_r) + \eta K + M\big( 1 - P^{(k,x)}(T_C < \tau_r) \big).$$

It follows that

$$M/K \geq 1 + \beta$$

for some $\beta > 0$ not depending on $x$ or $r$, and so there exists a point $(k', x') \in Q(k+1, x, 2r)$ such that $q(k', x') \geq (1+\beta)K$.

We use this to construct a sequence of points: suppose there exists a point $(k_1, x_1)$ in $Q([\gamma R^\alpha], z, R/6)$ such that $q(k_1, x_1) = K$. We let $x = x_1$, $k = k_1$ in the above and construct $r_1 = r$, $x_2 = x'$, and $k_2 = k'$. We define $r_2$ by the analogues of (3.6) and (3.7). We then use the above (with $(k,x)$ replaced by $(k_2, x_2)$ and $(k', x')$ replaced by $(k_3, x_3)$) to construct $k_3, x_3$, and so on. We thus have a sequence of points $(k_i, x_i)$ for which $k_{i+1} - k_i \leq (2r_i)^\alpha$, $|x_{i+1} - x_i| \leq 2r_i$, and $q(k_i, x_i) \geq (1+\beta)^{i-1} K$. By (3.8) there exists $K_0$ such that if $K \geq K_0$, then $(k_i, x_i) \in Q([\gamma R^\alpha], z, R/3)$ for all $i$. We show this leads to a contradiction. One possibility is that for large $i$ we have $r_i < 1$, which means that $B(x_i, r_i)$ is a single point, and that contradicts the fact that there is at least one point in $B(x_i, r_i)$ for which $q(k_i, \cdot)$ is less than $\zeta(1+\beta)^{i-1} K$. The other possibility is that $q(k_i, x_i) \geq (1+\beta)^{i-1} K_0 > \|q\|_\infty$ for large $i$, again a contradiction. We conclude $q$ is bounded by $K_0$ in $Q([\gamma R^\alpha], z, R/3)$.
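A rough numerical experiment (ours, with box sizes chosen ad hoc rather than by the theorem's exact geometry) illustrates the inequality for $q(k,y) = p(n_0 - k;\, y, x_0)$ on a periodic 1-d stable-like lattice:

```python
# Rough experiment: for q(k, y) = p(n0 - k; y, x0) on a periodic 1-d
# stable-like lattice, compare the max of q over a later time slab in a
# small ball with the min of q(0, .) on that ball; the ratio stays O(1).
import numpy as np

alpha, m, x0, n0 = 1.0, 201, 0, 40
idx = np.arange(m)
gap = np.abs(idx[:, None] - idx)
dist = np.minimum(gap, m - gap)                        # periodic distance
C = np.where(dist > 0, np.maximum(dist, 1) ** (-(1 + alpha)), 0.0)
P = C / C.sum(axis=1, keepdims=True)

pk = [np.eye(m)]
for _ in range(n0):
    pk.append(pk[-1] @ P)                              # p(k; ., .)

ball = [i % m for i in range(-3, 4)]                   # small ball around x0
top = max(pk[n0 - k][y, x0] for k in range(n0 // 2, n0 // 2 + 3) for y in ball)
bottom = min(pk[n0][y, x0] for y in ball)              # min of q(0, .)
print("Harnack-type ratio:", top / bottom)
```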
4. Upper bounds.

In this section our goal is to obtain upper bounds on the transition probabilities for our chain $X_n$. We start with a uniform upper bound.

Let us begin by considering the Lévy process $Z_t$ whose Lévy measure is

$$n(dx) = \sum_{y \in \mathbb{Z}^d,\, y \neq 0} |y|^{-(d+\alpha)}\, \delta_y(dx).$$
Proposition 4.1. The transition density for $Z_t$ satisfies $q_Z(t;x,y) \leq c_1 t^{-d/\alpha}$.

Proof. The proof is similar to that of Proposition 2.1 (with $D = 1$). The characteristic function $\varphi_t(u)$ is given by

$$\varphi_t(u) = \exp\Big( -2t \sum_{x \in \mathbb{Z}^d,\, x \neq 0} [1 - \cos(u \cdot x)] \frac{1}{|x|^{d+\alpha}} \Big).$$

For $|u| \leq 1/32$, we proceed similarly to Case 2 of the proof of Proposition 2.1: we set $D = 1$, set $A = \{x \in \mathbb{Z}^d : \tfrac{1}{4|u|} \leq |x| \leq \tfrac{4}{|u|},\ 1 \leq u \cdot x \leq \tfrac{17}{16}\}$, and obtain

$$\sum [1 - \cos(u \cdot x)] \frac{1}{|x|^{d+\alpha}} \geq c_2 |u|^\alpha.$$

Let $Q(a)$ be defined by (2.3). For $|u| > 1/32$ with $u \in Q(1)$, we proceed as in Case 3 of the proof of Proposition 2.1 and obtain the same estimate. We then proceed as in the remainder of the proof of Proposition 2.1 to obtain our desired result.
Proposition 4.2. The transition densities for $Y_t$ satisfy

$$q_Y(t;x,y) \leq c_1 t^{-d/\alpha}.$$

Proof. This is similar to the proof of Proposition 2.2, but considerably simpler, as we do not have to distinguish between $t \leq 1$ and $t > 1$.

Now we can obtain global bounds for the transition probabilities for $X_n$.
Theorem 4.3. There exists $c_1$ such that the transition probabilities for $X_n$ satisfy

$$p(n;x,y) \leq c_1 n^{-d/\alpha}, \qquad x, y \in \mathbb{Z}^d.$$

Proof. Recall the construction of $Y_t$ in Section 2. First, by the law of large numbers, $T_n/n \to 1$ a.s. Thus there exists $c_2$ such that $P(T_{[n/2]} \leq \tfrac34 n < T_n) \geq c_2$ for all $n$.

Let $C_x = \sum_z C_{xz}$, and set $r(n;x,y) = C_x\, p(2n;x,y)$. Since $C_x p(1;x,y)$ is symmetric, it can be seen by induction that $C_x p(n;x,y)$ is symmetric. The kernel $r(n;x,y)$ is nonnegative definite because

$$\sum_x \sum_y f(x)\, r(n;x,y)\, f(y) = \sum_x \sum_y \sum_z f(x)\, C_x\, p(n;x,z)\, p(n;z,y)\, f(y) = \sum_x \sum_y \sum_z f(x) f(y)\, C_z\, p(n;z,x)\, p(n;z,y) = \sum_z C_z \Big( \sum_x f(x)\, p(n;z,x) \Big)^2 \geq 0.$$

If we set $r_M(n;x,y) = r(n;x,y)$ if $|x|, |y| \leq M$ and 0 otherwise, we have an eigenfunction expansion for $r_M$:

$$r_M(n;x,y) = \sum_i \lambda_i^n\, \varphi_i(x)\, \varphi_i(y), \tag{4.1}$$

where each $\lambda_i \in [0,1]$. By Cauchy–Schwarz,

$$r_M(n;x,y) \leq \Big( \sum_i \lambda_i^n \varphi_i(x)^2 \Big)^{1/2} \Big( \sum_i \lambda_i^n \varphi_i(y)^2 \Big)^{1/2} = r_M(n;x,x)^{1/2}\, r_M(n;y,y)^{1/2}.$$

Also, by (4.1), $r_M(n;x,x)$ is decreasing in $n$. Letting $M \to \infty$ we see that $p(2n;x,x)$ is decreasing in $n$ and

$$p(2n;x,y) \leq p(2n;x,x)^{1/2}\, p(2n;y,y)^{1/2}.$$

Suppose now that $n$ is even and $n \geq 8$. It is clear from (1.2) and (2.1) that there exists $c_3$ such that $p(3;z,z) \geq c_3$ for all $z \in \mathbb{Z}^d$. If $k$ is even and $k \leq n$, then $P^x(X_k = x) \geq P^x(X_n = x)$. If $k$ is odd and $k \leq n$, then

$$P^x(X_k = x) = p(k;x,x) \geq p(k-3;x,x)\, p(3;x,x) \geq c_3\, P^x(X_{k-3} = x) \geq c_3\, P^x(X_n = x).$$

Setting $t = \tfrac34 n$, using Proposition 4.2 and the independence of the $T_i$ from the $X_k$, we have

$$c_4 t^{-d/\alpha} \geq P^x(Y_t = x) = \sum_{k=0}^\infty P^x\big( X_k = x,\ T_k \leq t < T_{k+1} \big) \geq \sum_{[n/2] \leq k \leq n} P^x(X_k = x)\, P(T_k \leq t < T_{k+1}) \geq c_3\, P^x(X_n = x)\, P(T_{[n/2]} \leq t < T_n) \geq c_2 c_3\, P^x(X_n = x).$$

We thus have an upper bound for $p(n;x,x)$ when $n$ is even, and by the paragraph above, for $p(n;x,y)$ when $n \geq 8$ is even.

Now suppose $n$ is odd and $n \geq 5$. Then

$$c_5 (n+3)^{-d/\alpha} \geq p(n+3;x,y) \geq p(n;x,y)\, p(3;y,y) \geq c_3\, p(n;x,y),$$

which implies the desired bound when $n$ is odd and $n \geq 5$.

Finally, since $p(n;x,y) = P^x(X_n = y) \leq 1$, we have our bound for $n \leq 8$ by taking $c_1$ larger if necessary.
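Both facts just used, the monotonicity of $p(2n;x,x)$ in $n$ and the Cauchy–Schwarz bound for the kernel $r$, can be checked numerically for a generic reversible chain (random symmetric conductances, purely illustrative):

```python
# Check, for a generic reversible chain, the spectral facts used above:
# diag p(2n) is nonincreasing in n, and the symmetric PSD kernel
# r(n; x, y) = C_x p(2n; x, y) satisfies r <= sqrt(diag r outer diag r).
import numpy as np

rng = np.random.default_rng(4)
m = 30
C = rng.random((m, m))
C = C + C.T                              # symmetric conductances C_xy
np.fill_diagonal(C, 0.0)
Cx = C.sum(axis=1)
P = C / Cx[:, None]                      # reversible with respect to C_x

p2 = np.linalg.matrix_power(P, 2)
p4 = np.linalg.matrix_power(P, 4)
r = Cx[:, None] * p2                     # r(1; x, y)

assert np.allclose(r, r.T)
assert np.all(np.diag(p4) <= np.diag(p2) + 1e-12)
assert np.all(r <= np.sqrt(np.outer(np.diag(r), np.diag(r))) + 1e-12)
print("monotonicity and Cauchy-Schwarz checks pass")
```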
We now turn to the off-diagonal bounds, that is, the case when $|x-y|/n^{1/\alpha}$ is large. We begin by bounding $P^x\big( Y_{t_0} \in B(y, r t_0^{1/\alpha}) \big)$. To do this, it is more convenient to look at $W_t = t_0^{-1/\alpha} Y_{t_0 t}$ and to obtain a bound on $P^x(W_1 \in B(y,r))$ for $x, y \in S = t_0^{-1/\alpha} \mathbb{Z}^d$. The infinitesimal generator for $W_t$ is

$$\sum_{y \in S,\, y \neq x} [f(y) - f(x)] \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}}.$$

Fix $D$ and let $E = D^{1/2}$. Let $Q_t$ be the transition operator for the process $V_t$ corresponding to the generator

$$\mathcal{A}f(x) = \sum_{\substack{y \in S \\ |y-x| \leq E}} [f(y) - f(x)] \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}}.$$

Define

$$\mathcal{B}f(x) = \sum_{\substack{y \in S \\ |y-x| > E}} [f(y) - f(x)] \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}}$$

and $\|f\|_1 = \sum_S |f(y)|$.
Proposition 4.4. There exists $c_1$ such that

$$\|Q_t f\|_1 \leq c_1 \|f\|_1, \qquad \|Q_t f\|_\infty \leq \|f\|_\infty. \tag{4.2}$$

Also

$$\|\mathcal{B}f\|_1 \leq \frac{c_1}{E^\alpha} \|f\|_1, \qquad \|\mathcal{B}f\|_\infty \leq \frac{c_1}{E^\alpha} \|f\|_\infty. \tag{4.3}$$
Proof. The second inequality in (4.2) follows because $Q_t$ is a Markovian semigroup. Notice that $C_x Q_t(x,y)$ is symmetric in $x, y$. Then

$$\|Q_t f\|_1 \leq \sum_x \sum_y Q_t(x,y)\, |f(y)| = \sum_y |f(y)| \sum_x Q_t(x,y) \leq c_2 \sum_y |f(y)|,$$

because $\sum_x Q_t(x,y) = \sum_x \frac{C_y}{C_x} Q_t(y,x) \leq c_2 \sum_x Q_t(y,x) = c_2$. This establishes the first inequality.

Note that

$$\sum_{\substack{y \in S \\ |y-x| > E}} \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}} \leq c_3 E^{-\alpha}. \tag{4.4}$$

Then

$$|\mathcal{B}f(x)| \leq 2 \|f\|_\infty \sum_{\substack{y \in S \\ |y-x| > E}} \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}} \leq 2 c_3 E^{-\alpha} \|f\|_\infty.$$

To get the first inequality in (4.3),

$$\sum_x |\mathcal{B}f(x)| \leq \sum_x \sum_{|y-x| > E} |f(y)| \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}} + \sum_x |f(x)| \sum_{|y-x| > E} \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}} \leq \sum_y |f(y)| \sum_{|x-y| > E} \frac{A_{t_0^{1/\alpha}}(x,y)}{t_0^{d/\alpha} |x-y|^{d+\alpha}} + c_3 E^{-\alpha} \sum_x |f(x)|.$$

Applying (4.4) completes the proof.
Let $K$ be the smallest integer larger than $2(d+\alpha)/\alpha$ and let

$$A_n = D^{(1/2) + (n/4K)}.$$

Let us say that a function $g$ is in $L(n,\beta)$ if

$$|g(z)| \leq \beta \Big[ \frac{1}{D^{d+\alpha}} + \frac{1}{|z-y|^{d+\alpha}}\, 1_{B(y,A_n)^c}(z) + H(z) \Big]$$

for all $z$, where $H$ is a nonnegative function supported in $B(y, A_n)$ with $\|H\|_1 + \|H\|_\infty \leq 1$.
Lemma 4.5. Suppose $D^{1/(4K)} \geq 4$ and $n \leq K$. There exists $c_1$ such that if $g \in L(n,\beta)$, then
(a) $\mathcal{B}g \in L(n+1, c_1\beta)$;
(b) for each $s \leq 1$, $Q_s g \in L(n+1, c_1\beta)$.
Proof. In view of (4.2) and (4.3), $\|\mathcal{B}(D^{-(d+\alpha)})\|_\infty \leq c_2 D^{-(d+\alpha)}$, and the same bound holds when $\mathcal{B}$ is replaced by $Q_t$.

Next, set

$$v(z) = \frac{1}{|z-y|^{d+\alpha}}\, 1_{B(y,A_n)^c}(z).$$

Note $\|v\|_1 + \|v\|_\infty \leq c_3$, where $c_3$ does not depend on $n$ or $D$. Let

$$J_0(z) = |\mathcal{B}(v+H)(z)|\, 1_{B(y,A_{n+1})}(z)$$

and $J(z) = J_0(z)/(\|J_0\|_1 + \|J_0\|_\infty)$. Because of (4.2) and (4.3), we see that $J_0$ has $L^1$ and $L^\infty$ norms bounded by a constant, so $J$ is a nonnegative function supported on $B(y, A_{n+1})$ with $\|J\|_1 + \|J\|_\infty \leq 1$. The same argument serves for $Q_t$ in place of $\mathcal{B}$.

It remains to get suitable bounds on $|\mathcal{B}v|$ and $Q_t v$ when $|z-y| \geq A_{n+1}$. We have

$$|\mathcal{B}v(z)| \leq \sum_{|w-z| > E} v(w) \frac{c_4}{|w-z|^{d+\alpha}} + \sum_{|w-z| > E} v(z) \frac{c_4}{|w-z|^{d+\alpha}}. \tag{4.5}$$

Clearly the second sum is bounded by $c_5 v(z)$, as required. We now consider the first sum. Let $C = \{w : |w-z| \geq |w-y|\}$. If $w \in C$, then $|w-z| \geq |y-z|/2$. Hence

$$\sum_{w \in C,\, |z-w| > E} \frac{1}{|w-z|^{d+\alpha}} \frac{1}{|w-y|^{d+\alpha}} \leq c_6 \frac{1}{|y-z|^{d+\alpha}} \sum_{|w-y| > 1} \frac{1}{|w-y|^{d+\alpha}} \leq \frac{c_7}{|y-z|^{d+\alpha}}.$$

If $w \in C^c$, then $|w-y| \geq |y-z|/2$, and we get a similar bound. Combining gives the desired bound for (4.5).

Finally, we examine $Q_t v(z)$ when $z \in B(y, A_{n+1})^c$. We write

$$Q_t v(z) = \sum_{|z-w| \leq A_{n+1}/2} Q_t(z,w)\, v(w) + \sum_{|z-w| > A_{n+1}/2} Q_t(z,w)\, v(w). \tag{4.6}$$

If $|z-y| \geq A_{n+1}$ and $|z-w| \leq A_{n+1}/2$, then $|w-y| \geq |z-y|/2$. For such $w$, $v(w) \leq c_8/|z-y|^{d+\alpha}$, and hence the first sum in (4.6) is bounded by

$$\frac{c_8}{|z-y|^{d+\alpha}} \sum_w Q_t(z,w) = \frac{c_8}{|z-y|^{d+\alpha}}.$$

For $|z-w| > A_{n+1}/2$, $v$ is bounded, and the second sum in (4.6) is less than or equal to

$$\sum_{|z-w| > A_{n+1}/2} Q_t(z,w) \leq P^z\big( |V_t - z| \geq A_{n+1}/2 \big) \leq c_9\, e^{-c_{10}\, A_{n+1}/(2A_n)},$$

using Proposition 2.5. This is less than

$$c_{11} \Big( \frac{A_n}{A_{n+1}} \Big)^{8K^2} \leq c_{12}\, D^{-d-\alpha}.$$

Combining the estimates proves the lemma.
Proposition 4.6. There exists $c_1$ such that $P^x(W_1 \in B(y,1)) \leq c_1/|x-y|^{d+\alpha}$ for $x, y \in S$.
Proof. Let $D = |x-y|$. Assume first that $D \geq D_0$, where $D_0 = 4^{4K}$. Let $f = 1_{B(y,1)}$. Clearly there exists $\beta$ such that $f \in L(1,\beta)$. Then $Q_t f \in L(2, c_2\beta)$ for all $t \leq 1$ by Lemma 4.5. Set $S_0(t) = Q_t$ and $S_1(t) = \int_0^t Q_s \mathcal{B} Q_{t-s}\, ds$. Since $Q_1 f \in L(2, c_2\beta)$ and $|x-y| = D > A_2$, we have

$$|S_0(1) f(x)| \leq c_3 |x-y|^{-d-\alpha}.$$

By Lemma 4.5, for each $s \leq t \leq 1$, $Q_s \mathcal{B} Q_{t-s} f \in L(4, c_2^3 \beta)$. Hence $|Q_s \mathcal{B} Q_{t-s} f(x)| \leq c_4 D^{-d-\alpha}$. Integrating over $s \leq t$, we have

$$|S_1(t) f(x)| \leq c_4 D^{-d-\alpha}.$$

Set $S_2(t) = \int_0^t S_1(s) \mathcal{B} Q_{t-s}\, ds = \int_0^t \int_0^s Q_r \mathcal{B} Q_{s-r} \mathcal{B} Q_{t-s}\, dr\, ds$. By Lemma 4.5 we see that $Q_r \mathcal{B} Q_{s-r} \mathcal{B} Q_{t-s} f \in L(6, c_2^5 \beta)$, and therefore $|Q_r \mathcal{B} Q_{s-r} \mathcal{B} Q_{t-s} f(x)| \leq c_6 D^{-d-\alpha}$. Integrating over $r$ and $s$, we have

$$|S_2(t) f(x)| \leq c_6 D^{-d-\alpha}.$$

We continue in this fashion and find that for all $n \leq K$ we have

$$|S_n(1) f(x)| \leq c_7(n)\, D^{-d-\alpha}.$$

On the other hand, by Proposition 4.4,

$$\|\mathcal{B}\|_\infty \leq c_8/E^\alpha.$$

Take $D_0$ larger if necessary so that $c_8 D_0^{-\alpha/2} < \tfrac12$. If $D \geq D_0$, we have by the argument of Proposition 2.6 that

$$\|S_n(1) f\|_\infty \leq (c_8/E^\alpha)^n.$$

Consequently,

$$\sum_{n=K}^\infty |S_n(1) f(x)| \leq c_9/E^{K\alpha} \leq c_{10} D^{-d-\alpha}.$$

If we set $P_t = \sum_{n=0}^\infty S_n(t)$, we then have

$$|P_1 f(x)| \leq \Big( \sum_{n=0}^K c_7(n) + c_{10} \Big) D^{-d-\alpha} = c_{11} D^{-d-\alpha}.$$

This is precisely what we wanted to show, because by [Le], $P_t$ is the semigroup corresponding to $W_t$.

This proves the result for $D \geq D_0$. For $D < D_0$ we have our result by taking $c_1$ larger if necessary.
From the probabilities of being in a set for $Y_t$ we can obtain hitting probabilities.

Proposition 4.7. There exist $c_1$ and $c_2$ such that

$$P^x\big( Y_t \text{ hits } B(y, c_1 t_0^{1/\alpha}) \text{ before time } t_0 \big) \leq c_2 \Big( \frac{t_0^{1/\alpha}}{|x-y|} \Big)^{d+\alpha}.$$
Proof. There is nothing to prove unless $|x-y|/t_0^{1/\alpha}$ is large. Let $D = |x-y|$ and let $A$ be the event that $Y_t$ hits $B(y, t_0^{1/\alpha})$ before time $t_0$. Let $C$ be the event that $\sup_{s \leq t_0} |Y_s - Y_0| \leq c_3 t_0^{1/\alpha}$. From Theorem 2.8, $P^z(C) \geq \frac12$ if $c_3$ is large enough. By the strong Markov property,

$$P^x\big( Y_{t_0} \in B(y, (1+c_3) t_0^{1/\alpha}) \big) \geq E^x\big[ P^{Y_S}(C);\ A \big] \geq \tfrac12\, P^x(A),$$

where $S = \inf\{t : Y_t \in B(y, t_0^{1/\alpha})\}$. We can cover $B(y, (1+c_3) t_0^{1/\alpha})$ by a finite number of balls of the form $B(z, t_0^{1/\alpha})$, where the number $M$ of balls depends only on $c_3$ and the dimension $d$. Then by Proposition 4.6, the left hand side is bounded by $c_4 M (t_0^{1/\alpha}/D)^{d+\alpha}$.
We now get the corresponding result for $X_n$. We suppose that $Y_t$ is constructed in terms of $X_n$ and stopping times $T_n$ as in Section 2.

Proposition 4.8. There exist $c_1$ and $c_2$ such that

$$P^x\big( X_n \text{ hits } B(y, c_1 n_0^{1/\alpha}) \text{ before time } n_0 \big) \leq c_2 \Big( \frac{n_0^{1/\alpha}}{|x-y|} \Big)^{d+\alpha}.$$
Proof. Let $A$ be the event that $X_n$ hits $B(y, n_0^{1/\alpha})$ before time $n_0$, $C$ the event that $Y_t$ hits $B(y, n_0^{1/\alpha})$ before time $2n_0$, and $D$ the event that $T_{n_0} \leq 2n_0$. By the independence of $A$ and $D$, we have

$$P^x(A)\, P(D) = P^x(A \cap D) \leq P^x(C).$$

Using the bound on $P^x(C)$ from Proposition 4.7 and the fact that $P(D) > c_2$, where $c_2$ does not depend on $n_0$, proves the proposition.
We now come to the main result of this section.

Theorem 4.9. There exists $c_1$ such that

$$p(n;x,y) \leq c_1 \Big( n^{-d/\alpha} \wedge \frac{n}{|x-y|^{d+\alpha}} \Big).$$

Proof. Let $D = |x-y|$. Fix $c_2$ sufficiently large. If $D \leq c_2 n^{1/\alpha}$, the result follows from Theorem 4.3. So suppose $D > c_2 n^{1/\alpha}$. Let $m = n + [\gamma n]$. By Proposition 4.8,

$$P^x\big( X_m \in B(y, m^{1/\alpha}) \big) \leq c_3 \frac{m^{1+d/\alpha}}{D^{d+\alpha}}.$$

On the other hand, the left hand side is $\sum_{z \in B(y, m^{1/\alpha})} p(m;x,z)$. So for at least one $z \in B(y, m^{1/\alpha})$, we have $p(m;x,z) \leq c_4\, m/D^{d+\alpha} \leq c_5\, n/D^{d+\alpha}$. Let

$$q(k,w) = p(n + [\gamma n] - k;\, w, x).$$

By Proposition 3.6, $q$ is parabolic on $\{0, 1, \ldots, [\gamma n]\} \times \mathbb{Z}^d$, and we have shown that

$$\min_{w \in B(z, n^{1/\alpha})} q(0,w) \leq c_5\, n/D^{d+\alpha}.$$

Thus by Theorem 3.1 we have

$$p(n;x,y) = \frac{C_y}{C_x}\, p(n;y,x) = \frac{C_y}{C_x}\, q([\gamma n], y) \leq c_6\, n/D^{d+\alpha}.$$
5. Lower bounds.

Lower bounds are considerably easier to prove.

Proposition 5.1. There exist $c_1$ and $c_2$ such that if $|x-y| \leq c_1 n^{1/\alpha}$ and $n \geq 2$, then

$$p(n;x,y) \geq c_2\, n^{-d/\alpha}.$$
Proof. Let $m = n - [\gamma n]$. By Theorem 2.8 there exists $c_3$ not depending on $x$ or $m$ such that

$$P^x\big( \max_{k \leq m} |X_k - x| > c_3 m^{1/\alpha} \big) \leq \frac12.$$

By Theorem 4.9, provided $m$ is sufficiently large, there exists $c_4 < c_3/2$ not depending on $x$ or $m$ such that

$$P^x\big( X_m \in B(x, c_4 m^{1/\alpha}) \big) \leq \frac14.$$

Let $E = B(x, c_3 m^{1/\alpha}) - B(x, c_4 m^{1/\alpha})$. Therefore

$$P^x(X_m \in E) \geq \frac14.$$

This implies, since $P^x(X_m \in E) = \sum_{z \in E} p(m;x,z)$, that for some $z \in E$ we have $p(m;x,z) \geq c_5\, m^{-d/\alpha} \geq c_6\, n^{-d/\alpha}$. If $w \in E$, then by Theorem 3.1 with $q(k, \cdot) = p(n - k;\, x, \cdot)$, we have

$$p(n;x,w) \geq c_7\, n^{-d/\alpha}.$$

This proves our proposition when $n$ is greater than some $n_1$.

By (1.2) and (2.1) it is easy to see that there exists $c_8$ such that

$$p(2;x,x) \geq c_8, \qquad p(3;x,x) \geq c_8.$$

If $n \leq n_1$ and $n = 2\ell + 1$ is odd,

$$p(n;x,y) \geq p(2;x,x)^\ell\, p(1;x,y) \geq c_8^\ell\, n^{-d/\alpha}.$$

The case when $n \leq n_1$ and $n$ is even is done similarly.
Theorem 5.2. There exists $c_1$ such that if $n \geq 2$,

$$p(n;x,y) \geq c_1 \Big( n^{-d/\alpha} \wedge \frac{n}{|x-y|^{d+\alpha}} \Big).$$
Proof. Again, our result follows for small $n$ as a consequence of (1.2) and (2.1), so we may suppose $n$ is larger than some $n_1$. In view of Proposition 5.1 we may suppose $|x-y| \geq c_2 n^{1/\alpha}$; write $D = |x-y|$. Let $A = B(y, n^{1/\alpha})$. Let $N(z) = P^z(X_1 \in A)$ if $z \notin A$ and 0 otherwise. For $z \in B(x, n^{1/\alpha})$ note $N(z) \geq c_3\, n^{d/\alpha}/D^{d+\alpha}$. As in the proof of Lemma 3.2,

$$1_A(X_{j \wedge T_A}) - 1_A(X_0) - \sum_{i=1}^{j \wedge T_A} N(X_i)$$

is a martingale. By optional stopping at the time $S = n \wedge \tau_{B(x, n^{1/\alpha})}$ we have

$$P^x(X_S \in A) = E^x \sum_{i=1}^{S} N(X_i) \geq \frac{c_3\, n^{d/\alpha}}{D^{d+\alpha}}\, E^x[S - 1].$$

Arguing as in (3.3), $E^x S \geq c_4 n$. We conclude that

$$P^x\big( X_n \text{ hits } B(y, n^{1/\alpha}) \text{ before time } n \big) \geq c_5\, n^{1+d/\alpha}/D^{d+\alpha}.$$

By Theorem 2.8, starting at $z \in B(y, n^{1/\alpha})$, there is positive probability, not depending on $z$ or $n$, that the chain does not move more than $c_6 n^{1/\alpha}$ in time $n$. Hence by the strong Markov property, there is probability at least $c_7\, n^{1+d/\alpha}/D^{d+\alpha}$ that $X_n \in B(y, c_8 n^{1/\alpha})$. Let $m = n - [\gamma n]$. Applying the above with $m$ in place of $n$,

$$P^x\big( X_m \in B(y, c_9 n^{1/\alpha}) \big) \geq c_{10} \frac{m^{1+d/\alpha}}{D^{d+\alpha}} \geq c_{11} \frac{n^{1+d/\alpha}}{D^{d+\alpha}}.$$

However, this is also $\sum_{w \in B(y, c_9 n^{1/\alpha})} p(m;x,w)$. So there must exist $w \in B(y, c_9 n^{1/\alpha})$ such that $p(m;x,w) \geq c_{12}\, n/D^{d+\alpha}$. A use of Theorem 3.1 as in the proof of Proposition 5.1 finishes the current proof.
Proof of Theorem 1.1. This is a combination of Theorems 4.9 and 5.2. If $n = 1$ and $x \neq y$, the result follows from (1.2) and (2.1).
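As a closing illustration (a toy experiment of ours: a periodic 1-d lattice stands in for $\mathbb{Z}^d$ and the constants are unknown), the ratio of $p(n;x,y)$ to the profile $n^{-d/\alpha} \wedge n/|x-y|^{d+\alpha}$ can be inspected directly:

```python
# Toy numerical look at Theorem 1.1 in d = 1: p(n; x, y) from powers of the
# one-step matrix of a periodic stable-like chain, against the profile
# n^{-1/alpha} wedge n / |x - y|^{1 + alpha}.  The periodic truncation and
# all parameter values are assumptions for the experiment only.
import numpy as np

alpha, m, n = 1.0, 401, 20
idx = np.arange(m)
gap = np.abs(idx[:, None] - idx)
dist = np.minimum(gap, m - gap)
C = np.where(dist > 0, np.maximum(dist, 1) ** (-(1 + alpha)), 0.0)
P = C / C.sum(axis=1, keepdims=True)

pn = np.linalg.matrix_power(P, n)
for k in [0, 5, 20, 80, 150]:                 # k plays the role of |x - y|
    profile = min(n ** (-1 / alpha), n / max(k, 1) ** (1 + alpha))
    print(k, pn[0, k], pn[0, k] / profile)    # ratios should stay O(1)
```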
Remark 5.3. Similar (but a bit easier) arguments show that the transition probabilities for $Y_t$ satisfy

$$c_1 \Big( t^{-d/\alpha} \wedge \frac{t}{|x-y|^{d+\alpha}} \Big) \leq q_Y(t;x,y) \leq c_2 \Big( t^{-d/\alpha} \wedge \frac{t}{|x-y|^{d+\alpha}} \Big). \tag{5.1}$$

One can also show that the transition densities of the process $U_t$ described in Section 1 satisfy bounds of the form (5.1). One can either modify the proofs suitably or else approximate $U_t$ by a sequence of processes of the form $Y_t$ but with state space $\varepsilon \mathbb{Z}^d$, and then let $\varepsilon \to 0$.
References.

[BBG] M. T. Barlow, R. F. Bass, and C. Gui, The Liouville property and a conjecture of De Giorgi. Comm. Pure Appl. Math. 53 (2000), 1007–1038.

[BL] R. F. Bass and D. A. Levin, Harnack inequalities for jump processes. Preprint.

[CKS] E. A. Carlen, S. Kusuoka, and D. W. Stroock, Upper bounds for symmetric Markov transition functions. Ann. Inst. H. Poincaré Probab. Statist. 23 (1987), no. 2, suppl., 245–287.

[HS-C] W. Hebisch and L. Saloff-Coste, Gaussian estimates for Markov chains and random walks on groups. Ann. Probab. 21 (1993), 673–709.

[Kl] V. Kolokoltsov, Symmetric stable laws and stable-like jump-diffusions. Proc. London Math. Soc. 80 (2000), 725–768.

[Km] T. Komatsu, Uniform estimates for fundamental solutions associated with non-local Dirichlet forms. Osaka J. Math. 32 (1995), 833–860.

[Le] T. Leviatan, Perturbations of Markov processes. J. Functional Analysis 10 (1972), 309–325.

[SZ] D. W. Stroock and W. Zheng, Markov chain approximations to symmetric diffusions. Ann. Inst. H. Poincaré Probab. Statist. 33 (1997), 619–649.

[Wo] W. Woess, Random Walks on Infinite Graphs and Groups, Cambridge Univ. Press, Cambridge, 2000.