CENTRAL LIMIT THEOREMS FOR MULTICOLOR URNS WITH DOMINATED COLORS

PATRIZIA BERTI, IRENE CRIMALDI, LUCA PRATELLI, AND PIETRO RIGO
Abstract. An urn contains balls of $d \ge 2$ colors. At each time $n \ge 1$, a ball is drawn and then replaced together with a random number of balls of the same color. Let $A_n = \mathrm{diag}(A_{n,1},\ldots,A_{n,d})$ be the $n$-th reinforce matrix. Assuming $EA_{n,j} = EA_{n,1}$ for all $n$ and $j$, a few CLTs are available for such urns. In real problems, however, it is more reasonable to assume

$$EA_{n,j} = EA_{n,1} \quad\text{whenever } n \ge 1 \text{ and } 1 \le j \le d_0,$$
$$\liminf_n EA_{n,1} > \limsup_n EA_{n,j} \quad\text{whenever } j > d_0,$$

for some integer $1 \le d_0 \le d$. Under this condition, the usual weak limit theorems may fail, but it is still possible to prove CLTs for some slightly different random quantities. These random quantities are obtained by neglecting dominated colors, i.e., colors from $d_0+1$ to $d$, and allow the same inference on the urn structure. The sequence $(A_n : n \ge 1)$ is independent but need not be identically distributed. Some statistical applications are given as well.
1. The problem

An urn contains $a_j > 0$ balls of color $j \in \{1,\ldots,d\}$, where $d \ge 2$. At each time $n \ge 1$, a ball is drawn and then replaced together with a random number of balls of the same color. Say that $A_{n,j} \ge 0$ balls of color $j$ are added to the urn in case $X_{n,j} = 1$, where $X_{n,j}$ is the indicator of {ball of color $j$ at time $n$}. Let

$$N_{n,j} = a_j + \sum_{k=1}^{n} X_{k,j}\,A_{k,j}$$

be the number of balls of color $j$ in the urn at time $n$, and

$$Z_{n,j} = \frac{N_{n,j}}{\sum_{i=1}^{d} N_{n,i}}, \qquad M_{n,j} = \frac{\sum_{k=1}^{n} X_{k,j}}{n}.$$
Fix $j$ and let $n \to \infty$. Then, under various conditions, $Z_{n,j} \xrightarrow{a.s.} Z_{(j)}$ for some random variable $Z_{(j)}$. This typically implies $M_{n,j} \xrightarrow{a.s.} Z_{(j)}$. A CLT is available as well. Define in fact

$$C_{n,j} = \sqrt{n}\,\bigl(M_{n,j} - Z_{n,j}\bigr) \quad\text{and}\quad D_{n,j} = \sqrt{n}\,\bigl(Z_{n,j} - Z_{(j)}\bigr).$$

As shown in [4], under reasonable conditions one obtains

$$(C_{n,j},\,D_{n,j}) \longrightarrow \mathcal{N}(0,U_j) \times \mathcal{N}(0,V_j) \quad\text{stably}$$
Date: September 2, 2009.
2000 Mathematics Subject Classification: 60F05, 60G57, 60B10.
Key words and phrases: Central limit theorem – Clinical trials – Random probability measure – Stable convergence – Urn model.
for certain random variables $U_j$ and $V_j$. A nice consequence is

$$\sqrt{n}\,\bigl(M_{n,j} - Z_{(j)}\bigr) = C_{n,j} + D_{n,j} \longrightarrow \mathcal{N}(0,\,U_j + V_j) \quad\text{stably}.$$

Stable convergence, in the sense of Aldous and Rényi, is a strong form of convergence in distribution. The definition is recalled in Section 3.
For $(C_{n,j}, D_{n,j})$ to converge, it is fundamental that $EA_{n,j} = EA_{n,1}$ for all $n$ and $j$. In real problems, however, it is more sound to assume that

$$EA_{n,j} = EA_{n,1} \quad\text{whenever } n \ge 1 \text{ and } 1 \le j \le d_0,$$
$$\liminf_n EA_{n,1} > \limsup_n EA_{n,j} \quad\text{whenever } j > d_0,$$

for some integer $1 \le d_0 \le d$. Roughly speaking, when $d_0 < d$ some colors (those labelled from $d_0+1$ to $d$) are dominated by the others. In this framework, for $j \in \{1,\ldots,d_0\}$, meaningful quantities are
$$C^*_{n,j} = \sqrt{n}\,\bigl(M^*_{n,j} - Z^*_{n,j}\bigr) \quad\text{and}\quad D^*_{n,j} = \sqrt{n}\,\bigl(Z^*_{n,j} - Z_{(j)}\bigr),$$

where

$$M^*_{n,j} = \frac{\sum_{k=1}^{n} X_{k,j}}{1 + \sum_{i=1}^{d_0}\sum_{k=1}^{n} X_{k,i}}, \qquad Z^*_{n,j} = \frac{N_{n,j}}{\sum_{i=1}^{d_0} N_{n,i}}.$$
If $d_0 = d$, then $D^*_{n,j} = D_{n,j}$ and $|C^*_{n,j} - C_{n,j}| \le \frac{1}{\sqrt{n}}$. If $d_0 < d$, in a sense, dealing with $(C^*_{n,j}, D^*_{n,j})$ amounts to neglecting dominated colors.

Our problem is to determine the limiting distribution of $(C^*_{n,j}, D^*_{n,j})$, under reasonable conditions, when $d_0 < d$.
2. Motivations

Possibly, when $d_0 < d$, $Z_{n,j}$ and $M_{n,j}$ have a more transparent meaning than their counterparts $Z^*_{n,j}$ and $M^*_{n,j}$. Accordingly, a CLT for $(C_{n,j}, D_{n,j})$ is more intriguing than a CLT for $(C^*_{n,j}, D^*_{n,j})$. So, why deal with $(C^*_{n,j}, D^*_{n,j})$?
The main reason is that $(C_{n,j}, D_{n,j})$ merely fails to converge in case

$$\liminf_n EA_{n,j} > \tfrac{1}{2}\,\liminf_n EA_{n,1} \quad\text{for some } j > d_0. \qquad (1)$$
Fix in fact $j \le d_0$. Under some conditions, $Z_{n,j} \xrightarrow{a.s.} Z_{(j)}$ with $Z_{(j)} > 0$ a.s.; see Lemma 3. Furthermore, condition (1) yields $\sqrt{n}\,\sum_{i=d_0+1}^{d} Z_{n,i} \xrightarrow{a.s.} \infty$. (This follows from Corollary 2 of [9] for $d = 2$, but it can be shown in general.) Hence,

$$D^*_{n,j} - D_{n,j} \ge Z_{n,j}\,\sqrt{n}\sum_{i=d_0+1}^{d} Z_{n,i} \xrightarrow{a.s.} \infty.$$

Since $D^*_{n,j}$ converges stably, as proved in Theorem 4, $D_{n,j}$ fails to converge in distribution under (1).
A CLT for $D_{n,j}$, thus, is generally not available. A way out could be looking for the right norming factors, that is, investigating whether $\frac{\alpha_n}{\sqrt{n}}\,D_{n,j}$ converges stably for suitable constants $\alpha_n$. This is a reasonable solution but we discarded it. In fact, as proved in Corollary 5, $(C_{n,j}, D_{n,j})$ converges stably whenever

$$\limsup_n EA_{n,j} < \tfrac{1}{2}\,\liminf_n EA_{n,1} \quad\text{for all } j > d_0. \qquad (1^*)$$
So, the choice of $\alpha_n$ depends on whether (1) or (1*) holds, and this is typically unknown in applications (think of clinical trials). In addition, dealing with $(C^*_{n,j}, D^*_{n,j})$ looks natural (to us). Loosely speaking, as the problem occurs because there are some dominated colors, the trivial solution is just to neglect dominated colors.
A next point to be discussed is the practical utility (if any) of a CLT for $(C^*_{n,j}, D^*_{n,j})$ or $(C_{n,j}, D_{n,j})$. To fix ideas, we refer to $(C^*_{n,j}, D^*_{n,j})$, but the same remarks apply to $(C_{n,j}, D_{n,j})$ provided a CLT for the latter is available. It is convenient to distinguish two situations. With reference to a real problem, suppose the subset of non-dominated colors is some $J \subset \{1,\ldots,d\}$ and not necessarily $\{1,\ldots,d_0\}$.
If $J$ is known, the main goal is to make inference on $Z_{(j)}$, $j \in J$. To this end, the limiting distribution of $D^*_{n,j}$ is useful. Knowing such distribution, for instance, asymptotic confidence intervals for $Z_{(j)}$ are easily obtained. An example (cf. Example 6) is given in Section 4.
But in various frameworks, $J$ is actually unknown (think of clinical trials again). Then, the main focus is to identify $J$, and the limiting distribution of $C^*_{n,j}$ can help. If such distribution is known, the hypothesis

$$H_0 : J = J^*$$

can be (asymptotically) tested for any $J^* \subset \{1,\ldots,d\}$ with $\mathrm{card}(J^*) \ge 2$. Details are in Examples 7 and 8.
A last remark is that our results become trivial for $d_0 = 1$. On one hand, this is certainly a gap, as $d_0 = 1$ is important in applications. On the other hand, $d_0 = 1$ is itself a trivial case. Indeed, $Z_{(1)} = 1$ a.s., so that no inference on $Z_{(1)}$ is required.
This paper is the natural continuation of [4]. While the latter deals with $d_0 = d$, the present paper focuses on $d_0 < d$. Indeed, our results hold for $d_0 \le d$, but they are contained in Corollary 9 of [4] in the particular case $d_0 = d$. In addition to [4], a few papers which inspired and affected the present one are [1] and [9]. Other related references are [2], [3], [5], [7], [8], [10], [12].
The paper is organized as follows. Section 3 recalls some basic facts on stable convergence. Section 4 includes the main results (Theorem 4 and Corollary 5). Precisely, conditions for

$$(C^*_{n,j},\,D^*_{n,j}) \longrightarrow \mathcal{N}(0,U_j) \times \mathcal{N}(0,V_j) \quad\text{stably, and}$$
$$(C_{n,j},\,D_{n,j}) \longrightarrow \mathcal{N}(0,U_j) \times \mathcal{N}(0,V_j) \quad\text{stably under } (1^*),$$

are given, $U_j$ and $V_j$ being the same random variables mentioned in Section 1. As a consequence,

$$\sqrt{n}\,\bigl(M^*_{n,j} - Z_{(j)}\bigr) = C^*_{n,j} + D^*_{n,j} \longrightarrow \mathcal{N}(0,\,U_j + V_j) \quad\text{stably, and}$$
$$\sqrt{n}\,\bigl(M_{n,j} - Z_{(j)}\bigr) = C_{n,j} + D_{n,j} \longrightarrow \mathcal{N}(0,\,U_j + V_j) \quad\text{stably under } (1^*).$$

Also, it is worth noting that $D^*_{n,j}$ and $D_{n,j}$ actually converge in a certain stronger sense.
All the proofs have been confined to Section 5 and to a final Appendix.
3. Stable convergence

Let $(\Omega,\mathcal{A},P)$ be a probability space and $S$ a metric space. A kernel on $S$ (or a random probability measure on $S$) is a measurable collection $N = \{N(\omega) : \omega \in \Omega\}$ of probability measures on the Borel $\sigma$-field on $S$. Measurability means that

$$N(\omega)(f) = \int f(x)\,N(\omega)(dx)$$

is $\mathcal{A}$-measurable, as a function of $\omega \in \Omega$, for each bounded Borel map $f : S \to \mathbb{R}$.
Let $(Y_n)$ be a sequence of $S$-valued random variables and $N$ a kernel on $S$. Both $(Y_n)$ and $N$ are defined on $(\Omega,\mathcal{A},P)$. Say that $Y_n$ converges stably to $N$ in case

$$P\bigl(Y_n \in \cdot \mid H\bigr) \longrightarrow E\bigl(N(\cdot) \mid H\bigr) \quad\text{weakly}$$

for all $H \in \mathcal{A}$ such that $P(H) > 0$.
Clearly, if $Y_n \to N$ stably, then $Y_n$ converges in distribution to the probability law $E\bigl(N(\cdot)\bigr)$ (just let $H = \Omega$). We refer to [5] and references therein for more on stable convergence. Here, we mention a strong form of stable convergence, introduced in [5]. Let $\mathcal{F} = (\mathcal{F}_n)$ be any sequence of sub-$\sigma$-fields of $\mathcal{A}$. Say that $Y_n$ converges $\mathcal{F}$-stably in strong sense to $N$ in case

$$E\bigl(f(Y_n) \mid \mathcal{F}_n\bigr) \xrightarrow{P} N(f) \quad\text{for all bounded continuous functions } f : S \to \mathbb{R}.$$
Finally, we give two lemmas from [4]. In both, $\mathcal{G} = (\mathcal{G}_n)$ is an increasing filtration. Given kernels $M$ and $N$ on $S$, let $M \times N$ denote the kernel on $S \times S$ defined as

$$\bigl(M \times N\bigr)(\omega) = M(\omega) \times N(\omega) \quad\text{for all } \omega \in \Omega.$$

Lemma 1. Let $Y_n$ and $Z_n$ be $S$-valued random variables and $M$ and $N$ kernels on $S$, where $S$ is a separable metric space. Suppose $\sigma(Y_n) \subset \mathcal{G}_n$ and $\sigma(Z_n) \subset \mathcal{G}_\infty$ for all $n$, where $\mathcal{G}_\infty = \sigma(\cup_n \mathcal{G}_n)$. Then,

$$(Y_n,\,Z_n) \longrightarrow M \times N \quad\text{stably}$$

provided $Y_n \to M$ stably and $Z_n \to N$ $\mathcal{G}$-stably in strong sense.
Lemma 2. Let $(Y_n)$ be a $\mathcal{G}$-adapted sequence of real random variables. If $\sum_{n=1}^{\infty} \frac{EY_n^2}{n^2} < \infty$ and $E\bigl(Y_{n+1} \mid \mathcal{G}_n\bigr) \xrightarrow{a.s.} Y$ for some random variable $Y$, then

$$n \sum_{k \ge n} \frac{Y_k}{k^2} \xrightarrow{a.s.} Y \quad\text{and}\quad \frac{1}{n}\sum_{k=1}^{n} Y_k \xrightarrow{a.s.} Y.$$
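Lemma 2 lends itself to a quick numerical sanity check. The sketch below is our own illustration (not from [4]): we take $Y_k$ equal to a constant limit plus bounded i.i.d. noise, so that $E(Y_{n+1} \mid \mathcal{G}_n)$ equals the limit and $\sum_n EY_n^2/n^2 < \infty$; both averages in the lemma should then approach that limit (the weighted tail sum is truncated, so it only approximates the infinite series).

```python
import random

# Y_k = Y_lim + eps_k with bounded i.i.d. noise, so E(Y_{n+1} | G_n) = Y_lim.
rng = random.Random(1)
Y_lim = 0.7
N = 200_000
Y = [Y_lim + rng.uniform(-0.5, 0.5) for _ in range(N)]

# Cesaro average (1/n) * sum_{k<=n} Y_k, evaluated at n = N
cesaro = sum(Y) / N

# weighted tail average n * sum_{k>=n} Y_k / k^2 with n = 1000, truncated at N
n0 = 1000
tail = n0 * sum(Y[k - 1] / k ** 2 for k in range(n0, N + 1))
```

Both `cesaro` and `tail` come out close to 0.7, as the lemma predicts.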
4. Main results

In the sequel, $X_{n,j}$ and $A_{n,j}$, $n \ge 1$, $1 \le j \le d$, are real random variables on the probability space $(\Omega,\mathcal{A},P)$ and $\mathcal{G} = (\mathcal{G}_n : n \ge 0)$, where

$$\mathcal{G}_0 = \{\emptyset,\Omega\}, \qquad \mathcal{G}_n = \sigma\bigl(X_{k,j},\,A_{k,j} : 1 \le k \le n,\ 1 \le j \le d\bigr).$$

Let $N_{n,j} = a_j + \sum_{k=1}^{n} X_{k,j} A_{k,j}$, where $a_j > 0$ is a constant. We assume that

$$X_{n,j} \in \{0,1\}, \quad \sum_{j=1}^{d} X_{n,j} = 1, \quad 0 \le A_{n,j} \le \beta \ \text{for some constant } \beta, \qquad (2)$$

$$\bigl(A_{n,j} : 1 \le j \le d\bigr) \ \text{independent of}\ \mathcal{G}_{n-1} \vee \sigma\bigl(X_{n,j} : 1 \le j \le d\bigr),$$

$$Z_{n,j} = P\bigl(X_{n+1,j} = 1 \mid \mathcal{G}_n\bigr) = \frac{N_{n,j}}{\sum_{i=1}^{d} N_{n,i}} \quad\text{a.s.}$$
Given an integer $1 \le d_0 \le d$, let us define

$$\lambda_0 = 0 \ \text{if } d_0 = d \quad\text{and}\quad \lambda_0 = \max_{d_0 < j \le d}\,\limsup_n EA_{n,j} \ \text{if } d_0 < d.$$
We also assume that

$$EA_{n,j} = EA_{n,1} \quad\text{for } n \ge 1 \text{ and } 1 \le j \le d_0, \qquad (3)$$

$$m := \lim_n EA_{n,1}, \qquad m > \lambda_0, \qquad q_j := \lim_n EA_{n,j}^2 \quad\text{for } 1 \le j \le d_0.$$
A few useful consequences are collected in the following lemma. Define

$$S^*_n = \sum_{i=1}^{d_0} N_{n,i} \quad\text{and}\quad S_n = \sum_{i=1}^{d} N_{n,i}.$$
Lemma 3. Under conditions (2)-(3), as $n \to \infty$,

$$\frac{S^*_n}{n} \xrightarrow{a.s.} m \quad\text{and}\quad \frac{S_n}{n} \xrightarrow{a.s.} m,$$

$$n^{1-\lambda} \sum_{i=d_0+1}^{d} Z_{n,i} \xrightarrow{a.s.} 0 \quad\text{whenever } d_0 < d \text{ and } \lambda > \frac{\lambda_0}{m},$$

$$Z_{n,j} \xrightarrow{a.s.} Z_{(j)} \quad\text{for each } 1 \le j \le d_0,$$

where each $Z_{(j)}$ is a random variable such that $Z_{(j)} > 0$ a.s.
For $d = 2$, Lemma 3 follows from results in [9] and [10]. For arbitrary $d$, it is possibly known, but we do not know of any reference. Accordingly, a proof of Lemma 3 is given in the Appendix. We also note that, apart from a few particular cases, the probability distribution of $Z_{(j)}$ is not known (even if $d_0 = d$).
We aim to settle the asymptotic behavior of

$$C_{n,j} = \sqrt{n}\,\bigl(M_{n,j} - Z_{n,j}\bigr), \qquad D_{n,j} = \sqrt{n}\,\bigl(Z_{n,j} - Z_{(j)}\bigr),$$
$$C^*_{n,j} = \sqrt{n}\,\bigl(M^*_{n,j} - Z^*_{n,j}\bigr), \qquad D^*_{n,j} = \sqrt{n}\,\bigl(Z^*_{n,j} - Z_{(j)}\bigr),$$

where $j \in \{1,\ldots,d_0\}$ and

$$M_{n,j} = \frac{\sum_{k=1}^{n} X_{k,j}}{n}, \qquad M^*_{n,j} = \frac{\sum_{k=1}^{n} X_{k,j}}{1 + \sum_{k=1}^{n}\sum_{i=1}^{d_0} X_{k,i}}, \qquad Z^*_{n,j} = \frac{N_{n,j}}{\sum_{i=1}^{d_0} N_{n,i}}.$$
Let $\mathcal{N}(a,b)$ denote the one-dimensional Gaussian law with mean $a$ and variance $b \ge 0$ (where $\mathcal{N}(a,0) = \delta_a$). Note that $\mathcal{N}(0,L)$ is a kernel on $\mathbb{R}$ for each real non-negative random variable $L$. We are in a position to state our main result.
Theorem 4. If conditions (2)-(3) hold, then

$$C^*_{n,j} \longrightarrow \mathcal{N}(0,U_j) \ \text{stably} \quad\text{and}\quad D^*_{n,j} \longrightarrow \mathcal{N}(0,V_j) \ \mathcal{G}\text{-stably in strong sense}$$

for each $j \in \{1,\ldots,d_0\}$, where $U_j = V_j - Z_{(j)}\bigl(1 - Z_{(j)}\bigr)$ and

$$V_j = \frac{Z_{(j)}}{m^2}\Bigl(q_j\,(1 - Z_{(j)})^2 + Z_{(j)} \sum_{i \le d_0,\,i \ne j} q_i\,Z_{(i)}\Bigr).$$

In particular (by Lemma 1),

$$(C^*_{n,j},\,D^*_{n,j}) \longrightarrow \mathcal{N}(0,U_j) \times \mathcal{N}(0,V_j) \quad\text{stably}.$$
As noted in Section 2, Theorem 4 has been thought for the case $d_0 < d$, and it reduces to Corollary 9 of [4] in the particular case $d_0 = d$. We also remark that some assumptions can be stated in a different form. In particular, under suitable extra conditions, Theorem 4 works even if "$(A_{n,1},\ldots,A_{n,d})$ independent of $\mathcal{G}_{n-1} \vee \sigma(X_{n,1},\ldots,X_{n,d})$" is weakened into

$$(A_{n,1},\ldots,A_{n,d}) \ \text{conditionally independent of}\ (X_{n,1},\ldots,X_{n,d}) \ \text{given}\ \mathcal{G}_{n-1};$$

see Remark 8 of [4].
The proof of Theorem 4 is deferred to Section 5. Here, we stress a few of its consequences.

We already know (from Section 2) that $(C_{n,j}, D_{n,j})$ may fail to converge when $d_0 < d$. There is a remarkable exception, however.
Corollary 5. Under conditions (2)-(3), if $2\lambda_0 < m$ (that is, (1*) holds), then

$$C_{n,j} \longrightarrow \mathcal{N}(0,U_j) \ \text{stably} \quad\text{and}\quad D_{n,j} \longrightarrow \mathcal{N}(0,V_j) \ \mathcal{G}\text{-stably in strong sense}$$

for each $j \in \{1,\ldots,d_0\}$. In particular (by Lemma 1),

$$(C_{n,j},\,D_{n,j}) \longrightarrow \mathcal{N}(0,U_j) \times \mathcal{N}(0,V_j) \quad\text{stably}.$$
Proof. By Theorem 4, it is enough to prove $D^*_{n,j} - D_{n,j} \xrightarrow{P} 0$ and $C^*_{n,j} - C_{n,j} \xrightarrow{P} 0$. It can be assumed that $d_0 < d$. Note that

$$\bigl|D^*_{n,j} - D_{n,j}\bigr| = \sqrt{n}\,Z_{n,j}\Bigl(\frac{S_n}{S^*_n} - 1\Bigr) \le \frac{S_n}{S^*_n}\,\sqrt{n}\sum_{i=d_0+1}^{d} Z_{n,i},$$

$$C^*_{n,j} - C_{n,j} = D_{n,j} - D^*_{n,j} + M_{n,j}\,\frac{\sqrt{n}\,\Bigl(\sum_{i=d_0+1}^{d} M_{n,i} - \frac{1}{n}\Bigr)}{\frac{1}{n} + \sum_{i=1}^{d_0} M_{n,i}}.$$

By Lemma 3 and $2\lambda_0 < m$, there is $\alpha > \frac{1}{2}$ such that $n^{\alpha}\sum_{i=d_0+1}^{d} Z_{n,i} \xrightarrow{a.s.} 0$. Thus, it remains only to see that $\sqrt{n}\,M_{n,i} \xrightarrow{a.s.} 0$ for each $i > d_0$. Fix $i > d_0$ and define

$$L_{n,i} = \sum_{k=1}^{n} \frac{X_{k,i} - Z_{k-1,i}}{\sqrt{k}}.$$

Since $(L_{n,i} : n \ge 1)$ is a $\mathcal{G}$-martingale and

$$\sum_n E\bigl((L_{n+1,i} - L_{n,i})^2 \mid \mathcal{G}_n\bigr) = \sum_n \frac{Z_{n,i}(1 - Z_{n,i})}{n+1} \le \sum_n \frac{n^{\alpha}\,Z_{n,i}}{n^{1+\alpha}} < \infty \quad\text{a.s.},$$

then $L_{n,i}$ converges a.s. By the Kronecker lemma,

$$\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\bigl(X_{k,i} - Z_{k-1,i}\bigr) = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\sqrt{k}\,\frac{X_{k,i} - Z_{k-1,i}}{\sqrt{k}} \xrightarrow{a.s.} 0.$$

Since $\frac{1}{\sqrt{n}}\sum_{k=1}^{n} k^{-\alpha} \longrightarrow 0$ and $Z_{k,i} = o(k^{-\alpha})$ a.s., it follows that

$$\sqrt{n}\,M_{n,i} = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\bigl(X_{k,i} - Z_{k-1,i}\bigr) + \frac{1}{\sqrt{n}}\sum_{k=0}^{n-1} Z_{k,i} \xrightarrow{a.s.} 0. \qquad \square$$
Theorem 4 has some statistical implications as well.
Example 6. (A statistical use of $D^*_{n,j}$.) Suppose $d_0 > 1$, conditions (2)-(3) hold, and fix $j \le d_0$. Let $(V_{n,j} : n \ge 1)$ be a sequence of consistent estimators of $V_j$, that is, $V_{n,j} \xrightarrow{P} V_j$ and $\sigma(V_{n,j}) \subset \mathcal{D}_n$ for each $n$, where

$$\mathcal{D}_n = \sigma\bigl(X_{k,i} A_{k,i},\ X_{k,i} : 1 \le k \le n,\ 1 \le i \le d\bigr)$$

is the $\sigma$-field corresponding to the "available data". Since $V_{n,j} \xrightarrow{P} V_j$, Theorem 4 yields

$$(D^*_{n,j},\,V_{n,j}) \longrightarrow \mathcal{N}(0,V_j) \times \delta_{V_j} \quad \mathcal{G}\text{-stably in strong sense}.$$

Since $d_0 > 1$, then $0 < Z_{(j)} < 1$ a.s., or equivalently $V_j > 0$ a.s. Hence,

$$I_{\{V_{n,j} > 0\}}\,\frac{D^*_{n,j}}{\sqrt{V_{n,j}}} \longrightarrow \mathcal{N}(0,1) \quad \mathcal{G}\text{-stably in strong sense}.$$

For large $n$, this fact allows one to make inference on $Z_{(j)}$. For instance,

$$Z^*_{n,j} \pm \frac{u_{\alpha}}{\sqrt{n}}\,\sqrt{V_{n,j}}$$

provides an asymptotic confidence interval for $Z_{(j)}$ with (approximate) level $1 - \alpha$, where $u_{\alpha}$ is such that $\mathcal{N}(0,1)(u_{\alpha},\infty) = \frac{\alpha}{2}$.

An obvious consistent estimator of $V_j$ is

$$V_{n,j} = \frac{1}{m_n^2}\Bigl(Q_{n,j}\,(1 - Z_{n,j})^2 + Z_{n,j}^2 \sum_{i \le d_0,\,i \ne j} Q_{n,i}\Bigr)$$

where

$$m_n = \frac{\sum_{k=1}^{n}\sum_{i=1}^{d} X_{k,i} A_{k,i}}{n} \quad\text{and}\quad Q_{n,i} = \frac{\sum_{k=1}^{n} X_{k,i} A_{k,i}^2}{n}.$$

In fact, $E\bigl(X_{n+1,i} A_{n+1,i}^2 \mid \mathcal{G}_n\bigr) = Z_{n,i}\,EA_{n+1,i}^2 \xrightarrow{a.s.} Z_{(i)}\,q_i$ for all $i \le d_0$, so that Lemma 2 implies $Q_{n,i} \xrightarrow{a.s.} Z_{(i)}\,q_i$. Similarly, $m_n \xrightarrow{a.s.} m$. Therefore, $V_{n,j} \xrightarrow{a.s.} V_j$.

Finally, Theorem 4 also implies $\sqrt{n}\,\bigl(M^*_{n,j} - Z_{(j)}\bigr) = C^*_{n,j} + D^*_{n,j} \longrightarrow \mathcal{N}(0,\,U_j + V_j)$ stably. So, another asymptotic confidence interval for $Z_{(j)}$ is $M^*_{n,j} \pm \frac{u_{\alpha}}{\sqrt{n}}\sqrt{G_{n,j}}$, where $G_{n,j}$ is a consistent estimator of $U_j + V_j$. One merit of the latter interval is that it does not depend on the initial composition $a_i$, $i = 1,\ldots,d_0$ (provided this is true for $G_{n,j}$ as well).
Example 7. (A statistical use of $C^*_{n,j}$.) Suppose

$$EA_{n,j} = \mu_j \quad\text{and}\quad \mathrm{var}(A_{n,j}) = \sigma_j^2 > 0 \quad\text{for all } n \ge 1 \text{ and } 1 \le j \le d.$$

Suppose also that conditions (2)-(3) hold with some $J \subset \{1,\ldots,d\}$ in the place of $\{1,\ldots,d_0\}$, where $\mathrm{card}(J) > 1$, that is,

$$\mu_r = m > \mu_s \quad\text{whenever } r \in J \text{ and } s \notin J.$$

Both $J$ and $\mathrm{card}(J)$ are unknown, and we aim to test the hypothesis $H_0 : J = J^*$, where $J^* \subset \{1,\ldots,d\}$ and $\mathrm{card}(J^*) > 1$. Note that $U_j$ can be written as

$$U_j = \frac{Z_{(j)}}{m^2}\Bigl((1 - Z_{(j)})^2\,\sigma_j^2 + Z_{(j)} \sum_{i \in J,\,i \ne j} Z_{(i)}\,\sigma_i^2\Bigr), \qquad j \in J.$$

Fix $j \in J^*$. Under $H_0$, a consistent estimator of $U_j$ is

$$U_{n,j} = \frac{Z_{n,j}}{\widehat{m}_n^2\,\bigl(\sum_{i \in J^*} Z_{n,i}\bigr)^4}\Bigl((1 - Z_{n,j})^2\,\widehat{\sigma}^2_{n,j} + Z_{n,j} \sum_{i \in J^*,\,i \ne j} Z_{n,i}\,\widehat{\sigma}^2_{n,i}\Bigr)$$

where

$$\widehat{m}_n = \frac{1}{\mathrm{card}(J^*)}\sum_{i \in J^*} \widehat{m}_{n,i}, \qquad \widehat{m}_{n,i} = \frac{\sum_{k=1}^{n} X_{k,i} A_{k,i}}{\sum_{k=1}^{n} X_{k,i}}, \qquad \widehat{\sigma}^2_{n,i} = \frac{\sum_{k=1}^{n} X_{k,i}\bigl(A_{k,i} - \widehat{m}_{n,i}\bigr)^2}{\sum_{k=1}^{n} X_{k,i}}.$$

A couple of remarks are in order. First,

$$F_n := \sum_{i \in J^*} Z_{n,i} \xrightarrow{a.s.} 1 \quad\text{under } H_0.$$

Indeed, the factor $F_n^{-4}$ has been inserted into the definition of $U_{n,j}$ in order that $K_{n,j}$ fails to converge in distribution to $\mathcal{N}(0,1)$ when $H_0$ is false, where $K_{n,j}$ is defined a few lines below. Second, $\sum_{k=1}^{n} X_{k,i} > 0$ eventually a.s., so that $\widehat{m}_{n,i}$ and $\widehat{\sigma}^2_{n,i}$ are well defined. Similarly, $\widehat{m}_n > 0$ eventually a.s.

Next, defining $C^*_{n,j}$ in the obvious way (i.e., with $J^*$ in the place of $\{1,\ldots,d_0\}$), Theorem 4 implies

$$K_{n,j} := I_{\{U_{n,j} > 0\}}\,\frac{C^*_{n,j}}{\sqrt{U_{n,j}}} \longrightarrow \mathcal{N}(0,1) \quad\text{stably under } H_0.$$

The converse is true as well, i.e., $K_{n,j}$ fails to converge in distribution to $\mathcal{N}(0,1)$ when $H_0$ is false. (This can be proved arguing as in Remark 10; we omit a formal proof.) Thus, an asymptotic critical region for $H_0$, with approximate level $\alpha$, is $\{|K_{n,j}| \ge u_{\alpha}\}$, with $u_{\alpha}$ satisfying $\mathcal{N}(0,1)(u_{\alpha},\infty) = \frac{\alpha}{2}$. In real problems, sometimes, it is known in advance that $j_0 \in J$ for some $j_0 \in J^*$. Then, $j = j_0$ is a natural choice in the previous test. Otherwise, an alternative option is a critical region of the type $\bigcup_{i \in J^*}\{|K_{n,i}| \ge u_i\}$ for suitable $u_i$. This results in a more powerful test but requires the joint limit distribution of $\bigl(K_{n,i} : i \in J^*\bigr)$ under $H_0$. Such a distribution is given in [4] when $J^* = \{1,\ldots,d\}$, and can be easily obtained for arbitrary $J^*$ using the techniques of this paper.
Example 8. (Another statistical use of $C^*_{n,j}$.) As in Example 7 (and under the same assumptions), we aim to test $H_0 : J = J^*$. Contrary to Example 7, however, we are given observations $A_{k,j}$, $1 \le k \le n$, $1 \le j \le d$, but no urn is explicitly assigned. This is a main problem in statistical inference, usually faced by ANOVA techniques and their very many ramifications. A solution to this problem is using $C^*_{n,j}$, as in Example 7, after simulating the $X_{n,j}$. The simulation is not hard. Take in fact an i.i.d. sequence $(Y_n : n \ge 0)$, independent of the $A_{k,j}$, with $Y_0$ uniformly distributed on $(0,1)$. Let $a_i = 1$, $Z_{0,i} = \frac{1}{d}$ for $i = 1,\ldots,d$, and

$$X_{1,j} = I_{\{F_{0,j-1} < Y_0 \le F_{0,j}\}} \quad\text{where } F_{0,j} = \sum_{i=1}^{j} Z_{0,i} \ \text{and } F_{0,0} = 0.$$

By induction, for each $n \ge 1$,

$$X_{n+1,j} = I_{\{F_{n,j-1} < Y_n \le F_{n,j}\}} \quad\text{where } F_{n,j} = \sum_{i=1}^{j} Z_{n,i},$$

$$F_{n,0} = 0 \quad\text{and}\quad Z_{n,i} = \frac{1 + \sum_{k=1}^{n} X_{k,i} A_{k,i}}{d + \sum_{r=1}^{d}\sum_{k=1}^{n} X_{k,r} A_{k,r}}.$$

Now, $H_0$ can be asymptotically tested as in Example 7. In addition, since $A_{k,i}$ is actually observed (unlike Example 7, where only $X_{k,i} A_{k,i}$ is observed), $\widehat{m}_{n,i}$ and $\widehat{\sigma}^2_{n,i}$ can be taken as

$$\widehat{m}_{n,i} = \frac{\sum_{k=1}^{n} A_{k,i}}{n} \quad\text{and}\quad \widehat{\sigma}^2_{n,i} = \frac{\sum_{k=1}^{n}\bigl(A_{k,i} - \widehat{m}_{n,i}\bigr)^2}{n}.$$

Clearly, this procedure needs to be developed and investigated much further. For now, however, it looks (to us) potentially fruitful.
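The simulation step above is mechanical: at each time, draw $Y_n$, invert the cdf $F_{n,\cdot}$, and feed only the drawn color's observed reinforcement into the urn. A sketch (ours; the function name and the row-per-time data layout are assumptions):

```python
import random

def simulate_urn_draws(A_obs, seed=0):
    """Simulate the X_{n,j} of Example 8 from observed reinforcements.

    A_obs[k][i] : observed reinforcement A_{k+1, i+1} for color i at time k+1
    Returns the list of simulated colors (0-based), built with
    Z_{n,i} = (1 + sum_k X_{k,i} A_{k,i}) / (d + sum_r sum_k X_{k,r} A_{k,r}).
    """
    rng = random.Random(seed)
    d = len(A_obs[0])
    num = [1.0] * d        # numerators 1 + sum_k X_{k,i} A_{k,i}
    colors = []
    for row in A_obs:
        den = sum(num)     # = d + total reinforcement actually fed in so far
        Y = rng.random()
        # invert the cdf F_{n,j} = sum_{i<=j} Z_{n,i}
        acc, color = 0.0, d - 1
        for i in range(d):
            acc += num[i] / den
            if Y <= acc:
                color = i
                break
        colors.append(color)
        num[color] += row[color]   # only the drawn color's A enters the urn
    return colors

# three observed rows of reinforcements for d = 2 colors
A_obs = [[1.0, 2.0], [3.0, 1.0], [2.0, 2.0]]
colors = simulate_urn_draws(A_obs)
```

The simulated `colors` can then be paired with `A_obs` and passed to a statistic such as the `K_statistic` of Example 7.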
5. Proof of Theorem 4

The next result, of possible independent interest, is inspired by ideas in [4] and [5].
Proposition 9. Let $\mathcal{F} = (\mathcal{F}_n)$ be an increasing filtration and $(Y_n)$ an $\mathcal{F}$-adapted sequence of real integrable random variables. Suppose $Y_n \xrightarrow{a.s.} Y$ for some random variable $Y$, and $H_n \in \mathcal{F}_n$ are events satisfying $P(H_n^c \text{ i.o.}) = 0$. Then,

$$\sqrt{n}\,(Y_n - Y) \longrightarrow \mathcal{N}(0,V) \quad \mathcal{F}\text{-stably in strong sense},$$

for some random variable $V$, whenever

$$E\Bigl(I_{H_n}\bigl(E(Y_{n+1} \mid \mathcal{F}_n) - Y_n\bigr)^2\Bigr) = o(n^{-3}), \qquad (4)$$

$$\sqrt{n}\,E\Bigl(I_{H_n}\sup_{k \ge n}\bigl|E(Y_{k+1} \mid \mathcal{F}_k) - Y_{k+1}\bigr|\Bigr) \longrightarrow 0, \qquad (5)$$

$$n \sum_{k \ge n} (Y_k - Y_{k+1})^2 \xrightarrow{P} V. \qquad (6)$$
Proof. We base on the following result, which is a consequence of Corollary 7 of [5]. Let $(L_n)$ be an $\mathcal{F}$-martingale such that $L_n \xrightarrow{a.s.} L$. Then, $\sqrt{n}\,(L_n - L) \longrightarrow \mathcal{N}(0,V)$ $\mathcal{F}$-stably in strong sense whenever

$$\text{(i)}\ \lim_n \sqrt{n}\,E\bigl(I_{H_n}\sup_{k \ge n}|L_k - L_{k+1}|\bigr) = 0; \qquad \text{(ii)}\ n\sum_{k \ge n}(L_k - L_{k+1})^2 \xrightarrow{P} V.$$
Next, define the $\mathcal{F}$-martingale

$$L_0 = Y_0, \qquad L_n = Y_n - \sum_{k=0}^{n-1} E\bigl(Y_{k+1} - Y_k \mid \mathcal{F}_k\bigr).$$

Define also $T_n = E\bigl(Y_{n+1} - Y_n \mid \mathcal{F}_n\bigr)$. By (4),

$$\sqrt{n}\sum_{k \ge n} E|I_{H_k} T_k| \le \sqrt{n}\sum_{k \ge n}\sqrt{E\bigl(I_{H_k} T_k^2\bigr)} = \sqrt{n}\sum_{k \ge n} o(k^{-3/2}) \longrightarrow 0. \qquad (7)$$
In particular, $\sum_{k=0}^{\infty} E|I_{H_k} T_k| < \infty$, so that $\sum_{k=0}^{n-1} I_{H_k} T_k$ converges a.s. Since $Y_n$ converges a.s. and $P(I_{H_n} \ne 1 \text{ i.o.}) = 0$,

$$L_n = Y_n - \sum_{k=0}^{n-1} T_k \xrightarrow{a.s.} L \quad\text{for some random variable } L.$$

Next, write

$$(L_n - L) - (Y_n - Y) = \sum_{k \ge n}(L_k - L_{k+1}) - \sum_{k \ge n}(Y_k - Y_{k+1}) = \sum_{k \ge n} T_k.$$
Recalling $\sqrt{n}\sum_{k \ge n}|I_{H_k} T_k| \xrightarrow{P} 0$ (thanks to (7)), one obtains

$$\Bigl|\sqrt{n}\,(L_n - L) - \sqrt{n}\,(Y_n - Y)\Bigr| = \sqrt{n}\,\Bigl|\sum_{k \ge n} T_k\Bigr| \le \sqrt{n}\sum_{k \ge n}|I_{H_k} T_k| + \sqrt{n}\sum_{k \ge n}\bigl|(1 - I_{H_k})\,T_k\bigr| \xrightarrow{P} 0.$$
Thus, it suffices to prove $\sqrt{n}\,(L_n - L) \longrightarrow \mathcal{N}(0,V)$ $\mathcal{F}$-stably in strong sense, that is, to prove conditions (i) and (ii). Condition (i) reduces to (5) after noting that $L_k - L_{k+1} = E(Y_{k+1} \mid \mathcal{F}_k) - Y_{k+1}$.
As to (ii), since $L_k - L_{k+1} = Y_k - Y_{k+1} + T_k$, condition (6) yields

$$n\sum_{k \ge n}(L_k - L_{k+1})^2 = V + n\sum_{k \ge n}\bigl(T_k^2 + 2\,T_k(Y_k - Y_{k+1})\bigr) + o_P(1).$$
By (4), $E\bigl(n\sum_{k \ge n} I_{H_k} T_k^2\bigr) = n\sum_{k \ge n} o(k^{-3}) \longrightarrow 0$. Since $P(I_{H_n} \ne 1 \text{ i.o.}) = 0$, then $n\sum_{k \ge n} T_k^2 \xrightarrow{P} 0$. Because of (6), this also implies

$$\Bigl(n\sum_{k \ge n} T_k\,(Y_k - Y_{k+1})\Bigr)^2 \le n\sum_{k \ge n} T_k^2 \,\cdot\, n\sum_{k \ge n}(Y_k - Y_{k+1})^2 \xrightarrow{P} 0.$$

Therefore, condition (ii) holds and this concludes the proof. $\square$
We next turn to Theorem 4. From now on, it is assumed $d_0 < d$ (the case $d_0 = d$ has been settled in [4]). Recall the notations $S^*_n = \sum_{i=1}^{d_0} N_{n,i}$ and $S_n = \sum_{i=1}^{d} N_{n,i}$. Note also that, by a straightforward calculation,

$$Z^*_{n+1,j} - Z^*_{n,j} = \frac{X_{n+1,j}\,A_{n+1,j}}{S^*_n + A_{n+1,j}} - Z^*_{n,j}\sum_{i=1}^{d_0}\frac{X_{n+1,i}\,A_{n+1,i}}{S^*_n + A_{n+1,i}}.$$
Proof of Theorem 4. The proof is split into two steps.

(i) $D^*_{n,j} \longrightarrow \mathcal{N}(0,V_j)$ $\mathcal{G}$-stably in strong sense.

By Lemma 3, $Z^*_{n,j} = \frac{Z_{n,j}}{\sum_{i=1}^{d_0} Z_{n,i}} \xrightarrow{a.s.} Z_{(j)}$. Further, $P(2S^*_n < nm \text{ i.o.}) = 0$, since $\frac{S^*_n}{n} \xrightarrow{a.s.} m$. Hence, by Proposition 9, it suffices to prove conditions (4)-(5)-(6) with

$$\mathcal{F}_n = \mathcal{G}_n, \quad Y_n = Z^*_{n,j}, \quad Y = Z_{(j)}, \quad H_n = \{2S^*_n \ge nm\}, \quad V = V_j.$$
Conditions (4) and (5) trivially hold. As to (4), note that

$$Z^*_{n,j}\sum_{i=1}^{d_0} Z_{n,i} = Z_{n,j}\sum_{i=1}^{d_0} Z^*_{n,i} = Z_{n,j}.$$
Therefore,

$$E\bigl(Z^*_{n+1,j} - Z^*_{n,j} \mid \mathcal{G}_n\bigr) = Z_{n,j}\,E\Bigl(\frac{A_{n+1,j}}{S^*_n + A_{n+1,j}} \Bigm| \mathcal{G}_n\Bigr) - Z^*_{n,j}\sum_{i=1}^{d_0} Z_{n,i}\,E\Bigl(\frac{A_{n+1,i}}{S^*_n + A_{n+1,i}} \Bigm| \mathcal{G}_n\Bigr)$$
$$= -Z_{n,j}\,E\Bigl(\frac{A_{n+1,j}^2}{S^*_n\,(S^*_n + A_{n+1,j})} \Bigm| \mathcal{G}_n\Bigr) + Z^*_{n,j}\sum_{i=1}^{d_0} Z_{n,i}\,E\Bigl(\frac{A_{n+1,i}^2}{S^*_n\,(S^*_n + A_{n+1,i})} \Bigm| \mathcal{G}_n\Bigr),$$

so that

$$I_{H_n}\,\bigl|E\bigl(Z^*_{n+1,j} - Z^*_{n,j} \mid \mathcal{G}_n\bigr)\bigr| \le I_{H_n}\,\frac{d_0\,\beta^2}{(S^*_n)^2} \le \frac{4\,d_0\,\beta^2}{m^2}\,\frac{1}{n^2}.$$
As to (5),

$$\bigl|E\bigl(Z^*_{k+1,j} \mid \mathcal{G}_k\bigr) - Z^*_{k+1,j}\bigr| \le \frac{2\beta}{S^*_k} + N_{k,j}\,\Bigl|E\Bigl(\frac{1}{S^*_{k+1}} \Bigm| \mathcal{G}_k\Bigr) - \frac{1}{S^*_{k+1}}\Bigr| \le \frac{2\beta}{S^*_k} + N_{k,j}\Bigl(\frac{1}{S^*_k} - \frac{1}{S^*_k + \beta}\Bigr) \le \frac{3\beta}{S^*_k},$$

so that

$$I_{H_n}\sup_{k \ge n}\bigl|E\bigl(Z^*_{k+1,j} \mid \mathcal{G}_k\bigr) - Z^*_{k+1,j}\bigr| \le I_{H_n}\,\frac{3\beta}{S^*_n} \le \frac{6\beta}{m}\,\frac{1}{n}.$$
Finally, let us turn to (6). For every $i \in \{1,\ldots,d_0\}$,

$$n^2\,E\Bigl(\frac{A_{n+1,i}^2}{(S^*_n + A_{n+1,i})^2} \Bigm| \mathcal{G}_n\Bigr) \le \frac{n^2\,EA_{n+1,i}^2}{(S^*_n)^2} \xrightarrow{a.s.} \frac{q_i}{m^2}$$

and

$$n^2\,E\Bigl(\frac{A_{n+1,i}^2}{(S^*_n + A_{n+1,i})^2} \Bigm| \mathcal{G}_n\Bigr) \ge \frac{n^2\,EA_{n+1,i}^2}{(S^*_n + \beta)^2} \xrightarrow{a.s.} \frac{q_i}{m^2}.$$

Since $X_{n+1,r}\,X_{n+1,s} = 0$ for $r \ne s$, it follows that

$$n^2\,E\bigl((Z^*_{n+1,j} - Z^*_{n,j})^2 \mid \mathcal{G}_n\bigr) = n^2\,Z_{n,j}\,(1 - Z^*_{n,j})^2\,E\Bigl(\frac{A_{n+1,j}^2}{(S^*_n + A_{n+1,j})^2} \Bigm| \mathcal{G}_n\Bigr) \,+$$
$$+\; n^2\,(Z^*_{n,j})^2 \sum_{i \le d_0,\,i \ne j} Z_{n,i}\,E\Bigl(\frac{A_{n+1,i}^2}{(S^*_n + A_{n+1,i})^2} \Bigm| \mathcal{G}_n\Bigr)$$
$$\xrightarrow{a.s.} Z_{(j)}\,(1 - Z_{(j)})^2\,\frac{q_j}{m^2} + Z_{(j)}^2 \sum_{i \le d_0,\,i \ne j} Z_{(i)}\,\frac{q_i}{m^2} = V_j.$$
Let $R_{n+1} = (n+1)^2\,I_{H_n}\,(Z^*_{n+1,j} - Z^*_{n,j})^2$. Since $H_n \in \mathcal{G}_n$ and $P(I_{H_n} \ne 1 \text{ i.o.}) = 0$, then $E(R_{n+1} \mid \mathcal{G}_n) \xrightarrow{a.s.} V_j$. On noting that $|Z^*_{n+1,j} - Z^*_{n,j}| \le \frac{d_0\,\beta}{S^*_n}$,

$$\frac{ER_n^2}{n^2} \le (d_0\,\beta)^4\,n^2\,E\Bigl(\frac{I_{H_{n-1}}}{(S^*_{n-1})^4}\Bigr) \le \Bigl(\frac{2\,d_0\,\beta}{m}\Bigr)^4 \frac{n^2}{(n-1)^4}.$$

By Lemma 2 (applied with $Y_n = R_n$),

$$n\sum_{k \ge n} I_{H_k}\,(Z^*_{k+1,j} - Z^*_{k,j})^2 = \frac{n}{n+1}\,(n+1)\sum_{k \ge n+1}\frac{R_k}{k^2} \xrightarrow{a.s.} V_j.$$

Since $P(I_{H_n} \ne 1 \text{ i.o.}) = 0$, then $n\sum_{k \ge n}(Z^*_{k+1,j} - Z^*_{k,j})^2 \xrightarrow{a.s.} V_j$, that is, condition (6) holds.
(ii) $C^*_{n,j} \longrightarrow \mathcal{N}(0,U_j)$ stably.

Define $T_{n,i} = \sum_{k=1}^{n} X_{k,i}$, $T_{0,i} = 0$, and note that

$$C^*_{n,j} = -\frac{\sqrt{n}\,Z^*_{n,j}}{1 + \sum_{i=1}^{d_0} T_{n,i}} + \frac{n}{1 + \sum_{i=1}^{d_0} T_{n,i}}\;\frac{T_{n,j} - Z^*_{n,j}\sum_{i=1}^{d_0} T_{n,i}}{\sqrt{n}}$$

and

$$T_{n,j} - Z^*_{n,j}\sum_{i=1}^{d_0} T_{n,i} = \sum_{k=1}^{n}\Bigl(X_{k,j} - Z^*_{k,j}\sum_{i=1}^{d_0} T_{k,i} + Z^*_{k-1,j}\sum_{i=1}^{d_0} T_{k-1,i}\Bigr)$$
$$= \sum_{k=1}^{n}\Bigl(X_{k,j} - Z^*_{k-1,j}\sum_{i=1}^{d_0} X_{k,i} - \sum_{i=1}^{d_0} T_{k,i}\,\bigl(Z^*_{k,j} - Z^*_{k-1,j}\bigr)\Bigr).$$
Define also $H_n = \{2S^*_n \ge nm\}$ and

$$C^{**}_{n,j} = \frac{1}{\sqrt{n}}\sum_{k=1}^{n} I_{H_{k-1}}\Bigl(X_{k,j} - Z^*_{k-1,j}\sum_{i=1}^{d_0} X_{k,i} + \sum_{i=1}^{d_0} T_{k-1,i}\,\bigl(E(Z^*_{k,j} \mid \mathcal{G}_{k-1}) - Z^*_{k,j}\bigr)\Bigr).$$
Recalling (from point (i)) that $P(I_{H_n} \ne 1 \text{ i.o.}) = 0$, $\lim_n \frac{\sum_{i=1}^{d_0} T_{n,i}}{n} = 1$ a.s., and $I_{H_{k-1}}\bigl|E\bigl(Z^*_{k,j} - Z^*_{k-1,j} \mid \mathcal{G}_{k-1}\bigr)\bigr| \le \frac{c}{(k-1)^2}$ a.s. for some constant $c$, it is not hard to see that $C^*_{n,j} \longrightarrow N$ stably if and only if $C^{**}_{n,j} \longrightarrow N$ stably, for any kernel $N$.
We next prove $C^{**}_{n,j} \longrightarrow \mathcal{N}(0,U_j)$ stably. For $k = 1,\ldots,n$, let $\mathcal{F}_{n,k} = \mathcal{G}_k$ and

$$Y_{n,k} = \frac{I_{H_{k-1}}\Bigl(X_{k,j} - Z^*_{k-1,j}\sum_{i=1}^{d_0} X_{k,i} + \sum_{i=1}^{d_0} T_{k-1,i}\,\bigl(E(Z^*_{k,j} \mid \mathcal{G}_{k-1}) - Z^*_{k,j}\bigr)\Bigr)}{\sqrt{n}}.$$
Since $E(Y_{n,k} \mid \mathcal{F}_{n,k-1}) = 0$ a.s., the martingale CLT (see Theorem 3.2 of [6]) applies. As a consequence, $C^{**}_{n,j} = \sum_{k=1}^{n} Y_{n,k} \longrightarrow \mathcal{N}(0,U_j)$ stably provided

$$\sup_n E\Bigl(\max_{1 \le k \le n} Y_{n,k}^2\Bigr) < \infty; \qquad \max_{1 \le k \le n}|Y_{n,k}| \xrightarrow{P} 0; \qquad \sum_{k=1}^{n} Y_{n,k}^2 \xrightarrow{P} U_j.$$
As shown in point (i), $I_{H_{k-1}}\bigl|E(Z^*_{k,j} \mid \mathcal{G}_{k-1}) - Z^*_{k,j}\bigr| \le \frac{c_1}{k-1}$ a.s. for a suitable constant $c_1$. Hence, the first two conditions follow from

$$Y_{n,k}^2 \le \frac{2}{n} + \frac{2}{n}\,I_{H_{k-1}}\,(k-1)^2\,\bigl(E(Z^*_{k,j} \mid \mathcal{G}_{k-1}) - Z^*_{k,j}\bigr)^2 \le \frac{2\,(1 + c_1^2)}{n} \quad\text{a.s.}$$
To conclude the proof, it remains to see that $\sum_{k=1}^{n} Y_{n,k}^2 \xrightarrow{P} U_j$. After some (long) algebra, the latter condition is shown equivalent to

$$\frac{1}{n}\sum_{k=1}^{n} I_{H_{k-1}}\Bigl(X_{k,j} - Z^*_{k-1,j} + k\,\bigl(Z^*_{k-1,j} - Z^*_{k,j}\bigr)\Bigr)^2 \xrightarrow{P} U_j. \qquad (8)$$
Let $R_{n+1} = (n+1)^2\,I_{H_n}\,(Z^*_{n+1,j} - Z^*_{n,j})^2$. Since $E(R_{n+1} \mid \mathcal{G}_n) \xrightarrow{a.s.} V_j$, as shown in point (i), Lemma 2 implies

$$\frac{1}{n}\sum_{k=1}^{n} I_{H_{k-1}}\,k^2\,\bigl(Z^*_{k-1,j} - Z^*_{k,j}\bigr)^2 \xrightarrow{a.s.} V_j.$$

A direct calculation shows that

$$\frac{1}{n}\sum_{k=1}^{n} I_{H_{k-1}}\,\bigl(X_{k,j} - Z^*_{k-1,j}\bigr)^2 \xrightarrow{a.s.} Z_{(j)}\,(1 - Z_{(j)}).$$
Finally, observe the following facts:

$$\bigl(Z^*_{n,j} - Z^*_{n+1,j}\bigr)\bigl(X_{n+1,j} - Z^*_{n,j}\bigr) = -(1 - Z^*_{n,j})\,\frac{X_{n+1,j}\,A_{n+1,j}}{S^*_n + A_{n+1,j}} - Z^*_{n,j}\,\bigl(Z^*_{n,j} - Z^*_{n+1,j}\bigr),$$

$$(n+1)\,Z^*_{n,j}\,I_{H_n}\,\bigl|E\bigl(Z^*_{n,j} - Z^*_{n+1,j} \mid \mathcal{G}_n\bigr)\bigr| \le \frac{c\,(n+1)}{n^2} \xrightarrow{a.s.} 0,$$

$$(n+1)\,E\Bigl(\frac{X_{n+1,j}\,A_{n+1,j}}{S^*_n + A_{n+1,j}} \Bigm| \mathcal{G}_n\Bigr) \le \frac{n+1}{S^*_n}\,Z_{n,j}\,EA_{n+1,j} \xrightarrow{a.s.} Z_{(j)},$$

$$(n+1)\,E\Bigl(\frac{X_{n+1,j}\,A_{n+1,j}}{S^*_n + A_{n+1,j}} \Bigm| \mathcal{G}_n\Bigr) \ge \frac{n+1}{S^*_n + \beta}\,Z_{n,j}\,EA_{n+1,j} \xrightarrow{a.s.} Z_{(j)}.$$
Therefore,

$$(n+1)\,I_{H_n}\,E\bigl(\bigl(Z^*_{n,j} - Z^*_{n+1,j}\bigr)\bigl(X_{n+1,j} - Z^*_{n,j}\bigr) \mid \mathcal{G}_n\bigr) \xrightarrow{a.s.} -Z_{(j)}\,(1 - Z_{(j)}),$$

and Lemma 2 again implies

$$\frac{2}{n}\sum_{k=1}^{n} I_{H_{k-1}}\,k\,\bigl(Z^*_{k-1,j} - Z^*_{k,j}\bigr)\bigl(X_{k,j} - Z^*_{k-1,j}\bigr) \xrightarrow{a.s.} -2\,Z_{(j)}\,(1 - Z_{(j)}).$$

Thus condition (8) holds, and this concludes the proof. $\square$
Remark 10. Point (ii) admits a simpler proof in case $EA_{k,j} = m$ for all $k \ge 1$ and $1 \le j \le d_0$. This happens, in particular, if the sequence $(A_{n,1},\ldots,A_{n,d})$ is i.i.d. Given the real numbers $b_1,\ldots,b_{d_0}$, define

$$Y_{n,k} = \frac{1}{\sqrt{n}}\sum_{j=1}^{d_0} b_j\,X_{k,j}\,\bigl(A_{k,j} - EA_{k,j}\bigr), \qquad \mathcal{F}_{n,k} = \mathcal{G}_k, \quad k = 1,\ldots,n.$$
By Lemma 2, $\sum_{k=1}^{n} Y_{n,k}^2 \xrightarrow{a.s.} \sum_{j=1}^{d_0} b_j^2\,(q_j - m^2)\,Z_{(j)} := L$. Thus, the martingale CLT implies $\sum_{k=1}^{n} Y_{n,k} \longrightarrow \mathcal{N}(0,L)$ stably. Since $b_1,\ldots,b_{d_0}$ are arbitrary constants,

$$\Bigl(\frac{\sum_{k=1}^{n} X_{k,j}\,(A_{k,j} - EA_{k,j})}{\sqrt{n}} : j = 1,\ldots,d_0\Bigr) \longrightarrow \mathcal{N}_{d_0}(0,\Sigma) \quad\text{stably},$$
where $\Sigma$ is the diagonal matrix with $\sigma_{j,j} = (q_j - m^2)\,Z_{(j)}$. Let $T_{n,j} = \sum_{k=1}^{n} X_{k,j}$. Since $EA_{k,j} = m$ and $\frac{T_{n,j}}{n} \xrightarrow{a.s.} Z_{(j)} > 0$ for all $j \le d_0$, one also obtains

$$\Bigl(\sqrt{n}\,\Bigl(\frac{\sum_{k=1}^{n} X_{k,j}\,A_{k,j}}{T_{n,j}} - m\Bigr) : j = 1,\ldots,d_0\Bigr) \longrightarrow \mathcal{N}_{d_0}(0,\Gamma) \quad\text{stably},$$

where $\Gamma$ is diagonal with $\gamma_{j,j} = \frac{q_j - m^2}{Z_{(j)}}$. Next, write
$$\widehat{C}_{n,j} := \sqrt{n}\,\Bigl(\frac{T_{n,j}}{\sum_{i=1}^{d_0} T_{n,i}} - \frac{\sum_{k=1}^{n} X_{k,j}\,A_{k,j}}{\sum_{i=1}^{d_0}\sum_{k=1}^{n} X_{k,i}\,A_{k,i}}\Bigr)$$
$$= \frac{T_{n,j}}{\sum_{i=1}^{d_0}\sum_{k=1}^{n} X_{k,i}\,A_{k,i}}\;\frac{\sum_{i \le d_0,\,i \ne j} T_{n,i}}{\sum_{i=1}^{d_0} T_{n,i}}\;\sqrt{n}\,\Bigl(m - \frac{\sum_{k=1}^{n} X_{k,j}\,A_{k,j}}{T_{n,j}}\Bigr) \,+$$
$$+\; \frac{T_{n,j}}{\sum_{i=1}^{d_0}\sum_{k=1}^{n} X_{k,i}\,A_{k,i}}\;\frac{1}{\sum_{i=1}^{d_0} T_{n,i}}\;\sum_{i \le d_0,\,i \ne j} T_{n,i}\,\sqrt{n}\,\Bigl(\frac{\sum_{k=1}^{n} X_{k,i}\,A_{k,i}}{T_{n,i}} - m\Bigr).$$
Clearly, $C^*_{n,j} - \widehat{C}_{n,j} \xrightarrow{a.s.} 0$. To conclude the proof, it suffices noting that $\widehat{C}_{n,j}$ converges stably to the Gaussian kernel with mean 0 and variance

$$\Bigl(\frac{Z_{(j)}\,(1 - Z_{(j)})}{m}\Bigr)^2\,\frac{q_j - m^2}{Z_{(j)}} \;+\; \frac{Z_{(j)}^2}{m^2}\sum_{i \le d_0,\,i \ne j} Z_{(i)}^2\,\frac{q_i - m^2}{Z_{(i)}} \;=\; U_j.$$
APPENDIX
Proof of Lemma 3. We first note that $N_{n,j} \xrightarrow{a.s.} \infty$ for each $j \le d_0$. Arguing as in the proof of Proposition 2.3 of [9], in fact, $\sum_{n=1}^{\infty} X_{n,j} = \infty$ a.s. Hence, $\sum_{k=1}^{n} X_{k,j}\,EA_{k,j} \xrightarrow{a.s.} \infty$, and $N_{n,j} \xrightarrow{a.s.} \infty$ follows from the fact that

$$L_n = N_{n,j} - \Bigl(a_j + \sum_{k=1}^{n} X_{k,j}\,EA_{k,j}\Bigr) = \sum_{k=1}^{n} X_{k,j}\,\bigl(A_{k,j} - EA_{k,j}\bigr)$$

is a $\mathcal{G}$-martingale such that $|L_{n+1} - L_n| \le \beta$ for all $n$.
We also need the following fact.

CLAIM: $\tau_{n,j} = \frac{N_{n,j}}{(S^*_n)^{\lambda}}$ converges a.s. for all $j > d_0$ and $\lambda \in \bigl(\frac{\lambda_0}{m},\,1\bigr)$.
On noting that $(1-x)^{\lambda} \le 1 - \lambda x$ for $0 \le x \le 1$ and $\sum_{i=1}^{d_0} Z_{n,i} = \frac{S^*_n}{S_n}$, one can estimate as follows:

$$E\Bigl(\frac{\tau_{n+1,j}}{\tau_{n,j}} - 1 \Bigm| \mathcal{G}_n\Bigr) = E\Bigl(\frac{N_{n,j} + X_{n+1,j}\,A_{n+1,j}}{N_{n,j}}\,\Bigl(\frac{S^*_n}{S^*_{n+1}}\Bigr)^{\lambda} \Bigm| \mathcal{G}_n\Bigr) - 1$$
$$\le \frac{Z_{n,j}\,EA_{n+1,j}}{N_{n,j}} + E\Bigl(\Bigl(\frac{S^*_n}{S^*_{n+1}}\Bigr)^{\lambda} - 1 \Bigm| \mathcal{G}_n\Bigr)$$
$$\le \frac{EA_{n+1,j}}{S_n} - \lambda \sum_{i=1}^{d_0} E\Bigl(\frac{X_{n+1,i}\,A_{n+1,i}}{S^*_{n+1}} \Bigm| \mathcal{G}_n\Bigr) \le \frac{EA_{n+1,j}}{S_n} - \lambda \sum_{i=1}^{d_0} \frac{Z_{n,i}\,EA_{n+1,i}}{S^*_n + \beta}$$
$$= \frac{EA_{n+1,j}}{S_n} - \frac{\lambda\,EA_{n+1,1}\,S^*_n}{S_n\,(S^*_n + \beta)} = \frac{1}{S_n}\Bigl(EA_{n+1,j} - \lambda\,EA_{n+1,1}\,\frac{S^*_n}{S^*_n + \beta}\Bigr) \quad\text{a.s.}$$
Since $\limsup_n \bigl(EA_{n+1,j} - \lambda\,EA_{n+1,1}\bigr) \le \lambda_0 - \lambda m < 0$, there are $\epsilon > 0$ and $n_0 \ge 1$ such that $EA_{n+1,j} - \lambda\,EA_{n+1,1} \le -\epsilon$ whenever $n \ge n_0$. Thus,

$$E\bigl(\tau_{n+1,j} - \tau_{n,j} \mid \mathcal{G}_n\bigr) = \tau_{n,j}\,E\Bigl(\frac{\tau_{n+1,j}}{\tau_{n,j}} - 1 \Bigm| \mathcal{G}_n\Bigr) \le 0 \quad\text{a.s. whenever } n \ge n_0 \text{ and } S^*_n \ge c,$$

for a suitable constant $c$. Since $S^*_n \ge N_{n,1} \xrightarrow{a.s.} \infty$, thus, $(\tau_{n,j})$ is eventually a non-negative $\mathcal{G}$-super-martingale. Hence, $\tau_{n,j}$ converges a.s.
Let $\lambda \in \bigl(\frac{\lambda_0}{m},\,1\bigr)$. A first consequence of the Claim is that $Z_{n,j} \le \frac{\tau_{n,j}}{S_n^{1-\lambda}} \xrightarrow{a.s.} 0$ for each $j > d_0$. Letting $Y_n = \sum_{i=1}^{d_0} X_{n,i}\,A_{n,i}$, this implies

$$E(Y_{n+1} \mid \mathcal{G}_n) = \sum_{i=1}^{d_0} Z_{n,i}\,EA_{n+1,i} = EA_{n+1,1}\Bigl(1 - \sum_{i=d_0+1}^{d} Z_{n,i}\Bigr) \xrightarrow{a.s.} m.$$

Thus, Lemma 2 yields $\frac{S^*_n}{n} \xrightarrow{a.s.} m$. Similarly, $\frac{S_n}{n} \xrightarrow{a.s.} m$. Applying the Claim again,

$$n^{1-\lambda}\,Z_{n,j} = \Bigl(\frac{n}{S_n}\Bigr)^{1-\lambda}\Bigl(\frac{S^*_n}{S_n}\Bigr)^{\lambda}\,\tau_{n,j} \quad\text{converges a.s. for each } j > d_0.$$

Since $j > d_0$ and $\lambda \in \bigl(\frac{\lambda_0}{m},\,1\bigr)$ are arbitrary, it follows that $n^{1-\lambda}\sum_{j=d_0+1}^{d} Z_{n,j} \xrightarrow{a.s.} 0$ for each $\lambda > \frac{\lambda_0}{m}$.
Next, fix $j \le d_0$. For $Z_{n,j}$ to converge a.s., it suffices that

$$\sum_n E\bigl(Z_{n+1,j} - Z_{n,j} \mid \mathcal{G}_n\bigr) \quad\text{and}\quad \sum_n E\bigl((Z_{n+1,j} - Z_{n,j})^2 \mid \mathcal{G}_n\bigr) \quad\text{converge a.s.};$$

see Lemma 3.2 of [11]. Since

$$Z_{n+1,j} - Z_{n,j} = \frac{X_{n+1,j}\,A_{n+1,j}}{S_n + A_{n+1,j}} - Z_{n,j}\sum_{i=1}^{d}\frac{X_{n+1,i}\,A_{n+1,i}}{S_n + A_{n+1,i}},$$

then $|Z_{n+1,j} - Z_{n,j}| \le \frac{d\,\beta}{S_n}$. Hence,

$$\sum_n E\bigl((Z_{n+1,j} - Z_{n,j})^2 \mid \mathcal{G}_n\bigr) \le d^2\,\beta^2 \sum_n \frac{1}{n^2}\Bigl(\frac{n}{S_n}\Bigr)^2 < \infty \quad\text{a.s.}$$
Moreover,

$$E\bigl(Z_{n+1,j} - Z_{n,j} \mid \mathcal{G}_n\bigr) = Z_{n,j}\,E\Bigl(\frac{A_{n+1,j}}{S_n + A_{n+1,j}} \Bigm| \mathcal{G}_n\Bigr) - Z_{n,j}\sum_{i=1}^{d} Z_{n,i}\,E\Bigl(\frac{A_{n+1,i}}{S_n + A_{n+1,i}} \Bigm| \mathcal{G}_n\Bigr)$$
$$= -Z_{n,j}\,E\Bigl(\frac{A_{n+1,j}^2}{S_n\,(S_n + A_{n+1,j})} \Bigm| \mathcal{G}_n\Bigr) + Z_{n,j}\sum_{i=1}^{d} Z_{n,i}\,E\Bigl(\frac{A_{n+1,i}^2}{S_n\,(S_n + A_{n+1,i})} \Bigm| \mathcal{G}_n\Bigr) \,+$$
$$+\; Z_{n,j}\,\frac{EA_{n+1,j}}{S_n} - Z_{n,j}\sum_{i=1}^{d} Z_{n,i}\,\frac{EA_{n+1,i}}{S_n} \quad\text{a.s., and}$$

$$EA_{n+1,j} - \sum_{i=1}^{d} Z_{n,i}\,EA_{n+1,i} = EA_{n+1,1}\sum_{i=d_0+1}^{d} Z_{n,i} - \sum_{i=d_0+1}^{d} Z_{n,i}\,EA_{n+1,i}.$$
Therefore, $\sum_n E\bigl(Z_{n+1,j} - Z_{n,j} \mid \mathcal{G}_n\bigr)$ converges a.s., since

$$\bigl|E\bigl(Z_{n+1,j} - Z_{n,j} \mid \mathcal{G}_n\bigr)\bigr| \le \frac{2\,\beta^2}{S_n^2} + \frac{2\,\beta\,\sum_{i=d_0+1}^{d} Z_{n,i}}{S_n} = o(n^{\lambda-2}) \quad\text{a.s. for each } \lambda \in \Bigl(\frac{\lambda_0}{m},\,1\Bigr).$$
Thus, $Z_{n,j} \xrightarrow{a.s.} Z_{(j)}$ for some random variable $Z_{(j)}$. To conclude the proof, we let $Y_{n,i} = \log\frac{Z_{n,i}}{Z_{n,1}}$ and prove that

$$\sum_n E\bigl(Y_{n+1,i} - Y_{n,i} \mid \mathcal{G}_n\bigr) \quad\text{and}\quad \sum_n E\bigl((Y_{n+1,i} - Y_{n,i})^2 \mid \mathcal{G}_n\bigr) \quad\text{converge a.s. whenever } i \le d_0.$$

In this case, in fact, $\log\frac{Z_{n,i}}{Z_{n,1}}$ converges a.s. for each $i \le d_0$, and this implies $Z_{(i)} > 0$ a.s. for each $i \le d_0$.
Since $Y_{n+1,i} - Y_{n,i} = X_{n+1,i}\,\log\Bigl(1 + \frac{A_{n+1,i}}{N_{n,i}}\Bigr) - X_{n+1,1}\,\log\Bigl(1 + \frac{A_{n+1,1}}{N_{n,1}}\Bigr)$, then

$$E\bigl(Y_{n+1,i} - Y_{n,i} \mid \mathcal{G}_n\bigr) = Z_{n,i}\,E\Bigl(\log\Bigl(1 + \frac{A_{n+1,i}}{N_{n,i}}\Bigr) \Bigm| \mathcal{G}_n\Bigr) - Z_{n,1}\,E\Bigl(\log\Bigl(1 + \frac{A_{n+1,1}}{N_{n,1}}\Bigr) \Bigm| \mathcal{G}_n\Bigr) \quad\text{a.s.}$$
Since $EA_{n+1,i} = EA_{n+1,1}$, a second-order Taylor expansion of $x \mapsto \log(1+x)$ yields

$$\bigl|E\bigl(Y_{n+1,i} - Y_{n,i} \mid \mathcal{G}_n\bigr)\bigr| \le \frac{\beta^2}{S_n}\Bigl(\frac{1}{N_{n,i}} + \frac{1}{N_{n,1}}\Bigr) \quad\text{a.s.}$$
A quite similar estimate holds for $E\bigl((Y_{n+1,i} - Y_{n,i})^2 \mid \mathcal{G}_n\bigr)$. Thus, it suffices to see that

$$\sum_n \frac{1}{S_n\,N_{n,i}} < \infty \quad\text{a.s. for each } i \le d_0.$$
Define $R_{n,i} = \frac{(S^*_n)^u}{N_{n,i}}$, where $u \in (0,1)$ and $i \le d_0$. Since $(1+x)^u \le 1 + ux$ for $x \ge 0$, one can estimate as

$$E\Bigl(\frac{R_{n+1,i}}{R_{n,i}} - 1 \Bigm| \mathcal{G}_n\Bigr) = E\Bigl(\Bigl(\frac{S^*_{n+1}}{S^*_n}\Bigr)^u - 1 \Bigm| \mathcal{G}_n\Bigr) - E\Bigl(\Bigl(\frac{S^*_{n+1}}{S^*_n}\Bigr)^u \frac{X_{n+1,i}\,A_{n+1,i}}{N_{n,i} + A_{n+1,i}} \Bigm| \mathcal{G}_n\Bigr)$$
$$\le u\,E\Bigl(\frac{S^*_{n+1} - S^*_n}{S^*_n} \Bigm| \mathcal{G}_n\Bigr) - E\Bigl(\frac{X_{n+1,i}\,A_{n+1,i}}{N_{n,i} + \beta} \Bigm| \mathcal{G}_n\Bigr)$$
$$= \frac{u}{S^*_n}\sum_{p=1}^{d_0} Z_{n,p}\,EA_{n+1,p} - \frac{Z_{n,i}\,EA_{n+1,i}}{N_{n,i} + \beta} = \frac{EA_{n+1,1}}{S_n}\Bigl(u - \frac{N_{n,i}}{N_{n,i} + \beta}\Bigr) \quad\text{a.s.}$$

As in the proof of the Claim, thus,

$$E\bigl(R_{n+1,i} - R_{n,i} \mid \mathcal{G}_n\bigr) = R_{n,i}\,E\Bigl(\frac{R_{n+1,i}}{R_{n,i}} - 1 \Bigm| \mathcal{G}_n\Bigr) \le 0 \quad\text{a.s. whenever } N_{n,i} \ge c,$$

for a suitable constant $c$. Since $N_{n,i} \xrightarrow{a.s.} \infty$, then $(R_{n,i})$ is eventually a non-negative $\mathcal{G}$-super-martingale, so that $R_{n,i}$ converges a.s. Hence,

$$\sum_n \frac{1}{S_n\,N_{n,i}} = \sum_n \frac{R_{n,i}}{S_n\,(S^*_n)^u} = \sum_n R_{n,i}\,\frac{n}{S_n}\Bigl(\frac{n}{S^*_n}\Bigr)^u \frac{1}{n^{1+u}} < \infty \quad\text{a.s.}$$

This concludes the proof. $\square$
References
[1] Aletti G., May C. and Secchi P. (2008) A central limit theorem, and related results, for a two-color randomly reinforced urn, Preprint, currently available at: arXiv:math.PR/0811.2097v1.
[2] Bai Z.D. and Hu F. (2005) Asymptotics in randomized urn models, Ann. Appl. Probab., 15, 914-940.
[3] Berti P., Pratelli L. and Rigo P. (2004) Limit theorems for a class of identically distributed random variables, Ann. Probab., 32, 2029-2052.
[4] Berti P., Crimaldi I., Pratelli L. and Rigo P. (2009) A central limit theorem and its applications to multicolor randomly reinforced urns, submitted, currently available at: http://arxiv.org/abs/0904.0932.
[5] Crimaldi I., Letta G. and Pratelli L. (2007) A strong form of stable convergence, Sem. de Probab. XL, LNM, 1899, 203-225.
[6] Hall P. and Heyde C.C. (1980) Martingale limit theory and its application, Academic Press.
[7] Janson S. (2004) Functional limit theorems for multitype branching processes and generalized Polya urns, Stoch. Proc. Appl., 110, 177-245.
[8] Janson S. (2005) Limit theorems for triangular urn schemes, Probab. Theo. Rel. Fields, 134, 417-452.
[9] May C. and Flournoy N. (2009) Asymptotics in response-adaptive designs generated by a two-color, randomly reinforced urn, Ann. Statist., 37, 1058-1078.
[10] Muliere P., Paganoni A.M. and Secchi P. (2006) A randomly reinforced urn, J. Statist. Plann. Inference, 136, 1853-1874.
[11] Pemantle R. and Volkov S. (1999) Vertex-reinforced random walk on Z has finite range, Ann. Probab., 27, 1368-1388.
[12] Pemantle R. (2007) A survey of random processes with reinforcement, Probab. Surveys, 4, 1-79.
Patrizia Berti, Dipartimento di Matematica Pura ed Applicata "G. Vitali", Universita' di Modena e Reggio-Emilia, via Campi 213/B, 41100 Modena, Italy