Central limit theorems for variances and correlation coefficients

E. Omey and S. Van Gulck
HUB, Stormstraat 2, 1000 Brussels, Belgium
{edward.omey, stefan.vangulck}@hubrussel.be

March 2008
Abstract

In many textbooks the central limit theorem plays a prominent role. In studying confidence intervals for the mean $\mu$, for example, the use of the central limit theorem is fully exploited: for large samples from an arbitrary distribution with finite second moment, we can always construct confidence intervals and test hypotheses concerning $\mu$. In the same textbooks, however, the treatment of the variance $\sigma^2$ and the correlation coefficient $\rho$ is usually restricted to samples from normal distributions.

In this paper we give a general and simple central limit approach to these parameters and show that it is convenient, but not necessary, to restrict attention to normal samples. Among others we discuss central limit theorems for the sample variance $s^2$, the sample correlation coefficient $r$ and the ratio of sample variances $s_2^2/s_1^2$, for paired and for unpaired samples.
1 Introduction

Let $X_1, X_2, \ldots, X_n$ denote a sample from $X \sim A(\mu, \sigma^2)$, where $A$ is an arbitrary distribution with $\mu = E(X)$ and $\sigma^2 = Var(X)$. The sample mean is given by $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$. It is well known that $E(\bar{X}) = \mu$ and that $Var(\bar{X}) = \sigma^2/n$. To calculate probabilities concerning $\bar{X}$ is a more complicated problem. For small samples there are not many distributions for which the distribution of $\bar{X}$ is known. For large samples we can use the central limit theorem. The central limit theorem for $\bar{X}$ states that as $n \to \infty$, we have
$$\sqrt{n}\,\frac{\bar{X} - \mu}{\sigma} \overset{d}{\Longrightarrow} Z \sim N(0,1),$$
i.e. we have
$$P\Big(\sqrt{n}\,\frac{\bar{X} - \mu}{\sigma} \le x\Big) \to P(Z \le x).$$
We use the notation $\bar{X} \approx N(\mu, \sigma^2/n)$. In many cases this approximation works sufficiently well.
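To illustrate the quality of this approximation, a minimal simulation sketch can be used; it relies on numpy, and the exponential parent distribution and all variable names below are our own choices, not part of the text:

    # Minimal Monte Carlo sketch of the normal approximation for the sample mean.
    # Assumption (not in the paper): X ~ Exponential(1), so mu = sigma = 1.
    import numpy as np

    rng = np.random.default_rng(0)
    n, replications = 50, 20000
    mu, sigma = 1.0, 1.0

    samples = rng.exponential(scale=1.0, size=(replications, n))
    z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma   # sqrt(n)(Xbar - mu)/sigma

    # Compare a few empirical quantiles with the standard normal ones.
    for q in (0.05, 0.25, 0.5, 0.75, 0.95):
        print(q, np.quantile(z, q))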
The sample variances are given by
$$S^2 = \overline{(X - \mu)^2} = \frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2,$$
$$s^2 = \frac{n}{n-1}\,\overline{(X - \bar{X})^2} = \frac{n}{n-1}\big(\overline{X^2} - \bar{X}^2\big).$$
It is well known that $E(S^2) = E(s^2) = \sigma^2$. For the variance, we find that
$$Var(S^2) = \frac{1}{n}\,Var\big((X - \mu)^2\big).$$
To calculate the variance of $s^2$ is, in general, much more complicated. For a sample from the normal distribution $N(\mu, \sigma^2)$ there are no problems. In this case we have
$$\frac{nS^2}{\sigma^2} \sim \chi^2_n, \qquad \frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1},$$
and for large $n$ we have
$$S^2 \approx N\Big(\sigma^2, \frac{2\sigma^4}{n}\Big), \qquad s^2 \approx N\Big(\sigma^2, \frac{2\sigma^4}{n-1}\Big).$$
In the case of a sample from another distribution, these approximations are usually not valid. In section 2 of this paper, we provide a central limit theorem for $S^2$ and for $s^2$.

In section 3 we state and prove a multivariate central limit theorem and then apply a transfer theorem to obtain central limit theorems for the sample coefficient of variation $CV$, the sample correlation coefficient $r$ and the ratio of sample variances.
2 Central Limit Theorem for $S^2$ and $s^2$

2.1 Central limit theorem for $S^2$

In view of the definition of $S^2$, using the ordinary central limit theorem, we immediately obtain the following result.

Theorem 1 If $X_1, X_2, \ldots, X_n$ is a sample from $X$ where $E(X^4) < \infty$, then
$$P\big(\sqrt{n}(S^2 - \sigma^2) \le x\big) \to P(U \le x)$$
where $U \sim N(0, \sigma^2_U)$ with $\sigma^2_U = Var\big((X - \mu)^2\big)$.

Proof. Apply the central limit theorem to $Y_i = (X_i - \mu)^2$.

Remark. Note that $\sigma^2_U$ is related to the kurtosis $\kappa(X)$ of $X$. Recall that the kurtosis is defined as
$$\kappa(X) = \frac{E\big((X - \mu)^4\big)}{\sigma^4} - 3 = \frac{Var\big((X - \mu)^2\big)}{\sigma^4} - 2.$$
We find that $\sigma^2_U = (\kappa(X) + 2)\sigma^4$.
2.2 Central limit theorem for $s^2$

To prove a central limit theorem for $s^2$, we rewrite $s^2$ as follows. We have
$$(n-1)s^2 = \sum_{i=1}^n \big(X_i - \mu - (\bar{X} - \mu)\big)^2 = \sum_{i=1}^n (X_i - \mu)^2 + n(\bar{X} - \mu)^2 - 2(\bar{X} - \mu)\sum_{i=1}^n (X_i - \mu) = nS^2 - n(\bar{X} - \mu)^2.$$
It follows that
$$\sqrt{n}(s^2 - \sigma^2) = \frac{n}{n-1}\sqrt{n}(S^2 - \sigma^2) + \frac{\sqrt{n}}{n-1}\sigma^2 - \frac{n}{n-1}\sqrt{n}(\bar{X} - \mu)^2. \qquad (1)$$
We prove the following result.

Theorem 2 If $X_1, X_2, \ldots, X_n$ is a sample from $X$ where $E(X^4) < \infty$, then
$$P\big(\sqrt{n}(s^2 - \sigma^2) \le x\big) \to P(U \le x)$$
where $U \sim N(0, \sigma^2_U)$ with $\sigma^2_U = Var\big((X - \mu)^2\big)$.

Proof. Consider (1) and write $\sqrt{n}(s^2 - \sigma^2) = A_n + B_n$, where
$$A_n = \frac{n}{n-1}\sqrt{n}(S^2 - \sigma^2), \qquad B_n = \frac{\sqrt{n}}{n-1}\sigma^2 - \frac{n}{n-1}\sqrt{n}(\bar{X} - \mu)^2.$$
Using Theorem 1, we have
$$P(A_n \le x) \to P(U \le x).$$
For the second term we have
$$B_n = \frac{\sqrt{n}}{n-1}\sigma^2 - \frac{n}{n-1}\sqrt{n}(\bar{X} - \mu)(\bar{X} - \mu).$$
Using the central limit theorem we have
$$P\big(\sqrt{n}(\bar{X} - \mu)/\sigma \le x\big) \to P(Z \le x)$$
and the law of large numbers gives $\bar{X} - \mu \overset{P}{\to} 0$. It follows that $B_n \overset{P}{\to} 0$. The result now follows.

Remarks.
1) In the previous result we used the following property: if $X_n \overset{d}{\Longrightarrow} X$ and $Y_n \overset{P}{\to} 0$, then $X_n + Y_n \overset{d}{\Longrightarrow} X$.
2) In section 4.1 we provide another proof of this result.
3) We find confidence intervals for $\sigma^2$ in the usual way. We have
$$\sigma^2 = s^2 \pm z_{\alpha/2}\,\frac{\sigma_U}{\sqrt{n}}$$
and using $\sigma^2_U = (\kappa(X) + 2)\sigma^4$ we find that
$$\sigma^2 = \frac{s^2}{1 \mp z_{\alpha/2}\sqrt{(\kappa(X) + 2)/n}}.$$
In applications, we replace $\kappa(X)$ by the sample kurtosis $\hat{\kappa}$.
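As an illustration of Remark 3, a large-sample confidence interval for $\sigma^2$ could be computed along the following lines; this is a hedged sketch using numpy and scipy, and the function name and data-generating choice are ours, not from the text:

    # Large-sample CI for sigma^2 based on sigma_U^2 = (kappa + 2) sigma^4,
    # with kappa estimated by the sample (excess) kurtosis.  Sketch only.
    import numpy as np
    from scipy import stats

    def variance_ci(x, alpha=0.05):
        x = np.asarray(x, dtype=float)
        n = len(x)
        s2 = x.var(ddof=1)                                   # sample variance s^2
        kappa = stats.kurtosis(x, fisher=True, bias=False)   # estimate of kappa(X)
        z = stats.norm.ppf(1 - alpha / 2)
        half = z * np.sqrt((kappa + 2.0) / n)
        # sigma^2 = s^2 / (1 -/+ half), cf. Remark 3; requires half < 1.
        return s2 / (1 + half), s2 / (1 - half)

    rng = np.random.default_rng(1)
    print(variance_ci(rng.exponential(size=500)))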
2.3 Special cases

1) If $X \sim N(\mu, \sigma^2)$, we have $E((X - \mu)^3) = 0$ and $E((X - \mu)^4) = 3\sigma^4$, and then it follows that $\sigma^2_U = 2\sigma^4$. We find back the known result.
2) If $X \sim BERN(p)$, then $\mu = p$ and, using $q = 1 - p$, we have
$$E\big((X - \mu)^4\big) = p^4 q + q^4 p = pq(1 - 3pq).$$
Now we find that $\sigma^2_U = pq(1 - 4pq)$. Note that for $p = 1/2$ we have $\sigma^2_U = 0$.
3) If $X \sim UNIF(-a, a)$, we have $\mu = 0$, $\sigma^2 = a^2/3$ and $E(X^4) = a^4/5$. We find that $\sigma^2_U = a^4/5 - a^4/9 = 4a^4/45$ and
$$\sqrt{n}(s^2 - a^2/3) \overset{d}{\Longrightarrow} U \sim N(0, 4a^4/45).$$
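These special cases are easy to check by simulation. The sketch below (numpy-based, purely our own illustration) compares the empirical variance of $\sqrt{n}(s^2 - a^2/3)$ with $4a^4/45$ for the uniform case:

    # Hedged numerical check of special case 3): X ~ UNIF(-a, a).
    import numpy as np

    rng = np.random.default_rng(2)
    a, n, replications = 2.0, 400, 20000
    x = rng.uniform(-a, a, size=(replications, n))
    stat = np.sqrt(n) * (x.var(axis=1, ddof=1) - a**2 / 3)

    print("empirical variance:", stat.var())
    print("theoretical 4a^4/45:", 4 * a**4 / 45)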
3 Multivariate central limit theorem

3.1 The central limit theorem

We prove the following theorem.

Theorem 3 Let $(X_1,Y_1), (X_2,Y_2), \ldots, (X_n,Y_n)$ denote a sample from a bivariate distribution $(X,Y) \sim A(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$. Let $\bar{X} = n^{-1}\sum_{i=1}^n X_i$ and $\bar{Y} = n^{-1}\sum_{i=1}^n Y_i$. Then we have
$$P\big(\sqrt{n}(\bar{X} - \mu_1) \le x,\ \sqrt{n}(\bar{Y} - \mu_2) \le y\big) \to P(U \le x, V \le y)$$
where $(U,V)$ has a bivariate normal distribution $(U,V) \sim BN(0, 0, \sigma_1^2, \sigma_2^2, \rho)$.

Proof. For arbitrary $a$ and $b$ where $(a,b) \neq (0,0)$, we consider $aX + bY$. Clearly we have
$$E(aX + bY) = a\mu_1 + b\mu_2,$$
$$Var(aX + bY) = a^2\sigma_1^2 + b^2\sigma_2^2 + 2ab\rho\sigma_1\sigma_2.$$
Using the ordinary central limit theorem, we obtain that
$$\sqrt{n}\big(a\bar{X} + b\bar{Y} - a\mu_1 - b\mu_2\big) \overset{d}{\Longrightarrow} W$$
where
$$W \sim N\big(0, a^2\sigma_1^2 + b^2\sigma_2^2 + 2ab\rho\sigma_1\sigma_2\big).$$
Clearly this limit can be identified as follows: we have
$$W \overset{d}{=} aU + bV$$
where $(U,V)$ has a bivariate normal distribution $(U,V) \sim BN(0, 0, \sigma_1^2, \sigma_2^2, \rho)$. The result now follows from the Cramér-Wold device.

Remark. The Cramér-Wold device states that for random vectors $(X_n, Y_n)$ we have
$$(X_n, Y_n) \overset{d}{\Longrightarrow} (U,V)$$
if and only if
$$\forall (a,b) \neq (0,0):\ aX_n + bY_n \overset{d}{\Longrightarrow} aU + bV.$$
This device is easy to prove by using generating functions or characteristic functions.

For random vectors with 3 or more components, we have a similar result with a similar proof.
Theorem 4 Let $(X_{1,j}, \ldots, X_{k,j})$, $j = 1, 2, \ldots, n$, denote a sample from a multivariate distribution $(X_1, X_2, \ldots, X_k) \sim A$ with means $E(X_i) = \mu_i$ and variance-covariance matrix $\Sigma = \big(cov(X_i, X_j)\big)_{i,j=1}^k$. For each $i = 1, 2, \ldots, k$, let $\bar{X}_i = n^{-1}\sum_{j=1}^n X_{i,j}$. Then we have
$$P\big(\sqrt{n}(\bar{X}_1 - \mu_1) \le x_1,\ \sqrt{n}(\bar{X}_2 - \mu_2) \le x_2, \ldots, \sqrt{n}(\bar{X}_k - \mu_k) \le x_k\big) \to P(U_1 \le x_1, U_2 \le x_2, \ldots, U_k \le x_k)$$
where $(U_1, U_2, \ldots, U_k)$ has a multivariate normal distribution with $E(U_i) = 0$ and $Cov(U_i, U_j) = \Sigma_{i,j}$.
The following corollary will be useful.

Corollary 5 Let $(X_1,Y_1), (X_2,Y_2), \ldots, (X_n,Y_n)$ denote a sample from a bivariate distribution $(X,Y) \sim A(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$ and suppose that $E(X^4 + Y^4) < \infty$. Consider the vectors
$$\vec{A} = \big(\bar{X}, \bar{Y}, \overline{X^2}, \overline{Y^2}, \overline{XY}\big), \qquad \vec{\mu} = \big(\mu_1, \mu_2, E(X^2), E(Y^2), E(XY)\big).$$
Then
$$P\big(\sqrt{n}(\vec{A} - \vec{\mu}) \le \vec{x}\big) \to P(\vec{V} \le \vec{x}),$$
where $\vec{V}$ has a multivariate normal distribution with means $0$ and with variance-covariance matrix $\Sigma$ given by
$$\begin{pmatrix}
\sigma_1^2 & Cov(X,Y) & Cov(X,X^2) & Cov(X,Y^2) & Cov(X,XY) \\
 & \sigma_2^2 & Cov(Y,X^2) & Cov(Y,Y^2) & Cov(Y,XY) \\
 &  & Var(X^2) & Cov(X^2,Y^2) & Cov(X^2,XY) \\
 &  &  & Var(Y^2) & Cov(Y^2,XY) \\
 &  &  &  & Var(XY)
\end{pmatrix} \qquad (2)$$
(the matrix is symmetric; only the upper triangle is displayed).
3.2 Functions

Using the notations of Theorem 3, let us consider a new random variable $f(\bar{X}, \bar{Y})$, where the function $f(x,y)$ is sufficiently smooth. Writing the first terms of a Taylor expansion, we have
$$f(x,y) = f(a,b) + f_x(a,b)(x - a) + f_y(a,b)(y - b) + \frac{1}{2}R,$$
where the remainder term $R$ is of the form
$$R = (x - a,\ y - b)\begin{pmatrix} f_{x,x}(\xi,\eta) & f_{x,y}(\xi,\eta) \\ f_{x,y}(\xi,\eta) & f_{y,y}(\xi,\eta) \end{pmatrix}\begin{pmatrix} x - a \\ y - b \end{pmatrix}.$$
Here the $f_{\cdot,\cdot}$ denote the second partial derivatives of $f$, and $\xi$ (resp. $\eta$) is between $x$ and $a$ (resp. $y$ and $b$). If these partial derivatives are bounded around $(a,b)$, for some constant $c > 0$ we have
$$|R| \le c\big((x-a)^2 + (y-b)^2 + |(x-a)(y-b)|\big).$$
Furthermore, if $|x - a| \le \delta$ and $|y - b| \le \delta$, we find that
$$\big|f(x,y) - f(a,b) - f_x(a,b)(x-a) - f_y(a,b)(y-b)\big| \le 3c\delta^2$$
and hence also that
$$-3c\delta^2 + f_x(a,b)(x-a) + f_y(a,b)(y-b) \le f(x,y) - f(a,b) \le 3c\delta^2 + f_x(a,b)(x-a) + f_y(a,b)(y-b).$$
Now replace $(x,y)$ and $(a,b)$ by $(\bar{X}, \bar{Y})$ and $(\mu_1, \mu_2)$ and define the following quantities:
$$\vec{\theta} = (\theta_1, \theta_2) = \big(f_x(\mu_1, \mu_2),\ f_y(\mu_1, \mu_2)\big),$$
$$A_n = \theta_1\sqrt{n}(\bar{X} - \mu_1) + \theta_2\sqrt{n}(\bar{Y} - \mu_2),$$
$$K_n = \sqrt{n}\big(f(\bar{X}, \bar{Y}) - f(\mu_1, \mu_2)\big).$$
Note that Theorem 3 implies that $P(A_n \le x) \to P(W \le x) = P(\theta_1 U + \theta_2 V \le x)$.

If $|\bar{X} - \mu_1| \le \delta$ and $|\bar{Y} - \mu_2| \le \delta$, the previous analysis shows that
$$-3c\sqrt{n}\delta^2 + A_n \le K_n \le 3c\sqrt{n}\delta^2 + A_n.$$
Now consider $P(K_n \le x)$ and write $P(K_n \le x) = I + II$, where
$$I = P(K_n \le x, E), \qquad II = P(K_n \le x, E^c),$$
where $E$ is the event $E = \{|\bar{X} - \mu_1| \le \delta \text{ and } |\bar{Y} - \mu_2| \le \delta\}$, and $E^c$ its complement.

We have $II \le P(E^c) \le P(|\bar{X} - \mu_1| > \delta) + P(|\bar{Y} - \mu_2| > \delta)$. Using the inequality of Chebyshev, we obtain that
$$II \le \frac{\sigma_1^2 + \sigma_2^2}{n\delta^2}.$$
If we choose $\delta$ such that $n\delta^2 \to \infty$, we obtain that $II \to 0$.

For $I$, we have
$$I \le P(-3\sqrt{n}c\delta^2 + A_n \le x, E) \le P(A_n \le x + 3\sqrt{n}c\delta^2).$$
If we choose $\delta$ such that $\sqrt{n}\delta^2 \to 0$, we find, after taking limits for $n \to \infty$, that $I$ is bounded from above by $P(W \le x)$. A good choice of $\delta$ is for example $\delta = n^{-1/3}$. On the other hand, we have
$$I \ge P(3\sqrt{n}c\delta^2 + A_n \le x, E) = P(3\sqrt{n}c\delta^2 + A_n \le x) - P(3\sqrt{n}c\delta^2 + A_n \le x, E^c).$$
As before, we have $P(3\sqrt{n}c\delta^2 + A_n \le x) \to P(W \le x)$. For the other term, we have
$$P(3\sqrt{n}c\delta^2 + A_n \le x, E^c) \le P(E^c) \to 0.$$
We obtain that as $n \to \infty$, $I$ is bounded from below by $P(W \le x)$. We conclude that
$$P(K_n \le x) \to P(W \le x).$$
Clearly we have $E(W) = 0$ and for the variance we find that
$$\sigma^2_W = Var(W) = (\theta_1, \theta_2)\,\Sigma\,\begin{pmatrix} \theta_1 \\ \theta_2 \end{pmatrix} = \vec{\theta}\,\Sigma\,\vec{\theta}^{\,T},$$
where
$$\Sigma = \begin{pmatrix} Var(X) & Cov(X,Y) \\ Cov(X,Y) & Var(Y) \end{pmatrix}.$$
This approach can also be used for random vectors with 3 or more components. The general result is the following.

Theorem 6 Using the notations of Theorem 4, if $f$ is sufficiently smooth, we have
$$P\big(\sqrt{n}\big(f(\vec{A}) - f(\vec{\mu})\big) \le x\big) \to P(W \le x)$$
where $W \overset{d}{=} \sum_{i=1}^k \theta_i U_i \sim N(0, \sigma^2_W)$ with $\theta_i = (\partial f/\partial x_i)(\vec{\mu})$ and $\sigma^2_W = \vec{\theta}\,\Sigma\,\vec{\theta}^{\,T}$.
Remark. We can also consider vectors of functions. If $(f_1(\vec{x}), f_2(\vec{x}), \ldots, f_m(\vec{x}))$ is such a vector, it suffices to consider linear combinations of the form
$$h(\vec{x}) = u_1 f_1(\vec{x}) + u_2 f_2(\vec{x}) + \ldots + u_m f_m(\vec{x})$$
where $(u_1, u_2, \ldots, u_m) \neq (0, 0, \ldots, 0)$. Now Theorem 6 and the Cramér-Wold device can be used.
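Theorem 6 is, in essence, the multivariate delta method. The following sketch (our own illustration in numpy, with a finite-difference gradient; none of these names appear in the text) shows how the asymptotic variance $\vec{\theta}\,\Sigma\,\vec{\theta}^{\,T}$ can be computed numerically for a given smooth $f$:

    # Hedged sketch of Theorem 6: estimate theta' Sigma theta numerically.
    import numpy as np

    def delta_method_variance(f, moments, sigma, eps=1e-6):
        """f: smooth function of the vector of moments; sigma: covariance
        matrix of the underlying variables; returns theta' sigma theta."""
        moments = np.asarray(moments, dtype=float)
        grad = np.empty_like(moments)
        for i in range(len(moments)):           # central differences for theta_i
            h = np.zeros_like(moments); h[i] = eps
            grad[i] = (f(moments + h) - f(moments - h)) / (2 * eps)
        return grad @ sigma @ grad

    # Example: f(x, y) = y - x^2 recovers Var((X - mu)^2), cf. section 4.1 below.
    rng = np.random.default_rng(3)
    x = rng.exponential(size=100000)
    data = np.column_stack([x, x**2])           # (X, X^2)
    sigma = np.cov(data, rowvar=False)
    print(delta_method_variance(lambda m: m[1] - m[0]**2, data.mean(axis=0), sigma))
    print(np.var((x - x.mean())**2))            # should be close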
4 Variance and coefficient of variation

4.1 The sample variance $s^2$

Here is another proof of Theorem 2. Consider the vectors $\vec{A} = (\bar{X}, \overline{X^2})$, $\vec{\mu} = (\mu, E(X^2))$ and the function $f(x,y) = y - x^2$. In this case we find $f(\vec{A}) = (n-1)s^2/n$ and $f(\vec{\mu}) = \sigma^2$. Using $(\theta_1, \theta_2) = (-2\mu, 1)$ it follows from Theorem 6 that
$$P\Big(\sqrt{n}\Big(\frac{n-1}{n}s^2 - \sigma^2\Big) \le x\Big) \to P(W \le x),$$
where $W \sim N(0, \sigma^2_W)$ with
$$\sigma^2_W = (-2\mu, 1)\begin{pmatrix} Var(X) & Cov(X,X^2) \\ Cov(X,X^2) & Var(X^2) \end{pmatrix}\begin{pmatrix} -2\mu \\ 1 \end{pmatrix} = 4\mu^2 Var(X) - 4\mu\,Cov(X,X^2) + Var(X^2) = Var(X^2 - 2\mu X) = Var\big((X - \mu)^2\big).$$
We can easily replace $(n-1)s^2/n$ by $s^2$ to find back Theorem 2.
4.2 The sample coefficient of variation

In probability theory and statistics, the coefficient of variation ($CV$) is a normalized measure of dispersion of a probability distribution. It is defined as the ratio of the standard deviation to the mean: $CV = \sigma/\mu$. This is only defined for non-zero mean $\mu$, and is most useful for variables that are always positive. The sample coefficient of variation is given by
$$SCV = \frac{s}{\bar{X}}.$$
If $\mu \neq 0$, we have $\bar{X} \overset{a.s.}{\to} \mu \neq 0$ and $SCV$ is well-defined a.s. Now we consider the vectors $\vec{A} = (\bar{X}, \overline{X^2})$, $\vec{\mu} = (\mu, E(X^2))$ and the function
$$f(x,y) = \frac{\sqrt{y - x^2}}{x}.$$
It is easy to see that $f(\vec{\mu}) = CV$ and that
$$f(\vec{A}) = \sqrt{\frac{n-1}{n}}\,SCV.$$
Straightforward calculations show that
$$(\theta_1, \theta_2) = \Big(-\frac{E(X^2)}{\sigma\mu^2},\ \frac{1}{2\sigma\mu}\Big).$$
Using Theorem 6, we find that
$$P\Big(\sqrt{n}\Big(\sqrt{\frac{n-1}{n}}\,SCV - CV\Big) \le x\Big) \to P(W \le x),$$
where $W \sim N(0, \sigma^2_W)$ with
$$\sigma^2_W = \vec{\theta}\begin{pmatrix} Var(X) & Cov(X,X^2) \\ Cov(X,X^2) & Var(X^2) \end{pmatrix}\vec{\theta}^{\,T} = \frac{E^2(X^2)}{\mu^4} - \frac{E(X^2)}{\sigma^2\mu^3}Cov(X,X^2) + \frac{1}{4\sigma^2\mu^2}Var(X^2).$$
To simplify, note that
$$E\big((X - \mu)^3\big) = Cov(X,X^2) - 2\mu\sigma^2, \qquad Var\big((X - \mu)^2\big) = Var(X^2) + 4\mu^2\sigma^2 - 4\mu\,Cov(X,X^2).$$
Substituting these relations and simplifying, we find
$$\sigma^2_W = \frac{\sigma^4}{\mu^4} - \frac{1}{\mu^3}E\big((X - \mu)^3\big) + \frac{1}{4\sigma^2\mu^2}Var\big((X - \mu)^2\big).$$
In terms of the kurtosis $\kappa(X)$ and the skewness $\gamma_1(X) = \sigma^{-3}E\big((X - \mu)^3\big)$, we find that
$$\sigma^2_W = \frac{\sigma^4}{\mu^4} - \frac{\sigma^3}{\mu^3}\gamma_1(X) + \frac{\sigma^2}{4\mu^2}\kappa(X) + \frac{\sigma^2}{2\mu^2}.$$
Remarks.
1) In the case of a normal distribution, we find that
$$\sigma^2_W = \frac{\sigma^4}{\mu^4} + \frac{\sigma^2}{2\mu^2} = CV^4 + \frac{1}{2}CV^2.$$
In other cases, we see that $\sigma^2_W$ is influenced by $\kappa(X)$ and $\gamma_1(X)$.
2) In the case of an exponential distribution with parameter $\lambda$, we have
$$\mu = \sigma = 1/\lambda, \qquad \gamma_1 = 2, \qquad \kappa = 6,$$
and then $CV = \sigma^2_W = 1$.
3) For the Poisson($\lambda$)-distribution, we have
$$\mu = \sigma^2 = \lambda, \qquad \gamma_1 = \lambda^{-1/2}, \qquad \kappa = \lambda^{-1},$$
and then $CV = \lambda^{-1/2}$ and
$$\sigma^2_W = \frac{1}{2\lambda} + \frac{1}{4\lambda^2}.$$
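Remark 2 is easy to verify numerically. The sketch below (numpy-based; our own illustration, not from the text) simulates $\sqrt{n}\big(\sqrt{(n-1)/n}\,SCV - CV\big)$ for exponential samples and compares its variance with $\sigma^2_W = 1$:

    # Hedged check of Remark 2): for exponential data, CV = 1 and sigma_W^2 = 1.
    import numpy as np

    rng = np.random.default_rng(4)
    n, replications = 500, 20000
    x = rng.exponential(scale=1.0, size=(replications, n))

    scv = x.std(axis=1, ddof=1) / x.mean(axis=1)
    stat = np.sqrt(n) * (np.sqrt((n - 1) / n) * scv - 1.0)

    print("empirical variance:", stat.var())   # should be close to 1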
4.3 The case $\mu = 0$

If $\mu = 0$, then $CV$ is not defined but we can always calculate
$$\frac{1}{SCV} = \frac{\bar{X}}{s}.$$
If $\sigma^2 < \infty$, the central limit theorem together with $s^2 \overset{P}{\to} \sigma^2$ shows that we have
$$\frac{\sqrt{n}}{SCV} = \frac{\sqrt{n}\,\bar{X}}{s} \overset{d}{\Longrightarrow} Z,$$
where $Z \sim N(0,1)$. Now note that for $x > 0$, we have
$$P\Big(\frac{\sqrt{n}}{SCV} > x\Big) = P\Big(\frac{SCV}{\sqrt{n}} < \frac{1}{x}\Big), \qquad P\Big(\frac{\sqrt{n}}{SCV} < -x\Big) = P\Big(\frac{SCV}{\sqrt{n}} > -\frac{1}{x}\Big).$$
As a consequence, we have
$$\frac{SCV}{\sqrt{n}} \overset{d}{\Longrightarrow} U = \frac{1}{Z}.$$
The reader can check that $U$ has a (symmetric) density given by
$$f_U(u) = \frac{1}{u^2}f_Z\Big(\frac{1}{u}\Big) = \frac{1}{u^2\sqrt{2\pi}}\exp\Big(-\frac{1}{2u^2}\Big).$$
Note that $f_U(u)$ behaves like $(2\pi)^{-1/2}u^{-2}$ as $|u| \to \infty$, so that $U$ is heavy-tailed: although its distribution is symmetric around $0$, its mean and variance are not finite.
4.4 A t-statistic

In the place of $SCV$ we can study $T = 1/SCV = \bar{X}/s$. This is a quantity related to the t-statistic $t = (\bar{X} - \mu)/s$. As in section 4.2, we obtain that
$$\sqrt{n}\Big(T - \frac{\mu}{\sigma}\Big) \overset{d}{\Longrightarrow} W,$$
where $W \sim N(0, \sigma^2_U)$ with
$$\sigma^2_U = \frac{\mu^4}{\sigma^4}\sigma^2_W = 1 - \frac{\mu}{\sigma}\gamma_1(X) + \frac{\mu^2}{4\sigma^2}\kappa(X) + \frac{\mu^2}{2\sigma^2}.$$
Note that for the t-statistic, we have the simpler result that
$$\frac{\sqrt{n}(\bar{X} - \mu)}{s} \overset{d}{\Longrightarrow} Z \sim N(0,1).$$
4.5 The sample dispersion

Another related statistic is the dispersion $D = \sigma^2/\mu$. This measure is well defined for $\mu \neq 0$ and $D$ can be used, for example, to compare distributions with different means. The corresponding sample dispersion is given by
$$SD = \frac{s^2}{\bar{X}}.$$
To study $SD$, we consider $\vec{A} = (\bar{X}, \overline{X^2})$, $\vec{\mu} = (\mu, E(X^2))$ and the function $f(x,y) = (y - x^2)/x$. Clearly we have
$$f(\vec{A}) = \frac{n-1}{n}SD, \qquad f(\vec{\mu}) = D, \qquad \vec{\theta} = \Big(-\frac{\sigma^2}{\mu^2} - 2,\ \frac{1}{\mu}\Big).$$
It readily follows that
$$\sqrt{n}(SD - D) \overset{d}{\Longrightarrow} W,$$
where $W \sim N(0, \sigma^2_W)$ with
$$\sigma^2_W = Var(\theta_1 X + \theta_2 X^2) = \sigma^2\Big(\frac{\sigma^2}{\mu^2}\big(\kappa(X) + 2\big) + \frac{\sigma^4}{\mu^4} - 2\frac{\sigma^3}{\mu^3}\gamma_1(X)\Big).$$
In the case of a normal distribution, we find that
$$\sigma^2_W = \sigma^2\,\frac{\sigma^2}{\mu^2}\Big(2 + \frac{\sigma^2}{\mu^2}\Big).$$
If $\mu = 0$, we obtain first that
$$\frac{\sqrt{n}}{SD} = \frac{\sqrt{n}\,\bar{X}}{s^2} \overset{d}{\Longrightarrow} \frac{1}{\sigma}Z,$$
where $Z \sim N(0,1)$, and then it follows as in section 4.3 that
$$\frac{SD}{\sqrt{n}} \overset{d}{\Longrightarrow} \sigma\,\frac{1}{Z}.$$
5 Sample covariance and correlation

5.1 The sample covariance

Consider the vectors $\vec{A} = (\bar{X}, \bar{Y}, \overline{XY})$, $\vec{\mu} = (\mu_1, \mu_2, E(XY))$ and let $f(x,y,z) = z - xy$. In this case we find
$$f(\vec{A}) = \overline{XY} - \bar{X}\bar{Y}, \qquad f(\vec{\mu}) = Cov(X,Y)$$
and
$$\vec{\theta} = (-\mu_2, -\mu_1, 1).$$
It follows that
$$P\big(\sqrt{n}\big(f(\vec{A}) - Cov(X,Y)\big) \le x\big) \to P(W \le x),$$
where $W \sim N(0, \sigma^2_W)$ and
$$\sigma^2_W = \vec{\theta}\begin{pmatrix} Var(X) & Cov(X,Y) & Cov(X,XY) \\ Cov(X,Y) & Var(Y) & Cov(Y,XY) \\ Cov(X,XY) & Cov(Y,XY) & Var(XY) \end{pmatrix}\vec{\theta}^{\,t}.$$
Assuming first for simplicity that $\mu_1 = \mu_2 = 0$, we find $\vec{\theta} = (0, 0, 1)$ and $\sigma^2_W = Var(XY)$. In the general case we find that
$$\sigma^2_W = Var\big((X - \mu_1)(Y - \mu_2)\big).$$
Remark. If $X$ and $Y$ are independent, we have
$$\sigma^2_W = E\big((X - \mu_1)^2(Y - \mu_2)^2\big) = \sigma_1^2\sigma_2^2.$$
5.2 The sample correlation coefficient

For a sample $(X_1,Y_1), (X_2,Y_2), \ldots, (X_n,Y_n)$ from $(X,Y) \sim A(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$, the sample correlation coefficient is defined as
$$r = \frac{n}{n-1}\,\frac{\overline{XY} - \bar{X}\bar{Y}}{s_1 s_2}. \qquad (3)$$
For $n \to \infty$, a rough estimate gives
$$r \approx \frac{E(XY) - E(X)E(Y)}{\sigma_1\sigma_2} = \rho,$$
so that $r$ is an approximation of $\rho$. As in Corollary 5, we consider the vectors
$$\vec{A} = \big(\bar{X}, \bar{Y}, \overline{X^2}, \overline{Y^2}, \overline{XY}\big), \qquad \vec{\mu} = \big(\mu_1, \mu_2, E(X^2), E(Y^2), E(XY)\big)$$
and the function
$$f(a,b,c,d,e) = \frac{e - ab}{\sqrt{(c - a^2)(d - b^2)}}.$$
Now we find $f(\vec{\mu}) = \rho$, $f(\vec{A}) = r$, and the derivatives:
$$f_a = \frac{-b}{\sqrt{(c - a^2)(d - b^2)}} + \frac{(e - ab)\,a}{(c - a^2)\sqrt{(c - a^2)(d - b^2)}},$$
$$f_b = \frac{-a}{\sqrt{(c - a^2)(d - b^2)}} + \frac{(e - ab)\,b}{(d - b^2)\sqrt{(c - a^2)(d - b^2)}},$$
$$f_c = -\frac{1}{2}\,\frac{e - ab}{(c - a^2)\sqrt{(c - a^2)(d - b^2)}},$$
$$f_d = -\frac{1}{2}\,\frac{e - ab}{(d - b^2)\sqrt{(c - a^2)(d - b^2)}},$$
$$f_e = \frac{1}{\sqrt{(c - a^2)(d - b^2)}}.$$
It follows that
$$P\big(\sqrt{n}(r - \rho) \le x\big) \to P(W \le x), \qquad (4)$$
where $W \sim N(0, \sigma^2_W)$ and $\sigma^2_W = \vec{\theta}\,\Sigma\,\vec{\theta}^{\,t}$ with $\Sigma$ given in (2). In this case we have
$$\theta_1 = -\frac{E(Y)}{\sigma_1\sigma_2} + \rho\,\frac{E(X)}{\sigma_1^2}, \qquad \theta_2 = -\frac{E(X)}{\sigma_1\sigma_2} + \rho\,\frac{E(Y)}{\sigma_2^2},$$
$$\theta_3 = -\frac{1}{2}\,\frac{\rho}{\sigma_1^2}, \qquad \theta_4 = -\frac{1}{2}\,\frac{\rho}{\sigma_2^2}, \qquad \theta_5 = \frac{1}{\sigma_1\sigma_2}.$$
In the special case where $\mu_1 = \mu_2 = 0$ and $\sigma_1 = \sigma_2 = 1$, we find $\vec{\theta} = (0, 0, -\rho/2, -\rho/2, 1)$ and then we have
$$(\Sigma\vec{\theta}^{\,t})_1 = -\frac{\rho}{2}Cov(X,X^2) - \frac{\rho}{2}Cov(X,Y^2) + Cov(X,XY),$$
$$(\Sigma\vec{\theta}^{\,t})_2 = -\frac{\rho}{2}Cov(Y,X^2) - \frac{\rho}{2}Cov(Y,Y^2) + Cov(Y,XY),$$
$$(\Sigma\vec{\theta}^{\,t})_3 = -\frac{\rho}{2}Var(X^2) - \frac{\rho}{2}Cov(X^2,Y^2) + Cov(X^2,XY),$$
$$(\Sigma\vec{\theta}^{\,t})_4 = -\frac{\rho}{2}Cov(X^2,Y^2) - \frac{\rho}{2}Var(Y^2) + Cov(Y^2,XY),$$
$$(\Sigma\vec{\theta}^{\,t})_5 = -\frac{\rho}{2}Cov(X^2,XY) - \frac{\rho}{2}Cov(Y^2,XY) + Var(XY),$$
and then (recall that $\mu_1 = \mu_2 = 0$ and $\sigma_1 = \sigma_2 = 1$) we have:
$$\sigma^2_W = \vec{\theta}\,\Sigma\,\vec{\theta}^{\,t} = \frac{\rho^2}{4}\big(Var(X^2) + 2Cov(X^2,Y^2) + Var(Y^2)\big) - \rho\big(Cov(X^2,XY) + Cov(Y^2,XY)\big) + Var(XY)$$
$$= \frac{\rho^2}{4}\big(E(X^4) - 1 + 2E(X^2Y^2) - 2 + E(Y^4) - 1\big) - \rho\big(E(X^3Y) - \rho + E(XY^3) - \rho\big) + E(X^2Y^2) - \rho^2$$
$$= \frac{\rho^2}{4}\big(E(X^4) + 2E(X^2Y^2) + E(Y^4)\big) - \rho\big(E(X^3Y) + E(XY^3)\big) + E(X^2Y^2).$$
In the general case, we find that
$$\sigma^2_W = \frac{\rho^2}{4}\big(E(X^{*4}) + 2E(X^{*2}Y^{*2}) + E(Y^{*4})\big) - \rho\big(E(X^{*3}Y^*) + E(X^*Y^{*3})\big) + E(X^{*2}Y^{*2}), \qquad (5)$$
where
$$X^* = \frac{X - E(X)}{\sigma_1} \quad \text{and} \quad Y^* = \frac{Y - E(Y)}{\sigma_2}.$$
The final result is that (4) holds with $\sigma^2_W$ given in (5).

Remarks.
1) We can rewrite $\sigma^2_W$ more compactly as follows. Assuming standardized variables, we have
$$\sigma^2_W = \frac{\rho^2}{4}\big(Var(X^2) + 2Cov(X^2,Y^2) + Var(Y^2)\big) - \rho\big(Cov(X^2,XY) + Cov(Y^2,XY)\big) + Var(XY)$$
$$= \frac{\rho^2}{4}Var(X^2 + Y^2) - \rho\,Cov(X^2 + Y^2, XY) + Var(XY) = Var\Big(\frac{\rho}{2}(X^2 + Y^2) - XY\Big).$$
2) Note that the asymptotic variance $\sigma^2_W$ only depends on $\rho$ and fourth-order central moments of the underlying distribution.
3) If $\rho = 0$, we find that $\sigma^2_W = E(X^{*2}Y^{*2})$.
4) If $X$ and $Y$ are independent, we have $\rho = 0$ and $\sigma^2_W = E(X^{*2}Y^{*2}) = E(X^{*2})E(Y^{*2}) = 1$.
5) If $Y = a + bX$, $b > 0$, we find $\rho = 1$, $Y^* = X^*$ and $\sigma^2_W = 0$.
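Remark 1 suggests a simple way to use (4) in practice without assuming normality: estimate $\sigma^2_W$ by the sample variance of $\frac{r}{2}(X^{*2} + Y^{*2}) - X^*Y^*$ computed from the standardized observations. The following sketch (numpy/scipy-based; the function name and data choices are ours, not from the text) builds a large-sample confidence interval for $\rho$ in this way:

    # Hedged sketch: CI for rho using sigma_W^2 = Var((rho/2)(X*^2+Y*^2) - X*Y*).
    import numpy as np
    from scipy import stats

    def correlation_ci(x, y, alpha=0.05):
        x = np.asarray(x, float); y = np.asarray(y, float)
        n = len(x)
        xs = (x - x.mean()) / x.std(ddof=1)     # standardized X*
        ys = (y - y.mean()) / y.std(ddof=1)     # standardized Y*
        r = np.mean(xs * ys) * n / (n - 1)      # sample correlation, cf. (3)
        w = (r / 2.0) * (xs**2 + ys**2) - xs * ys
        se = np.sqrt(w.var(ddof=1) / n)         # estimate of sigma_W / sqrt(n)
        z = stats.norm.ppf(1 - alpha / 2)
        return r - z * se, r + z * se

    rng = np.random.default_rng(5)
    x = rng.exponential(size=1000)
    y = x + rng.exponential(size=1000)          # dependent, non-normal pair
    print(correlation_ci(x, y))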
5.3 Application

To model dependence, one often uses a model of the following form. Starting from arbitrary independent random variables $A$ and $B$ we construct the vector $(X,Y) = (A, B + \lambda A)$. Given a sample $(X_i, Y_i)$ we want to test e.g. the hypothesis $H_0: \lambda = 0$ versus $H_a: \lambda \neq 0$. It is clear that
$$Var(X) = \sigma^2_X = \sigma^2_A, \qquad Var(Y) = \sigma^2_Y = \sigma^2_B + \lambda^2\sigma^2_A, \qquad Cov(X,Y) = \lambda\sigma^2_A,$$
$$\rho = \rho(X,Y) = \frac{\lambda\sigma^2_A}{\sqrt{\sigma^2_A\big(\sigma^2_B + \lambda^2\sigma^2_A\big)}},$$
and we have $\rho = 0$ if and only if $\lambda = 0$. Under $H_0$ we have
$$\sqrt{n}\,r \overset{d}{\Longrightarrow} Z \sim N(0,1).$$
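A hedged sketch of this test follows (numpy/scipy-based; the heavy-tailed $t$-distributed example and all names are our own, chosen only so that fourth moments remain finite):

    # Test of H0: lambda = 0 based on sqrt(n) r => N(0,1) under independence.
    import numpy as np
    from scipy import stats

    def test_independence(x, y):
        x = np.asarray(x, float); y = np.asarray(y, float)
        n = len(x)
        r = np.corrcoef(x, y)[0, 1]
        z = np.sqrt(n) * r                       # approximately N(0,1) under H0
        p_value = 2 * stats.norm.sf(abs(z))
        return z, p_value

    rng = np.random.default_rng(6)
    a, b = rng.standard_t(df=5, size=2000), rng.standard_t(df=5, size=2000)
    print(test_independence(a, b + 0.0 * a))     # H0 true: large p-value expected
    print(test_independence(a, b + 0.2 * a))     # H0 false: small p-value expected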
5.4 The bivariate normal case

For a standard bivariate normal distribution $(X,Y) \sim BN(0,0,1,1,\rho)$, we show how to calculate $\sigma^2_W$, cf. (5).

First note that $(U,V) = (X - \rho Y, Y)$ also has a bivariate normal distribution with
$$Cov(U,V) = Cov(X,Y) - \rho\,Cov(Y,Y) = 0.$$
It follows that $U$ and $V$ are independent with $V \sim N(0,1)$ and $U \sim N(0, 1 - \rho^2)$. For general $W \sim N(0, \sigma^2)$, we have $\phi_W(t) = \exp(-\frac{1}{2}\sigma^2 t^2)$ and then $E(W) = E(W^3) = 0$ and $E(W^2) = \sigma^2$, $E(W^4) = 3\sigma^4$.

Now observe that $Y = V$ and $X = U + \rho V$. We find
$$E(Y^4) = E(X^4) = 3,$$
$$E(YX^3) = E(Y^3X) = E\big(V^3U + \rho V^4\big) = 3\rho,$$
$$E(Y^2X^2) = E\big(V^2(U^2 + 2\rho UV + \rho^2 V^2)\big) = 1 + 2\rho^2.$$
It follows that
$$\sigma^2_W = \frac{\rho^2}{4}\big(3 + 2 + 4\rho^2 + 3\big) - \rho(3\rho + 3\rho) + 1 + 2\rho^2 = \rho^4 - 2\rho^2 + 1 = (1 - \rho^2)^2.$$
In general, for $(X,Y) \sim BN(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$, we also find that $\sigma^2_W = (1 - \rho^2)^2$, and then
$$r \approx N\Big(\rho,\ \frac{(1 - \rho^2)^2}{n}\Big).$$
5.5 The t and the Ftransformation
The approach of the previous section can now be used to construct condence
intervals for  and to test hypothesis concerning .
5.5.1 Testing H
0
: = 0 versus H
a
: 6= 0
In the bivariate normal case it is often necessary to test H
0
: = 0 versus
H
a
: 6= 0.In the bivariate normal case,usually one uses the t-transformation:
t(x) =
xp1 x
2
.
Observe that we have
t
0
(x) =
1(1 x
2
)
p1 x
2
Under H
0
we have t() = 0 and t
0
() = 1 and then the ttransformation shows
that
p n t(r)
d
=)Z s N(0;1).
16
Remark.Note that
t(r) r =
r
3p1 r
2
(1 +
p1 r
2
)
.
Under H
0
it follows that
n
3=2
(t(r) r)
d
=)
1 2
Z
3
.
For large samples it is not very useful to use the t-transformation.
5.5.2 Testing $H_0: \rho = \rho_0$ versus $H_a: \rho \neq \rho_0$

To test $H_0: \rho = \rho_0$ versus $H_a: \rho \neq \rho_0$, where $\rho_0 \neq 0$, in the bivariate normal case, usually one uses the Fisher F-transformation:
$$F(x) = \frac{1}{2}\ln\Big(\frac{1 + x}{1 - x}\Big).$$
In this case we have
$$F'(x) = \frac{1}{1 - x^2}.$$
The F-transformation leads to the popular result that
$$\sqrt{n}\big(F(r) - F(\rho)\big) \approx F'(\rho)\sqrt{n}(r - \rho),$$
so that
$$\sqrt{n}\big(F(r) - F(\rho)\big) \overset{d}{\Longrightarrow} Z \sim N(0,1).$$
This approach can also be used in the case where $\rho_0 = 0$.
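For completeness, here is a sketch of a Fisher-transformation confidence interval and test under the bivariate normal model (numpy/scipy-based; our own illustration, not code from the text):

    # Fisher transformation CI and test for rho, using sqrt(n)(F(r)-F(rho)) => N(0,1).
    import numpy as np
    from scipy import stats

    def fisher_ci(x, y, rho0=0.0, alpha=0.05):
        n = len(x)
        r = np.corrcoef(x, y)[0, 1]
        z = stats.norm.ppf(1 - alpha / 2)
        F = np.arctanh                             # F(x) = (1/2) ln((1+x)/(1-x))
        lo, hi = np.tanh(F(r) - z / np.sqrt(n)), np.tanh(F(r) + z / np.sqrt(n))
        test_stat = np.sqrt(n) * (F(r) - F(rho0))  # approx N(0,1) under H0: rho = rho0
        p_value = 2 * stats.norm.sf(abs(test_stat))
        return (lo, hi), p_value

    rng = np.random.default_rng(7)
    cov = [[1.0, 0.6], [0.6, 1.0]]
    x, y = rng.multivariate_normal([0, 0], cov, size=400).T
    print(fisher_ci(x, y, rho0=0.5))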
5.6 Spearmans rank correlation
To see whether or not two ordinal variables are associated,one can use Spear-
mans rank correlation coe¢ cient r
S
.In this case we start from the sample of
ordinal variables (X
1
;Y
1
);(X
2
;Y
2
);:::;(X
n
;Y
n
) and we assign a rank going from
1 to n.The smallest Xvalue gets label 1,the next smallest Xvalue gets label
2,...,the largest of the Xvalues is labelled with rank n.In a similar way we
label the Y values.In the case of ties,we assign each variable the average of
the rankings,cf.the example below.
Starting from (X
1
;Y
1
);(X
2
;Y
2
);:::;(X
n
;Y
n
),we thus obtain a sequence of
ranks (R
1
;R

1
);(R
2
;R

2
);:::;(R
n
;R

n
).The rank correlation r
S
is given by the
ordinary correlation coe¢ cient between the two rankings.We use the notation
r
S
= r
S
(X;Y ) = r(R;R

).
As before,we calculate r
S
by using the general formula (3) as before.Formula
(3) can be rewritten as
r
S
=
P
R
i
R

i
nRR
q(
P
R
2
i
nR
2
)(
P
R
2
i
nR

2
)
.
17
Now note that (with or without ties):
X
R
i
=
X
R

i
= 1 +2 +::+n =
n(n +1)2
.
If there are no ties,we also have:
X
R
2
i
=
X
R
2
i
= 1 +2
2
+:::+n
2
=
n(n +1)(2n +1)6
,
X
R
2
i
n R
2
=
n(n +1)(2n +1)6
n
(n +1)
24
=
n(n
2
1)12
1 2
X
(R
i
R

i
)
2
=
n(n +1)(2n +1)6

X
R
i
R

i
.
In the case of no ties,after simplifying,we nd that:
r
S
= 1 
6
P
n
i=1
(R
i
R

i
)
2n(n
2
1)
.(6)
For independent variables,we can use the result of section 5.2 to conclude that
p nr
S
d
=)Z s N(0;1).
Remark.In the case of ties between variables,we assign each variable
the average of the rankings.Formula (5) to calculate r
S
should be modied.
Consider the following example:
X Y R R

RR

(RR

)
2
3 10 1 1 0 0
6 15 2 2 0 0
9 30 3 4;5 1;5 2;25
12 35 4 6 2 4
15 25 5 3 2 4
18 30 6 4;5 1;5 2;25
21 50 7 8 1 1
24 45 8 7 1 1
In the case of no ties we had
P
R
2
i
=
P
R
2
i
= 204.In our example,we
have
P
R
2
i
= 204,
P
R
2
i
= 203;5.If there is 1 tie involving 2 observations,we
see that there is a di¤erence of 0;5.
In general,one can proceed as follows.Let
t
2
= the number of ties involving 2 observations;
t
3
= the number of ties involving 3 observations;
...
t
k
= the number of ties involving k observations.
18
Now we calculate the correction factor
T =
2
3
212
t
2
+
3
3
312
t
3
+:::+
k
3
k12
t
k
In the case of ties,we replace (6) by:
r
S
= 1 
6(T +
P
n
i=1
(R
i
R

i
)
2
)n(n
2
1)
.
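In practice, $r_S$ with average ranks for ties can be computed directly as the ordinary correlation between the two rank sequences, as in the following sketch (numpy/scipy-based; our own illustration using the example above):

    # Spearman's r_S with average ranks for ties.
    import numpy as np
    from scipy import stats

    def spearman_rs(x, y):
        r_x = stats.rankdata(x)          # average ranks in case of ties
        r_y = stats.rankdata(y)
        return np.corrcoef(r_x, r_y)[0, 1]

    x = [3, 6, 9, 12, 15, 18, 21, 24]
    y = [10, 15, 30, 35, 25, 30, 50, 45]
    print(spearman_rs(x, y))             # example from the text (one tie in y)
    # Large-sample test of association: sqrt(n) * r_S is approximately N(0,1)
    print(np.sqrt(len(x)) * spearman_rs(x, y))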
6 Comparing variances

Testing hypotheses concerning differences between means is well known and can be found in any textbook about statistics. Less is known about comparing variances. In the case of unpaired samples from normal distributions, the distribution of the quotient of the sample variances $s_1^2/s_2^2$ can be determined and is related to an F-distribution. In general, the analysis of $s_1^2/s_2^2$ is more complicated. In this section we study $s_1^2/s_2^2$ for large samples. We consider unpaired samples as well as paired samples.

6.1 Unpaired samples

Suppose that we have unpaired samples $X_1, X_2, \ldots, X_n$ from $X \sim A(\mu_1, \sigma_1^2)$ and $Y_1, Y_2, \ldots, Y_m$ from $Y \sim B(\mu_2, \sigma_2^2)$. In order to test whether or not $\sigma_2^2 = \sigma_1^2$ one can use a test based on $s_1^2$ and $s_2^2$. We need the following lemma.
Lemma 7 Suppose that $E(X^4 + Y^4) < \infty$. As $n \to \infty$ and $m \to \infty$, we have
$$P\big(\sqrt{n}(s_1^2 - \sigma_1^2) \le x,\ \sqrt{m}(s_2^2 - \sigma_2^2) \le y\big) = P\big(\sqrt{n}(s_1^2 - \sigma_1^2) \le x\big)P\big(\sqrt{m}(s_2^2 - \sigma_2^2) \le y\big) \to P(U_1 \le x)P(U_2 \le y),$$
where $U_1 \sim N\big(0, Var((X - \mu_1)^2)\big)$ and $U_2 \sim N\big(0, Var((Y - \mu_2)^2)\big)$.
Proof. This follows from independence and Theorem 2.

Now consider $K$ defined by
$$K = \frac{\sigma_2^2}{\sigma_1^2}\,\frac{s_1^2}{s_2^2} - 1.$$
Clearly we have
$$K = \frac{\sigma_2^2(s_1^2 - \sigma_1^2) - \sigma_1^2(s_2^2 - \sigma_2^2)}{s_2^2\sigma_1^2} = \frac{Q}{s_2^2\sigma_1^2}.$$
Now we write
$$\sqrt{n}\,Q = \sigma_2^2\sqrt{n}(s_1^2 - \sigma_1^2) - \sigma_1^2\sqrt{m}(s_2^2 - \sigma_2^2)\,\frac{\sqrt{n}}{\sqrt{m}}.$$
Using the notations of Lemma 7 we have the following result.
Theorem 8 Suppose that $E(X^4 + Y^4) < \infty$. If $n \to \infty$ and $m \to \infty$ in such a way that $n/m \to \lambda^2$ ($0 \le \lambda < \infty$), then
$$\sqrt{n}\,K \overset{d}{\Longrightarrow} V \overset{d}{=} \frac{1}{\sigma_1^2}U_1 - \lambda\,\frac{1}{\sigma_2^2}U_2,$$
and $V \sim N(0, \sigma^2_V)$ with
$$\sigma^2_V = \frac{1}{\sigma_1^4}Var\big((X - \mu_1)^2\big) + \lambda^2\,\frac{1}{\sigma_2^4}Var\big((Y - \mu_2)^2\big). \qquad (7)$$
Proof. We clearly have
$$\sqrt{n}\,Q \overset{d}{\Longrightarrow} W,$$
where $W \overset{d}{=} \sigma_2^2 U_1 - \lambda\sigma_1^2 U_2$. Using $s_i^2 \overset{P}{\to} \sigma_i^2$ ($i = 1, 2$), it follows that
$$\sqrt{n}\,K \overset{d}{\Longrightarrow} \frac{1}{\sigma_2^2\sigma_1^2}W \overset{d}{=} \frac{1}{\sigma_1^2}U_1 - \lambda\,\frac{1}{\sigma_2^2}U_2$$
and the result follows.

Remarks.
1) If $\lambda = \infty$, we can interchange the roles of $n$ and $m$.
2) From the practical point of view, we can use (7) to write
$$\frac{1}{n}\sigma^2_V \approx \frac{1}{n}\,\frac{1}{\sigma_1^4}Var\big((X - \mu_1)^2\big) + \frac{1}{m}\,\frac{1}{\sigma_2^4}Var\big((Y - \mu_2)^2\big).$$
3) Note that the asymptotic variance depends on the kurtosis of the underlying distributions. We find that
$$\sigma^2_V = \kappa(X) + 2 + \lambda^2\big(\kappa(Y) + 2\big),$$
and then
$$\frac{1}{n}\sigma^2_V = \frac{1}{n}\big(\kappa(X) + 2\big) + \frac{1}{m}\big(\kappa(Y) + 2\big).$$
4) In the special case of independent samples from normal distributions, we have $\sqrt{n}\,K \overset{d}{\Longrightarrow} V$, where $V \sim N(0, \sigma^2_V)$ with
$$\sigma^2_V = Var(X^{*2}) + \lambda^2 Var(Y^{*2}),$$
where $X^*$ and $Y^*$ are the standardized $X$ and $Y$. Using the expressions of section 5.4 we find that $\sigma^2_V = 2(1 + \lambda^2)$, and then
$$\frac{1}{n}\sigma^2_V \approx 2\Big(\frac{1}{n} + \frac{1}{m}\Big).$$
5) If $\sigma_1^2 = \sigma_2^2 = \sigma^2$, we can study the pooled variance given by:
$$s^2_p = \frac{(n-1)s_1^2 + (m-1)s_2^2}{n + m - 2}.$$
Now we find that
$$\sqrt{n}(s^2_p - \sigma^2) = \frac{n-1}{n+m-2}\sqrt{n}(s_1^2 - \sigma^2) + \frac{m-1}{n+m-2}\,\frac{\sqrt{n}}{\sqrt{m}}\,\sqrt{m}(s_2^2 - \sigma^2).$$
It follows that
$$\sqrt{n}(s^2_p - \sigma^2) \overset{d}{\Longrightarrow} W \overset{d}{=} \frac{\lambda^2}{\lambda^2 + 1}U_1 + \frac{\lambda}{\lambda^2 + 1}U_2.$$
In this case $W \sim N(0, \sigma^2_W)$, with
$$\sigma^2_W = \Big(\frac{\lambda^2}{\lambda^2 + 1}\Big)^2 Var\big((X - \mu_1)^2\big) + \Big(\frac{\lambda}{\lambda^2 + 1}\Big)^2 Var\big((Y - \mu_2)^2\big).$$
In the case of samples from normal distributions with $\sigma_1^2 = \sigma_2^2 = \sigma^2$, we find that
$$\sigma^2_W = 2\sigma^4\Big(\frac{\lambda^2}{\lambda^2 + 1}\Big)^2 + 2\sigma^4\Big(\frac{\lambda}{\lambda^2 + 1}\Big)^2 = 2\sigma^4\,\frac{\lambda^2}{1 + \lambda^2} \approx 2\sigma^4\,\frac{n}{n + m}.$$
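A hedged sketch of the resulting large-sample test of $H_0: \sigma_1^2 = \sigma_2^2$ for unpaired samples, with the kurtoses estimated from the data, follows (numpy/scipy-based; all names are ours, not from the text):

    # Under H0, K = s1^2/s2^2 - 1 and sqrt(n) K ~ N(0, (kappa_X+2) + (n/m)(kappa_Y+2)).
    import numpy as np
    from scipy import stats

    def compare_variances(x, y):
        x = np.asarray(x, float); y = np.asarray(y, float)
        n, m = len(x), len(y)
        k_stat = x.var(ddof=1) / y.var(ddof=1) - 1.0
        var_v = (stats.kurtosis(x, bias=False) + 2.0) \
                + (n / m) * (stats.kurtosis(y, bias=False) + 2.0)
        z = np.sqrt(n) * k_stat / np.sqrt(var_v)
        return z, 2 * stats.norm.sf(abs(z))

    rng = np.random.default_rng(8)
    print(compare_variances(rng.exponential(size=800), rng.exponential(size=600)))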
6.2 Paired samples

Let $(X_1,Y_1), (X_2,Y_2), \ldots, (X_n,Y_n)$ denote a sample from an arbitrary bivariate distribution $(X,Y) \sim A(\mu_1, \mu_2, \sigma_1^2, \sigma_2^2, \rho)$. We prove the following result.

Lemma 9 If $E(X^4 + Y^4) < \infty$, then
$$P\big(\sqrt{n}(s_1^2 - \sigma_1^2) \le x,\ \sqrt{n}(s_2^2 - \sigma_2^2) \le y\big) \to P(U_1 \le x, U_2 \le y),$$
where $(U_1, U_2)$ has a bivariate normal distribution with zero means and with variance-covariance matrix
$$\begin{pmatrix} Var\big((X - \mu_1)^2\big) & Cov\big((X - \mu_1)^2, (Y - \mu_2)^2\big) \\ Cov\big((X - \mu_1)^2, (Y - \mu_2)^2\big) & Var\big((Y - \mu_2)^2\big) \end{pmatrix}.$$
Proof. Take arbitrary real numbers $(u,v) \neq (0,0)$ and consider the vectors
$$\vec{A} = \big(\bar{X}, \bar{Y}, \overline{X^2}, \overline{Y^2}\big), \qquad \vec{\mu} = \big(\mu_1, \mu_2, E(X^2), E(Y^2)\big),$$
and the function $f(a,b,c,d) = u(c - a^2) + v(d - b^2)$. Clearly we have
$$f(\vec{A}) = u\big(\overline{X^2} - \bar{X}^2\big) + v\big(\overline{Y^2} - \bar{Y}^2\big) = \frac{n-1}{n}\big(us_1^2 + vs_2^2\big)$$
and $f(\vec{\mu}) = u\sigma_1^2 + v\sigma_2^2$. It is easy to see that $\vec{\theta} = (-2u\mu_1, -2v\mu_2, u, v)$. The transfer results of section 3.2 show that
$$P\big(\sqrt{n}\big(f(\vec{A}) - f(\vec{\mu})\big) \le x\big) \to P(W \le x),$$
where $W \sim N(0, \sigma^2_W)$ with $\sigma^2_W = \vec{\theta}\,\Sigma\,\vec{\theta}^{\,t}$ and
$$\Sigma = \begin{pmatrix} \sigma_1^2 & Cov(X,Y) & Cov(X,X^2) & Cov(X,Y^2) \\ & \sigma_2^2 & Cov(Y,X^2) & Cov(Y,Y^2) \\ & & Var(X^2) & Cov(X^2,Y^2) \\ & & & Var(Y^2) \end{pmatrix}$$
(symmetric; only the upper triangle is displayed). Straightforward calculations show that
$$\vec{\theta}\,\Sigma\,\vec{\theta}^{\,t} = Var\big(u(X - \mu_1)^2 + v(Y - \mu_2)^2\big).$$
It follows that
$$P\Big(\sqrt{n}\Big(\frac{n-1}{n}\big(us_1^2 + vs_2^2\big) - \big(u\sigma_1^2 + v\sigma_2^2\big)\Big) \le x\Big) \to P(W \le x),$$
where $W \overset{d}{=} uU_1 + vU_2$, and $(U_1, U_2)$ has the desired bivariate normal distribution. It is clear that the correction factor $(n-1)/n$ is not important. The result follows by using the Cramér-Wold device.

As in Theorem 8, we consider $K$ and now we conclude that $P(\sqrt{n}\,K \le x) \to P(V \le x)$, where
$$V \overset{d}{=} \frac{1}{\sigma_1^2}U_1 - \frac{1}{\sigma_2^2}U_2.$$
We find that $V \sim N(0, \sigma^2_V)$ with
$$\sigma^2_V = \frac{Var\big((X - \mu_1)^2\big)}{\sigma_1^4} + \frac{Var\big((Y - \mu_2)^2\big)}{\sigma_2^4} - 2\,\frac{Cov\big((X - \mu_1)^2, (Y - \mu_2)^2\big)}{\sigma_1^2\sigma_2^2}.$$
Remarks.
1) We can rewrite $\sigma^2_V$ more compactly as follows. Using the notation $X^* = (X - \mu_1)/\sigma_1$ and $Y^* = (Y - \mu_2)/\sigma_2$ we have
$$\sigma^2_V = Var(X^{*2}) + Var(Y^{*2}) - 2Cov(X^{*2}, Y^{*2}) = Var(X^{*2} - Y^{*2}) = E\big((X^{*2} - Y^{*2})^2\big).$$
2) If we start from a sample from a bivariate normal distribution, we find (cf. section 5.4) that
$$\sigma^2_V = E(X^{*4}) + E(Y^{*4}) - 2E(X^{*2}Y^{*2}) = 4(1 - \rho^2).$$
In the case of $\rho = 0$ we find back the result of the unpaired case with $\lambda = 1$.
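Analogously, for paired samples the variance $\sigma^2_V = E\big((X^{*2} - Y^{*2})^2\big)$ can be estimated from the standardized observations, giving the following hedged sketch of a large-sample test of $H_0: \sigma_1^2 = \sigma_2^2$ (numpy/scipy-based; all names are ours, not from the text):

    # Paired samples (Lemma 9): under H0, sqrt(n) K with K = s1^2/s2^2 - 1 is
    # approximately N(0, sigma_V^2), with sigma_V^2 estimated by Var(X*^2 - Y*^2).
    import numpy as np
    from scipy import stats

    def compare_variances_paired(x, y):
        x = np.asarray(x, float); y = np.asarray(y, float)
        n = len(x)
        k_stat = x.var(ddof=1) / y.var(ddof=1) - 1.0
        xs = (x - x.mean()) / x.std(ddof=1)
        ys = (y - y.mean()) / y.std(ddof=1)
        var_v = np.var(xs**2 - ys**2, ddof=1)      # estimate of E((X*^2 - Y*^2)^2)
        z = np.sqrt(n) * k_stat / np.sqrt(var_v)
        return z, 2 * stats.norm.sf(abs(z))

    rng = np.random.default_rng(9)
    a = rng.normal(size=1000)
    print(compare_variances_paired(a + rng.normal(size=1000), a + rng.normal(size=1000)))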