Target-following framework for symmetric cone programming

CHEK BENG CHUA
January 3, 2011
Abstract. We extend the target map, together with the weighted barriers and the notions of weighted analytic centers, from linear programming to general convex conic programming. This extension is obtained from a novel geometrical perspective of the weighted barriers, which views a weighted barrier as a weighted sum of barriers for a strictly decreasing sequence of faces. Using the Euclidean Jordan-algebraic structure of symmetric cones, we give an algebraic characterization of a strictly decreasing sequence of its faces, and specialize this target map to produce a computationally tractable target-following algorithm for symmetric cone programming. The analysis is made possible with the use of triangular automorphisms of the cone, a new tool in the study of symmetric cone programming. As an application of this algorithm, we demonstrate that, starting from any given pair of primal-dual strictly feasible solutions, the primal-dual central path of a symmetric cone program can be efficiently approximated.
2000 Mathematics Subject Classification. 90C25; 90C51; 52A41.
Key words and phrases. Symmetric cone programming; target-following algorithm; target map; weighted barrier; weighted analytic centers; flags of faces; triangular transformations.
1. Introduction
In this paper, we consider primal-dual interior point algorithms for linear optimization problems over symmetric cones (a.k.a. symmetric cone programming):
\[
\text{(1.1a)}\qquad \sup\Bigl\{\, \sum_{j=1}^m b_j y_j \;:\; \sum_{j=1}^m y_j a_j + x = c,\ x \in \mathrm{cl}(\Omega) \Bigr\},
\]
where $\Omega$ is a given (open) symmetric cone in a finite-dimensional real vector space with inner product $\langle\cdot,\cdot\rangle$, $a_1,\dots,a_m,c$ are given vectors, and $b_1,\dots,b_m$ are given real numbers. Its dual problem is the symmetric cone program
\[
\text{(1.1b)}\qquad \inf\{\, \langle c, s\rangle \;:\; \langle a_j, s\rangle = b_j,\ 1 \le j \le m,\ s \in \mathrm{cl}(\Omega) \,\}.
\]
Without any loss of generality, we assume that the vectors $a_1,\dots,a_m$ are linearly independent. With this assumption, $(y_1,\dots,y_m)$ is uniquely determined by each $x$ satisfying the equality constraints, and we thus view $(y_1,\dots,y_m)$ as a function of $x$. Henceforth, we shall use only the $x$-component when referring to a feasible solution. For the purpose of studying interior point algorithms, we also assume that the primal-dual symmetric cone programs have strictly feasible solutions; i.e., there exist $x, s \in \Omega$ satisfying the equality constraints in their respective problems.
Primal-dual interior-point algorithms, first designed for linear programming (see, e.g., [27]) and subsequently extended to semidefinite programming (see, e.g., [26, Part II]), symmetric cone programming (see, e.g., [19]) and, recently, homogeneous cone programming [6], are the most widely used interior-point algorithms in practice. At the same time, they are able to achieve the best iteration complexity bound known to date.
The development of primal-dual algorithms for symmetric cone programming began from two very different perspectives. Yu. Nesterov and M. Todd [19] described their algorithm in the context of self-concordant barriers (see the seminal work of Yu. Nesterov and A. Nemirovski [18]) by specializing general logarithmically homogeneous self-concordant barriers to self-scaled barriers. L. Faybusovich [8], on the other hand, obtained his algorithm by extending a primal-dual algorithm for semidefinite programming via the theory of Euclidean Jordan algebras. This Jordan-algebraic approach has been so successful that it is now the most commonly used tool in designing interior-point algorithms for symmetric cone programming [1, 2, 4, 21].
In the special case of linear programming, various primal-dual path-following algorithms were simultaneously analyzed under the target-following framework by B. Jansen, C. Roos, T. Terlaky and J.-Ph. Vial [12]. The target-following framework was first introduced by S. Mizuno [15] for linear complementarity problems. It was subsequently used by Jansen et al. as a unifying framework for various primal-dual path-following algorithms for linear programming and algorithms that find analytic centers of polyhedral sets. The essential ingredient of this framework is the target map $(x,s) \mapsto (x_1 s_1, \dots, x_n s_n)$, defined for each pair of positive $n$-vectors $(x,s)$. An important feature of the target map is its bijectivity between the primal-dual strictly feasible region and the cone of positive $n$-vectors $\mathbb{R}^n_{++}$ [12, 14], whence we may identify the primal-dual strictly feasible region with the relatively simple cone $\mathbb{R}^n_{++}$, known as the target space (or $v$-space). Interior-point algorithms based on the target map are known as target-following algorithms, which are conceptually elegant when viewed as following a sequence of targets in the target space.
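In the linear-programming case the target map is immediate to compute. The following numpy sketch (toy data, not from the paper) evaluates it and checks that the image of a strictly positive pair lands in the target space $\mathbb{R}^n_{++}$.

```python
import numpy as np

def target_map(x, s):
    """Target map for linear programming: the componentwise products x_i * s_i."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    assert np.all(x > 0) and np.all(s > 0), "both vectors must be strictly positive"
    return x * s

# A strictly positive primal-dual pair (toy data) and its target vector.
x = np.array([2.0, 0.5, 1.0])
s = np.array([0.25, 4.0, 3.0])
w = target_map(x, s)   # lies in the target space R^3_++
```

On the central path the target vector is a positive multiple of the all-ones vector, and path-following corresponds to tracking the targets $\mu\mathbf{1}$ as $\mu \downarrow 0$.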
Various attempts were made to generalize the concept of target maps to semidefinite programming [16, 17, 22], symmetric cone programming [11, 23] and general convex conic programming [24]. It is noted here that these extensions of the target map do not result in target-following algorithms, as they are generally not injective on the whole primal-dual strictly feasible region; see Section 1.4.
In this paper, we present an extension of the target-following framework to symmetric cone programming. This extension is obtained from a novel geometrical perspective of the weighted barriers, which views a weighted barrier as a weighted sum of barriers for a strictly decreasing sequence of faces. Using the Euclidean Jordan-algebraic structure of symmetric cones, we give an algebraic characterization of a strictly decreasing sequence of its faces, and specialize this target map to produce a computationally tractable target-following algorithm for symmetric cone programming. The analysis is made possible with the use of triangular automorphisms of the cone, a new tool in the study of symmetric cone programming. As an application of this algorithm, we demonstrate that, starting from any given pair of primal-dual strictly feasible solutions, the primal-dual central path of a symmetric cone program can be efficiently approximated.
1.1. Linear programming revisited. Let us begin by revisiting the special case of linear programming, where $\Omega = \mathbb{R}^r_{++} = (0,\infty)^r$. In this case, the minimizer of the weighted barrier problem, which is the problem of minimizing the function
\[
x \in \Omega \mapsto -\sum_{i=1}^r \omega_i \log x_i + \sum_{j=1}^m b_j y_j
\]
over the primal strictly feasible region, and the unique set of Lagrange multipliers satisfying the Lagrange optimality conditions form the pair of primal-dual weighted analytic centers associated with the weights $\omega_1,\dots,\omega_r$. Under the target map, a pair of primal-dual strictly feasible solutions $(x,s)$ is mapped to $(\omega_1,\dots,\omega_r)$ if and only if it is the pair of primal-dual weighted analytic centers associated with the weights $\omega_1,\dots,\omega_r$. The weighted sum of logarithmic barriers
\[
x \in \Omega \mapsto -\sum_{i=1}^r \omega_i \log x_i
\]
is called a weighted logarithmic barrier for $\mathbb{R}^r_{++}$ associated with $(\omega_1,\dots,\omega_r)$ (or simply a weighted barrier for $\mathbb{R}^r_{++}$).
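As a concrete toy illustration of weighted analytic centers, the sketch below computes the primal weighted analytic center of a one-constraint slice of $\mathbb{R}^3_{++}$, taking $b = 0$ for simplicity; the data $a$, $c$ and the weights are hypothetical. The dual center $s = \omega/x$ then satisfies the target-map relation $x_i s_i = \omega_i$ together with dual feasibility $\langle a, s\rangle = 0$.

```python
import numpy as np

# Hypothetical data: minimize -sum_i w_i log x_i over {x = c - y*a > 0}.
w = np.array([1.0, 2.0, 3.0])      # weights
a = np.array([1.0, -1.0, 2.0])     # single constraint direction
c = np.array([4.0, 3.0, 5.0])

def grad(y):                       # derivative of the barrier along the slice
    return np.sum(w * a / (c - y * a))

lo, hi = -2.9, 2.4                 # feasible bracket: c - y*a > 0 on (-3, 2.5)
for _ in range(100):               # bisection on the increasing function grad
    mid = 0.5 * (lo + hi)
    if grad(mid) > 0:
        hi = mid
    else:
        lo = mid
y = 0.5 * (lo + hi)
x = c - y * a                      # primal weighted analytic center
s = w / x                          # dual weighted analytic center: x_i s_i = w_i
```

By construction the pair $(x,s)$ maps to the weight vector $w$ under the target map.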
We now describe a generalization of the weighted barriers, and the notion of weighted analytic centers, to symmetric cone programming, and more generally to convex conic programming, where $\Omega$ is an open convex cone.
First we select a sequence of faces of $\mathbb{R}^r_+$ satisfying $\mathbb{R}^r_+ = F_1 \rhd \cdots \rhd F_r \rhd F_{r+1} = \{0\}$, where $F \rhd \tilde{F}$ means that $\tilde{F}$ is a proper face of $F$. This choice of faces determines a permutation $\pi$ on the index set $\{1,\dots,r\}$ such that $F_i = \{x \in \mathbb{R}^r_+ : x_{\pi(1)} = \cdots = x_{\pi(i-1)} = 0\}$ for $i = 2,\dots,r,r+1$. In analogy to flag manifolds, we use the term flag when referring to such a sequence of faces.
Denition 1 (Flag of faces).A ag of faces (or simply ag) of a convex cone K
is a strictly decreasing sequence of faces
cl(K) = F
1
B   BF
p
BF
p+1
= f0g:
A ag is said to be complete if it is not a subsequence of another ag of the same
cone.
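A complete flag of $\mathbb{R}^r_+$ is conveniently encoded by the permutation $\pi$ described above. The following toy sketch builds the chain of zero-sets of the faces and checks that the chain is strictly decreasing.

```python
# Toy encoding of a complete flag of R^r_+: face F_i consists of the points
# whose coordinates pi(1), ..., pi(i-1) vanish, so F_i is described by its
# zero-set {pi(1), ..., pi(i-1)}.
def zero_set(pi, i):
    return set(pi[:i - 1])         # 0-based permutation, 1-based face index

r = 3
pi = (2, 0, 1)                     # a permutation of {0, 1, 2}
faces = [zero_set(pi, i) for i in range(1, r + 2)]   # F_1, ..., F_{r+1}
# F_1 has empty zero-set (the whole cone); F_{r+1} = {0} has every coordinate zero
strictly_decreasing = all(faces[i] < faces[i + 1] for i in range(r))
```

Strict growth of the zero-sets corresponds exactly to strict decrease of the faces.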
Next, we consider the weighted sum of logarithmic barriers:
\[
\text{(1.2)}\qquad f_{(\omega,\mathsf{f})} : x \mapsto \sum_{i=1}^r (\omega_i - \omega_{i-1})\, f_{F_i}(x),
\]
where $\omega = (\omega_0,\omega_1,\dots,\omega_r)$ is a nondecreasing sequence $0 = \omega_0 < \omega_1 \le \cdots \le \omega_r$ of real numbers, $\mathsf{f} = (F_1,\dots,F_r,F_{r+1})$ is a complete flag of $\mathbb{R}^r_+$, and $f_{F_i}$ denotes the (modified) Legendre-Fenchel conjugate¹ of the logarithmic barrier of the face $F_i$; i.e., $f_{F_i}$ is the barrier $x \mapsto -\log x_{\pi(i)} - \cdots - \log x_{\pi(r)}$ for the open dual cone $\mathrm{int}(F_i)^\sharp = \{x \in \mathbb{R}^r : x_{\pi(i)},\dots,x_{\pi(r)} > 0\}$ of the interior $\mathrm{int}(F_i)$ of the face $F_i$. When the complete flag $\mathsf{f}$ is paired with the nondecreasing sequence $\omega$, we call this pair a weighted complete flag.
Denition 2 (Weighted complete ag).A weighted complete ag of a convex cone
K is a pair (!;f),with!= (!
0
;!
1
;:::;!
r
) a nondecreasing sequence 0 =!
0
<
!
1
    !
r
of real numbers,and f = (F
1
;:::;F
r
;F
r+1
) a complete ag of K.The
sequence!is called a weight sequence,and the numbers!
1
;:::;!
r
are called its
weights.
Using partial summation, we can write this weighted sum of logarithmic barriers as
\[
f_{(\omega,\mathsf{f})}(x) = \sum_{i=1}^r \omega_i \bigl(f_{F_i}(x) - f_{F_{i+1}}(x)\bigr) = -\sum_{i=1}^r \omega_i \log x_{\pi(i)} = -\sum_{i=1}^r \omega_{\pi^{-1}(i)} \log x_i.
\]
This is precisely a weighted logarithmic barrier for $\mathbb{R}^r_{++}$ associated with the weights $(\omega_{\pi^{-1}(1)},\dots,\omega_{\pi^{-1}(r)})$. Conversely, every weighted logarithmic barrier for $\mathbb{R}^r_{++}$ can be written as a weighted sum of the form (1.2) once the reordering $\pi$ of the indices that puts the weights in nondecreasing order is determined.

¹The (modified) Legendre-Fenchel conjugate of a function $f : S \to \mathbb{R}$ on a (nonempty) convex set $S$ in a Euclidean space with inner product $\langle\cdot,\cdot\rangle$ is the function $f^\sharp : s \mapsto \sup\{\langle s,x\rangle - f(x) : x \in S\}$ with domain $\{s : f^\sharp(s) < +\infty\}$. When $f$ is closed (e.g., continuous $f$ on open domain $S$), we have $f^{\sharp\sharp} = f$.
If we replace each logarithmic barrier $f_{F_i}$ in this weighted sum by the image $e_{F_i}$ of the vector of all ones $\mathbf{1}$ under the duality map $-\nabla f_{F_i}(\cdot)$, we recover the image of the primal-dual weighted analytic centers $(x,s)$ under the target map: the image $e_{F_i}$ is the 0-1 vector with nonzero entries precisely at positions $\pi(i),\dots,\pi(r)$, whence $e_{F_i} - e_{F_{i+1}}$ is the $\pi(i)$'th unit vector, and subsequently
\[
(\omega_{\pi^{-1}(1)},\dots,\omega_{\pi^{-1}(r)}) = \sum_{i=1}^r \omega_i \bigl(e_{F_i} - e_{F_{i+1}}\bigr) = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, e_{F_i}.
\]
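The two sums above agree by Abel's partial-summation trick; a quick numerical check with toy data:

```python
import numpy as np

# e[i] below stands for e_{F_{i+1}} (0-based): the 0-1 vector supported on
# positions pi(i), ..., pi(r-1); e[r] = e_{F_{r+1}} = 0.
r = 4
pi = np.array([2, 0, 3, 1])               # permutation (0-based)
w = np.array([0.5, 0.5, 1.0, 2.0])        # nondecreasing weights, w_0 = 0

e = [np.zeros(r) for _ in range(r + 1)]
for i in range(r):
    e[i][pi[i:]] = 1.0

lhs = sum(w[i] * (e[i] - e[i + 1]) for i in range(r))
rhs = sum((w[i] - (w[i - 1] if i else 0.0)) * e[i] for i in range(r))
```

Both sides equal the target vector, whose entry at position $\pi(i)$ is $\omega_i$.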
In summary, for a weighted complete flag $(\omega,\mathsf{f})$ of $\mathbb{R}^r_+$,
(1) the nonnegative sum of barriers (1.2) is a weighted logarithmic barrier for $\mathbb{R}^r_{++}$, which we call the weighted barrier associated with $(\omega,\mathsf{f})$;
(2) the pair of primal-dual solutions $(x,s)$ to the weighted barrier problem determined by this weighted logarithmic barrier is a pair of weighted analytic centers, which we call the pair of weighted centers associated with $(\omega,\mathsf{f})$; and
(3) the weighted sum $-\sum_{i=1}^r (\omega_i - \omega_{i-1}) \nabla f_{F_i}(\mathbf{1})$ is the image of the weighted analytic centers $(x,s)$ under the target map, which we call the target vector associated with $(\omega,\mathsf{f})$.
When the weights are not pairwise distinct, a weighted barrier for $\mathbb{R}^r_{++}$ is associated with more than one weighted complete flag, since the permutation $\pi$ is not uniquely determined. Thus we group the weighted complete flags into equivalence classes according to the weighted barriers with which they are associated.
Denition 3 (Equivalence of weighted complete ags).Two weighted complete
ags (!;f) and (e!;
e
f) of an open convex cone K are said to be equivalent if they
have the same weights!
1
;:::;!
r
,and
!
i
>!
i1
=) F
i
=
e
F
i
for i = 1;:::;r.
It is straightforward to verify that two weighted complete flags of $\mathbb{R}^r_+$ are equivalent if and only if they are associated with the same target vector. Hence we have an alternative definition for the target map $(x,s) \mapsto (x_1 s_1,\dots,x_r s_r)$:
\[
(x,s) \mapsto -\sum_{i=1}^r (\omega_i - \omega_{i-1}) \nabla f_{F_i}(\mathbf{1}),
\]
where $(\omega,\mathsf{f})$ is a weighted complete flag of $\mathbb{R}^r_+$ such that $(x,s)$ is the pair of weighted centers associated with $(\omega,\mathsf{f})$.
1.2. Extension to convex conic programming. We now extend this idea to linear optimization over a general closed convex cone $\mathrm{cl}(K)$. In order to associate a weighted complete flag $(\omega,\mathsf{f})$ of the open dual cone $K^\sharp$ with a weighted barrier, we would need to fix, for each and every face $F$ of $\mathrm{cl}(K^\sharp)$, an a priori logarithmically homogeneous self-concordant barrier $f_F^\sharp$. Since the Legendre-Fenchel conjugate $f_{\mathrm{cl}(K^\sharp)}$ is strictly convex, the weighted barrier problem it determines has a unique solution. With these barriers, we then define the weighted barrier, pair of weighted centers, and target vector associated with $(\omega,\mathsf{f})$, respectively, as
(1) the nonnegative sum of barriers in (1.2),
(2) the pair of primal-dual solutions $(x,s)$ to the weighted barrier problem determined by the weighted barrier (1.2), and
(3) the weighted sum $-\sum_{i=1}^r (\omega_i - \omega_{i-1}) \nabla f_{F_i}(e)$, where $e \in K$ is the fixed point of the duality map $-\nabla f_{\mathrm{cl}(K^\sharp)}(\cdot)$.
Denition 4 (Target map).The target map for a linear optimization problemover
the closure cl(K) of an open convex cone is the map dened over its primal-dual
strictly feasible region by
(x;s) 7!
r
X
i=1
(!
i
!
i1
)rf
F
i
(e);
where (!;f) is a weighted complete ag of K
]
such that (x;s) is the pair of weighted
centers associated with (!;f),and e 2 K is the xed point of the duality map
rf
cl(K
]
)
().
One of our main results generalizes the bijectivity of the target map for linear programming over the set of primal-dual strictly feasible solutions.

Theorem 1 (Bijectivity of target map). The target map is a bijection between the primal-dual strictly feasible region and the open dual cone $K^\sharp$.
Proof.We rst demonstrate that the association of each pair of primal-dual strictly
feasible solutions (x;s) to a weighted barrier f
(!;f)
is a bijection.It suces to show
that each pair of primal-dual strictly feasible solutions (x;s) solves the weighted
barrier problem determined by a unique weighted barrier f
(!;f)
.
To this end,we note that the pair of primal-dual strictly feasible solutions (x;s)
solves the weighted barrier problem determined by f
(!;f)
if and only if
s = 
r
X
i=1
(!
i
!
i1
)rf
F
i
(x):
Since s 2 K
]
and rf
F
1
(x) = rf
cl(K
]
)
(x) 2 K
]
 cl(K
]
),we can nd some
positive 
1
such that the dierence s  
1
(rf
F
1
(x)) is on the boundary of K
]
.
Let F
2
C cl(K
]
) be the minimal face containing the dierence s  
1
(rf
F
1
(x)).
If the minimal face F
2
is not the trivial cone f0g,we repeat this process with s
replaced by the dierence s
1
(rf
F
1
(x)) 2 int(F
2
),x replaced by the projection
Proj
F
2
F
2
x 2 int(F
2
)
]
,and K replaced by the cone int(F
2
).
After a nite number (at most the dimension of K) of iterations of this process,
we have a ag and a corresponding strictly increasing sequence of weights f
1
+
   + 
i
g
p
i=1
satisfying s = 
P
p
i=1

i
rf
F
i
(x).The weighted complete ag (!;f),
obtained by extending this ag and sequence of weights to a weighted complete
ag,then denes a weighted barrier f
(!;f)
associated with (x;s).
In the above argument,we have in fact demonstrated a bijection between ele-
ments w 2 K
]
and weighted logarithmic barriers via
w = 
r
X
i=1
(!
i
!
i1
)f
F
i
(e);
by taking (x;s) = (e;w).Composing these two bijections proves the theorem.
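For $K = \mathbb{R}^r_{++}$, the peeling process in the proof can be carried out explicitly: $-\nabla f_{F}(x)$ has entries $1/x_j$ on the support of the current face, and the largest feasible multiple is $\min_j x_j s_j$ over that support. The toy sketch below recovers the weights and the flag from a given pair $(x,s)$; the recovered cumulative weights are exactly the sorted distinct values of $x_j s_j$.

```python
import numpy as np

def peel_weighted_flag(x, s, tol=1e-12):
    """Recover the weighted complete flag of (x, s) in R^r_++ by peeling."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    support = np.arange(len(x))        # coordinates spanning the current face
    rem = s.copy()                     # remaining part of s
    total, weights, flag = 0.0, [], []
    while support.size:
        lam = np.min(rem[support] * x[support])   # step to the boundary
        total += lam
        weights.append(total)                     # cumulative weight
        flag.append([int(j) for j in support])
        rem[support] -= lam / x[support]
        support = support[rem[support] * x[support] > tol]
    return weights, flag

x = np.array([2.0, 1.0, 0.5])
s = np.array([0.5, 3.0, 4.0])          # componentwise products x*s = [1, 3, 2]
weights, flag = peel_weighted_flag(x, s)
```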
1.3. Specialization to symmetric cone programming. In the special case of symmetric cone programming, where $\Omega$ is a symmetric cone, we consider the Euclidean Jordan algebra $J$ of rank $r$ associated with $\Omega$, and use the standard log-determinant barriers
\[
x \mapsto -\log\det(x)
\]
for all faces of $\mathrm{cl}(\Omega)$; see Section A.1. This results in the following weighted barrier and target map associated with the weighted complete flag $(\omega,\mathsf{f})$:
\[
f_{(\omega,\mathsf{f})} : x \mapsto -\sum_{i=1}^r (\omega_i - \omega_{i-1}) \log\det \mathrm{Proj}_{F_i - F_i}(x),
\]
and
\[
(x,s) \mapsto \sum_{i=1}^r (\omega_i - \omega_{i-1})\, e_{F_i},
\]
where $e_{F_i}$ is the identity element in the Euclidean Jordan subalgebra $J_{F_i}$ of $J$ associated with the symmetric cone $\mathrm{int}(F_i)$.
The association between weighted complete flags $(\omega,\mathsf{f})$ and targeted pairs of primal-dual strictly feasible solutions $(x,s)$ is given by
\[
s = \sum_{i=1}^r (\omega_i - \omega_{i-1}) \bigl(\mathrm{Proj}_{F_i - F_i}(x)\bigr)^{-1},
\]
where the inverse is taken as an element of the Euclidean Jordan subalgebra $J_{F_i}$. In the unweighted case where $\omega = \mathbf{1}$, we get the familiar expression $s = x^{-1}$, which we readily express in terms of the Jordan product as the perturbed complementarity condition
\[
x \circ s = e
\]
with $e$ the identity element of the Euclidean Jordan algebra $J$. In the general weighted case, however, we would introduce a partition of $s$ into $(\omega_1 - \omega_0)s_1 + \cdots + (\omega_r - \omega_{r-1})s_r$ with $s_i = (\mathrm{Proj}_{F_i - F_i}(x))^{-1}$; i.e.,
\[
\mathrm{Proj}_{F_i - F_i}(x) \circ s_i = e_{F_i}.
\]
This is done so that we can enjoy the benefit of applying well-studied numerical solution methods for the perturbed complementarity condition. In particular, we choose to use the Nesterov-Todd method [19] when computing the search direction, and measure progress via the function
\[
(x, \{s_i\}) \mapsto \frac{1}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \omega_{i-1}) \bigl\| P_{\mathrm{Proj}_{F_i - F_i}(x)^{1/2}}\, s_i - e_{F_i} \bigr\|^2},
\]
where $P_x$ denotes the quadratic representation of $x$; see Section A. A weighted sum is used here as the computed search directions are orthogonal under the induced weighted inner product. The multiplicative factor of $\frac{1}{\sqrt{\omega_1}}$ scales the induced unit ball centered at the targeted primal-dual solutions so that it just sits within the primal-dual feasible region.
The Nesterov-Todd method applies scalings to the primal-dual variables to get
\[
P_{p_i} \mathrm{Proj}_{F_i - F_i}(x) \circ P_{p_i^{-1}} s_i = e_{F_i},
\]
where $p_i \in \mathrm{int}(F_i)$ is commonly known as a scaling point: it is chosen so that, after scaling, $P_{p_i} \mathrm{Proj}_{F_i - F_i}(x) = P_{p_i^{-1}} s_i$. This method is chosen for its simplicity in the analysis of the algorithm, as shall be evident in Section 3.1. One drawback of partitioning $s$ is that we now have a sequence of $r$ dual variables $(s_1,\dots,s_r)$ to solve for; i.e., there is an increase in the size of the Newton system. This can be circumvented by using triangular transformations $A_i$ (see Definition 7) instead of quadratic representations, together with an appropriate choice of complete flag $\mathsf{f}$; see Section 3.1 for details.
We thus apply Newton's method to solve
\[
A_i^{\mathsf{t}}\, \mathrm{Proj}_{F_i - F_i}(x) \circ A_i^{-1} s_i = e_{F_i},
\]
where $A_i$ is a triangular automorphism of $\mathrm{int}(F_i)$ satisfying $A_i^{\mathsf{t}}\, \mathrm{Proj}_{F_i - F_i}(x) = A_i^{-1} s_i$, and measure progress via the proximity measure
\[
d_F(x,s;\omega) = \sqrt{\frac{1}{\omega_1} \sum_{i=1}^r \frac{\bigl(\lambda_i(P_{x^{1/2}}\, s) - \omega_i\bigr)^2}{\omega_i}},
\]
which is obtained naturally from the measure for $(x,\{s_i\})$. Here, $\lambda_i(\cdot)$ denotes the $i$'th smallest eigenvalue; see Section A.1. In short, we prove the following quadratic convergence result.
Proposition 1. If the target $\sum_{i=1}^r (\omega_i - \omega_{i-1}) e_{F_i}$ is selected in such a way that the current primal-dual iterates $(x,s)$ satisfy
\[
s = \sum_{i=1}^r \lambda_i \bigl(\mathrm{Proj}_{F_i - F_i}(x)\bigr)^{-1}
\]
for some $\lambda_1,\dots,\lambda_r > 0$, and $d_F(x,s;\omega) < \frac{\sqrt{5}-1}{2} < 1$, then taking a full step along the search directions $(\delta_x, \delta_s)$ determined by the Nesterov-Todd method using a suitable triangular automorphism keeps the iterates within the primal-dual strictly feasible region, and satisfies
\[
d_F(x + \delta_x,\, s + \delta_s;\, \omega) \le \frac{d_F(x,s;\omega)^2}{1 - d_F(x,s;\omega)}.
\]
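The recurrence in Proposition 1 can be iterated numerically; this small sketch (with an arbitrary starting proximity) illustrates the quadratic decay. Note that the threshold $\frac{\sqrt{5}-1}{2}$ guarantees the next proximity stays below $1$, and the sequence decreases quadratically once it drops below $1/2$.

```python
# Iterate the proximity bound d -> d^2 / (1 - d) from Proposition 1.
d = 0.4                      # starting proximity (below 1/2)
trace = [d]
for _ in range(6):
    d = d * d / (1.0 - d)
    trace.append(d)
# the number of accurate "digits" roughly doubles at every step
```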
The above inequality allows us to design a globally convergent target-following algorithm (see Algorithm 2), with which we show that points on the primal-dual central path can be efficiently approximated when given any primal-dual strictly feasible solutions within a prescribed wide neighborhood of the central path; i.e., in the set $\{(x,s) \in \Omega^2 : \lambda_1(P_{x^{1/2}}\, s) \ge \gamma \frac{\langle x,s\rangle}{r}\}$ for some $\gamma \in (0,1)$. This is summarized in the following theorem.
Theorem 2. Suppose $\gamma \in (0,1)$ is fixed. Given any pair of primal-dual strictly feasible solutions $(\hat{x},\hat{s})$ for the primal-dual symmetric cone programming problems (1.1), and any positive real number $\hat{\mu}$, there is a sequence of at most
\[
O\left( \sqrt{r} \left( \log \frac{\langle \hat{x},\hat{s}\rangle}{r\, \lambda_1(P_{\hat{x}^{1/2}}\, \hat{s})} + \left| \log \frac{\langle \hat{x},\hat{s}\rangle}{r\hat{\mu}} \right| \right) \right)
\]
weights such that Algorithm 2 finds a pair of primal-dual strictly feasible solutions $(x,s)$ satisfying $\| P_{x^{1/2}}\, s - \hat{\mu} e \| \le \gamma\hat{\mu}$.
1.4. Comparison with existing notions of target maps. As mentioned earlier, there were other attempts at extending the concept of target maps to semidefinite programming [16, 17, 22], symmetric cone programming [11, 23] and general convex conic programming [24].

In the works [16, 17], the authors consider various notions of target map, and demonstrate that each of these target maps is injective on some neighborhood of the primal-dual central path. However, it is not known whether any of these target maps is injective on the whole strictly feasible region. Thus, unlike our target map and the target-following algorithms derived from it, any target-following algorithm based on the target maps in [16, 17] requires all targets to stay within some neighborhood of the central path.
In the work [22], the authors consider as the target map the mapping of primal-dual strictly feasible solutions to the diagonal matrix of eigenvalues of their product. This target map is only injective along the primal-dual central path, hence the target-following algorithm based on it can only follow the central path.

The target maps considered in [11, 23] and [24] are all generalizations of the target map induced by the Nesterov-Todd method [17] to symmetric cone programming and general convex conic programming. Hence, we expect that any target-following algorithm based on these will again require all targets to stay within some neighborhood of the central path. While our algorithm is also based on the Nesterov-Todd approach, we do not use self-adjoint automorphisms $P_p$ for the primal-dual scalings, but instead employ triangular scalings; this new tool enables our algorithm to work beyond neighborhoods of the central path.
1.5. Organization of paper. This paper is organized as follows. In Section 2, we use the Euclidean Jordan-algebraic characterization of symmetric cones to define the notion of weighted analytic centers for symmetric cone programming. This notion allows us to define the target map, with which we describe and analyze a target-following algorithm in Section 3. Finally, in Section 4, we apply the target-following algorithm to the problem of finding the primal-dual central path of a symmetric cone program.
2. Target map for symmetric cone programming

Throughout this paper, $\Omega$ denotes a symmetric cone, and $(J,\circ)$ denotes a Euclidean Jordan algebra of rank $r$ with identity element $e$ such that the associated symmetric cone $\Omega(J)$ coincides with $\Omega$; i.e., the interior of the cone of squares of $J$ is the symmetric cone $\Omega$. Here, and throughout, we equip $J$ with the inner product $\langle\cdot,\cdot\rangle : (x,y) \mapsto \mathrm{tr}(x \circ y)$. We refer the reader to the appendix for more details on Euclidean Jordan algebras, including various notations used in this paper.

We shall denote the automorphism group of $\Omega$ by $G(\Omega)$ and its connected component containing the identity by $G$. We note that since $\Omega$ is self-adjoint, so is its automorphism group $G(\Omega)$; i.e., for each automorphism $A \in G(\Omega)$, its adjoint $A^{\mathsf{t}}$ is also an automorphism of $\Omega$.
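For readers who prefer a concrete model, here is the standard toy instance: the $r \times r$ real symmetric matrices with Jordan product $x \circ y = \frac{1}{2}(xy + yx)$, identity element the identity matrix, inner product $\mathrm{tr}(x \circ y)$, and the positive definite matrices as the associated symmetric cone.

```python
import numpy as np

def jordan(x, y):
    """Jordan product on real symmetric matrices."""
    return 0.5 * (x @ y + y @ x)

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))
x = a + a.T                        # a symmetric element of the algebra
b = rng.standard_normal((3, 3))
y = b + b.T
e = np.eye(3)                      # identity element

inner = np.trace(jordan(x, y))     # the inner product <x, y> = tr(x o y)
```

The product is commutative but not associative; it satisfies the Jordan identity $x^2 \circ (x \circ y) = x \circ (x^2 \circ y)$ instead.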
2.1. Weighted barriers and target map. In order to have a Jordan-algebraic description of the target map, we first need a description of flags of faces of $\mathrm{cl}(\Omega)$.
2.1.1. Weighted flags and Jordan frames. We begin with an algebraic characterization of faces of $\Omega$ given by L. Faybusovich [9, Theorem 2]: each face $F \lhd \mathrm{cl}(\Omega)$ is the cone of squares $\mathrm{cl}(\Omega(J_{c_2+\cdots+c_k}))$ of the subalgebra $J_{c_2+\cdots+c_k}$ in the Peirce decomposition $J = J_{c_2+\cdots+c_k} \oplus J_{c_2+\cdots+c_k;\,c_1} \oplus J_{c_1}$ with respect to the idempotent $c_2 + \cdots + c_k$, where $\lambda_1 c_1 + \lambda_2 c_2 + \cdots + \lambda_k c_k$ is the type I spectral decomposition of an arbitrary $x \in \mathrm{relint}(F)$ with $0 = \lambda_1 < \lambda_2 < \cdots < \lambda_k$.

We now extend this algebraic characterization to one for flags of faces of $\Omega$.

Theorem 3. Given a flag $\mathsf{f} = (F_1,\dots,F_p,F_{p+1})$ of $\Omega$, there exists a unique complete system of orthogonal idempotents $C = (c_1,\dots,c_p)$ such that
\[
F_i = \mathrm{cl}(\Omega(J_{c_i+\cdots+c_p})) \quad \text{for } i = 1,\dots,p.
\]
Proof. We shall prove by induction on the rank $r$ of the Euclidean Jordan algebra $J$. When $r = 1$, the theorem trivially holds with $c_1 = e$.

Suppose that the theorem holds for every Euclidean Jordan algebra of rank no more than some $r \ge 1$. Consider a Euclidean Jordan algebra $J$ of rank $r+1$. If $p = 1$, then the theorem trivially holds with $c_1 = e$. Otherwise, by the preceding facial characterization, there is an idempotent $\tilde{c} \ne e$ such that $F_2$ is the cone of squares of $J_{\tilde{c}}$. This idempotent is the identity element in the subalgebra $J_{\tilde{c}}$, and is thus unique. The Euclidean Jordan algebra $J_{\tilde{c}}$ has rank at most $r$. By the inductive hypothesis, there is a unique system of orthogonal idempotents $(c_2,\dots,c_p)$ with $c_2 + \cdots + c_p = \tilde{c}$ such that $F_i$ is the cone of squares of the subalgebra $J_{c_i+\cdots+c_p}$ for $i = 2,\dots,p$. With $c_1 = e - \tilde{c}$, $F_1 = \mathrm{cl}(\Omega)$ is the cone of squares of $J_{c_1+\cdots+c_p} = J_e = J$. □
The above description of flags leads to the following definitions of weighted Jordan frames and their equivalence classes: there is a natural correspondence between (equivalent) weighted Jordan frames and (equivalent) weighted complete flags of $\Omega$.

Definition 5 (Weighted Jordan frame). A weighted Jordan frame of a Euclidean Jordan algebra $J$ is a pair $(\omega,C)$, with $\omega = (\omega_0,\omega_1,\dots,\omega_r)$ a nondecreasing sequence $0 = \omega_0 < \omega_1 \le \cdots \le \omega_r$ of real numbers, and $C$ a Jordan frame of $J$. The sequence $\omega$ is called a weight sequence, and the real numbers $\omega_1,\dots,\omega_r$ are called its weights.
Denition 6 (Equivalence of weighted Jordan frame).Two weighted Jordan frames
(!;C) and (e!;
e
C) of a Euclidean Jordan algebra J are said to be equivalent if they
have the same weights (!
0
;!
1
;:::;!
r
),and
!
i
>!
i1
=) c
i
+   +c
r
=ec
i
+   +ec
r
for i = 1;:::;r.In other words,two weighted Jordan frames are equivalent if the
weighted complete ags they determined are equivalent.
2.1.2. Triangular automorphisms. In our target-following algorithm, we will employ special automorphisms of $\Omega$ that respect the structure of flags of faces. These are called triangular automorphisms.

Definition 7 (Triangular transformation). Given a Jordan frame $C = (c_1,\dots,c_r)$, a linear transformation $A \in L[J,J]$ is said to be $C$-triangular if, for each $i \in \{1,\dots,r\}$, the subalgebra $J_{c_i+\cdots+c_r}$ is an invariant subspace of $A$, and the restriction of $A$ to each subspace in the Peirce decomposition of $J$ with respect to $C$ is some multiple of the identity transformation.
In matrix theory, the Gauss decomposition of a square matrix $A$ is its decomposition into the product $LU$ of a lower triangular matrix with an upper triangular matrix. This decomposition is obtained as a consequence of the Gaussian elimination process. For a symmetric positive definite matrix, we often further require that the two triangular matrices have positive diagonal entries, and are transposes of each other. This symmetric Gauss decomposition $A = LL^T$ is commonly known as the Cholesky decomposition. The Cholesky decomposition produces the linear transformation $X \mapsto LXL^T$, which is representable by a triangular matrix under a suitable choice of basis. This triangular transformation is in fact an automorphism of the cone of positive definite matrices, and we recover the original matrix $A$ by applying this triangular automorphism to the identity matrix.

We shall briefly see that the $C$-triangular automorphisms in the identity component of $G(\Omega)$ generalize these triangular automorphisms to the setting of Euclidean Jordan algebras. Moreover, these triangular automorphisms can be used in place of quadratic representations for the primal-dual scalings in the Nesterov-Todd method.
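A minimal numerical sketch of the discussion above: the Cholesky factor of a positive definite $X$ induces the triangular automorphism $Y \mapsto LYL^{T}$ of the positive definite cone, and applying it to the identity recovers $X$.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4))
x = a @ a.T + 4 * np.eye(4)          # a symmetric positive definite matrix
L = np.linalg.cholesky(x)            # lower triangular, positive diagonal

def tri_auto(y):
    """Triangular automorphism of the positive definite cone induced by L."""
    return L @ y @ L.T

recovered = tri_auto(np.eye(4))      # maps the identity back to x
```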
Proposition 2 (Symmetric Gauss decomposition, cf. Theorem VI.3.6 of [7]). For each Jordan frame $C$ of $J$,
(1) each $x \in \Omega$ can be uniquely expressed as $x = Ae = \tilde{A}^{\mathsf{t}} e$, where $A, \tilde{A} \in G$ are $C$-triangular;
(2) each $A \in G$ can be uniquely decomposed into $A = BQ = \tilde{Q}\tilde{B}$, where $B, \tilde{B} \in G$ are $C$-triangular and $Q, \tilde{Q} \in G$ are orthogonal.

Proof. All statements, except the last expression in each item, are proved in Theorem VI.3.6 of [7]. To prove these last expressions, we reverse the ordering of primitive idempotents in $C$ before applying Theorem VI.3.6 of [7]. □
Example 1. For the Jordan algebra of $r \times r$ real symmetric matrices, a Jordan frame $C$ is the $r$-tuple $(q_1 q_1^T,\dots,q_r q_r^T)$ with the vectors $q_1,\dots,q_r$ taken from the columns of an orthogonal matrix $Q$, and a $C$-triangular automorphism $A \in G$ takes the form $X \mapsto QLQ^T X QL^T Q^T$ for some lower triangular matrix $L$ with positive diagonal entries, while an orthogonal automorphism $Q \in G$ takes the form $X \mapsto PXP^T$ for some orthogonal matrix $P$. Thus the first item in the above proposition gives the Cholesky and inverse Cholesky decompositions, and the second item is the QR-decomposition.
Theorem 4 (Triangular Nesterov-Todd scaling). For each pair $(x,s) \in \Omega^2$ and each Jordan frame $C$ of $J$, there exists a unique $C$-triangular automorphism $A \in G$ satisfying $A^{\mathsf{t}} x = A^{-1} s$.
Proof. By the preceding proposition, there exists a unique $C$-triangular automorphism $\tilde{A} \in G$ satisfying $\tilde{A}^{\mathsf{t}} e = x$, and a unique $C$-triangular automorphism $\tilde{B} \in G$ satisfying $\tilde{B}e = (\tilde{A}s)^{1/2}$. The theorem follows from
\[
\begin{aligned}
A^{\mathsf{t}} x = A^{-1} s
&\iff (\tilde{A}A)^{\mathsf{t}} e = (\tilde{A}A)^{-1}(\tilde{A}s) = (\tilde{A}A)^{-1} P_{(\tilde{A}s)^{1/2}}\, e \\
&\iff (\tilde{A}A)(\tilde{A}A)^{\mathsf{t}} e = P_{\tilde{B}e}\, e = \tilde{B} P_e \tilde{B}^{\mathsf{t}} e = \tilde{B}\tilde{B}^{\mathsf{t}} e
&&\text{(Lemmas A.2 and A.4)} \\
&\iff (\tilde{A}Ae)^2 = (\tilde{B}e)^2 \iff \tilde{A}Ae = \tilde{B}e
&&\text{(Proposition 2)} \\
&\iff \tilde{A}A = \tilde{B}. \qquad\Box
\end{aligned}
\]
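In the simplest case $\Omega = \mathbb{R}^r_{++}$ (with the standard frame), a $C$-triangular automorphism is a positive diagonal scaling $A = \mathrm{diag}(a)$, and the condition $A^{\mathsf{t}}x = A^{-1}s$ of Theorem 4 has the explicit solution $a_i = \sqrt{s_i/x_i}$; both sides then equal the classical $v$-space point $v_i = \sqrt{x_i s_i}$. A toy check:

```python
import numpy as np

x = np.array([2.0, 0.5, 1.0])
s = np.array([0.5, 2.0, 3.0])

a = np.sqrt(s / x)        # unique positive diagonal scaling with a*x = s/a
v = a * x                 # common scaled point
```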
Finally, we show that we can always find a Jordan frame $C = (c_1,\dots,c_r)$ for which the unique $C$-triangular automorphism in the above theorem scales a given primal-dual pair in $\Omega^2$ to a "diagonal" element; i.e., an element $d$ with $d_{c_i;c_j} = 0$ for all $i < j$ in its Peirce decomposition $d = \sum_{i=1}^r d_{c_i} + \sum_{i<j} d_{c_i;c_j}$. This will be subsequently used to give an algebraic proof of bijectivity of the target map.
Theorem 5. For each pair $(x,s) \in \Omega^2$, there exists a Jordan frame $C = (c_1,\dots,c_r)$ of $J$ and a unique $C$-triangular automorphism $A \in G$ satisfying
\[
A^{\mathsf{t}} x = A^{-1} s = \sum_{i=1}^r \sqrt{\lambda_i(P_{x^{1/2}}\, s)}\; c_i.
\]
Proof. Take any spectral decomposition $P_{x^{1/2}}\, s = \sum_{i=1}^r \lambda_i(P_{x^{1/2}}\, s)\, \tilde{c}_i$. By Proposition 2, there is a $\tilde{C}$-triangular automorphism $B \in G$ and an orthogonal automorphism $Q \in G$ such that $P_{x^{1/2}} = BQ$. We take $A$ to be the automorphism $P^{-1}_{x^{1/2}} P_{(P_{x^{1/2}} s)^{1/4}} Q = Q^{\mathsf{t}} B^{-1} P_{(P_{x^{1/2}} s)^{1/4}} Q \in G$, and $C = (c_1,\dots,c_r)$ to be the $r$-tuple $(Q^{\mathsf{t}} \tilde{c}_1,\dots,Q^{\mathsf{t}} \tilde{c}_r)$; so that $A^{-1} s = Q^{\mathsf{t}} P^{-1}_{(P_{x^{1/2}} s)^{1/4}} P_{x^{1/2}}\, s = Q^{\mathsf{t}} (P_{x^{1/2}}\, s)^{1/2}$ and $A^{\mathsf{t}} x = Q^{\mathsf{t}} P_{(P_{x^{1/2}} s)^{1/4}} P^{-1}_{x^{1/2}}\, x = Q^{\mathsf{t}} P_{(P_{x^{1/2}} s)^{1/4}}\, e = Q^{\mathsf{t}} (P_{x^{1/2}}\, s)^{1/2}$, with
\[
Q^{\mathsf{t}} (P_{x^{1/2}}\, s)^{1/2} = Q^{\mathsf{t}} \sum_{i=1}^r \sqrt{\lambda_i(P_{x^{1/2}}\, s)}\; \tilde{c}_i = \sum_{i=1}^r \sqrt{\lambda_i(P_{x^{1/2}}\, s)}\; c_i.
\]
It remains to check that $C$ is a Jordan frame and $A$ is a $C$-triangular transformation. The former holds since orthogonal automorphisms in $G$ are automorphisms of $J$ that stabilize the identity element $e$; see Theorem A.7. The latter follows from $A = Q^{\mathsf{t}} B^{-1} P_{(P_{x^{1/2}} s)^{1/4}} Q$ and the fact that both $B^{-1}$ and $P_{(P_{x^{1/2}} s)^{1/4}}$ are $\tilde{C}$-triangular. □
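For the symmetric-matrix model, the quadratic representation is $P_u z = uzu$, so $P_{x^{1/2}}\, s = x^{1/2} s x^{1/2}$; its eigenvalues, which appear in Theorem 5, coincide with the (real, positive) eigenvalues of the product $xs$. A toy check:

```python
import numpy as np

rng = np.random.default_rng(2)

def spd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)      # symmetric positive definite

x, s = spd(3), spd(3)
evals, q = np.linalg.eigh(x)
x_half = q @ np.diag(np.sqrt(evals)) @ q.T    # symmetric square root of x
m = x_half @ s @ x_half                       # P_{x^{1/2}} s
lam = np.linalg.eigvalsh(m)                   # ascending eigenvalues
```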
2.1.3. Weighted analytic centers. In defining weighted barriers for $\Omega$, we use the log-determinant barriers for the faces $F$ of $\mathrm{cl}(\Omega)$: $s \in \mathrm{int}(F) \mapsto -\log\det_F(s)$, where $\det_F(s)$ denotes the determinant of $s$ as an element of the Jordan subalgebra $J_F$ that is the linear span of the face $F$ (so that $\mathrm{int}(F) = \Omega(J_F)$). Its Legendre-Fenchel conjugate is then the composition of the orthogonal projection onto this subalgebra with the log-determinant barrier of the associated symmetric cone.

Remark 1. While the choice of barriers is irrelevant to the bijectivity of the target map, this choice is taken here for the convenience of designing and analyzing the target-following algorithm based on it. In fact, it gives an algebraic means of finding a weighted Jordan frame that is associated with a given pair of weighted centers; see the proof of Theorem 6.
The one-to-one correspondence between weighted complete flags and weighted Jordan frames results in the following definition of weighted barriers.

Definition 8 (Weighted log-determinant barrier for symmetric cone). The weighted log-determinant barrier for $\Omega$ associated with the weighted Jordan frame $(\omega,C)$ of the Euclidean Jordan algebra $J$ (or simply weighted barrier) is the function
\[
f_{(\omega,C)} : x \in \Omega \mapsto -\sum_{i=1}^r (\omega_i - \omega_{i-1}) \log\det{}_{c_{i:r}}(x_{c_{i:r}}),
\]
where $c_{i:r}$ denotes the idempotent $c_i + \cdots + c_r$, and $\det_{c_{i:r}}(x)$ denotes the determinant of $x$ as an element of the Jordan subalgebra $J_{c_{i:r}}$.
Up to equivalence of weighted Jordan frames, there is a one-to-one correspondence between the weighted barriers for $\Omega$ and the weighted Jordan frames of $J$.

The log-determinant barrier $x \mapsto -\log\det(x)$ for the symmetric cone $\Omega$ is strictly convex, and all barriers $x \mapsto -\log\det_{c_{i:r}}(x_{c_{i:r}})$ are convex. Therefore the weighted barrier $f_{(\omega,C)}$ is strictly convex, and hence the weighted barrier problem
\[ \inf \left\{ f_{(\omega,C)}(x) - \sum_{j=1}^m b_j y_j \;:\; \sum_{j=1}^m y_j a_j + x = c,\ x \in \Omega \right\} \]
has a unique solution. We call this the primal weighted analytic center associated with the weighted Jordan frame $(\omega, C)$ for the symmetric cone program.
We now consider the Lagrange optimality conditions for this weighted barrier problem. The gradient of the log-determinant barrier at the element $x$ is the negation of its inverse, $-x^{-1}$. Therefore the gradient of the weighted barrier at $x$ is $-\sum_{i=1}^r (\omega_i - \omega_{i-1})(x_{c_{i:r}})^{-1}$, and the Lagrange optimality conditions are
\[ (\mathrm{WCE}_{(\omega,C)}) \qquad \sum_{j=1}^m y_j a_j + x = c, \quad x \in \Omega; \qquad \langle a_j, s\rangle = b_j, \ 1 \le j \le m; \qquad s = \sum_{i=1}^r (\omega_i - \omega_{i-1})\,(x_{c_{i:r}})^{-1}. \]
We shall call these conditions the weighted center equations given by the weighted Jordan frame $(\omega, C)$, and the unique $s$ satisfying the above conditions the dual weighted analytic center associated with the weighted Jordan frame $(\omega, C)$.

By following the proof of Theorem 1, we can show that every pair of primal-dual strictly feasible solutions to the symmetric cone programs (1.1) is a pair of weighted analytic centers.
Theorem 6 (Completeness of weighted log-determinant barriers). Given any pair of primal-dual strictly feasible solutions $(x, s)$ to the symmetric cone programs (1.1), there exists a weighted Jordan frame $(\omega, C)$ such that $(x, s)$ is the unique solution to the weighted center equations $(\mathrm{WCE}_{(\omega,C)})$. Moreover, up to equivalence, the weighted Jordan frame $(\omega, C)$ is uniquely determined by the pair $(x, s)$.

Proof. While this follows from the constructive proof of Theorem 1 as a special case, we can instead use the proof of Theorem 5 to find the weighted Jordan frame $(\omega, C)$. Indeed, in the proof of Theorem 5, we construct a Jordan frame $C = (c_1,\ldots,c_r)$ such that there is a unique $C$-triangular automorphism $A \in G$ satisfying $A^t x = A^{-1} s = \sum_{i=1}^r \sqrt{\lambda_i(P_{x^{1/2}}s)}\, c_i$.²
From Lemma 1, we deduce $(x_{c_{i:r}})^{-1} = (A_i^{-t}(A^t x)_{c_{i:r}})^{-1}$, where $A_i$ denotes the restriction of $A$ to the subalgebra $J_{c_{i:r}}$. From Lemma A.2, we see that this expression is equivalent to $A_i\,((A^t x)_{c_{i:r}})^{-1}$. Therefore, for any weight sequence $\omega$, we have
\begin{align*}
\sum_{i=1}^r (\omega_i - \omega_{i-1})\,(x_{c_{i:r}})^{-1} &= \sum_{i=1}^r (\omega_i - \omega_{i-1})\, A_i ((A^t x)_{c_{i:r}})^{-1} \\
&= A \sum_{i=1}^r (\omega_i - \omega_{i-1})\, ((A^t x)_{c_{i:r}})^{-1} \quad \text{(Lemma 1)} \\
&= A \sum_{i=1}^r (\omega_i - \omega_{i-1}) \sum_{j=i}^r \frac{1}{\sqrt{\lambda_j(P_{x^{1/2}}s)}}\, c_j.
\end{align*}
In particular, for the weight sequence $\omega = (\lambda_1(P_{x^{1/2}}s),\ldots,\lambda_r(P_{x^{1/2}}s))$ (with $\omega_0 = 0$), the above expression simplifies to $A \sum_{i=1}^r \sqrt{\lambda_i(P_{x^{1/2}}s)}\, c_i = s$. ∎
2.1.4. Target map. For each pair of primal-dual strictly feasible solutions $(x, s)$ of the symmetric cone programs (1.1), Theorem 6 states that, up to equivalence, there exists a unique weighted Jordan frame $(\omega, C)$ satisfying $s = \sum_{i=1}^r (\omega_i - \omega_{i-1})\,(x_{c_{i:r}})^{-1}$, where $c_{i:r}$ denotes the idempotent $c_i + \cdots + c_r$. With this weighted Jordan frame $(\omega, C)$, we define the target map as
\[ T : (x, s) \mapsto \sum_{i=1}^r (\omega_i - \omega_{i-1})\, c_{i:r} = \sum_{i=1}^r \omega_i c_i. \]
We note here that the idempotent $c_{i:r} = e_{c_{i:r}} = \nabla(\log\det{}_{c_{i:r}})(e_{c_{i:r}})$, where the identity element $e$ is the fixed point of the duality map $y \mapsto \nabla(\log\det(y)) = y^{-1}$.
The following theorem is a special case of Theorem 1, and has a constructive algebraic proof, obtained by replacing the geometric argument in the proof of Theorem 1 with the algebraic version in the proof of Theorem 6.

Theorem 7 (Bijectivity of target map for symmetric cone programming). The target map for the symmetric cone programs (1.1) is a bijection between the primal-dual strictly feasible region and the symmetric cone $\Omega$.
3. Target-following algorithms for symmetric cone programming

Using the target map $T$ defined in the previous section, we propose the following target-following framework.

Algorithm 1. (Target-following framework for symmetric cone programming)
Given a pair of primal-dual strictly feasible solutions $(x^{\mathrm{in}}, s^{\mathrm{in}})$ and a target $w^{\mathrm{out}} \in \Omega$.
(1) Set $(x^+, s^+) = (x^{\mathrm{in}}, s^{\mathrm{in}})$, and $w^+ = T(x^{\mathrm{in}}, s^{\mathrm{in}})$.
(2) Repeat the following steps until $w^+$ is close to $w^{\mathrm{out}}$:
(a) Select the next target $w^{++}$ leading towards $w^{\mathrm{out}}$.
(b) Compute an approximation $(x^{++}, s^{++})$ of the pre-image $T^{-1}(w^{++})$.
(c) Update $(x^+, s^+) \leftarrow (x^{++}, s^{++})$ and $w^+ \leftarrow w^{++}$.
(3) Output $(x^{\mathrm{out}}, s^{\mathrm{out}}) = (x^+, s^+)$.

² This involves a QR-decomposition in the case of semidefinite programming.
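For intuition, the loop in Algorithm 1 can be sketched concretely for the nonnegative orthant (linear programming), where the target map reduces to the vector of componentwise products $T(x, s) = (x_1 s_1, \ldots, x_n s_n)$ recalled in the conclusion. The random problem data, the multiplicative target interpolation with factor $0.8$, and the step-length rule below are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

# Target-following for an LP primal-dual pair: A^T y + x = c, <a_j, s> = b_j.
# Targets are vectors w > 0; step (2b) is one Newton step for x*s = w.
rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
x = rng.uniform(1.0, 2.0, n)           # strictly feasible primal slack
s = rng.uniform(1.0, 2.0, n)           # strictly feasible dual variable
c = A.T @ rng.standard_normal(m) + x   # data chosen so (x, s) is strictly feasible
b = A @ s

mu = 1.0
w_out = np.full(n, mu)                 # final target: a point on the central path

for _ in range(200):
    # (2a): next target, geometric interpolation toward w_out
    w = np.exp(0.8 * np.log(x * s) + 0.2 * np.log(w_out))
    # (2b): Newton step for  A^T dy + dx = 0,  A ds = 0,  s*dx + x*ds = w - x*s
    rhs = w - x * s
    M = A @ np.diag(s / x) @ A.T               # normal-equations matrix
    dy = np.linalg.solve(M, -A @ (rhs / x))
    dx = -A.T @ dy
    ds = (rhs - s * dx) / x
    # (2c): damped update keeping (x, s) strictly positive
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    x = x + alpha * dx
    s = s + alpha * ds

gap_error = np.max(np.abs(x * s - mu)) / mu
```

The linear feasibility conditions are preserved exactly by each Newton step, so only the targets move from iteration to iteration.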
The two main steps in this framework are the selection of the next target $w^{++}$ and the computation of the next pair of iterates $(x^{++}, s^{++})$. In the next section, we consider the problem of computing the next pair of iterates.

3.1. Approximating weighted analytic centers. We consider the problem of approximating the weighted analytic centers determined by the weighted center equations $(\mathrm{WCE}_{(\omega,C)})$, given a weighted Jordan frame $(\omega, C)$ that defines the next target $w^{++}$ and a pair of current iterates $(x^+, s^+)$. For simplicity of notation, we shall denote by $c_{i:r}$ the idempotent $c_i + \cdots + c_r$, by $J_i$ and $\Omega_i$, respectively, the Jordan subalgebra $J_{c_{i:r}}$ and its associated symmetric cone $\Omega(J_{c_{i:r}})$, and by $G_i$ the identity component of $G(J_i)$.
3.1.1. Nesterov-Todd scaling. We begin by writing the last equation in the weighted center equations $(\mathrm{WCE}_{(\omega,C)})$ as
\begin{align}
s - \sum_{i=1}^r (\omega_i - \omega_{i-1})\, s_i &= 0, \tag{3.1a}\\
x_{c_{i:r}} - x_i &= 0, \quad i = 1,\ldots,r, \tag{3.1b}\\
x_i \circ s_i &= c_{i:r}, \quad i = 1,\ldots,r. \tag{3.1c}
\end{align}
For the moment, we cast aside the first two equations and consider the application of a primal-dual Nesterov-Todd-type scaling in (3.1c):
\[ (\{x_i\}, \{s_i\}) \mapsto (\{A_i^t x_i\}, \{A_i^{-1} s_i\}), \]
where $A_i$ is some automorphism in $G_i$ such that $A_i^t x_i = A_i^{-1} s_i$ for each $i$. The bilinear equations (3.1c) are invariant under this transformation since $x_i \circ s_i = c_{i:r}$ if and only if $A_i^t x_i \circ A_i^{-1} s_i = c_{i:r}$ for any $A_i \in G_i$; see Lemma A.5. The advantage of using the Nesterov-Todd-type scaling is that we can simplify the linearization of $A_i^t x_i \circ A_i^{-1} s_i = c_{i:r}$ by scaling with $L^{-1}_{A_i^t x_i} = L^{-1}_{A_i^{-1} s_i}$ to get
\[ A_i^t \Delta x_i + A_i^{-1} \Delta s_i = (A_i^t x_i)^{-1} - A_i^t x_i. \]
As we turn our attention to the first two equations in (3.1), we quickly realize that the automorphisms $A_i$ used in the primal-dual scalings of the bilinear equations (3.1c) should be chosen so that for some automorphism $A \in G$,
\[ \tag{3.2} (A^t x)_{c_{i:r}} = A_i^t x_{c_{i:r}} \quad \forall x \in J,\ \forall i \in \{1,\ldots,r\}, \]
and
\[ \tag{3.3} A^{-1} \sum_{i=1}^r (\omega_i - \omega_{i-1})\, s_i = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, A_i^{-1} s_i \quad \forall s_i \in J_i. \]
The next lemma shows that these conditions do hold if we take $A \in G$ to be $C$-triangular, and take $A_i$ to be its restriction to the subalgebra $J_i$.

Lemma 1. If $A_i$ is the restriction of a nonsingular $C$-triangular transformation $A$, then both (3.2) and (3.3) hold.
Proof.By denition of A
i
,we have As
i
= A
i
s
i
for all s
i
2 J.Therefore for all
s
i
2 J
i
with i 2 f1;:::;rg,
A
r
X
i=1
(!
i
!
i1
)A
1
i
s
i
=
r
X
i=1
(!
i
!
i1
)AA
1
i
s
i
=
r
X
i=1
(!
i
!
i1
)s
i
:
Since J
i
is invariant under A and the subspaces in a Peirce decomposition are
pairwise orthogonal,we have for all s
i
2 J
i


A
t
i
x
c
i
;s
i

= hx
c
i
;A
i
s
i
i = hx;As
i
i =


A
t
x;s
i

=


(A
t
x)
c
i
;s
i

:
In summary, we will transform the primal-dual variables by $(x, s) \in J^2 \mapsto (A^t x, A^{-1} s)$ and $(x_i, s_i) \in J_i^2 \mapsto (A_i^t x_i, A_i^{-1} s_i)$, where $A \in G$ is $C$-triangular and $A_i \in G_i$ is the restriction of $A$ to $J_i$. We further require that $A_i^t x_i = A_i^{-1} s_i$ for each $i \in \{1,\ldots,r\}$ at the current primal-dual iterates $(x^+, s^+)$, where $x_i = (x^+)_{c_{i:r}}$ and $s_i \in J_i$ is such that $s^+ = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, s_i$.
From $x_i = (x^+)_{c_{i:r}}$, $A_i^t x_i = A_i^{-1} s_i$ and $s^+ = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, s_i$, we arrive at
\[ s^+ = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, A_i A_i^t (x^+)_{c_{i:r}} = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, A (A^t x^+)_{c_{i:r}} = A M A^t x^+, \]
where $M : x \mapsto \sum_{i=1}^r (\omega_i - \omega_{i-1})\, x_{c_{i:r}}$. Since $M$ is not an automorphism of $\Omega$, we cannot expect $A M A^t x^+ \in \Omega$ in general, whence the above equation may not be satisfied with any choice of $A \in G$. On the other hand, if $(A^t x^+)_{c_i,c_j} = 0$ for all $i < j$, then we can replace $M$ with the automorphism $D = P_{\sum_{i=1}^r \sqrt{\omega_i}\, c_i} \in G$. This happens whenever the next target $w^{++}$ is selected in such a way that the current primal-dual iterates $(x^+, s^+)$ satisfy
\[ \tag{3.4} s^+ = \sum_{i=1}^r (\tilde\omega_i - \tilde\omega_{i-1})\, ((x^+)_{c_{i:r}})^{-1} \]
for some weight sequence $\tilde\omega$; i.e., we select $f$ to be some complete flag $\tilde f$ such that $(x^+, s^+)$ is the pair of weighted analytic centers associated with the weighted complete flag $(\tilde\omega, \tilde f)$. We note that this complete flag $\tilde f$ can be obtained from the construction in the proof of Theorem 6.
Remark 2. This assumption means that we only need to (and, in fact, are only allowed to) choose the weight sequence $\omega$ when selecting the next target. Thus the analysis in this paper only allows the algorithm to aim at the collection of targets with a specific weight sequence, but non-specific complete flag; i.e., targets that share the same set of eigenvalues. This is not an issue if the final target is a multiple of the identity element $e$; i.e., if the algorithm aims to locate points on the central path. For all other purposes, we would need to resort to another approach described in [5], which unfortunately has a more involved analysis.
Under the above assumption, the following lemma shows that with the choice $D = P_{\sum_{i=1}^r \sqrt{\omega_i}\, c_i} \in G$, we are able to find a $C$-triangular automorphism $A \in G$ satisfying $s^+ = A D A^t x^+$.
Lemma 2. There exists a unique $C$-triangular automorphism $A \in G$ satisfying $A^t x^+ = D^{-1} A^{-1} s^+$, where $D \in G$ is the automorphism $P_{\sum_{i=1}^r \sqrt{\omega_i}\, c_i}$. Moreover, if the next target $w^{++}$ is selected in such a way that the current primal-dual iterates $(x^+, s^+)$ satisfy (3.4) for some weight sequence $\tilde\omega$, then $A^t x^+ = D^{-1} A^{-1} s^+ = \sum_{i=1}^r \sqrt{\tilde\omega_i/\omega_i}\, c_i$.
Proof. By Theorem 4, there is a unique $C$-triangular automorphism $\tilde A \in G$ satisfying $\tilde A^t x^+ = \tilde A^{-1} s^+$. We can then take $A$ to be the $C$-triangular automorphism $\tilde A D^{-1/2} \in G$, and check that $A^t x^+ = D^{-1/2} \tilde A^t x^+ = D^{-1/2} \tilde A^{-1} s^+ = D^{-1} A^{-1} s^+$. Uniqueness of $A$ follows from that of $\tilde A$. Moreover, if (3.4) holds, we see from the proof of Theorem 6 that $\tilde A$ can be chosen such that $\tilde A^t x^+ = \tilde A^{-1} s^+ = \sum_{i=1}^r \sqrt{\tilde\omega_i}\, c_i$; subsequently, $D^{-1} A^{-1} s^+ = D^{-1/2} \tilde A^{-1} s^+ = D^{-1/2} \sum_{i=1}^r \sqrt{\tilde\omega_i}\, c_i = \sum_{i=1}^r \sqrt{\tilde\omega_i/\omega_i}\, c_i$. ∎
Let $A \in G$ be the $C$-triangular automorphism in the lemma, and denote its restriction to $J_i$ by $A_i$. We then choose $s_i = A_i A_i^t x_i \in J_i$, and check that $s^+ = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, s_i$: using Lemma 1, we deduce $x_i = (A^{-t} d)_{c_{i:r}} = A_i^{-t} d_{c_{i:r}}$, where $d$ denotes the element $\sum_{i=1}^r \sqrt{\tilde\omega_i/\omega_i}\, c_i$, whence $s_i = A_i d_{c_{i:r}}$; we then have
\[ \sum_{i=1}^r (\omega_i - \omega_{i-1})\, s_i = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, A_i d_{c_{i:r}} = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, A d_{c_{i:r}} = A D d = s^+. \]
With the primal-dual scaling $(x, s, \{x_i\}, \{s_i\}) \mapsto (A^t x, A^{-1} s, \{A_i^t x_i\}, \{A_i^{-1} s_i\})$, the linearization of the rewritten last equation (3.1) of the weighted center equations at the current iterate $(x^+, s^+, \{x_i = (x^+)_{c_{i:r}}\}, \{s_i = A_i A_i^t x_i\})$, after scaling by $L^{-1}_{d_{c_{i:r}}}$, becomes
\begin{align}
\Delta s - \sum_{i=1}^r (\omega_i - \omega_{i-1})\, \Delta s_i &= 0, \tag{3.5a}\\
(\Delta x)_{c_{i:r}} - \Delta x_i &= 0, \quad i = 1,\ldots,r, \tag{3.5b}\\
A_i^t \Delta x_i + A_i^{-1} \Delta s_i &= d^{-1}_{c_{i:r}} - d_{c_{i:r}} = \sum_{j=i}^r \frac{\omega_j - \tilde\omega_j}{\sqrt{\tilde\omega_j \omega_j}}\, c_j, \quad i = 1,\ldots,r. \tag{3.5c}
\end{align}
The weighted sum of these equations, with weights $(\omega_i - \omega_{i-1})$, is
\[ \sum_{i=1}^r (\omega_i - \omega_{i-1})\, (A^t \Delta x)_{c_{i:r}} + A^{-1} \Delta s = \sum_{j=1}^r \sqrt{\frac{\omega_j}{\tilde\omega_j}}\,(\omega_j - \tilde\omega_j)\, c_j. \]
Thus the search directions $(\Delta x, \Delta s)$ are obtained by solving
\[ \tag{3.6} \sum_{j=1}^m \Delta y_j\, a_j + \Delta x = 0; \qquad \langle a_j, \Delta s\rangle = 0, \ 1 \le j \le m; \qquad \sum_{i=1}^r (\omega_i - \omega_{i-1})\, (A^t \Delta x)_{c_{i:r}} + A^{-1} \Delta s = \sum_{j=1}^r \sqrt{\frac{\omega_j}{\tilde\omega_j}}\,(\omega_j - \tilde\omega_j)\, c_j. \]
Since the linear operator $x \in J \mapsto \sum_{i=1}^r (\omega_i - \omega_{i-1})\, x_{c_{i:r}}$ is positive definite, the search directions are uniquely determined.
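The positive definiteness of this operator is easy to see concretely in the algebra of real symmetric matrices with the coordinate Jordan frame, where the projection onto $J_{c_{i:r}}$ keeps the trailing principal block, so the operator scales the entry $x_{jk}$ by $\omega_{\min(j,k)}$. The random sampling below is an illustrative check, not a proof.

```python
import numpy as np

# L(x) = sum_i (w_i - w_{i-1}) * (trailing principal block of x), on r x r
# symmetric matrices; its quadratic form is sum_{jk} w_min(j,k) x_jk^2 >= w_1 ||x||^2.
rng = np.random.default_rng(2)
r = 6
w = np.sort(rng.uniform(0.5, 3.0, r))   # weights 0 < w_1 <= ... <= w_r
w0 = np.concatenate(([0.0], w))

def L(x):
    out = np.zeros_like(x)
    for i in range(r):
        out[i:, i:] += (w0[i + 1] - w0[i]) * x[i:, i:]
    return out

ratios = []
for _ in range(100):
    x = rng.standard_normal((r, r))
    x = (x + x.T) / 2
    ratios.append(np.sum(x * L(x)) / np.sum(x * x))   # Rayleigh quotient
min_quad = min(ratios)
```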
3.1.2. Proximity measure. Proximity of the iterates to the weighted analytic centers is measured in terms of $(\{x_i\}, \{s_i\})$ via the backward error
\[ \tag{3.7} d_F(\{y_i\}, \{w_i\}, (\omega, C)) \stackrel{\mathrm{def}}{=} \frac{1}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \omega_{i-1})\, \| P_{y_i^{1/2}} w_i - c_{i:r} \|^2} \]
defined on $\Omega(J_1 \times \cdots \times J_r) \times (J_1 \times \cdots \times J_r)$. This error is induced by the inner product $\langle\cdot,\cdot\rangle_\omega : (\{u_i\}, \{v_i\}) \mapsto \frac{1}{\omega_1} \sum_{i=1}^r (\omega_i - \omega_{i-1}) \langle u_i, v_i\rangle$ on the Cartesian product $J_1 \times \cdots \times J_r$ of Euclidean Jordan algebras, which is chosen because the search directions $(\{\Delta x_i\}, \{\Delta s_i\})$ satisfy $\langle \{\Delta x_i\}, \{\Delta s_i\}\rangle_\omega = 0$. The factor $1/\sqrt{\omega_1}$ is the greatest factor so that
\[ d_F(\{y_{c_{i:r}}\}, \{w_i\}, (\omega, C)) < 1 \implies \sum_{i=1}^r (\omega_i - \omega_{i-1})\, w_i \in \Omega, \]
which is a consequence of the following lemma.

Lemma 3. For $w = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, w_i$,
\[ d_F(\{y_{c_{i:r}}\}, \{w_i\}, (\omega, C)) \ge \sqrt{\frac{1}{\omega_1} \sum_{i=1}^r \frac{(\lambda_i(P_{y^{1/2}} w) - \omega_i)^2}{\omega_i}}. \]
Proof. By Lemma A.3, $P_{y_i^{1/2}} w_i$ and $A_{y_i} w_i$ share the same spectrum for any automorphism $A_{y_i} \in G(\Omega_i)$ satisfying $A_{y_i}^t c_{i:r} = y_i$. In particular, we may use a $C$-triangular transformation $A_y$ satisfying $A_y^t e = y$ (see Proposition 2) and take $A_{y_i}$ to be the restriction of $A_y$ to $J_i$; this results in $\|P_{y_{c_{i:r}}^{1/2}} w_i - c_{i:r}\| = \|A_{y_i} w_i - c_{i:r}\| = \|A_y w_i - c_{i:r}\|$, whence $\omega_1\, d_F(\{y_i\}, \{w_i\}, (\omega, C))^2 = \sum_{i=1}^r (\omega_i - \omega_{i-1}) \|A_y w_i - c_{i:r}\|^2$.
In terms of the Peirce decomposition with respect to $C$,
\begin{align*}
\omega_1\, d_F(\{y_i\}, \{w_i\}, (\omega, C))^2 &= \sum_{i=1}^r (\omega_i - \omega_{i-1}) \sum_{j=i}^r ((A_y w_i)_{c_j} - 1)^2 + \sum_{i=1}^r 2(\omega_i - \omega_{i-1}) \sum_{i \le j < k} \|(A_y w_i)_{c_j,c_k}\|^2 \\
&= \sum_{j=1}^r \sum_{i=1}^j (\omega_i - \omega_{i-1}) ((A_y w_i)_{c_j} - 1)^2 + 2 \sum_{j<k} \sum_{i=1}^j (\omega_i - \omega_{i-1}) \|(A_y w_i)_{c_j,c_k}\|^2.
\end{align*}
Using Cauchy's inequality and the triangle inequality, we get
\begin{align*}
\omega_1\, d_F(\{y_i\}, \{w_i\}, (\omega, C))^2 &\ge \sum_{j=1}^r \frac{1}{\omega_j} \left( \sum_{i=1}^j (\omega_i - \omega_{i-1}) (A_y w_i)_{c_j} - \omega_j \right)^2 + 2 \sum_{j<k} \frac{1}{\omega_j} \left\| \sum_{i=1}^j (\omega_i - \omega_{i-1}) (A_y w_i)_{c_j,c_k} \right\|^2 \\
&= \sum_{j=1}^r \frac{1}{\omega_j} \left( (A_y w)_{c_j} - \omega_j \right)^2 + 2 \sum_{j<k} \frac{1}{\omega_j} \left\| (A_y w)_{c_j,c_k} \right\|^2,
\end{align*}
where $w = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, w_i$. The lemma then follows from the next theorem. ∎
Theorem 8 (cf. Lemma 3 of [3]; cf. Hoffman-Wielandt Inequality). For any $0 < \omega_1 \le \cdots \le \omega_r$, any $x \in J$ and any Jordan frame $C$,
\[ \sum_{i=1}^r \frac{1}{\omega_i} (x_{c_i} - \omega_i)^2 + 2 \sum_{i<j} \frac{1}{\omega_i} \| x_{c_i,c_j} \|^2 \ge \sum_{i=1}^r \frac{1}{\omega_i} (\lambda_i(x) - \omega_i)^2. \]
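In the algebra of real symmetric matrices with the coordinate Jordan frame, $x_{c_i}$ is the diagonal entry $x_{ii}$ and $\|x_{c_i,c_j}\|^2 = 2x_{ij}^2$ under the trace inner product, so Theorem 8 can be spot-checked numerically. The sketch below samples positive semidefinite $x$ (the regime in which the proximity measure is later applied), with eigenvalues and weights both in increasing order; the sampling scheme is an illustrative assumption.

```python
import numpy as np

# Numerical spot check of the inequality of Theorem 8 on random PSD matrices.
rng = np.random.default_rng(3)

def sides(x, w):
    r = len(w)
    lam = np.linalg.eigvalsh(x)            # eigenvalues in increasing order
    lhs = sum((x[i, i] - w[i]) ** 2 / w[i] for i in range(r))
    lhs += 2 * sum(2 * x[i, j] ** 2 / w[i]
                   for i in range(r) for j in range(i + 1, r))
    rhs = sum((lam[i] - w[i]) ** 2 / w[i] for i in range(r))
    return lhs, rhs

violations = 0
for _ in range(200):
    r = 5
    F = rng.standard_normal((r, r))
    x = F @ F.T                            # random positive semidefinite x
    w = np.sort(rng.uniform(0.2, 3.0, r))  # weights 0 < w_1 <= ... <= w_r
    lhs, rhs = sides(x, w)
    violations += lhs + 1e-9 < rhs
```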
Proof. By expanding both sides of the desired inequality, it is clear that it suffices to bound the sum $\sum_{i=1}^r \frac{1}{\omega_i} x_{c_i}^2 + 2 \sum_{i<j} \frac{1}{\omega_i} \|x_{c_i,c_j}\|^2$ from below by $\sum_{i=1}^r \frac{1}{\omega_i} \lambda_i(x)^2$.
Let $x = \sum \lambda_i(x)\, \tilde c_i$ be a spectral decomposition. From $x^2 = \sum \lambda_i(x)^2\, \tilde c_i$, we get
\[ \begin{bmatrix} (x^2)_{c_1} \\ \vdots \\ (x^2)_{c_r} \end{bmatrix} = \begin{bmatrix} (\tilde c_1)_{c_1} & \cdots & (\tilde c_r)_{c_1} \\ \vdots & \ddots & \vdots \\ (\tilde c_1)_{c_r} & \cdots & (\tilde c_r)_{c_r} \end{bmatrix} \begin{bmatrix} \lambda_1(x)^2 \\ \vdots \\ \lambda_r(x)^2 \end{bmatrix}, \]
where the matrix on the right side of the equation is doubly stochastic. By the Hardy-Littlewood-Pólya Theorem [10], we have $\sum_{i=k}^r (x^2)_{c_i} \le \sum_{i=k}^r \lambda_i(x)^2$ for any $k \in \{1,\ldots,r\}$. Consequently,
\begin{align*}
\sum_{i=1}^r \frac{1}{\omega_i} x_{c_i}^2 + 2 \sum_{i<j} \frac{1}{\omega_i} \|x_{c_i,c_j}\|^2 &= \frac{1}{\omega_1} \sum_{i,j=1}^r \|x_{c_i,c_j}\|^2 - \sum_{k=2}^r \left( \frac{1}{\omega_{k-1}} - \frac{1}{\omega_k} \right) \sum_{i,j=k}^r \|x_{c_i,c_j}\|^2 \\
&\ge \frac{1}{\omega_1} \sum_{i=1}^r (x^2)_{c_i} - \sum_{k=2}^r \left( \frac{1}{\omega_{k-1}} - \frac{1}{\omega_k} \right) \sum_{i=k}^r (x^2)_{c_i} \\
&\ge \frac{1}{\omega_1} \sum_{i=1}^r \lambda_i(x)^2 - \sum_{k=2}^r \left( \frac{1}{\omega_{k-1}} - \frac{1}{\omega_k} \right) \sum_{i=k}^r \lambda_i(x)^2 \\
&= \sum_{i=1}^r \frac{1}{\omega_i} \lambda_i(x)^2
\end{align*}
proves the theorem. ∎
We note that for $x_i = (x^+)_{c_{i:r}}$ and $s_i = A_i A_i^t x_i$,
\[ d_F(\{x_i\}, \{s_i\}, (\omega, C)) = \frac{1}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \omega_{i-1}) \sum_{j=i}^r \left( \frac{\tilde\omega_j - \omega_j}{\omega_j} \right)^2} = \sqrt{\frac{1}{\omega_1} \sum_{i=1}^r \frac{(\tilde\omega_i - \omega_i)^2}{\omega_i}}. \]
This suggests the following proximity measure for $(x, s)$:
\[ \tag{3.8} d_F(x, s, \omega) \stackrel{\mathrm{def}}{=} \sqrt{\frac{1}{\omega_1} \sum_{i=1}^r \frac{(\lambda_i(P_{x^{1/2}} s) - \omega_i)^2}{\omega_i}}. \]
From this definition, it is straightforward to deduce that
\[ \frac{\lambda_i(P_{x^{1/2}} s)}{\omega_i} \in [1 - d_F(x, s, \omega),\; 1 + d_F(x, s, \omega)]. \]
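The measure (3.8) is straightforward to compute once the spectrum of $P_{x^{1/2}} s$ is available; for the positive semidefinite cone this is the spectrum of $x^{1/2} s x^{1/2}$. The sketch below, with illustrative test data, also exercises the eigenvalue bracket just deduced.

```python
import numpy as np

# d_F(x, s, w) = sqrt( (1/w_1) * sum_i (lambda_i - w_i)^2 / w_i ),
# lambda = increasingly ordered spectrum of x^{1/2} s x^{1/2}.
def d_F(x, s, w):
    lam_x, Q = np.linalg.eigh(x)
    xh = (Q * np.sqrt(lam_x)) @ Q.T          # x^{1/2}
    lam = np.linalg.eigvalsh(xh @ s @ xh)    # spectrum of P_{x^{1/2}} s
    return np.sqrt(np.sum((lam - w) ** 2 / w) / w[0]), lam

r = 4
w = np.array([0.5, 1.0, 2.0, 4.0])
# exact weighted center for the coordinate frame: x = I, s = diag(w)
d0, _ = d_F(np.eye(r), np.diag(w), w)

# a perturbed pair, and the bracket lambda_i / w_i in [1 - d_F, 1 + d_F]
rng = np.random.default_rng(4)
E = rng.standard_normal((r, r)); E = 0.05 * (E + E.T)
d1, lam = d_F(np.eye(r), np.diag(w) + E @ np.diag(w) @ E, w)
bracket_ok = (np.all(lam / w >= 1 - d1 - 1e-12)
              and np.all(lam / w <= 1 + d1 + 1e-12))
```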
Lemma 4. For all $\theta \in [0,1]$,
\[ d_F(\{x_i + \theta\Delta x_i\}, \{s_i + \theta\Delta s_i\}, (\omega, C)) \le (1-\theta)\beta + \frac{\theta^2}{\omega_1}\sum_{i=1}^r \frac{(\omega_i - \tilde\omega_i)^2}{\tilde\omega_i} \le (1-\theta)\beta + \frac{\theta^2\beta^2}{1-\beta}, \]
where $\beta$ denotes $d_F(\{x_i\}, \{s_i\}, (\omega, C))$.
Proof. Denote by $\tilde x_i$, $\tilde s_i$, $\Delta\tilde x_i$ and $\Delta\tilde s_i$ the scaled elements $A_i^t x_i$, $A_i^{-1} s_i$, $A_i^t \Delta x_i$ and $A_i^{-1} \Delta s_i$ respectively.

We first show that $d_F$ is invariant under the primal-dual scaling $(\{y_i\}, \{w_i\}) \mapsto (\{A_i^t y_i\}, \{A_i^{-1} w_i\})$. From Lemma A.2, we have $P_{A_i^t y_i} = A_i^t P_{y_i} A_i$. We can then follow the method of F. Alizadeh and S. H. Schmieta [1, Proposition 21] to show that $P_{y_i^{1/2}} w_i$ and $P_{(A_i^t y_i)^{1/2}} A_i^{-1} w_i$ share the same set of eigenvalues by demonstrating that their quadratic representations are similar to each other. This shows that each summand in $d_F$, and hence $d_F$, is invariant under the primal-dual scaling.
From the first two sets of equations (3.5a) and (3.5b) of the linearization, we see that the search directions $(\{\Delta x_i\}, \{\Delta s_i\})$ are orthogonal under the inner product $\langle\cdot,\cdot\rangle_\omega$. Thus
\[ \|\{\Delta x_i\}\|_\omega^2 + \|\{\Delta s_i\}\|_\omega^2 = \|\{\Delta x_i + \Delta s_i\}\|_\omega^2 = \sum_{i=1}^r (\omega_i - \omega_{i-1}) \sum_{j=i}^r \left( \frac{\omega_j - \tilde\omega_j}{\sqrt{\tilde\omega_j \omega_j}} \right)^2 = \sum_{i=1}^r \omega_i \left( \frac{\omega_i - \tilde\omega_i}{\sqrt{\tilde\omega_i \omega_i}} \right)^2 = \sum_{i=1}^r \frac{(\omega_i - \tilde\omega_i)^2}{\tilde\omega_i}, \]
where we have used the scaled third set of equations (3.5c) in the second equality. Therefore $\|\{\Delta x_i\}\|_\omega, \|\{\Delta s_i\}\|_\omega \le \sqrt{\sum_{i=1}^r (\omega_i - \tilde\omega_i)^2/\tilde\omega_i}$. For each $i \in \{1,\ldots,r\}$, we have $\|\Delta x_i\| = \|(\Delta x)_{c_{i:r}}\| \le \|\Delta x\| = \|\Delta x_1\|$, and hence $\|\Delta x_i\| \le \|\Delta x_1\| \le \frac{1}{\sqrt{\omega_1}} \|\{\Delta x_i\}\|_\omega \le \frac{1}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \tilde\omega_i)^2/\tilde\omega_i}$.
From the third set of equations (3.5c) of the linearization, we have
\begin{align*}
(\tilde x_i + \theta\Delta\tilde x_i) \circ (\tilde s_i + \theta\Delta\tilde s_i) - c_{i:r} &= \tilde x_i \circ \tilde s_i + \theta(\tilde x_i \circ \Delta\tilde s_i + \Delta\tilde x_i \circ \tilde s_i) + \theta^2\, \Delta\tilde x_i \circ \Delta\tilde s_i - c_{i:r} \\
&= (1-\theta)(\tilde x_i \circ \tilde s_i - c_{i:r}) + \theta(\tilde x_i \circ \tilde s_i + \tilde x_i \circ \Delta\tilde s_i + \Delta\tilde x_i \circ \tilde s_i - c_{i:r}) + \theta^2\, \Delta\tilde x_i \circ \Delta\tilde s_i \\
&= (1-\theta)(\tilde x_i \circ \tilde s_i - c_{i:r}) + \theta^2\, \Delta\tilde x_i \circ \Delta\tilde s_i.
\end{align*}
Therefore, by Lemma 30 of [1],
\begin{align*}
\|P_{(\tilde x_i + \theta\Delta\tilde x_i)^{1/2}} (\tilde s_i + \theta\Delta\tilde s_i) - c_{i:r}\| &\le \|(\tilde x_i + \theta\Delta\tilde x_i) \circ (\tilde s_i + \theta\Delta\tilde s_i) - c_{i:r}\| \\
&\le (1-\theta)\|\tilde x_i \circ \tilde s_i - c_{i:r}\| + \theta^2 \|\Delta\tilde x_i\| \|\Delta\tilde s_i\| \\
&= (1-\theta)\|P_{\tilde x_i^{1/2}} \tilde s_i - c_{i:r}\| + \theta^2 \|\Delta\tilde x_i\| \|\Delta\tilde s_i\| \\
&= (1-\theta)\|P_{x_i^{1/2}} s_i - c_{i:r}\| + \theta^2 \|\Delta\tilde x_i\| \|\Delta\tilde s_i\|,
\end{align*}
where the first equality follows from the fact that $\tilde x_i = \tilde s_i$.
Consequently,
\begin{align*}
d_F(\{x_i + \theta\Delta x_i\}, \{s_i + \theta\Delta s_i\}, (\omega, C)) &= d_F(\{\tilde x_i + \theta\Delta\tilde x_i\}, \{\tilde s_i + \theta\Delta\tilde s_i\}, (\omega, C)) \\
&= \frac{1}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \omega_{i-1}) \|P_{(\tilde x_i + \theta\Delta\tilde x_i)^{1/2}} (\tilde s_i + \theta\Delta\tilde s_i) - c_{i:r}\|^2} \\
&\le \frac{1-\theta}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \omega_{i-1}) \|P_{x_i^{1/2}} s_i - c_{i:r}\|^2} + \frac{\theta^2}{\sqrt{\omega_1}} \sqrt{\sum_{i=1}^r (\omega_i - \omega_{i-1}) \|\Delta\tilde x_i\|^2 \|\Delta\tilde s_i\|^2} \\
&\le (1-\theta)\, d_F(\{x_i\}, \{s_i\}, (\omega, C)) + \frac{\theta^2}{\omega_1} \sum_{i=1}^r \frac{(\omega_i - \tilde\omega_i)^2}{\tilde\omega_i}
\end{align*}
proves the lemma. ∎
With this lemma, we prove the following quadratic convergence result.

Proposition 3. If the next target $w^{++} = \sum_{i=1}^r \omega_i c_i$ is selected in such a way that the current primal-dual iterates $(x^+, s^+)$ satisfy
\[ s^+ = \sum_{i=1}^r \hat\delta_i\, ((x^+)_{c_{i:r}})^{-1} \]
for some $\hat\delta_1, \ldots, \hat\delta_r > 0$, and $d_F(x^+, s^+, \omega) < \frac{\sqrt5 - 1}{2} < 1$, then taking a full step along the directions $(\Delta x, \Delta s)$, determined by the linear system (3.6) with $A \in G$ a $C$-triangular automorphism satisfying $P_{\sum_{i=1}^r \sqrt{\omega_i}\, c_i} A^t x^+ = A^{-1} s^+$, keeps the iterates within the primal-dual strictly feasible region, and
\[ d_F(x^+ + \Delta x, s^+ + \Delta s, \omega) \le d_F(\{x_i + \Delta x_i\}, \{s_i + \Delta s_i\}, (\omega, C)) \le \frac{d_F(x^+, s^+, \omega)^2}{1 - d_F(x^+, s^+, \omega)} < 1, \]
where $x_i = (x^+)_{c_{i:r}}$, $s_i = A_i A_i^t x_i$, $\Delta x_i = (\Delta x)_{c_{i:r}}$, $\Delta s_i = x_i^{-1} - s_i - A_i A_i^t \Delta x_i$, and $A_i$ is the restriction of $A$ to $J_{c_{i:r}}$.
Proof. Recall that $\Delta x_i = (\Delta x)_{c_{i:r}}$ and $\Delta s = \sum_{i=1}^r (\omega_i - \omega_{i-1})\, \Delta s_i$, whence $d_F(x + \Delta x, s + \Delta s, \omega) \le d_F(\{x_i + \Delta x_i\}, \{s_i + \Delta s_i\}, (\omega, C))$ by Lemma 3. Thus it suffices to show that $x_i + \theta\Delta x_i, s_i + \theta\Delta s_i \in \Omega_i$ for $\theta \in [0,1]$, and that
\[ d_F(\{x_i + \Delta x_i\}, \{s_i + \Delta s_i\}, (\omega, C)) \le \frac{d_F(x, s, \omega)^2}{1 - d_F(x, s, \omega)}. \]
By Lemma 3, if $x_i + \theta\Delta x_i \in \operatorname{cl}(\Omega_i) \setminus \Omega_i$ for any $\theta \in [0,1]$, then $d_F(\{x_i + \theta\Delta x_i\}, \{s_i + \theta\Delta s_i\}, (\omega, C)) \ge 1$. Assuming $\beta := d_F(x, s, \omega) < 1$, the previous lemma states that
\[ d_F(\{x_i + \theta\Delta x_i\}, \{s_i + \theta\Delta s_i\}, (\omega, C)) \le (1-\theta)\beta + \frac{\theta^2\beta^2}{1-\beta} < 1 \]
for all $\theta \in [0,1]$, whence $x_i + \theta\Delta x_i \in \Omega_i$ for all $\theta \in [0,1]$ by the continuity of $\theta \mapsto x_i + \theta\Delta x_i$. We then apply Lemma 3 once again, together with the above inequality, to conclude that $s_i + \theta\Delta s_i \in \Omega_i$ for all $\theta \in [0,1]$. ∎
3.2. Choice of targets. The analysis in the preceding section requires the assumption that the next target $w^{++}$ is selected in such a way that the current primal-dual iterates $(x^+, s^+)$ satisfy
\[ s^+ = \sum_{i=1}^r \hat\delta_i\, ((x^+)_{c_{i:r}})^{-1} \]
for some $\hat\delta_1, \ldots, \hat\delta_r > 0$. This, in general, decides the choice of the complete flag $f$. Thus, we now only need to decide the values of the weights $\omega$; see Remark 2. In light of the proximity measure $d_F(\cdot,\cdot,\omega)$, we shall use the following proximity measure on the set of weights:
\[ \tag{3.9} d_F(\tilde\omega, \omega) \stackrel{\mathrm{def}}{=} \sqrt{\frac{1}{\omega_1} \sum_{i=1}^r \frac{(\tilde\omega_i - \omega_i)^2}{\omega_i}}. \]
With this choice of targets and proximity measure, the target-following framework is now specialized to the following:
Algorithm 2. (Target-following algorithm for symmetric cone programming)
Given a pair of primal-dual strictly feasible solutions $(x^{\mathrm{in}}, s^{\mathrm{in}})$ and target weights $\omega^{\mathrm{out}}$.
(1) Pick some $\delta \in (0,1)$ and a sequence of weights $\{\omega^k\}_{k=0}^N$ such that
\[ s^+ = \sum_{i=1}^r (\omega^{(0)}_i - \omega^{(0)}_{i-1})\, ((x^+)_{c_i + \cdots + c_r})^{-1} \]
for some Jordan frame $(c_1, \ldots, c_r)$, $d_F(\omega^k, \omega^{k-1}) \le \delta$ for $k = 1, \ldots, N$, and $\omega^N = \omega^{\mathrm{out}}$.
(2) Set $(x^+, s^+) = (x^{\mathrm{in}}, s^{\mathrm{in}})$.
(3) For $k = 1, \ldots, N$:
(a) Solve the linear system (3.6) with $(c_1,\ldots,c_r)$ a Jordan frame from Theorem 6 for $(x^+, s^+)$, with $(\omega_1,\ldots,\omega_r)$ the weights in $\omega^k$, and with $A \in G$ the automorphism from Lemma 2.
(b) Update $(x^+, s^+) \leftarrow (x^+ + \Delta x, s^+ + \Delta s)$.
(4) Output $(x^{\mathrm{out}}, s^{\mathrm{out}}) = (x^+, s^+)$.
3.3. Analysis of algorithm. Consider one iteration of Algorithm 2. Recall from Proposition 3 that we can take a full step when $d_F(x^+, s^+, \omega) < (\sqrt5 - 1)/2$. This can be enforced during the update of weights via the following lemma.

Lemma 5. If $d_F(x, s, \omega) \le \beta$ and $d_F(\tilde\omega, \omega) \le \delta$ for some $\beta, \delta \in (0,1)$, then
\[ d_F(x, s, \tilde\omega) \le \frac{\beta + \delta}{1 - \delta}. \]
Proof. We have
\begin{align*}
d_F(x, s, \tilde\omega) &\le \sqrt{\frac{1}{\tilde\omega_1} \sum_{i=1}^r \frac{(\lambda_i(P_{x^{1/2}} s) - \omega_i)^2}{\tilde\omega_i}} + \sqrt{\frac{1}{\tilde\omega_1} \sum_{i=1}^r \frac{(\omega_i - \tilde\omega_i)^2}{\tilde\omega_i}} \\
&\le (d_F(x, s, \omega) + d_F(\tilde\omega, \omega)) \max_i \frac{\omega_i}{\tilde\omega_i}.
\end{align*}
If $d_F(\tilde\omega, \omega) \le \delta$, then
\[ \delta^2 \ge \frac{1}{\omega_1} \sum_{i=1}^r \frac{(\tilde\omega_i - \omega_i)^2}{\omega_i} \ge \sum_{i=1}^r \left( \frac{\tilde\omega_i}{\omega_i} - 1 \right)^2, \]
whence $\min_i \tilde\omega_i/\omega_i \ge 1 - \delta$. ∎
We now give the main theorem of this section, which states that for all $\delta > 0$ sufficiently small, say $\delta \le \frac16$, Algorithm 2 terminates with a good approximation of $T^{-1}(\sum_{i=1}^r \omega^{\mathrm{out}}_i c_i)$ for some Jordan frame $C = (c_1,\ldots,c_r)$.

Theorem 9. In Algorithm 2, if $\delta \in (0,1)$ is such that there exists some $\beta \in (0, \frac{\sqrt5-1}{2})$ satisfying
\[ \tag{3.10} \frac{(\beta + \delta)^2}{(1-\beta)(1-\delta)} \le \beta, \]
then $(x^+, s^+)$ is well-defined and strictly feasible in each iteration, and the algorithm terminates with $d_F(x^{\mathrm{out}}, s^{\mathrm{out}}, \omega^{\mathrm{out}}) \le \beta_{\min}$, where $\beta_{\min}$ is the least $\beta$ satisfying the inequality.
Proof. We shall prove by induction that the iterates $(x^+, s^+)$ are strictly feasible and $d_F(x^+, s^+, \omega^k) \le \beta_{\min}$ at the beginning of each iteration. This is certainly true for the first iteration. By Lemma 5, we have $d_F(x^+, s^+, \omega^{k+1}) \le (\beta_{\min} + \delta)/(1 - \delta)$. If the hypothesis (3.10) holds, then we may apply Proposition 3 to deduce that the iterates $(x^+ + \Delta x, s^+ + \Delta s)$ are strictly feasible with
\[ d_F(x^+ + \Delta x, s^+ + \Delta s) \le \frac{\left((\beta_{\min} + \delta)/(1-\delta)\right)^2}{1 - (\beta_{\min} + \delta)/(1-\delta)} = \frac{(\beta_{\min} + \delta)^2}{(1 - \beta_{\min})(1 - \delta)} \le \beta_{\min}. \]
This completes the induction. ∎
4. Finding analytic centers

In this section, we consider an algorithm that finds the analytic center $T^{-1}(\hat\mu e)$ for any given $\hat\mu > 0$. This algorithm can be used to find analytic centers of compact sets described by linear matrix inequalities and convex quadratic constraints. It can also be combined with a path-following algorithm to solve the symmetric cone program (1.1).
Given a pair of primal-dual strictly feasible solutions $(\hat x, \hat s)$, we shall construct a finite sequence of targets $\{\omega^k\}_{k=0}^N$ such that
\[ s^+ = \sum_{i=1}^r (\omega^{(0)}_i - \omega^{(0)}_{i-1})\, ((x^+)_{c_i + \cdots + c_r})^{-1} \]
for some Jordan frame $(c_1, \ldots, c_r)$,
\[ \tag{4.1} d_F(\omega^k, \omega^{k-1}) = \sqrt{\frac{1}{\omega^{k-1}_1} \sum_{i=1}^r \frac{(\omega^k_i - \omega^{k-1}_i)^2}{\omega^{k-1}_i}} \le \delta \quad \text{for } 1 \le k \le N, \]
and $\omega^N = \hat\mu \mathbf{1}$, with $\delta$ satisfying the hypothesis of Theorem 9, thus allowing us to apply Algorithm 2 to approximate $T^{-1}(\hat\mu e)$.
Any sequence $\{\omega^k\}_{k=0}^N$ satisfying (4.1) is called a $\delta$-sequence, and $N$ is called its length; see [15]. In [3], the author gave an upper bound on the length of a shortest $\delta$-sequence from any weight sequence $\omega^0$ to the ray $\{\mu\mathbf{1} : \mu > 0\}$. For the sake of completeness, we repeat the argument here.
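A valid $\delta$-sequence to the ray is easy to generate: move along the straight segment toward $\bar\mu\mathbf{1}$, each time taking the longest step whose distance (4.1) from the previous weights does not exceed $\delta$. This greedy construction is an illustrative sketch; the argument below instead bounds the length of a shortest $\delta$-sequence.

```python
import numpy as np

# Greedy delta-sequence from w0 to mu_bar * 1 under the distance of (4.1):
#   d(w_next, w_prev) = sqrt( (1/w_prev[0]) * sum_i (w_next - w_prev)_i^2 / w_prev[i] ).
def step_dist(w_next, w_prev):
    return np.sqrt(np.sum((w_next - w_prev) ** 2 / w_prev) / w_prev[0])

rng = np.random.default_rng(5)
w = np.sort(rng.uniform(0.5, 4.0, 6))   # increasing starting weights
mu_bar = w.mean()
target = np.full_like(w, mu_bar)
delta = 0.2

seq = [w.copy()]
while step_dist(target, seq[-1]) > delta:
    prev = seq[-1]
    full = step_dist(target, prev)      # the distance scales linearly along a segment
    seq.append(prev + (delta / full) * (target - prev))
seq.append(target)

N = len(seq) - 1
max_step = max(step_dist(seq[k], seq[k - 1]) for k in range(1, N + 1))
```

Each interior step has distance exactly $\delta$, and the loop terminates because every step shrinks the remaining Euclidean distance by a fixed positive amount.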
Consider the local metric defined by the inner product
\[ \langle\cdot,\cdot\rangle_\omega : (u, v) \in \mathbb{R}^r \times \mathbb{R}^r \mapsto \frac{1}{\omega_1} \sum_{i=1}^r \frac{u_i v_i}{\omega_i} \]
at each weight sequence $\omega$. We denote by $\|\cdot\|_\omega$ the norm induced by the above inner product. In terms of this local metric, a $\delta$-sequence $\{\omega^k\}_{k=0}^N$ is one that satisfies
\[ \|\omega^k - \omega^{k-1}\|_{\omega^{k-1}} \le \delta \quad \text{for } 1 \le k \le N. \]
The length of a piecewise smooth curve $\gamma : [0,1] \to W$, where $W \subseteq \mathbb{R}^r_{++}$ denotes the set of weight sequences, is defined to be
\[ \int_0^1 \left\| \frac{d\gamma(t)}{dt} \right\|_{\gamma(t)} dt = \int_0^1 \sqrt{\frac{1}{\gamma_1} \sum_{i=1}^r \frac{\dot\gamma_i^2}{\gamma_i}}\; dt, \]
and denoted by $l(\gamma)$. The next lemma gives an upper bound on a shortest $\delta$-sequence between any two weight sequences in terms of the length of a piecewise smooth curve joining them. Its proof can be obtained by adapting the proof of a similar result in [20], and is thus omitted here.
Lemma 6 (cf. Lemma 3.3 of [20]). For every piecewise smooth curve $\gamma : [0,1] \to W$ and every $\delta \in (0,1)$, there exists a $\delta$-sequence $\{\omega^k\}_{k=0}^N$ with $\omega^0 = \gamma(0)$, $\omega^N = \gamma(1)$ and length
\[ N \le \left\lceil \frac{l(\gamma)}{\delta - \frac12\delta^2} \right\rceil. \]
Next we show the existence of a piecewise smooth curve from a given weight sequence $\omega$ to the ray $\{\mu\mathbf{1} : \mu > 0\}$ with length $O(\log(\frac{1}{r\omega_1} \sum_{i=1}^r \omega_i))$.
Lemma 7. For each weight sequence $\omega$, there exists a piecewise smooth curve $\gamma : [0,1] \to W$ with $\gamma(0) = \omega$, $\gamma(1) = \bar\mu\mathbf{1}$, and length
\[ l(\gamma) \le \sqrt{r}\, \log \frac{4\bar\mu}{\omega_1}, \]
where $\bar\mu$ denotes the average weight $\frac1r \sum_{i=1}^r \omega_i$.
Proof. The lemma is trivially true when $\omega = \bar\mu\mathbf{1}$. Otherwise, $\omega_1 = \cdots = \omega_p < \omega_{p+1} \le \cdots \le \omega_r$ for some $p \in \{1,\ldots,r-1\}$.

Consider the straight line segment $\tilde\gamma : [0, \bar t\,] \to W$ starting from $\omega$, along which the weights of least value increase at the same rate, with the other weights decreasing at rates proportional to their values, while maintaining the average weight throughout, and ending when the weights of least value coincide with the next higher value; i.e., $\tilde\gamma$ is defined by
\[ \tag{4.2} \tilde\gamma(t)_1 = \cdots = \tilde\gamma(t)_p = \omega_1 + t\,\frac{r\bar\mu - p\omega_1}{p} \quad\text{and}\quad \tilde\gamma(t)_i = \omega_i - t\omega_i \ \text{ for } i = p+1,\ldots,r, \]
where $\bar t \in (0,1)$ is such that $\tilde\gamma(\bar t\,)_1 = \cdots = \tilde\gamma(\bar t\,)_p = \tilde\gamma(\bar t\,)_{p+1}$; as required, $\tilde\gamma(t)_1 + \cdots + \tilde\gamma(t)_r = r\bar\mu$ is independent of $t$. Its length is
\begin{align*}
&\int_0^{\bar t} \left( \frac{1}{\omega_1 + t\frac{r\bar\mu - p\omega_1}{p}} \left( \sum_{i=1}^p \frac{\left(\frac{r\bar\mu - p\omega_1}{p}\right)^2}{\omega_1 + t\frac{r\bar\mu - p\omega_1}{p}} + \sum_{i=p+1}^r \frac{\omega_i^2}{\omega_i - t\omega_i} \right) \right)^{1/2} dt \\
&\quad= \sqrt{p} \int_0^{\bar t} \left( \frac{1}{\frac{p\omega_1}{r\bar\mu - p\omega_1} + t} \left( \frac{1}{\frac{p\omega_1}{r\bar\mu - p\omega_1} + t} + \frac{1}{1 - t} \right) \right)^{1/2} dt \\
&\quad= \sqrt{p} \int_0^{\bar t} \frac{\sqrt{\frac{r\bar\mu}{r\bar\mu - p\omega_1}}}{\left(\frac{p\omega_1}{r\bar\mu - p\omega_1} + t\right)\sqrt{1 - t}}\; dt \\
&\quad= \sqrt{p}\, \log \frac{\sqrt{\frac{r\bar\mu}{r\bar\mu - p\omega_1}} + 1}{\sqrt{\frac{r\bar\mu}{r\bar\mu - p\omega_1}} - 1} - \sqrt{p}\, \log \frac{\sqrt{\frac{r\bar\mu}{r\bar\mu - p\omega_1}} + \sqrt{1 - \bar t}}{\sqrt{\frac{r\bar\mu}{r\bar\mu - p\omega_1}} - \sqrt{1 - \bar t}}.
\end{align*}
From the denition of
e
(t)
1
,we can simplify
q
r
rp!
1
+
p
1 

t
q
r
rp!
1

p
1 

t
=
1 +
q
(1 

t)
rp!
1
r
1 
q
(1 

t)
rp!
1
r
=
1 +
q
rp(

t)
1
r
1 
q
rp(

t)
1
r
= R(
p
e
(

t)
1
r
);
and
q
r
rp!
1
+1
q
r
rp!
1
1
=
1 +
q
rp!
1
r
1 
q
rp!
1
r
= R(
p!
1
r
) = R(
p
e
(0)
1
r
);
where R:(0;1]![1;1) is the decreasing function u 7!(1+
p
1 u)=(1
p
1 u)
satisfying R(u)  (1 +1)=(1 (1 
1
2
u)) = 4=u.This gives
l(
e
) =
p
plog R(
p
e
(0)
1
r
) 
p
plog R(
p
e
(

t)
1
r
):
As long as $\tilde\gamma(\bar t\,) \ne \bar\mu\mathbf{1}$, we repeat this process to construct another straight line segment starting from $\tilde\gamma(\bar t\,)$. Eventually, we get a piecewise linear curve $\gamma$ joining $\omega$ to $\bar\mu\mathbf{1}$ with $q \le r$ straight line segments, and total length
\[ l(\gamma) = \sum_{i=1}^q \left( \sqrt{p_i}\, \log R\!\left( \frac{p_i \omega^i_1}{r\bar\mu} \right) - \sqrt{p_i}\, \log R\!\left( \frac{p_i \omega^{i+1}_1}{r\bar\mu} \right) \right), \]
where $\omega^i$ is the weight sequence at the start of the $i$th straight segment, $\omega^{q+1}$ denotes the weights $\bar\mu\mathbf{1}$, and $p_i$ is the number of weights of least value in $\omega^i$. We claim that for any $a > 1$, the function $u \in (0, 1/a] \mapsto \log R(u) - \log R(au)$ is increasing³. Thus, since both $\{p_i\}_{i=1}^q$ and $\{\omega^i_1\}_{i=1}^q$ are increasing by construction, we have the upper bound
\begin{align*}
l(\gamma) &< \sum_{i=1}^q \sqrt{p_i} \left( \log R\!\left( \frac{\omega^i_1}{\bar\mu} \right) - \log R\!\left( \frac{\omega^{i+1}_1}{\bar\mu} \right) \right) \le \sqrt{r} \sum_{i=1}^q \left( \log R\!\left( \frac{\omega^i_1}{\bar\mu} \right) - \log R\!\left( \frac{\omega^{i+1}_1}{\bar\mu} \right) \right) \\
&= \sqrt{r} \left( \log R\!\left( \frac{\omega_1}{\bar\mu} \right) - \log R(1) \right) \le \sqrt{r}\, \log \frac{4\bar\mu}{\omega_1}. 
\end{align*}
∎
From the above two lemmas, we deduce the following upper bound on the length of a shortest $\delta$-sequence from a given weight sequence $\omega$ to the ray $\{\mu\mathbf{1} : \mu > 0\}$.

Theorem 10. For every weight sequence $\omega^0$ and every $\delta \in (0,1)$, there exists a $\delta$-sequence $\{\omega^k\}_{k=0}^N$ with $\omega^N = \bar\mu\mathbf{1}$, where $\bar\mu = \sum_{i=1}^r \omega^0_i/r$, and length
\[ N \le \left\lceil \frac{\sqrt{r}}{\delta - \frac12\delta^2} \log \frac{4\bar\mu}{\omega^0_1} \right\rceil. \]
Corollary 1. Suppose $\beta \in (0, \frac{\sqrt5-1}{2})$ is fixed. Let $\delta \in (0,1)$ be a number satisfying the inequality (3.10) in Theorem 9. Given any pair of primal-dual strictly feasible solutions $(\hat x, \hat s)$ for the primal-dual symmetric cone programming problems (1.1), there is a sequence of at most
\[ N \le \left\lceil \frac{\sqrt{r}}{\delta - \frac12\delta^2} \log \frac{4\langle \hat x, \hat s\rangle}{r\lambda_1(P_{\hat x^{1/2}} \hat s)} \right\rceil \]
weights such that Algorithm 2 finds a pair of primal-dual strictly feasible solutions $(x, s)$ satisfying $\|P_{x^{1/2}} s - \mu e\| = \mu\, d_F(x, s, \mu\mathbf{1}) \le \mu\beta$, where $\mu = \frac1r \langle \hat x, \hat s\rangle$.
Combining the corollary with a $\delta$-sequence on the central path, we have the following theorem.

Theorem 11. Suppose $\beta \in (0,1)$ is fixed. Given any pair of primal-dual strictly feasible solutions $(\hat x, \hat s)$ for the primal-dual symmetric cone programming problems (1.1), and any positive real number $\hat\mu$, there is a sequence of at most
\[ O\!\left( \sqrt{r} \left( \log \frac{\langle \hat x, \hat s\rangle}{r\lambda_1(P_{\hat x^{1/2}} \hat s)} + \left| \log \frac{\langle \hat x, \hat s\rangle}{r\hat\mu} \right| \right) \right) \]
weights such that Algorithm 2 finds a pair of primal-dual strictly feasible solutions $(x, s)$ satisfying $\|P_{x^{1/2}} s - \hat\mu e\| = \hat\mu\, d_F(x, s, \hat\mu\mathbf{1}) \le \hat\mu\beta$.
As an immediate corollary, we have the following worst-case iteration bound on solving symmetric cone problems using Algorithm 2.

Corollary 2. Given any pair of primal-dual strictly feasible solutions $(\hat x, \hat s)$ and any $\varepsilon > 0$, there is a sequence of at most
\[ O\!\left( \sqrt{r} \left( \log \frac{\langle \hat x, \hat s\rangle}{r\lambda_1(P_{\hat x^{1/2}} \hat s)} + |\log \varepsilon^{-1}| \right) \right) \]
targets such that Algorithm 2 finds a pair of primal-dual strictly feasible solutions $(x, s)$ satisfying $\langle x, s\rangle \le \varepsilon \langle \hat x, \hat s\rangle$.

Proof. If $(x, s) \in \Omega^2$ satisfies $\|P_{x^{1/2}} s - \mu e\| \le \beta\mu$ for some $\beta \in (0,1)$ and some $\mu > 0$, then $\langle x, s\rangle - r\mu = \langle e, P_{x^{1/2}} s - \mu e\rangle \le \beta\mu\sqrt{r}$. Apply the preceding theorem with $\hat\mu = \varepsilon\langle \hat x, \hat s\rangle/(\beta\sqrt{r} + r)$. ∎
³ We have $R(u)/R(au) = a\left(\frac{1+\sqrt{1-u}}{1+\sqrt{1-au}}\right)^2$, and it is straightforward to check that $u \mapsto \frac{1+\sqrt{1-u}}{1+\sqrt{1-au}}$ is increasing when $a > 1$.
5. Conclusion

We extend the target map $(x, s) \mapsto (x_1 s_1, \ldots, x_n s_n)$, together with the weighted barriers $x \mapsto -\sum_{i=1}^n \omega_i \log x_i$ and the notions of weighted analytic centers, from linear programming to general convex conic programming. This extension is obtained from a geometrical perspective of the weighted barriers, via the facial structure of the nonnegative orthant, that views a weighted barrier as a weighted sum of barriers for a strictly decreasing sequence of faces. When we replace decreasing sequences of faces of the nonnegative orthant with decreasing sequences of faces of an arbitrary closed convex cone, we arrive at weighted barriers for the convex cone, provided that we have made a priori choices of barriers for all faces of the convex cone. This potentially opens the door to efficient target-following algorithms for general convex conic programming, once we know how to design and analyze efficient primal-dual algorithms for general convex conic programming.
References

[1] F. Alizadeh and S. H. Schmieta. Extension of primal-dual interior point algorithms to symmetric cones. Math. Program., 96:409–438, 2003.
[2] F. Alizadeh and Y. Xia. The Q method for symmetric cone programming. AdvOl-Report 2004/18, McMaster University, Advanced Optimization Laboratory, Hamilton, Ontario, Canada, October 2004. http://www.optimization-online.org/DB_HTML/2004/10/982.html.
[3] C. B. Chua. Target following algorithms for semidefinite programming. Research Report CORR 2006-10, Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Canada, May 2006. http://www.optimization-online.org/DB_HTML/2006/05/1392.html.
[4] C. B. Chua. The primal-dual second-order cone approximations algorithm for symmetric cone programming. Found. Comput. Math., 7:271–302, 2007.
[5] C. B. Chua. T-algebras and linear optimization over symmetric cones. Research report, Division of Mathematical Science, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, 2008. http://www.optimization-online.org/DB_HTML/2008/06/2018.html.
[6] C. B. Chua. A T-algebraic approach to primal-dual interior-point algorithms. SIAM J. Optim., 20:503–523, 2009.
[7] J. Faraut and A. Korányi. Analysis on Symmetric Cones. Oxford Press, New York, NY, USA, 1994.
[8] L. Faybusovich. Linear systems in Jordan algebras and primal-dual interior-point algorithms. J. Comput. Appl. Math., 86:149–175, 1997.
[9] L. Faybusovich. Jordan-algebraic approach to convexity theorems for quadratic mappings. SIAM J. Optim., 17:558–576, 2006.
[10] G. H. Hardy, J. E. Littlewood, and G. Pólya. Some simple inequalities satisfied by convex functions. Messenger of Math., 58:145–152, 1929.
[11] R. Hauser. Square-root fields and the "V-space" approach to primal-dual interior-point methods for self-scaled conic programming. Numerical Analysis Report DAMTP 1999/NA 14, Department of Applied Mathematics and Theoretical Physics, Cambridge, England, 1999. http://www.damtp.cam.ac.uk/user/na/NA_papers/NA1999_14.ps.gz.
[12] B. Jansen, C. Roos, T. Terlaky, and J.-P. Vial. Primal-dual target-following algorithms for linear programming. Annals of Oper. Res., 62:197–231, 1996.
[13] P. Jordan, J. von Neumann, and E. Wigner. On an algebraic generalization of the quantum mechanical formalism. Ann. Math., 35(1):29–64, 1934.
[14] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, volume 538 of Lecture Notes Comput. Sci. Springer-Verlag, Berlin-Heidelberg-New York, 1991.
[15] S. Mizuno. A new polynomial time method for a linear complementarity problem. Math. Program., 56:31–43, 1992.
[16] R. D. C. Monteiro and J.-S. Pang. On two interior-point mappings for nonlinear semidefinite complementarity problems. Math. Oper. Res., 23:39–60, 1998.
[17] R. D. C. Monteiro and P. Zanjacomo. General interior-point maps and existence of weighted paths for nonlinear semidefinite complementarity problems. Math. Oper. Res., 25(3):382–399, 2000.
Target-following and symmetric cone programs 25
[18] Yu. E. Nesterov and A. S. Nemirovski. Interior Point Polynomial Algorithms in Convex Programming, volume 13 of SIAM Stud. Appl. Math. SIAM Publication, Philadelphia, PA, USA, 1994.
[19] Yu. E. Nesterov and M. J. Todd. Self-scaled barriers and interior-point methods for convex programming. Math. Oper. Res., 22:1–46, 1997.
[20] Yu. E. Nesterov and M. J. Todd. On the Riemannian geometry defined by self-concordant barriers and interior-point methods. Found. Comput. Math., 2:333–361, 2002.
[21] B. K. Rangarajan. Polynomial convergence of infeasible-interior-point methods over symmetric cones. SIAM J. Optim., 16:1211–1229, 2006.
[22] J. F. Sturm and S. Zhang. On weighted centers for semidefinite programming. Eur. J. Oper. Res., 126:391–407, 2000.
[23] L. Tunçel. Primal-dual symmetry and scale invariance of interior-point algorithms for convex optimization. Math. Oper. Res., 23:708–718, 1998.
[24] L. Tunçel. Generalization of primal-dual interior-point methods to convex optimization problems in conic form. Found. Comput. Math., 1:229–254, 2001.
[25] È. B. Vinberg. The structure of the group of automorphisms of a homogeneous convex cone. Tr. Mosk. Mat. O.-va, 13:56–83, 1965.
[26] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Springer-Verlag, Berlin-Heidelberg-New York, 2000.
[27] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM Publication, Philadelphia, PA, USA, 1997.
Appendix A.Euclidean Jordan algebras
In this section, we give a very brief introduction to Euclidean Jordan algebras, stating various known results and proving some new ones that are necessary for the development of this paper. For a comprehensive discussion on symmetric cones and Jordan algebras, we refer the reader to the excellent exposition by J. Faraut and A. Korányi [7].
A Jordan algebra (J;;+) is a commutative algebra whose multiplication oper-
ator  satises the Jordan identity x (x
2
 y) = x
2
 (x y) for all x;y 2 J,where
x
2
denotes x  x.The multiplication operator  is often called the Jordan product
of the Jordan algebra.We use L
x
to denote the Lyapunov operator y 2 J 7!x y.
A Jordan algebra (J;) is said to be formally real if
(x
2
+y
2
= 0 =) x = y = 0) 8x;y 2 J:
A formally real Jordan algebra has a identity element e:an element that satises
e  x = x;see [13].It is also noted in [13] that a formally real Jordan algebra
is power associative
4
:if we recursively dene,for each element x 2 J,the k'th
power by x
0
= e and x
k
= x
k1
 x for k = 1;2;:::,then x
k+l
= x
k
 x
l
for
all nonnegative integers k and l (i.e.,the collection of all powers of an element x
forms a semigroup).This important fact results in the existence of the minimal
polynomial for each element x 2 J:the monic polynomial in R[X] that generates
the principle ideal fp 2 R[X]:p(x) = 0g.The maximum degree of all minimal
polynomials is called the rank of the Jordan algebra.An element x 2 J is said to
be regular if the degree of its minimal polynomial coincides with the rank of J.
Henceforth,r shall denote the rank of J.
The minimal polynomials also give us two important functions. For a regular element $x$, its trace is the coefficient of the second highest power in its minimal polynomial, and its determinant is the constant term in its minimal polynomial. As the set of regular elements is dense in $J$, and both functions are continuously extendable to $J$ as polynomials, we can define the trace and determinant of all elements of $J$; see [7, Proposition II.2.1].
⁴This is actually true in general for all Jordan algebras; see [7, Proposition II.1.2].
Example A.1. The space of $r \times r$ real symmetric matrices $\mathcal{S}^r$ equipped with the symmetrized product $\frac{1}{2}(AB + BA)$ is a formally real Jordan algebra with identity $I$. The notions of minimal polynomial, trace and determinant are as commonly defined.
It is known (see, e.g., [7, Proposition VIII.4.2]) that a Jordan algebra with identity is formally real if and only if it is a Euclidean Jordan algebra; i.e., there is a symmetric positive definite bilinear functional $B\colon J^2 \to \mathbb{R}$ that is associative, i.e., $B(x \circ y, z) = B(y, x \circ z)$ for all $x,y,z \in J$. Equivalently, a Euclidean Jordan algebra is a Jordan algebra with identity such that the bilinear function $(x,y) \mapsto \operatorname{tr}(x \circ y)$ is positive definite; see [7, Proposition III.1.5]. Thus a Euclidean Jordan algebra can be given a Euclidean structure with the inner product $\langle\cdot,\cdot\rangle\colon (x,y) \in J^2 \mapsto \operatorname{tr}(x \circ y)$ in such a way that $L_x$ is self-adjoint.
It is known that if $(J,\circ)$ is a Euclidean Jordan algebra, then the interior of its cone of squares $\{x \circ x : x \in J\}$ is a symmetric cone in the Euclidean space $(J,\langle\cdot,\cdot\rangle)$. We denote this interior by $\Omega(J)$. Moreover, this symmetric cone coincides with the set of elements with positive definite Lyapunov operators; see [7, Theorem III.2.1]. A key ingredient in showing the homogeneity of the cone of squares is the quadratic representation of $J$: $P_x\colon x \in J \mapsto 2L_x^2 - L_{x^2}$. The collection of quadratic representations at all $x \in \Omega(J)$ gives a transitive subset of automorphisms of $\Omega(J)$. In particular, to each $x \in \Omega(J)$ is associated a unique $y \in \Omega(J)$ such that $P_y e = x$. We denote such $y$ by $x^{1/2}$, and call it the square root of $x$.
Conversely, given any symmetric cone $\Omega$, there is a Euclidean Jordan-algebraic structure such that the symmetric cone coincides with the interior of the cone of squares. Moreover, the closure $\operatorname{cl}(\Omega)$ of the symmetric cone coincides with the cone of squares; see [7, Theorem III.3.1].
Alternatively, the symmetric cone $\Omega(J)$ can be defined as the connected component of the set of invertible elements containing the identity element; see [7, Theorem III.2.1]. An element $x$ is said to be invertible if there exists a linear combination of powers of $x$ whose Jordan product with $x$ is the identity element. This linear combination of powers is called the inverse of $x$, and is denoted by $x^{-1}$. It is unique since the subalgebra generated by $x$ and the identity element $e$ is associative. An element $x$ is invertible if and only if its quadratic representation is nonsingular, and in this case, $P_x^{-1} = P_{x^{-1}}$; see [7, Theorem II.3.1].
Example A.2. For the Jordan algebra of $r \times r$ real symmetric matrices, the cone of squares is the cone of all positive semidefinite matrices. The quadratic representation of a matrix $X$ is $Y \mapsto XYX$. The notions of square root and inverse are as commonly defined.
A.1. Spectral decompositions. A key ingredient in the study of formally real Jordan algebras by P. Jordan et al. [13] is the set of idempotents. An idempotent of $J$ is a nonzero element $c \in J$ satisfying $c \circ c = c$. Amongst the idempotents are the primitive ones: idempotents that cannot be written as the sum of two idempotents. Two idempotents $c$ and $d$ are said to be orthogonal if $c \circ d = 0$. Orthogonal idempotents are indeed orthogonal with respect to the inner product $\langle\cdot,\cdot\rangle$ since
$$\langle c,d\rangle = \langle c \circ e, d\rangle = \langle e, c \circ d\rangle.$$
From its definition, it is straightforward to check that the sum of two orthogonal idempotents is an idempotent, and that an element $c$ is an idempotent if and only if the element $e - c$ is an idempotent. A complete system of orthogonal idempotents is a set of idempotents that are pairwise orthogonal and sum to the identity element $e$. A Jordan frame is a complete system of orthogonal primitive idempotents. The number of elements in any Jordan frame always coincides with the rank of $J$; see the paragraph immediately after Theorem III.1.2 of [7].
Example A.3. For the Jordan algebra of $r \times r$ real symmetric matrices, an idempotent is a product $PP^T$ where the matrix $P$ has orthonormal columns. It is primitive if and only if it is of rank one. A complete system of orthogonal idempotents is a $p$-tuple $(P_1P_1^T,\ldots,P_pP_p^T)$ with the columns of $P_1,\ldots,P_p$ taken from the columns of an orthogonal matrix. A Jordan frame is then a complete system of $r$ orthogonal idempotents with each $P_i$ a column matrix.
For each $x \in J$, there exist numbers $\lambda_1 \ge \cdots \ge \lambda_r$ and a Jordan frame $\{c_1,\ldots,c_r\}$ such that $x = \lambda_1 c_1 + \cdots + \lambda_r c_r$. This is known as a spectral decomposition of type II of $x$; see Theorem III.1.2 of [7]. Moreover, the set of values of the $\lambda_i$'s (with their multiplicities) remains unchanged over all such Jordan frames. When the primitive idempotents corresponding to the same eigenvalues are combined, we have the spectral decomposition of type I: $x = \mu_1 \tilde{c}_1 + \cdots + \mu_k \tilde{c}_k$, where $\mu_1 < \cdots < \mu_k$ are the distinct eigenvalues of $x$, and $\tilde{c}_i$ is the sum of the primitive idempotents corresponding to the eigenvalue $\mu_i$; see also Theorem III.1.1 of [7]. This spectral decomposition is unique.
The values $\lambda_i$ in a type II spectral decomposition are called the eigenvalues of $x$, and are denoted by $\lambda_i(x)$, with $\lambda_1(x) \ge \cdots \ge \lambda_r(x)$. In terms of the spectral decompositions, the inverse of an invertible element $x$ is the element $x^{-1} = \lambda_1(x)^{-1}c_1 + \cdots + \lambda_r(x)^{-1}c_r$, and the square root of an element $x$ in the symmetric cone $\Omega(J)$ is the element $x^{1/2} = \lambda_1(x)^{1/2}c_1 + \cdots + \lambda_r(x)^{1/2}c_r$.
For an element $x$ with the type I spectral decomposition $x = \mu_1 \tilde{c}_1 + \cdots + \mu_k \tilde{c}_k$, the orthogonality of the idempotents implies that a polynomial $p \in \mathbb{R}[X]$ is in the principal ideal generated by the minimal polynomial if and only if $p(\mu_1) = \cdots = p(\mu_k) = 0$. Thus the minimal polynomial of the element $x$ is $t \mapsto (t - \mu_1)\cdots(t - \mu_k)$. Consequently, an element is regular if and only if it has distinct eigenvalues. Moreover, the trace of $x$ is the sum $\lambda_1(x) + \cdots + \lambda_r(x)$, and its determinant is the product $\lambda_1(x)\cdots\lambda_r(x)$. The norm of an element $x$ is then $\sqrt{\lambda_1(x)^2 + \cdots + \lambda_r(x)^2}$. In particular, the square of the norm of an idempotent is the number of pairwise orthogonal primitive idempotents summing up to it.
The logarithm of the determinant plays an important role in interior-point methods for symmetric cone programming: its negation serves as a barrier (called the log-determinant barrier) for the symmetric cone $\Omega$. We note that the gradient of this log-determinant barrier at $x \in \Omega$ is $-x^{-1}$, and its Hessian is $P_x^{-1} = P_{x^{-1}}$; see [7, Proposition II.3.3 and Proposition III.4.2]⁵.
A.2. Peirce decomposition. For any idempotent $c$, its Lyapunov operator $L_c$ can only have eigenvalues $0$, $1/2$ or $1$; see [7, p. 62]. We denote by $J(c,0)$, $J(c,1/2)$ and $J(c,1)$ the (possibly empty) eigenspaces associated with the eigenvalues $0$, $1/2$ and $1$, respectively. Since $L_e$ is the identity map, the eigenspaces of the orthogonal idempotents $c$ and $e - c$ satisfy $J(c,0) = J(e-c,1)$, $J(c,1/2) = J(e-c,1/2)$ and $J(c,1) = J(e-c,0)$. Recall that $L_c$ is self-adjoint. Thus we have the orthogonal decomposition
$$J = J(c,1) + J(c,1/2) + J(c,0) = J(c,1) + J(c,1/2) \cap J(e-c,1/2) + J(e-c,1).$$
This is known as the Peirce decomposition of $J$ with respect to $c$.
⁵Part (ii) of [7, Proposition III.4.2], while stated only for simple Euclidean Jordan algebras, is in fact true for general Euclidean Jordan algebras. This follows from the fact that when a Euclidean Jordan algebra is written as the direct sum $J_1 \oplus \cdots \oplus J_\ell$ of simple Euclidean Jordan algebras, the determinant $\det(x_1,\ldots,x_\ell)$ decomposes into the product $\det_1(x_1)\cdots\det_\ell(x_\ell)$ of determinants on each component, and the quadratic representation $P_{(x_1,\ldots,x_\ell)}$ is block diagonal with the quadratic representations $P_{x_1},\ldots,P_{x_\ell}$ as the diagonal blocks.
For simplicity of notation, we shall use $J_c$ to denote the (nonempty) eigenspace $J(c,1)$, and for each pair of orthogonal idempotents $(c,c')$, we shall use $J_{c,c'}$ to denote the common eigenspace $J(c,1/2) \cap J(c',1/2)$. The above Peirce decomposition $J = J_c + J_{c,e-c} + J_{e-c}$ with respect to $c$ can be generalized to one with respect to a collection of pairwise orthogonal idempotents that sums up to the identity element.
Theorem 12 (Peirce decomposition, cf. Theorem IV.2.1 of [7]). For each complete system of orthogonal idempotents $C = \{c_1,\ldots,c_p\}$, the space $J$ decomposes into the orthogonal direct sum
$$J = \bigoplus_{i=1}^p J_{c_i} \oplus \bigoplus_{i<j} J_{c_i,c_j}$$
in such a way that
(1) $J_{c_i}$ is a Jordan subalgebra of $J$ with identity element $c_i$;
(2) the orthogonal projector onto $J_{c_i}$ is $P_{c_i}$, and that onto $J_{c_i,c_j}$ is $4L_{c_i}L_{c_j}$; and
(3) for each $1 \le i,j,k,l \le p$ with $\{i,j\} \cap \{k,l\} = \emptyset$,
$$J_{c_i,c_j} \circ J_{c_i,c_j} \subseteq J_{c_i} + J_{c_j}, \qquad J_{c_i} \circ J_{c_i,c_k} \subseteq J_{c_i,c_k},$$
$$J_{c_i,c_j} \circ J_{c_j,c_k} \subseteq J_{c_i,c_k}, \qquad J_{c_i,c_j} \circ J_{c_k,c_l} = \{0\}.$$

Proof. This theorem follows from Theorems 8 and 9 of [13], and their proofs. $\square$
For each $x \in J$, its decomposition into $x = \sum_{i=1}^p x_{c_i} + \sum_{i<j} x_{c_i,c_j}$ with $x_{c_i} = P_{c_i}(x)$ and $x_{c_i,c_j} = 4L_{c_i}(L_{c_j}(x))$ is called its Peirce decomposition with respect to the complete system of idempotents $\{c_1,\ldots,c_p\}$.
Example A.4. For the Jordan algebra of $r \times r$ real symmetric matrices, the Peirce decomposition of a matrix $X$ with respect to a complete system of orthogonal idempotents $(P_1P_1^T,\ldots,P_pP_p^T)$ is
$$X = \sum_{i=1}^p P_iP_i^T X P_iP_i^T + \sum_{i<j} \bigl(P_iP_i^T X P_jP_j^T + P_jP_j^T X P_iP_i^T\bigr).$$
The Peirce decomposition allows us to express the eigenvalues and eigenspaces of the Lyapunov operator $L_x$ in terms of the spectral decomposition of $x$: if $x = \mu_1 c_1 + \cdots + \mu_k c_k$ is the type I spectral decomposition of $x$, then the subspace $J_{c_i,c_j}$, if nonempty, is an eigenspace of $L_x$ associated with the eigenvalue $\frac{1}{2}(\mu_i + \mu_j)$. Subsequently, the eigenvalues and eigenspaces of the quadratic representation $P_x$ can be similarly obtained: the subspace $J_{c_i,c_j}$, if nonempty, is an eigenspace of $P_x$ associated with the eigenvalue $\mu_i\mu_j$. These observations lead to the following lemma.
Lemma A.1 (cf. Lemma 12 of [1]). If $L_x$ (resp., $P_x$) and $L_y$ (resp., $P_y$) are similar to each other, then $x$ and $y$ have the same set of eigenvalues.

Proof. In the Peirce decomposition with respect to a complete system of orthogonal idempotents $(c_1,\ldots,c_p)$, the subspace $J_{c_i}$ is generated by any set of orthogonal idempotents summing up to $c_i$, and has dimension $\|c_i\|^2$. Thus if two Lyapunov operators (or quadratic representations) are similar to each other, then the corresponding elements share the same eigenvalues, and each eigenvalue occurs the same number of times in each type II spectral decomposition. $\square$
A.3. Some new results.

Lemma A.2. For each automorphism $A$ in the identity component $G$ of the automorphism group $G(\Omega)$ of $\Omega$ and all $x \in \Omega$,
$$\log\det(Ax) = \log\det(x) + c_A, \quad (A^t x)^{-1} = A^{-1}x^{-1} \quad\text{and}\quad P_{A^t x} = A^t P_x A.$$

Proof. When $A$ is a quadratic representation, the first equation follows from [7, Proposition III.4.2]. In general, we decompose $A$ into the product $P_pQ$ of the quadratic representation of some $p \in \Omega$ and some orthogonal automorphism $Q$ in the identity component of $G(\Omega)$ (see [7, Theorem III.5.1]), and note that the determinant is invariant under automorphisms of $J$ (see [7, Theorem II.4.2]), whence invariant under $Q$ (see Theorem A.7). Differentiating the first equation twice gives $A^t(Ax)^{-1} = x^{-1}$ and $A^tP_{Ax}^{-1}A = P_x^{-1}$. Since $\Omega$ is self-dual, $G(\Omega)^t = G(\Omega)$, and the other two equations follow. $\square$
Lemma A.3. For each $x \in \Omega$ and each $A \in G$ satisfying $Ax = e$, the elements $z \in J$ and $AP_{x^{1/2}}z$ always have the same set of eigenvalues.

Proof. We shall show that the quadratic representations of $z$ and $AP_{x^{1/2}}z$ are similar to each other, whence by Lemma A.1, both elements have the same set of eigenvalues. By the choice of $A$, $AP_{x^{1/2}}e = Ax = e$. Therefore $Q := AP_{x^{1/2}} \in G$ is orthogonal by Theorem A.7. By Lemma A.2, $P_{AP_{x^{1/2}}z} = QP_zQ^T$ is similar to $P_z$. $\square$
Lemma A.4. For each $A \in G$, $AA^te = (Ae)^2$.

Proof. By Lemma A.2, $AA^te = AP_eA^te = P_{Ae}e = (Ae)^2$. $\square$
Lemma A.5. For any $A \in G$ and any $(x,s) \in \Omega^2$, $x \circ s = e$ if and only if $A^tx \circ A^{-1}s = e$.

Proof. Since every automorphism $A$ decomposes into $A = P_pQ^t$ for some orthogonal automorphism $Q$ in the identity component of $G(\Omega)$ and some $p \in \Omega$ (see [7, Theorem III.5.1]), we can write $A^tx \circ A^{-1}s = QP_px \circ QP_{p^{-1}}s$.

From the fact that the orthogonal subgroup of $G(\Omega)$ coincides with both the automorphism group $G(J)$ and the stabilizer subgroup $G(\Omega)_e$ of the unit $e$ in $G(\Omega)$ (see Theorem A.7), we have $A^tx \circ A^{-1}s = QP_px \circ QP_{p^{-1}}s = Q(P_px \circ P_{p^{-1}}s)$, and subsequently, $A^tx \circ A^{-1}s = e$ if and only if $P_px \circ P_{p^{-1}}s = e$. The lemma then follows from Lemma 28 of [1] (cf. Theorem 3.1(i) of [19]). $\square$
Example A.5. For the Jordan algebra of $r \times r$ real symmetric matrices, an automorphism of the cone of positive definite matrices takes the form $X \mapsto PXP^T$ for some invertible matrix $P$. It is in the identity component $G$ if and only if $\det(P) > 0$. The first lemma specializes to the well-known facts $\log\det(PXP^T) = \log\det(X) + \log\det(P)^2$, $(P^TXP)^{-1} = P^{-1}X^{-1}P^{-T}$, and $(P^TXP)Y(P^TXP) = P^T(X(PYP^T)X)P$. The second lemma follows easily from $XS + SX = 2I \iff XS = I$.
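The specialized identities of this example can be checked on random data; the following sketch (our own illustration, not from the paper) verifies the log-determinant and inverse identities for a positive definite $X$ and a matrix $P$ with positive determinant:

```python
import numpy as np

rng = np.random.default_rng(6)
P = rng.standard_normal((3, 3))
if np.linalg.det(P) < 0:
    P[:, 0] *= -1                        # force det(P) > 0 (identity component)
A = rng.standard_normal((3, 3))
X = A @ A.T + np.eye(3)                  # positive definite

# log det(P X P^T) = log det X + log det(P)^2
lhs = np.log(np.linalg.det(P @ X @ P.T))
rhs = np.log(np.linalg.det(X)) + np.log(np.linalg.det(P) ** 2)
assert np.isclose(lhs, rhs)

# (P^T X P)^{-1} = P^{-1} X^{-1} P^{-T}
Pinv = np.linalg.inv(P)
assert np.allclose(np.linalg.inv(P.T @ X @ P),
                   Pinv @ np.linalg.inv(X) @ Pinv.T)
```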
A.4. Automorphisms of Euclidean Jordan algebras. In Section II.1 of [25], it was stated without proof that if $(J,\circ)$ is a Euclidean Jordan algebra and $\Omega$ is its associated symmetric cone, then the stabilizer subgroup $G(\Omega)_e$ of the unit $e$ in $G(\Omega)$ coincides with the group of automorphisms $G(J)$ of $J$. Here we give a proof of this fact.
Theorem A.6. Given a Euclidean Jordan algebra $(J,\circ)$ with unit $e$ and associated symmetric cone $\Omega$, the stabilizer subgroup $G(\Omega)_e$ of the unit $e$ in $G(\Omega)$ coincides with the group of automorphisms $G(J)$ of $J$.
Proof. Consider the inner product $\langle\cdot,\cdot\rangle\colon (x,y) \mapsto \operatorname{trace} L_{x \circ y}$, where $L_x$ denotes the linear map $y \mapsto x \circ y$. Let $O(J)$ denote the orthogonal group of the Euclidean space $(J,\langle\cdot,\cdot\rangle)$; i.e., $O(J) = \{A \in GL(J) : \langle Ax,Ay\rangle = \langle x,y\rangle \ \forall x,y \in J\}$.

Let $\varphi$ be the characteristic function of $\Omega$; i.e.,
$$\varphi\colon x \in \Omega \mapsto \int_{\Omega^{\ast}} e^{-\langle x,y\rangle}\,dy,$$
where $dy$ denotes the Euclidean measure on $(J,\langle\cdot,\cdot\rangle)$. Let $x^{\ast}$ denote the negative gradient of $\log\varphi$ at $x$. We deduce from Propositions II.3.4 and III.2.2 of [7] that $\exp L_x \in G(\Omega)$. Thus by Proposition I.3.1 of [7], we have $\log\varphi(\exp L_x \cdot e) = \log\varphi(e) - \log\det\exp L_x = \log\varphi(e) - \operatorname{trace} L_x$. Differentiating this at $0$ gives $-\operatorname{trace} L_h = D\log\varphi(e)[h]$. Since $\operatorname{trace} L_h = \langle h,e\rangle$ and $D\log\varphi(e)[h] = -\langle e^{\ast},h\rangle$, it follows that $e$ is a fixed point of the map $x \in \Omega \mapsto x^{\ast}$. Proposition I.4.3 of [7] then states that $G(\Omega) \cap O(J) = G(\Omega)_e$.
We now show that $G(J)$ coincides with $G(\Omega) \cap O(J) = G(\Omega)_e$. It is straightforward to check that every automorphism of $J$ is an automorphism of $\Omega$ (which is the interior of the cone of squares) that stabilizes the unit $e$. For the other direction, it suffices to show that every linear map $A \in G(\Omega) \cap O(J) = G(\Omega)_e$ preserves orthogonality of idempotents and maps every primitive idempotent to some primitive idempotent, for if $x = \sum \lambda_i c_i$ is the spectral decomposition, then we have $A(x^2) = A\bigl(\sum \lambda_i^2 c_i\bigr) = \sum \lambda_i^2 A(c_i) = \bigl(\sum \lambda_i A(c_i)\bigr)^2 = (Ax)^2$, whence $A \in G(J)$ by polarization. Suppose $A \in G(\Omega) \cap O(J) = G(\Omega)_e$. Two idempotents $c$ and $d$ are orthogonal if and only if $\langle c,d\rangle = 0$. One direction of this statement follows from the definition of $\langle\cdot,\cdot\rangle$. For the other direction, suppose that $c$ and $d$ are two idempotents satisfying $\langle c,d\rangle = 0$. Since the inner product $\langle\cdot,\cdot\rangle$ is associative (see Proposition II.4.3 of [7]), $L_c$ is self-adjoint. Proposition III.1.3 of [7] then implies that $L_c$ is positive semidefinite. Thus it has a self-adjoint, positive semidefinite square root $L_c^{1/2}$. Hence $0 = \langle c,d\rangle = \langle c,d^2\rangle = \langle c \circ d,d\rangle = \langle L_cd,d\rangle = \langle L_c^{1/2}d, L_c^{1/2}d\rangle$ shows that $L_c^{1/2}d = 0$, whence $c \circ d = L_c^{1/2}L_c^{1/2}d = 0$; i.e., $c$ and $d$ are orthogonal. Since $A$ is orthogonal, it follows that orthogonal idempotents remain orthogonal under $A$. Proposition IV.3.2 of [7] states that $c$ is a primitive idempotent if and only if $\{\lambda c : \lambda \ge 0\}$ is an extreme ray of $\operatorname{cl}(\Omega)$. Since $A \in G(\Omega)$, it maps each extreme ray to some extreme ray of $\operatorname{cl}(\Omega)$. Thus it maps each primitive idempotent $c$ to a positive multiple $\lambda d$ of some primitive idempotent $d$. In fact, $\lambda$ must be $1$ since
$$0 < \langle d,d\rangle = \langle d^2,e\rangle = \langle d,e\rangle = \langle d,Ae\rangle = \lambda^{-1}\langle Ac,Ae\rangle = \lambda^{-1}\langle c,e\rangle = \lambda^{-1}\langle c^2,e\rangle = \lambda^{-1}\langle c,c\rangle = \lambda^{-1}\langle Ac,Ac\rangle = \lambda\langle d,d\rangle.$$
Hence $A$ maps each primitive idempotent to some primitive idempotent. $\square$
The proof of the theorem shows that both $G(\Omega)_e$ and $G(J)$ coincide with a certain orthogonal subgroup of $G(\Omega)$. The next theorem gives a similar result.
Theorem A.7. Given a Euclidean Jordan algebra $(J,\circ)$ with unit $e$ and associated symmetric cone $\Omega$, the groups $G(\Omega)_e$ and $G(J)$ are both equivalent to the orthogonal subgroup of $G(\Omega)$ under the inner product $\langle\cdot,\cdot\rangle\colon (x,y) \mapsto \operatorname{tr}(x \circ y)$.
Proof. Let $O(\Omega)$ denote the orthogonal subgroup of $G(\Omega)$ under $\langle\cdot,\cdot\rangle$. By Proposition II.4.2 of [7], if $A \in G(J)$, then $\operatorname{tr}(Ax \circ Ay) = \operatorname{tr} A(x \circ y) = \operatorname{tr}(x \circ y)$ for all $x,y \in J$, whence $A$ is orthogonal. Therefore $G(J) = G(\Omega)_e \subseteq O(\Omega)$. According to Proposition I.1.8 of [7] and the paragraph following it, $G(\Omega)_e$ is a maximal compact subgroup of $G(\Omega)$. Hence $O(\Omega) \subseteq G(\Omega)_e$. $\square$
Example A.6. For the Jordan algebra of $r \times r$ real symmetric matrices, an automorphism of the Jordan algebra takes the form $X \mapsto QXQ^T$ for some orthogonal matrix $Q$, which clearly stabilizes the identity $I$. It is also orthogonal under the trace inner product: $\operatorname{tr}\bigl((QXQ^T)(QYQ^T)\bigr) = \operatorname{tr}(XY)$.
Division of Mathematical Sciences,School of Physical & Mathematical Sciences,
Nanyang Technological University,Singapore 637371,Singapore
E-mail address:cbchua@ntu.edu.sg