Synthesis of Search Algorithms from High-level CP Models*

Samir A. Mohamed Elsayed**, Laurent Michel

Computer Science Department, University of Connecticut
Abstract. The ability to specify CP programs in terms of a declarative model and a search procedure is instrumental to the industrial CP successes. Yet, writing search procedures is often difficult for novices or people accustomed to model & run approaches. The viewpoint adopted in this paper argues for the synthesis of a search from the declarative model to exploit the problem instance structures. The intent is not to eliminate the search. Instead, it is to have a default that performs adequately in the majority of cases while retaining the ability to write full-fledged procedures. Empirical results demonstrate that the approach is viable, yielding procedures approaching and sometimes rivaling handcrafted searches.
1 Introduction
Constraint programming (CP) techniques are successfully used in various industries and quite successful when confronted with hard constraint satisfaction problems. Part of this success can be attributed to the considerable amount of flexibility that arises from the ability to write completely tailored search procedures. The main drive is based on the belief that

CP = Model + Search

where the model provides a declarative specification of the constraints, while the search specifies how to explore the search space. In some CP languages, the search can be quite sophisticated. It can concisely specify variable and value selection heuristics, search phases [14], restarting strategies [9], large neighborhood search [1], and exploration strategies like depth-first search, best-first search, or limited discrepancy search [12], to name just a few. This flexibility is mostly absent in mathematical programming, where the so-called black-box search is controlled through a collection of parameters affecting preprocessing, cut generation, or the selection of predefined global heuristics. Users of mathematical programming rely solely on modeling techniques and reformulations to indirectly influence and hopefully strengthen the effectiveness of the search process.
Newcomers discovering CP often overlook the true potential of open (i.e., white-box) search specification and fail to exploit it. This observation prompted
* This work is partially supported through NSF award IIS-0642906.
** The author is partially supported by Helwan University, Cairo, Egypt.
a number of efforts to rethink constraint programming tools and mold them after LP and MIP solvers by eliminating open search procedures in favor of intelligent black-box procedures. Efforts of this type include [14] and [6], while others, e.g., [21], provide a number of predefined common heuristics. Our contention is that it is possible to get the best of both worlds: retaining the ability to write tailored search procedures, and synthesizing instance-specific search procedures that are competitive with procedures handcrafted by experts.
The central contribution of this paper is Cpas, a model-driven automatic search procedure generator written in Comet [24]. Cpas analyzes a CP model instance at run time, examines the variable declarations, the arithmetic and logical constraints, as well as the global constraints, and synthesizes a procedure that is likely to perform reasonably well on this instance. Empirical results on a variety of representative problems (with non-trivial tailored search procedures) demonstrate the effectiveness of the approach. The rest of the paper is organized as follows: Section 2 presents related work. Section 3 provides details about the synthesis process, while Section 4 illustrates the process on a popular CP application. Experimental results are reported in Section 5 and Section 6 concludes.
2 Related Work
The oldest general-purpose heuristics follow the fail-first principle [11] and order variables according to the current size of their domains. Impacts [18] were introduced as a generic heuristic driven by the effect of labeling decisions on the search space contraction. wdeg and dom/wdeg [3] are inspired by SAT solvers and use conflicts to drive a variable ordering heuristic. Activity-based search [15] is driven by the number of variables involved in the propagation after each decision and is the latest entry among black-box general-purpose heuristics.
Minion [6] offers a black-box search and combines it with matrix-based modeling, aiming for raw speed alone to produce 'model and run' solutions. CPhydra [17] is a portfolio approach exploiting a knowledge base of solved instances. It combines machine learning techniques with the partitioning of CPU time among portfolio members to maximize the expected number of solved instances within a fixed time budget. Model-driven derivation of search first appeared in [26] for Constraint-Based Local Search (CBLS). Given a model, a CBLS synthesizer derives a local search algorithm for the chosen meta-heuristic. It analyzes the instance and synthesizes neighborhoods as well as any other necessary components. The Aeon synthesizer [16] targets the scheduling domain, where combinatorial structures are easier to recognize and classify. Note that Aeon handles both complete and incomplete (CBLS) solvers. The first extension to generic CP models was proposed in [4] and is extended here with a larger rule set for global constraints that now uses many variable and value selection heuristics.
3 The Synthesis Process
Cpas defines rules meant to recognize combinatorial structures for which good heuristics exist. Each rule, when fired, produces a set of recommendations characterized by a fitness score, a subset of variables, and two heuristics for variable and value selection, as well as dynamic value symmetry breaking whenever appropriate. This set of recommendations is the blueprint for the search itself. This section describes the entire process.
3.1 Preliminaries
A CSP (Constraint Satisfaction Problem) is a triplet ⟨X, D, C⟩, where X is a set of variables, D is a set of domains, and C is a set of constraints. Each x ∈ X is associated with a domain D(x), i.e., with a totally ordered finite set (i.e., a well-ordered set) of discrete values over some universe U. A constraint c(x_1, ..., x_n) specifies a subset of the Cartesian product D(x_1) × ... × D(x_n) of mutually compatible variable assignments. X is the type of a variable, while X[] denotes the type of an "array of variables". D = 2^U is the type of a domain and C is the type of a constraint. A COP (Constraint Optimization Problem) ⟨X, D, C, O⟩ is a CSP with an objective function O.
Common notations. vars(c) denotes the variables appearing in constraint c, while cstr(x) is the subset of constraints referring to variable x. The static degree deg(x) of variable x is deg(x) = Σ_{c ∈ cstr(x)} (|vars(c)| − 1). Variables can be organized as arrays in the model and this is captured by a special tautological "constraint" array(x) that states that the subset of variables x ⊆ X forms an array. T(c) denotes the type of a constraint c (e.g., knapsack, sequence, array, etc.). Finally, T(C) = {T(c) : c ∈ C} is the set of constraint types in C.
Definition 1. A variable ordering heuristic h_x : X[] → ℕ → X is a function which, given an array of n variables [x_0, ..., x_{n−1}], defines a permutation π of 0..n−1 that produces a partial function [0 ↦ x_{π(0)}, ..., n−1 ↦ x_{π(n−1)}] : ℕ → X.
Example 1. The static variable ordering, denoted h_static, simply produces the variable ordering partial function h_static = [0 ↦ x_0, ..., n−1 ↦ x_{n−1}].
Example 2. The static degree ordering, denoted h_deg, uses a permutation π : ℕ → ℕ of 0..n−1 satisfying

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ deg(x_{π(i)}) ≥ deg(x_{π(j)})

to define the partial function h_deg = [0 ↦ x_{π(0)}, ..., n−1 ↦ x_{π(n−1)}].
Example 3. The classic dom variable ordering, denoted h_dom, will, when given an array of variables x, use a permutation π : ℕ → ℕ of 0..n−1 satisfying

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ |D(x_{π(i)})| ≤ |D(x_{π(j)})|

to produce a partial function capturing a permutation of x. For instance, invoking h_dom([x_1, x_2, x_3]) with D(x_1) = {1, 2, 3}, D(x_2) = {1}, D(x_3) = {3, 4} returns the partial function [0 ↦ x_2, 1 ↦ x_3, 2 ↦ x_1]. The result produced by h_dom is dynamic, i.e., the embedded permutation will use the domains of the variables in x when it is invoked.
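For illustration, h_dom can be restated as a few lines of Python (an illustrative sketch, not code from Cpas or Comet): given the current domains, it returns the variables in increasing order of domain size.

```python
# Illustrative sketch of the h_dom ordering: variables sorted by
# increasing current domain size; ties keep the original array order.

def h_dom(variables, domains):
    """Return the partial function rank -> variable as a list ordered
    by increasing |D(x)|."""
    order = sorted(range(len(variables)),
                   key=lambda i: len(domains[variables[i]]))
    return [variables[i] for i in order]

# Example 3 from the text: D(x1) = {1,2,3}, D(x2) = {1}, D(x3) = {3,4}.
doms = {"x1": {1, 2, 3}, "x2": {1}, "x3": {3, 4}}
print(h_dom(["x1", "x2", "x3"], doms))  # -> ['x2', 'x3', 'x1']
```

Because the permutation inspects the domains at call time, re-invoking the function after propagation yields the dynamic behavior described above.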
Example 4. The dom/wdeg [3] variable ordering, denoted h_wdeg, will, when given an array of variables x, use a permutation π : ℕ → ℕ of 0..n−1 satisfying

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ |D(x_{π(i)})| / wdeg(x_{π(i)}) ≤ |D(x_{π(j)})| / wdeg(x_{π(j)})

with wdeg(x_i) = Σ_{c ∈ C} weight[c] · (vars(c) ∋ x_i ∧ |futVars(c)| > 1). Following [3], weight[c] is a counter associated to constraint c that tracks the number of conflicts discovered by c during the search. The expression futVars(c) denotes the set of uninstantiated variables in c.
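The weighted-degree computation can likewise be sketched in Python; the `Constraint` record and function names below are illustrative stand-ins, not the Cpas data structures.

```python
# Sketch of dom/wdeg: each constraint carries a conflict counter
# (weight); a variable's weighted degree sums the weights of its
# constraints that still have more than one uninstantiated variable.

class Constraint:
    def __init__(self, vars_, weight=1):
        self.vars = set(vars_)
        self.weight = weight  # incremented on each conflict during search

def wdeg(x, constraints, unbound):
    fut = lambda c: {v for v in c.vars if v in unbound}
    return sum(c.weight for c in constraints
               if x in c.vars and len(fut(c)) > 1)

def h_wdeg(variables, domains, constraints, unbound):
    # rank by |D(x)| / wdeg(x); guard against a zero weighted degree
    key = lambda x: len(domains[x]) / max(1, wdeg(x, constraints, unbound))
    return sorted(variables, key=key)
```

A variable in a small domain and many conflict-prone constraints thus rises to the front of the ordering.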
Definition 2. A value ordering heuristic h_v : D → ℕ → U is a function which, given a domain d = {v_0, ..., v_{k−1}} of cardinality k, uses a permutation π to produce a serialization function for d defined as [0 ↦ v_{π(0)}, ..., k−1 ↦ v_{π(k−1)}].
Example 5. The min-value heuristic (denoted h_mv) applied to the domain D(x) = {v_0, ..., v_{k−1}} of a variable x uses a permutation π : ℕ → ℕ satisfying

∀ a, b ∈ 0..k−1 : a ≤ b ⇒ v_{π(a)} ≤ v_{π(b)}

to produce a serialization partial function [0 ↦ v_{π(0)}, ..., k−1 ↦ v_{π(k−1)}]. For instance, invoking h_mv({3, 7, 1, 5}) returns [0 ↦ 1, 1 ↦ 3, 2 ↦ 5, 3 ↦ 7].
Definition 3. A value symmetry breaking heuristic h_s : D → D is a function that maps a set of k values from U to a subset of non-symmetric values.
3.2 Rules and Recommendations
Definition 4. Given a CSP ⟨X, D, C⟩, a rule r is a tuple ⟨G, S, V, H⟩ where

- G : 2^C → 2^{2^C} is a partitioning function that breaks C into G_1 ... G_n such that ∪_{i=1}^{n} G_i ⊆ C and G_i ∩ G_j = ∅ for all i ≠ j,
- S : ⟨2^X, 2^D, 2^C⟩ → ℝ is a scoring function,
- V : ⟨2^X, 2^D, 2^C⟩ → 2^X is a variable extraction function,
- H : ⟨2^X, 2^D, 2^C⟩ → ⟨h_x, h_v⟩ is a heuristic selection function.
All scores are normalized in 0..1, with 1 representing the strongest fit.
Definition 5. Given a rule ⟨G, S, V, H⟩, a CSP ⟨X, D, C⟩, and a partition G(C) = {G_1, ..., G_n}, the rule's recommendations are {⟨S_i, V_i, H_i⟩ : 0 < i ≤ n} with S_i = S(X, D, G_i), V_i = V(X, D, G_i), and H_i = H(X, D, G_i).
Generic Partitioning. Several rules use the same partitioning scheme G̃. A rule r focusing on constraints of type t uses the function G̃ to only retain constraints of type t and yields one group per constraint. Namely, let n = |{c ∈ C : T(c) = t}| in G̃(C) = {{c_1}, ..., {c_n}} with all the c_i constraints in C of type t.
3.3 Rules Library
Rules are meant to exploit combinatorial structures expressed with arrays, global constraints, arithmetic constraints, and logical constraints. Structures can be explicit (e.g., global constraints) or implicit (e.g., the static degree of a variable). Cpas offers one rule per combinatorial structure that can produce a set of recommended labeling decisions. Global constraints play a prominent role in the analysis and their rules are described first. A brief discussion of a generic scoring function used by most rules starts the section.
Generic Scoring. The generic scoring applies to a group (i.e., a subset) G ⊆ C of constraints and attempts to capture two characteristics: the homogeneity of the entire set C and the coupling of the variables in each constraint of the group G. A homogeneous constraint set contains few distinct constraint types that might be easier to deal with. The homogeneity of C is measured by 1/|T(C)|, which ranges in 0..1 and peaks at 1 when only one type of constraint is present in C. The variable coupling for a single constraint c ∈ G is an indicator of the amount of filtering to be expected from c.

When vars(c) is a superset of a user-specified array from the model, the ratio of the maximal variable degree in vars(c) to the maximal overall degree (r_1(c) below) is used to estimate c's coupling. Otherwise, the simpler ratio r_2(c) is used.

r_1(c) = max_{x ∈ vars(c)} deg(x) / max_{x ∈ X} deg(x)        r_2(c) = |vars(c)| / |∪_{k ∈ C : T(k) = T(c)} vars(k)|

The generic scoring function for G ⊆ C is then

S̃(G) = 1/|T(C)| · max_{c ∈ G} { r_1(c) if ∃ a ∈ C : T(a) = array ∧ vars(c) ⊇ vars(a); r_2(c) otherwise }

Observe how r_1(c) and r_2(c) are both in the range 0..1, delivering a generic score in the 0..1 range. The rest of the section defines the rules. Each definition specifies the partitioning G, scoring S, variable extraction V, and heuristic selection H functions. Each of these function names is subscripted by a two-letter mnemonic that refers to the rule name.
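A Python sketch of the generic score under these definitions may help fix the ideas; the dictionary-based constraint encoding (`type` and `vars` keys) is an assumption made for illustration only.

```python
# Sketch of the generic score S~: homogeneity of the whole constraint
# set times the strongest coupling estimate found in the group G.

def deg(x, C):
    # static degree: sum over constraints containing x of (|vars(c)| - 1)
    return sum(len(c["vars"]) - 1 for c in C if x in c["vars"])

def generic_score(G, C, X):
    homogeneity = 1.0 / len({c["type"] for c in C})
    arrays = [c for c in C if c["type"] == "array"]
    def coupling(c):
        # r1: relative max degree when c covers a user-declared array
        if any(c["vars"] >= a["vars"] for a in arrays):
            return max(deg(x, C) for x in c["vars"]) / \
                   max(deg(x, C) for x in X)
        # r2: share of the variables touched by constraints of c's type
        same = set().union(*(k["vars"] for k in C
                             if k["type"] == c["type"]))
        return len(c["vars"]) / len(same)
    return homogeneity * max(coupling(c) for c in G)
```

With one alldifferent over a declared array (two constraint types in C), the homogeneity factor is 1/2 and the r_1 coupling is 1, giving a score of 0.5.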
Alldifferent (ad) Rule. The alldifferent(x) constraint over the array x of n variables holds when all variables are pairwise distinct. The rule uses the generic partitioning G̃ and the generic scoring S̃. The variable selection heuristic is simply h_dom (i.e., the smallest domain), while the value selection heuristic is h_mv (i.e., min-value). The variable extraction simply restricts the scope of the rule to the variables of the constraint, namely V_ad(X, D, {c}) = vars(c). The heuristic selection H_ad returns ⟨h_dom, h_mv⟩. Note that V_ad will always receive a singleton as the rule uses the generic partitioning that always produces partitions with singletons. The rule is thus ⟨G̃, S̃, V_ad, H_ad⟩.
Knapsack (ks) Rule. The knapsack(w, x, b) constraint over the array x of n variables holds when Σ_{i=0}^{n−1} w_i · x_i ≤ b. The knapsack rule uses the generic partitioning G̃ and the generic scoring S̃.

A customized variable ordering heuristic is desirable when a user-specified array of variables coincides with the array x. If true, the rule favors a variable ordering for x based on decreasing weights in w and breaks ties according to domain sizes. Let π : ℕ → ℕ be a permutation of the indices 0..n−1 into x satisfying

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ ⟨w_{π(i)}, |D(x_{π(i)})|⟩ ⪰ ⟨w_{π(j)}, |D(x_{π(j)})|⟩

where ⪰ denotes the lexicographic ordering over pairs. The variable ordering is then a partial function h_ks = [0 ↦ x_{π(0)}, ..., n−1 ↦ x_{π(n−1)}]. When x does not correspond to a model array, the heuristic is simply h_dom. The heuristic selection function H is

H_ks(X, D, {c}) = ⟨ h_ks if ∃ a ∈ C : T(a) = array ∧ vars(a) = vars(c), h_dom otherwise ; h_mv ⟩

The variable extraction simply restricts the scope of the rule to the variables of the constraint, namely V_ks(X, D, {c}) = vars(c). The rule is thus ⟨G̃, S̃, V_ks, H_ks⟩.
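One possible Python reading of h_ks is sketched below; the assumption that ties on equal weights are broken toward smaller domains is ours, since the pair ordering above does not fix the tie direction.

```python
# Illustrative sketch of h_ks: sort the knapsack variables by
# decreasing weight, breaking ties by increasing current domain size.

def h_ks(xs, weights, domains):
    order = sorted(range(len(xs)),
                   key=lambda i: (-weights[i], len(domains[xs[i]])))
    return [xs[i] for i in order]
```

Heavy items are labeled first, so the bound b prunes early; among equally heavy items, the smaller domain fails faster.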
Spread (sp) Rule. The spread(x, s, σ) constraint over an array x of n variables and a spread variable σ holds whenever s = Σ_{i=0}^{n−1} x_i ∧ n·σ² ≥ Σ_{i=0}^{n−1} (x_i − s/n)² holds. It constrains the mean to the constant s/n and states that σ is an upper bound to the standard deviation of x [19]. The rule uses the generic partitioning and generic scoring functions. To minimize σ, one must minimize each term in the sum and thus bias the search towards values in D(x_i) closest to s/n. This suggests both a variable and a value selection heuristic. The value selection can simply permute the values of the domain to first consider those values closer to s/n. Namely, let π : ℕ → ℕ be a permutation of the range 0..k−1 satisfying

∀ i, j ∈ 0..k−1 : i ≤ j ⇒ |v_{π(i)} − s/n| ≤ |v_{π(j)} − s/n|

in the definition of the value ordering h_vsp = [0 ↦ v_{π(0)}, ..., k−1 ↦ v_{π(k−1)}] for the domain D(x) = {v_0, ..., v_{k−1}}. Given h_vsp, the ideal variable ordering is maximum regret. Namely, the variable with the largest difference between the first two values suggested by its h_vsp ought to be labeled first. Let π : ℕ → ℕ be a permutation for the range 0..n−1 satisfying

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ h_vsp(D(x_{π(i)}))(1) − h_vsp(D(x_{π(i)}))(0) ≥ h_vsp(D(x_{π(j)}))(1) − h_vsp(D(x_{π(j)}))(0)

in the variable ordering h_xsp = [0 ↦ x_{π(0)}, ..., n−1 ↦ x_{π(n−1)}]. Note how the value ordering h_vsp : D → ℕ → U is passed the domains of the two chosen variables x_{π(i)} and x_{π(j)} to form the regret between the best two values according to h_vsp. The heuristic selection H_sp returns ⟨h_xsp, h_vsp⟩ and the variable extraction V_sp returns x (the variables of the spread) in the rule ⟨G̃, S̃, V_sp, H_sp⟩.
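The two spread heuristics admit a short Python sketch (illustrative; the `regret` function below compares distances to the mean, which is one possible reading of the regret defined above).

```python
# Sketch of the spread heuristics: values closest to the mean s/n
# come first, and variables with the largest regret between their two
# best values are labeled first.

def h_vsp(domain, mean):
    # serialize the domain by increasing distance to the mean
    return sorted(domain, key=lambda v: abs(v - mean))

def h_xsp(variables, domains, mean):
    def regret(x):
        vals = h_vsp(domains[x], mean)
        # gap in distance-to-mean between the two best values
        return abs(vals[1] - mean) - abs(vals[0] - mean) \
            if len(vals) > 1 else 0
    return sorted(variables, key=regret, reverse=True)
```

A variable whose second-best value is far from the mean has the most to lose if postponed, so it is labeled first.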
Sequence (sq) Rule. The classic sequence(x, d, p, q, V) global constraint [2] requires that for every window of length q in array x, at most p variables take their values in V and the demands in d for values in V are met by the sequence. The sequence rule overrides the partitioning function G̃ to group sequence constraints that pertain to the same sequence x and same demand d, to exploit the tightness of the various sequencing requirements and to yield better variable and value orderings. Let G_sq(C) = {G_1, ..., G_k} where G_1 through G_k satisfy

∀ a, b ∈ G_i : vars(a) = vars(b) ∧ d(a) = d(b) ∧ T(a) = T(b) = sq

The refined scoring function

S_sq(X, D, G) = S̃(X, D, G) · (Σ_{c ∈ G} U(c)) / |G|    where    U(c) = (c.q / c.p) · (Σ_{j ∈ c.V} d_j) / n

scales the generic score S̃ with the average constraint tightness of a group G and the tightness of a single sequence constraint. The tightness of sequence c is proportional to c.q/c.p and to the overall demand for values in c.V.

Following [20], the ideal variable and value selection heuristics attempt to avoid gaps in the sequence while labeling and give preference to values that contribute the most to the constraint tightness. The permutation π_x : ℕ → ℕ of 0..n−1 satisfies

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ |π_x(i) − n/2| ≤ |π_x(j) − n/2|

(π_x prefers variables that are closer to the middle of the sequence) and is used to define the variable ordering h_xsq = [0 ↦ x_{π_x(0)}, ..., n−1 ↦ x_{π_x(n−1)}]. The value selection heuristic is driven by the tightness of a value j in all the constraints of group G

U(j) = Σ_{c ∈ G} U(c) · (j ∈ c.V)

The permutation π_v of the values in D(x) = 0..k−1 makes sure that i precedes j in π_v if it has a higher utility, i.e., π_v satisfies

∀ i, j ∈ 0..k−1 : i ≤ j ⇒ U(π_v(i)) ≥ U(π_v(j))

and leads to the value ordering h_vsq = [0 ↦ v_{π_v(0)}, ..., k−1 ↦ v_{π_v(k−1)}]. The heuristic selection H_sq returns ⟨h_xsq, h_vsq⟩ while the variable extraction function V_sq returns ∪_{c ∈ G} vars(c) for a group G of sequence constraints. The sequence rule is ⟨G_sq, S_sq, V_sq, H_sq⟩.
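The tightness and value-utility computations can be sketched in Python; the dict-based constraint record with `p`, `q`, `V`, and `d` fields is an assumption for illustration.

```python
# Sketch of the sequence rule's tightness and value ordering: a value's
# utility sums the tightness U(c) of every constraint in the group
# whose value set V contains it.

def tightness(c, n):
    # U(c) = (q/p) * (sum of demands for values in V) / n
    return (c["q"] / c["p"]) * sum(c["d"][j] for j in c["V"]) / n

def h_vsq(values, group, n):
    # values with the highest aggregate utility are tried first
    util = lambda j: sum(tightness(c, n) for c in group if j in c["V"])
    return sorted(values, key=util, reverse=True)
```

Tighter constraints thus pull their values toward the front of the serialization, matching the "prefer values that contribute most to tightness" intent.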
WeightedSum (ws) Rule. The rule applies to a COP ⟨X, D, C, O⟩ with O ≡ Σ_{i=0}^{n−1} w_i · x_i where all the w_i are positive coefficients. The objective (without loss of generality, a minimization) can be normalized as a linear constraint c defined as o = Σ_{i=0}^{n−1} w_i · x_i with a fresh variable o. The partitioning function G_ws returns the singleton {o = Σ_{i=0}^{n−1} w_i · x_i} while the scoring function S_ws always returns 1. To minimize the objective, it is natural to first branch on the term with the largest weight and choose a value that acts as the smallest multiplier. Yet, variables in o are subject to constraints linking them to other decision variables and it might be preferable to first branch on those if these variables are more tightly coupled. Let Z(x) = ∪_{i ∈ 0..n−1} ∪_{c ∈ cstr(x_i)} vars(c) \ {x} denote the set of variables one "hop" away from variables in array x. The decision to branch on x or on Z(x) can then be based upon an estimation of the coupling among these variables. As in the generic scoring, the expression max_{y ∈ S} deg(y) can be used to estimate the coupling within set S and drives the choice between x and Z(x), delivering a simple variable extraction function V_ws

V_ws(X, D, {c}) = x if max_{y ∈ x} deg(y) ≥ max_{y ∈ Z(x)} deg(y); Z(x) otherwise

The variable ordering over x can directly use the weights in the objective. But a variable ordering operating on Z(x) must first determine the contributions of a variable y ∈ Z(x) to the terms of the objective function. Note how Z(y) ∩ x identifies the terms of the objective function affected by a decision on y. It is therefore possible to define a weight function that aggregates the weights of the terms affected by a decision on y. Let

w(y) = Σ_{z ∈ Z(y) ∩ x} c.w(z)    ∀ y ∈ Z(x)

denote the aggregate weights for variable y, where c.w(z) is the actual weight of variable z in the objective. A permutation π : ℕ → ℕ of the variable indices ranging over the n variables in x (respectively, over the n variables in Z(x)) satisfies

∀ i, j ∈ 0..n−1 : i ≤ j ⇒ w(x_{π(i)}) ≥ w(x_{π(j)})

and is key to define the variable ordering h_ws = [0 ↦ x_{π(0)}, ..., n−1 ↦ x_{π(n−1)}]. The heuristic selection function H_ws returns ⟨h_ws, h_mv⟩ (the value selection is min-value) and the entire rule is ⟨G_ws, S_ws, V_ws, H_ws⟩.
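The one-hop set Z(x) and the aggregated synthetic weights w(y) can be sketched as follows (an illustrative encoding in which constraints are plain sets of variable names).

```python
# Sketch of the ws rule's structural helpers: Z(x) collects the
# variables sharing a constraint with x, and aggregate_weight sums the
# objective weights of the x-terms a neighbor y is linked to.

def one_hop(xs, constraints):
    # Z(x): variables reachable in one constraint hop, minus x itself
    Z = set()
    for c in constraints:
        if c & xs:
            Z |= c
    return Z - xs

def aggregate_weight(y, obj_weights, constraints):
    # w(y): objective weights of the terms affected by a decision on y
    touched = set()
    for c in constraints:
        if y in c:
            touched |= c & set(obj_weights)
    return sum(obj_weights[z] for z in touched)
```

Sorting Z(x) by decreasing aggregate weight reproduces the h_ws ordering over the one-hop variables.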
PickValueFirst (pv) Rule. If the number of values to consider far outnumbers the variables to label, it is desirable to first choose a value and then a variable to assign it to. This rule generates one recommendation for each variable array and the partitioning function is thus G_pv(C) = {{c} ∈ C : T(c) = array}. The scoring function measures the ratio of array size to number of values

S_pv(X, D, {array(x)}) = 1 − |x| / |∪_{a ∈ x} D(a)| if |∪_{a ∈ x} D(a)| ≥ |x|; 0 otherwise

The variable extraction function V_pv simply returns the variables in the array x, while H_pv(X, D, {array(x)}) = ⟨h_static, h_mv⟩. The rule is ⟨G_pv, S_pv, V_pv, H_pv⟩.
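Under the reading of the score given above, S_pv is a one-liner: the score grows with the surplus of values over variables (a sketch, not Cpas code).

```python
# Sketch of S_pv: 1 - |x| / |union of domains|, or 0 when the values
# do not outnumber the variables.

def s_pv(variables, domains):
    values = set().union(*(domains[x] for x in variables))
    if len(values) < len(variables):
        return 0.0
    return 1 - len(variables) / len(values)
```

For example, two variables ranging over eight values score 1 − 2/8 = 0.75, making a value-first strategy attractive.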
Degree (deg) Rule. The rule partitions C with G_deg(C) = {{c} ∈ C : T(c) = array} and uses a scoring that conveys the diversity of the static degrees of the variables in the arrays. The index of diversity is based on the relative frequencies of each member of the collection [8] and is the first factor in the definition of S_deg. The index tends to 1 for diverse populations and to 0 for uniform populations. The second factor captures the relative coupling of the variables in the array and also belongs to the 0..1 range. The score function is

S_deg(X, D, {array(x)}) = (1 − Σ_{d=1}^{z} p_d²) · max_{y ∈ x} deg(y) / max_{y ∈ X} deg(y)

where z is the number of distinct degrees, p_d = freq_d / |x| and freq_d = |{a ∈ x : deg(a) = d}|. Note that, when all the variables in x have the same static degree, the diversity index is equal to 0, sending the overall score to 0. The variable extraction is V_deg(X, D, {array(x)}) = x. The variable ordering follows h_deg, i.e., it selects variables with largest degree first. The value selection is h_mv, leading to a definition for the heuristic selection H_deg that returns ⟨h_deg, h_mv⟩, and the rule is ⟨G_deg, S_deg, V_deg, H_deg⟩.
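The degree-rule score, combining the diversity index with the coupling factor, can be sketched directly from the formula (illustrative code; `deg` maps every variable in X to its precomputed static degree).

```python
from collections import Counter

# Sketch of S_deg: (1 - sum of squared degree frequencies in the
# array) scaled by the array's relative coupling.

def s_deg(xs, deg):
    freq = Counter(deg[x] for x in xs)
    n = len(xs)
    diversity = 1 - sum((f / n) ** 2 for f in freq.values())
    coupling = max(deg[x] for x in xs) / max(deg.values())
    return diversity * coupling
```

An array whose variables all share one degree scores 0, exactly as noted in the text.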
The Default Rule. The rule ensures that all variables are ultimately labeled and its score is the lowest (i.e., a small constant ε bounded away from 0). The rule could effectively use any black-box heuristic like Activity-based search, Impact-based search, dom/wdeg, dom/ddeg, or even the simple dom heuristic. In the following, it defaults to the dom heuristic. G_def(C) = C, S_def(X, D, C) = ε, and V_def(X, D, C) = X to make sure that all variables are labeled. The variable ordering is h_dom and the value ordering is h_mv. The overall heuristic selection function H_def returns ⟨h_dom, h_mv⟩ and the rule boils down to ⟨G_def, S_def, V_def, H_def⟩.
3.4 Symmetry Breaking
The symmetry-breaking analysis is global, i.e., it considers the model as a whole to determine whether symmetries can be broken dynamically via the search procedure. When conclusive, the analysis offers a partitioning of the values into equivalence classes that the search can leverage.

While breaking symmetries statically is appealing for its simplicity, it can interfere with the dynamic variable and value selection heuristics. Breaking symmetries dynamically through the search sidesteps the issue. A global symmetry analysis of the model identifies equivalence classes among values in domains and avoids the exploration of symmetric labeling decisions. The automatic derivation of value symmetry breaking in Cpas follows [23, 5], where the authors propose a compositional approach that detects symmetries by exploiting the properties of the combinatorial substructures expressed by global constraints.
1 forall(r in rec.getKeys()) by (rec{r}.getScore()) {
2   rec{r}.label();
3   if (solver.isBound()) break;
4 }
Fig. 1. A Skeleton for a Synthesized Search Template.
3.5 Obtaining and Composing Recommendations
Given a CSP ⟨X, D, C⟩ and a set of rules R, the synthesis process computes a set of recommendations rec defined as follows

let {G_1, ..., G_k} = G_r(C) in
rec = ∪_{r ∈ R} (∪_{i ∈ 1..k} {⟨S_r(⟨X, D, G_i⟩), V_r(⟨X, D, G_i⟩), H_r(⟨X, D, G_i⟩)⟩})

Namely, each rule decomposes the set of constraints according to its partitioning scheme and proceeds with the production of a set of recommendations, one per partition. When a rule does not apply, it simply produces an empty set of recommendations. Once the set rec is produced, the search ranks the recommendations based on their scores and proceeds with the skeleton shown in Figure 1. Line 2 invokes the polymorphic labeling method of the recommendation. The search ends as soon as all the variables are bound (line 3). Note that since

∪_{⟨S_r, V_r, H_r⟩ ∈ rec} V_r = X

the search is guaranteed to label all the variables. Figure 2 depicts the label method for a variable-first recommendation, i.e., a recommendation that first selects a variable and then chooses a value. Line 10 retrieves the variables the recommendation operates on, and line 12 selects a variable according to the variable ordering h_x embedded in the recommendation. Line 14 retrieves the values that are to be considered for the chosen variable pxi. The getValues method is responsible for only returning non-symmetrical values when value symmetries can be broken (it returns the full domain of pxi otherwise). The index vr spans the ranks of these values in d and line 17 retrieves the vr-th value from d. If the value is still in the domain, line 19 uses it to label pxi. Line 24 alludes to the fact that value-first recommendations also have their own implementation of the Recommendation interface to support their control flow.
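In Python terms, the composition step amounts to flat-mapping every rule over its partitions and sorting by score; the dictionary-based rule records below are illustrative stand-ins for the Comet objects, not the Cpas API.

```python
# Sketch of the recommendation composition: each rule is applied to
# every partition it produces, and the resulting (score, vars,
# heuristics) triples are ranked by decreasing score.

def synthesize(rules, X, D, C):
    recs = []
    for rule in rules:
        for Gi in rule["partition"](C):
            recs.append((rule["score"](X, D, Gi),
                         rule["vars"](X, D, Gi),
                         rule["heuristics"](X, D, Gi)))
    # highest-scored recommendations label first
    return sorted(recs, key=lambda r: r[0], reverse=True)
```

With a default rule whose extraction returns X, the union of the extracted variable sets always covers X, mirroring the coverage guarantee above.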
4 A Walkthrough Example
The synthesis process is illustrated in detail on one representative COP featuring arithmetic, reified, as well as global constraints. In the scene allocation problem, shown in Figure 3, one must schedule a movie shoot and minimize the production costs. At most 5 scenes can be shot each day and actors are compensated per day of presence on the set. The decision variable shoot[s] (line 2) represents the day scene s is shot, while variable nbd[a] represents the number of days an actor a appears in the scenes.
 1 interface Recommendation {
 2   void label();
 3   var<CP>{int}[] getVars();
 4   set{int} getValues(var<CP>{int} x);
 5   int hx(var<CP>{int}[] x, int rank);
 6   int hv(set{int} vals, int rank);
 7 }
 8 class VariableRecommendation implements Recommendation { ...
 9   void label() {
10     var<CP>{int}[] x = getVars();
11     forall(rank in x.getRange()) {
12       var<CP>{int} pxi = hx(x, rank);
13       if (|D(pxi)| == 1) continue;
14       set{int} d = getValues(pxi);
15       int vr = 0;
16       while (vr < |d|) {
17         int pvr = hv(d, vr++);
18         if (pvr ∈ pxi)
19           try<cp> cp.label(pxi, pvr); | cp.diff(pxi, pvr);
20       }
21     }
22   }
23 }
24 class ValueRecommendation implements Recommendation ...
Fig. 2. The Variable/Value Recommendation Classes.
The objective function is a weighted sum, leading to a score of 1 for the ws rule. On a given instance, all the nbd variables have the same static degree (16) while the remaining variables (shoot) all have a static degree of 18. Therefore, max_{y ∈ nbd} deg(y) < max_{y ∈ shoot} deg(y) and the rule recommends to branch on the connected (1-hop away) variables Z(nbd), i.e., on the shoot variables. The rule proceeds and creates synthetic weights for each entry in shoot that aggregate the weight of terms influenced by the scene being shot.

Beyond ws, two rules produce additional recommendations. The degree rule produces a single recommendation, while the default rule produces another. Yet, the score of the degree rule is 0 since all the variables have the same degree, forcing the diversity index to 0. The default rule issues a recommendation with a score of ε to label any remaining variables not handled by the ws recommendation. The value-symmetry analysis determines that the values (days) assigned to the scenes (i.e., shoot) are fully interchangeable as reported in [13]. The symmetries are broken dynamically with the getValues method of the recommendation. The method returns the subset of values (days) already in use (these are no longer symmetric and each one forms one equivalence class) along with one unused day. Comparatively, the tailored search in [22] iterates over the scenes
1 Solver<CP> m();
2 var<CP>{int} shoot[Scenes](m, Days);
3 var<CP>{int} nbd[Actor](m, Days);
4 int up[i in Days] = 5;
5 minimize<m> sum(a in Actor) fee[a] * nbd[a] subject to {
6   forall(a in Actor)
7     m.post(nbd[a] == sum(d in Days) (or(s in which[a]) shoot[s] == d));
8   m.post(atmost(up, shoot), onDomains);
9 }
Fig. 3. A Model for the Scene Allocation Problem.
and always chooses to first label the scene with the smallest domain, breaking ties based on the costliest scene first.
5 Experimental Results
Experiments were carried out on a mix of feasible CSPs and COPs that benefit from non-trivial tailored search procedures. Each benchmark is executed 25 times with a timeout of 300 seconds. Results are reported for Activity-Based Search (ABS), Impact-Based Search (IBS), Weighted-Degree search (WDeg), a state-of-the-art handwritten tailored search, and the search synthesized by Cpas. ABS, IBS and WDeg all use a slow restarting strategy based on an initial failure limit of 3 · |X| and a growth rate of 2 (i.e., the failure limit in round i is l_i = 2 · l_{i−1}). Table 1 reports the average CPU time μ(T) (in seconds), its standard deviation σ(T), and the number of runs that timed out (TO). The analysis time for Cpas is negligible. Timeouts are "charged" 300 seconds in the averages.
The tailored search procedures are taken from the literature and do exploit symmetry breaking when appropriate. The steel mill instances come from CSPLib [7, 10] and the handcrafted search is from [25]. The car sequencing instances come from CSPLib and the tailored search uses the best value and variable orderings from [20]. The nurse rostering search and instances are from [19]. The progressive party instances come from [7] and the tailored search labels a period fully (using first-fail) before moving to the next period. The multi-knapsack as well as the magic square instances are from [18]. The tailored search for the magic square uses restarts, a semi-greedy variant of h_dom for its variable ordering, and a randomized (lower or upper first) bisection for domain splitting. Grid coloring and radiation models and instances were obtained from the MiniZinc Challenge(1). All the COP searches are required to find a global optimum and prove optimality. All results are based on Comet 3.0 on a 2.8 GHz Intel Core 2 Duo machine with 2GB RAM running Mac OS X 10.6.7.
Rule adequacy. The intent of Cpas was to produce code reasonably close to procedures produced by experts and competitive with generic black-box searches. The evaluation suite contains additional benchmarks (quite a few classic CSPs)
(1) Available at http://www.g12.csse.unimelb.edu.au/minizinc/
Benchmark     | Tailored        | CPAS            | ABS             | IBS             | WDEG
              | μ(T)  σ(T)  TO  | μ(T)  σ(T)  TO  | μ(T)  σ(T)  TO  | μ(T)  σ(T)  TO  | μ(T)  σ(T)  TO
car1          | 0.1   0.0   0   | 0.1   0.0   0   | 80.7  63.1  1   | 300.0 0.0   25  | 88.3  111.1 5
car2          | 0.1   0.0   0   | 0.1   0.0   0   | 38.8  42.2  0   | 221.2 95.7  14  | 53.7  78.9  1
car3          | 0.7   0.1   0   | 0.6   0.0   0   | 266.4 66.1  19  | 300.0 0.0   25  | 276.8 80.2  23
debruijn      | 0.6   0.1   0   | 0.5   0.0   0   | 300.0 0.0   25  | 301.2 0.7   25  | 300.0 0.0   25
gap           | 13.8  1.3   0   | 10.5  0.3   0   | 44.7  2.3   0   | 15.4  0.6   0   | 91.1  5.9   0
golomb        | 3.4   0.3   0   | 2.6   0.2   0   | 32.3  14.8  0   | 137.2 60.0  1   | 15.3  0.2   0
color         | 24.9  2.5   0   | 193.6 0.7   0   | 300.0 0.0   25  | 300.0 0.0   25  | 300.0 0.0   25
gcolor(56)    | 2.7   0.2   0   | 2.3   0.1   0   | 22.1  14.3  0   | 5.3   0.8   0   | 83.8  1.0   0
knapCOP1      | 0.8   0.0   0   | 0.0   0.0   0   | 0.5   0.0   0   | 0.4   0.1   0   | 1.3   0.0   0
knapCOP2      | 10.4  0.1   0   | 3.2   0.1   0   | 2.7   0.5   0   | 5.0   2.0   0   | 13.4  0.3   0
knapCOP3      | 300.0 0.0   25  | 34.6  0.5   0   | 64.7  13.8  0   | 213.7 64.4  3   | 300.0 0.0   25
knapCSP1      | 0.6   0.0   0   | 0.1   0.0   0   | 0.1   0.1   0   | 0.1   0.1   0   | 0.3   0.2   0
knapCSP2      | 3.2   0.0   0   | 0.8   0.0   0   | 1.0   0.4   0   | 2.0   1.0   0   | 4.2   2.5   0
knapCSP3      | 300.0 0.0   25  | 9.3   0.1   0   | 13.8  11.4  0   | 63.0  37.9  0   | 282.4 40.8  20
magic Sq10    | 4.6   4.2   0   | 300.0 0.0   25  | 2.3   2.8   0   | 1.3   1.6   0   | 89.2  126.5 6
magic Sq11    | 7.9   9.2   0   | 300.0 0.0   25  | 9.2   17.2  0   | 3.8   2.5   0   | 249.6 87.6  16
magicseries   | 5.8   0.7   0   | 5.7   0.1   0   | 2.8   1.6   0   | 2.7   0.4   0   | 1.9   3.0   0
market        | 5.8   0.2   0   | 5.2   0.1   0   | 30.4  21.9  0   | 37.2  27.1  0   | 47.3  36.5  0
nurse(z3)     | 0.3   0.0   0   | 4.6   0.1   0   | 40.3  17.2  0   | 18.9  6.9   0   | 163.4 49.8  0
nurse(z5)     | 0.1   0.0   0   | 2.1   0.0   0   | 53.8  6.0   0   | 13.0  9.4   0   | 61.4  15.6  0
perfectSq     | 0.2   0.0   0   | 0.2   0.0   0   | 300.0 0.0   25  | 300.0 0.0   25  | 300.0 0.0   25
progressive1  | 0.1   0.0   0   | 0.1   0.0   0   | 67.3  96.4  2   | 41.1  34.7  0   | 3.7   2.3   0
progressive2  | 0.6   0.0   0   | 0.7   0.0   0   | 112.3 125.7 7   | 278.8 64.6  22  | 175.4 114.8 9
progressive3  | 0.1   0.0   0   | 0.1   0.0   0   | 19.2  58.8  1   | 46.2  84.4  2   | 153.4 142.7 11
radiation1    | 2.3   0.0   0   | 0.5   0.0   0   | 0.4   0.1   0   | 1.7   0.1   0   | 0.1   0.0   0
radiation2    | 7.3   0.8   0   | 300.0 0.0   25  | 2.2   0.4   0   | 8.7   1.3   0   | 0.6   0.2   0
radiation3    | 198.1 3.9   0   | 300.0 0.0   25  | 0.5   0.1   0   | 2.4   0.2   0   | 0.1   0.0   0
radiation4    | 2.0   0.0   0   | 0.2   0.0   0   | 1.0   0.2   0   | 5.5   0.2   0   | 0.6   0.3   0
radiation5    | 0.0   0.0   0   | 12.4  0.1   0   | 1.1   0.2   0   | 5.6   0.5   0   | 0.3   0.2   0
radiation6    | 1.1   0.0   0   | 300.0 0.0   25  | 1.1   0.2   0   | 6.5   0.9   0   | 0.2   0.0   0
radiation7    | 1.3   0.0   0   | 11.2  0.3   0   | 1.3   0.4   0   | 10.0  0.7   0   | 0.2   0.1   0
radiation8    | 6.8   0.0   0   | 300.0 0.0   25  | 2.1   0.6   0   | 9.5   2.8   0   | 0.5   0.1   0
radiation9    | 300.0 0.0   25  | 4.8   0.1   0   | 2.1   0.5   0   | 9.5   0.7   0   | 1.1   0.6   0
RRT           | 4.7   0.0   0   | 4.9   0.1   0   | 145.4 131.9 10  | 243.2 105.9 17  | 91.3  94.8  3
scene         | 0.4   0.0   0   | 0.7   0.0   0   | 156.7 45.8  0   | 47.1  16.1  0   | 300.0 0.0   25
slab1         | 5.3   0.0   0   | 2.6   0.4   0   | 300.0 0.0   25  | 300.0 0.0   25  | 290.6 47.2  24
slab2         | 2.9   0.0   0   | 3.6   0.2   0   | 300.0 0.0   25  | 300.0 0.0   25  | 266.3 93.5  22
slab3         | 300.0 0.0   25  | 7.7   0.5   0   | 300.0 0.0   25  | 300.0 0.0   25  | 288.1 59.5  24
sport         | 6.6   1.1   0   | 5.4   0.1   0   | 151.2 123.9 7   | 255.3 96.3  20  | 131.8 111.1 5
Total         | 1526        100 | 2131        150 | 3170        197 | 4113        279 | 4428        294
Table 1. Experimental Results.
that terminate extremely quickly for all the search algorithms and therefore provide no insight into Cpas's behavior. Figure 4 graphically illustrates how often the various rules contribute to the search procedure of a model. Unsurprisingly, a rule like "pick-value-first" is used extremely rarely (only on the perfect square), as the overwhelming majority of benchmarks do not have this property. The other rules are used substantially more often. The fallback rule is also used fairly rarely. Overall, the rules do not overfit the benchmarks, i.e., we are far from equating one rule with one benchmark.
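To make the rule mechanism concrete, the following sketch shows one way a rule-based synthesizer of this kind can be organized: each rule pairs a structural test on the model with the search fragment it contributes, and a generic fallback fires when no rule matches. All names (Rule, synthesize, the flag-based model description) are illustrative assumptions, not the actual Cpas API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]    # structural test on the model
    heuristic: Callable[[dict], str]   # search fragment it contributes

def synthesize(model: dict, rules: list[Rule]) -> list[str]:
    """Collect the heuristics of every rule whose structural test fires."""
    chosen = [r.heuristic(model) for r in rules if r.applies(model)]
    if not chosen:
        # Fallback rule: a plain first-fail (smallest-domain) labeling.
        chosen = ["label(min-dom, min-value)"]
    return chosen

# Toy model descriptions: flags standing in for detected structures.
rules = [
    Rule("pick-value-first",
         lambda m: m.get("few-values-many-vars", False),
         lambda m: "branch on values before variables"),
    Rule("objective-phase",
         lambda m: "objective" in m,
         lambda m: "phase 1: label objective-linked vars"),
]

print(synthesize({"objective": "makespan"}, rules))
print(synthesize({}, rules))  # no rule fires -> fallback
```

The point of the structure is the one made above: a rule fires only when its structural precondition holds, so rare structures (like pick-value-first's) trigger rarely, and the fallback covers models that match nothing.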
Tailored Search. Procedures written by experts are often sophisticated, with symmetry breaking and rich variable/value orderings using multiple criteria. The performance of custom searches is therefore a target to approach and possibly match on a number of benchmarks. Cpas is successful in that respect and only falls short on models like radiation (it cannot generate a bisecting search) or graph coloring (it branches on the chromatic number too early). On the magic square, Cpas cannot exploit semantics not associated with any one global constraint.
Fig. 4. Rule Usage.
Black-Box Searches. Compared to black-box searches, Cpas is generally competitive, especially in terms of robustness. Sometimes the black-box heuristics perform better (e.g., on radiation), and this needs to be investigated further with a much lengthier set of experiments, with and without restarting. Finally, it is possible, and maybe even desirable, to switch to a fallback rule that uses an effective black-box search technique that dominates a plain dom heuristic. This was intentionally left out to avoid confusion about the true causes of Cpas's behavior. The grand totals of running times and numbers of timeouts across the 5 searches are particularly revealing.
6 Conclusion
Cpas automatically generates search algorithms from high-level CP models. Given a CP model, Cpas recognizes and classifies its structures to synthesize an appropriate algorithm. Empirical results indicate that the technique can be competitive with state-of-the-art procedures on several classic benchmarks. Cpas is able to generate searches that split variables into groups/phases and use specialized variable and value ordering heuristics within each group. Cpas also relies on a global value symmetry breaking analysis that follows [23,5] and whose results are exploited within each group of variables.
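A minimal sketch of the kind of search just described, i.e., a backtracking search that labels variables group by group, with each group carrying its own variable- and value-ordering heuristic, could look as follows. This is an illustration of the general shape under assumed names (phased_search, the dict-based model), not the Cpas implementation.

```python
def phased_search(domains, phases, consistent, assignment=None):
    """Backtracking search that labels one phase (group) after another.

    domains:    dict var -> list of candidate values
    phases:     list of (vars, var_order, val_order) triples
    consistent: predicate on a partial assignment
    """
    if assignment is None:
        assignment = {}
    if not phases:                       # all groups labeled: solution found
        return dict(assignment)
    group, var_order, val_order = phases[0]
    unbound = [v for v in group if v not in assignment]
    if not unbound:                      # group done: move to next phase
        return phased_search(domains, phases[1:], consistent, assignment)
    var = min(unbound, key=var_order)    # group-specific variable ordering
    for val in sorted(domains[var], key=val_order):  # group-specific value ordering
        assignment[var] = val
        if consistent(assignment):
            result = phased_search(domains, phases, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None                          # values exhausted: backtrack

# Toy use: an all-different model labeled in two phases with distinct heuristics.
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
alldiff = lambda a: len(set(a.values())) == len(a)
phases = [(["x", "y"], lambda v: len(domains[v]), lambda val: val),   # min-dom, min-value
          (["z"], lambda v: 0, lambda val: -val)]                     # any var, max-value first
print(phased_search(domains, phases, alldiff))  # -> {'x': 1, 'y': 2, 'z': 3}
```

The design point is the one the synthesis exploits: because each phase carries its own orderings, structurally different parts of a model (e.g., objective-linked variables versus the rest) can be searched with heuristics suited to them.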
References

1. Ravindra K. Ahuja, Özlem Ergun, James B. Orlin, and Abraham P. Punnen. A survey of very large-scale neighborhood search techniques. Discrete Appl. Math., 123(1-3):75-102, 2002.
2. N. Beldiceanu and E. Contejean. Introducing global constraints in CHIP. Mathematical and Computer Modelling, 20(12):97-123, 1994.
3. F. Boussemart, F. Hemery, C. Lecoutre, and L. Sais. Boosting Systematic Search by Weighting Constraints. In Proceedings of the Sixteenth European Conference on Artificial Intelligence, ECAI'04, pages 146-150. IOS Press, 2004.
4. Samir A. Mohamed Elsayed and Laurent Michel. Synthesis of search algorithms from high-level CP models. In Proceedings of the 9th International Workshop on Constraint Modelling and Reformulation, in conjunction with CP'2010, pages 186-200, 2010.
5. M. Eriksson. Detecting symmetries in relational models of CSPs. Master's thesis, Department of Information Technology, Uppsala University, Sweden, 2005.
6. I.P. Gent, C. Jefferson, and I. Miguel. Minion: A fast, scalable, constraint solver. In ECAI 2006: 17th European Conference on Artificial Intelligence, August 29-September 1, 2006, Riva del Garda, Italy, page 98, 2006.
7. I.P. Gent and T. Walsh. CSPLib: a benchmark library for constraints. In Principles and Practice of Constraint Programming, CP'99, pages 480-481. Springer, 1999.
8. J.P. Gibbs and W.T. Martin. Urbanization, technology, and the division of labor: International patterns. American Sociological Review, 27(5):667-677, 1962.
9. C.P. Gomes, B. Selman, N. Crato, and H. Kautz. Heavy-tailed phenomena in satisfiability and constraint satisfaction problems. Journal of Automated Reasoning, 24(1):67-100, 2000.
10. Belgian Constraints Group. Data and results for the steel mill slab problem, available from http://becool.info.ucl.ac.be/steelmillslab. Technical report, UCLouvain.
11. R.M. Haralick and G.L. Elliott. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14(3):263-313, 1980.
12. W.D. Harvey and M.L. Ginsberg. Limited discrepancy search. In International Joint Conference on Artificial Intelligence, volume 14, pages 607-615, 1995.
13. Pascal Van Hentenryck, Pierre Flener, Justin Pearson, and Magnus Ågren. Tractable Symmetry Breaking for CSPs with Interchangeable Values. In IJCAI, pages 277-284, 2003.
14. ILOG SA. ILOG Concert 2.0.
15. L. Michel and P. Van Hentenryck. Impact-based versus Activity-based Search for Black-Box Constraint-Programming Solvers. http://arxiv.org/abs/1105.6314, 2011.
16. J.-N. Monette, Y. Deville, and P. Van Hentenryck. Aeon: Synthesizing scheduling algorithms from high-level models. Operations Research and Cyber-Infrastructure, pages 43-59, 2009.
17. E. O'Mahony, E. Hebrard, A. Holland, C. Nugent, and B. O'Sullivan. Using case-based reasoning in an algorithm portfolio for constraint solving. In 19th Irish Conference on AI, 2008.
18. P. Refalo. Impact-based search strategies for constraint programming. In Principles and Practice of Constraint Programming, CP'2004, pages 557-571. Springer, 2004.
19. Pierre Schaus, Pascal Van Hentenryck, and Jean-Charles Régin. Scalable load balancing in nurse to patient assignment problems. In Proceedings of the 6th CPAIOR Conference, CPAIOR'09, pages 248-262, Berlin, Heidelberg, 2009.
20. Barbara M. Smith. Succeed-first or fail-first: A case study in variable and value ordering. In Third Conference on the Practical Application of Constraint Technology, PACT'97, pages 321-330, 1997.
21. Gecode Team. Gecode: Generic constraint development environment. Available from http://www.gecode.org, 2006.
22. P. Van Hentenryck. Constraint and integer programming in OPL. INFORMS Journal on Computing, 14(4):345-372, 2002.
23. P. Van Hentenryck, P. Flener, J. Pearson, and M. Ågren. Compositional derivation of symmetries for constraint satisfaction. Abstraction, Reformulation and Approximation, pages 234-247, 2005.
24. P. Van Hentenryck and L. Michel. Constraint-Based Local Search. The MIT Press, 2005.
25. P. Van Hentenryck and L. Michel. The steel mill slab design problem revisited. In CPAIOR, pages 377-381. Springer, 2008.
26. Pascal Van Hentenryck and Laurent Michel. Synthesis of constraint-based local search algorithms from high-level models. In AAAI'07, pages 273-278. AAAI Press, 2007.