A Compass to Guide Genetic Algorithms
Jorge Maturana and Frédéric Saubion
{maturana,saubion}@info.univ-angers.fr
LERIA, Université d'Angers
2, Bd Lavoisier, 49045 Angers (France)
Abstract. Parameter control is a key issue in enhancing the performance of Genetic Algorithms (GA). Although many studies exist on this problem, it is rarely addressed in a general way. Consequently, in practice, parameters are often adjusted manually. Some generic approaches have been investigated by looking at the recent improvements provided by the operators. In this paper, we extend this approach by including the operators' effect on population diversity and computation time. Our controller, named Compass, provides an abstraction of the GA's parameters that allows the user to directly adjust the balance between exploration and exploitation of the search space. The approach is then evaluated on the resolution of a classic combinatorial problem (SAT).
1 Introduction
Genetic Algorithms (GA) are metaheuristics inspired by natural evolution, which manage a population of individuals that evolve through the application of operators. Since their introduction, GAs have been successfully applied to solve various complex optimization problems. From a general point of view, the performance of a GA is related to its ability to correctly explore and exploit the interesting areas of the search space. Several parameters are commonly used to adjust this exploration/exploitation balance (EEB), and the operator application rates are probably among the most influential ones. A suitable control of parameters is crucial to avoid two well-known problems: premature convergence, which occurs when the population gets trapped in a local optimum, and the loss of computation time due to the inability of the GA to detect the most promising areas of the search space. Most of the efforts on this subject are only applicable to specific algorithms; thus, in practice, parameter control is often achieved manually, supported by empirical observations. More recently, new methods have emerged that propose more generic control mechanisms. Following this trend, our motivation is to design a new controller in which parameters can be handled through more general and abstract concepts, so that it can be used by a wide range of GAs.
Techniques for assigning values to parameters can be classified according to the taxonomy proposed by Eiben et al. [1]. A general class, named Parameter Setting [2], is divided into Parameter Tuning, where parameters are fixed before the run, and Parameter Control, where parameters are modified during the run. Parameter Control is further divided into Deterministic, where parameters are modified according to a fixed and predefined schedule; Adaptive, where the current state of the search is used to modify parameters by means of rules; and Self-Adaptive [3], where parameters are encoded in the genotype and evolve together with the population.
Within adaptive control, the central issue is to design rules able to guide the search and to make suitable choices. A straightforward way consists in performing test runs to extract pertinent information in order to feed the system. However, this approach involves extra computation time and does not really correspond to the idea of an "automatic self-driven" algorithm.
A more sophisticated way to build a control system consists in adding a learning component that is able to identify a correct control procedure. This reduces prior effort and increases adaptation abilities, according to the needs of different algorithms. In this context, two perspectives can be identified:
The first approach consists in modeling the behavior of the GA using different parameters, typically during a learning phase. [4] presents two methods including a learning phase that tries different combinations of parameters and encodes the results in tables or rules. A similar approach is presented in [5], where the population's diversity and fitness evaluation are embedded in fuzzy logic controllers. These controllers are then used to guide the search according to a high-level strategy. [6] proposes an algorithm divided into periods of learning and control of the parameters, adjusting their central and limit values.
A second approach consists in providing a fast control, neglecting the modeling aspect. [7] presents a controller that adjusts operators' rates according to their recent performances. Similar ideas are presented in [8, 9]. In [10], this approach is extended by considering several statistics of individuals' fitness and survival rate to evaluate operator quality. In [11], the population is resized depending on several criteria based on the improvement of the best historical fitness. [12] presents an algorithm that oscillates between exploration and exploitation phases when diversity thresholds are crossed. [13] modifies parameters according to the best fitness value. Some methods in this class require special features from the GA, such as [14], which maintains several populations with different parameter values and moves the parameter values toward the value that produces the best results. In [15], a forking scheme is used: a parent population is in charge of exploration, while several child populations exploit particular areas of the search space. In [16], a parameterless GA gets rid of the popsize parameter by comparing the performance of multiple populations of different sizes.
In this paper, we investigate a combination of these two general approaches in order to benefit from their complementary strengths, providing an original abstract control of GA operators. Our controller measures the variations of the population's diversity and mean fitness resulting from an operator application, as well as its execution time. A unique control parameter (Θ) allows us to adjust the desired level of EEB and determines the application rates assigned to each operator. We have tested our approach on the resolution of the well-known Boolean satisfiability problem (SAT) and compared it to other adaptive control methods.
The paper is organized as follows. Sect. 2 presents our approach, Sect. 3 describes the experimental framework we have used, and Sect. 4 discusses the results. Finally, the main conclusions and future directions are drawn in Sect. 5.
2 Method Overview
We consider here a basic steady-state GA: at each step an operator is selected among several ones, according to a variable probability. Asexual operators are applied to the best of two randomly chosen individuals of the population, and the resulting individual replaces the worst one. Sexual operators work on two randomly chosen individuals, modifying them directly. The parameters considered here are therefore the operators' application rates.
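To make this loop concrete, the following minimal C++ sketch shows one way such a steady-state step could be organized. The Operator interface, the roulette-wheel choice over the application rates and the helper names are our own illustrative assumptions, not the authors' implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

using Individual = std::vector<bool>;           // binary genotype (e.g., a SAT assignment)
using Population = std::vector<Individual>;
using Fitness = std::function<double(const Individual&)>;

struct Operator {
    bool sexual;                                         // true: works on two individuals at once
    std::function<void(Individual&, Individual&)> apply; // asexual operators ignore the 2nd argument
};

// Roulette-wheel choice of an operator index according to the current application rates.
int pick_operator(const std::vector<double>& rates, std::mt19937& rng) {
    std::discrete_distribution<int> dist(rates.begin(), rates.end());
    return dist(rng);
}

// One steady-state generation, following the description in the text.
void one_generation(Population& pop, const std::vector<Operator>& ops,
                    const std::vector<double>& rates, const Fitness& fitness,
                    std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, pop.size() - 1);
    std::size_t a = pick(rng), b = pick(rng);
    if (b == a && pop.size() > 1) b = (a + 1) % pop.size();  // keep the two parents distinct
    const Operator& op = ops[pick_operator(rates, rng)];

    if (op.sexual) {
        // Sexual operators modify the two randomly chosen individuals directly.
        op.apply(pop[a], pop[b]);
    } else {
        // Asexual operators: start from the better of the two chosen individuals; the result
        // replaces the worst individual of the population (one reading of "replaces the worst one").
        Individual child = (fitness(pop[a]) >= fitness(pop[b])) ? pop[a] : pop[b];
        op.apply(child, child);
        auto worst = std::min_element(pop.begin(), pop.end(),
            [&](const Individual& x, const Individual& y) { return fitness(x) < fitness(y); });
        *worst = child;
    }
}
```

A full GA would wrap this step in a loop and pass it the rates produced by the controller described in Sect. 2.1.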
As mentioned in the introduction, adaptive control can be considered from two different points of view. In order to illustrate these differences more precisely, we detail two recent and representative approaches by comparing the work of Thierens [7] and the method proposed by Wong et al. [6].
In [7], Adaptive Pursuit (AP) aims at adjusting the probabilities of the associated operators depending on their performances, measured typically by the fitness improvement obtained during previous applications. This method is able to quickly adapt these probabilities in order to reward the most successful operators. AP does not try to understand the behavior of the algorithm and focuses immediately on the best values, in order to increase performance. At this point, we may remark that algorithms that are solely based on fitness improvement may experience premature convergence.
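For reference, a compact sketch of the kind of probability update used by AP is given below (a pursuit-style rule, roughly following [7]); the member names and the use of fitness gain as the reward are our assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Compact sketch of a pursuit-style rate update in the spirit of AP [7].
// q[i] estimates the recent reward (e.g., fitness gain) of operator i,
// p[i] is its application probability; alpha and beta are learning rates.
struct AdaptivePursuit {
    std::vector<double> q, p;
    double alpha, beta, p_min;

    AdaptivePursuit(std::size_t k, double alpha_, double beta_, double p_min_)
        : q(k, 1.0), p(k, 1.0 / k), alpha(alpha_), beta(beta_), p_min(p_min_) {}

    // Called after operator 'applied' produced reward 'r'.
    void update(std::size_t applied, double r) {
        q[applied] += alpha * (r - q[applied]);           // track recent performance
        std::size_t best = std::max_element(q.begin(), q.end()) - q.begin();
        double p_max = 1.0 - (q.size() - 1) * p_min;      // all other operators sit at p_min
        for (std::size_t i = 0; i < p.size(); ++i) {
            double target = (i == best) ? p_max : p_min;
            p[i] += beta * (target - p[i]);               // pursue the current winner
        }
    }
};
```

In the experiments of Sect. 3.2, AP is run with α = 0.8, β = 0.8 and P_min = 1/(2k).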
In the APGAIN method [6], the search is divided into epochs, each further divided into two periods. The first one is devoted to the measurement of the operators' performance by applying them randomly, and the second one applies operators according to a probability proportional to the observed performance. Three values (low, medium, high) are considered for each parameter and adjusted by moving them toward the most successful value. Finally, a diversification mechanism is included in the fitness function. Roughly a quarter of the generations is dedicated to the first (learning) period, which could be harmful if there are disruptive operators.
Here, we propose a new controller (Compass) based on the idea presented in [5], which considers both diversity and quality as pertinent criteria to evaluate an algorithm's performance. Parameters are abstracted in order to guide the search by inducing a required level of EEB. The operators are evaluated after each application and, in addition to diversity and fitness variation, a third measure (the operator's execution time) is also considered. To get rid of the previous drawbacks, we include features of controllers that adapt rates during the search [7-9], namely their speed of response, to update the model. Operators are applied according to their application rate, which is updated at every generation. Since we are interested in a controller that could be used by any GA, it must be independent and placed in a separate layer. We have therefore implemented Compass as a C++ class included by the GA.
2.1 Operator Evaluation and Application Rate Update
Given an operator i ∈ [1...k] and a generation number t, let d_it, q_it and T_it be, respectively, the population's mean diversity variation, mean quality (fitness) variation, and mean execution time of i over the last τ applications of this operator. At the beginning of the run, all operators can be applied with the same probability.
We define a vector o_it = (d_it, q_it) to characterize the effects of the operator on the population in terms of variation of quality and diversity (axes ΔD and ΔQ of Fig. 1). Note that, since quality and diversity improvements correspond to somewhat opposite goals, most vectors will lie in quadrants II (improvement of quality but a decrease in diversity) and IV (increase in diversity and a reduction of mean fitness), shown in Fig. 1a.
Algorithms that only consider the fitness improvement to adjust the operator probabilities would only use the projection of o_it on the y-axis (dotted lines in Fig. 1b). On the other hand, if diversity were solely taken into account, the measures would be reduced to the projection on the x-axis (Fig. 1c).
Our goal is to control these two criteria together by choosing a search direction, expressed by a vector c (defined by its angle Θ ∈ [0, π/2]) that also characterizes its orthogonal plane P (see Fig. 1d).
Since the measures of diversity and quality usually have different magnitudes, they are normalized as:

\[ d^n_{it} = \frac{d_{it}}{\max_i\{|d_{it}|\}} \qquad \text{and} \qquad q^n_{it} = \frac{q_{it}}{\max_i\{|q_{it}|\}} \]
We thus have vectors o^n_it = (d^n_it, q^n_it). Rewards are then based on the projection of the vectors o^n_it on c, i.e., |o^n_it| cos(α_it), α_it being the angle between o^n_it and c. A value of Θ close to 0 will encourage exploration, while a value close to π/2 will favor exploitation. In this way, the management of application rates is abstracted by the angle Θ, which guides the direction of the search as the needle of a compass points to the north.
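As a small worked illustration (with numbers of our own choosing, and assuming, consistently with the previous sentence, that Θ is measured from the ΔD axis so that Θ = 0 rewards pure diversification), take Θ = π/4, i.e., c = (cos Θ, sin Θ), and two normalized operator vectors: an exploitative one, o^n_1t = (-0.2, 1.0), and an explorative one, o^n_2t = (0.9, -0.6). Since |c| = 1, the projections are simply dot products:

\[ |o^n_{1t}|\cos(\alpha_{1t}) = o^n_{1t}\cdot c = \tfrac{\sqrt{2}}{2}(-0.2 + 1.0) \approx 0.57, \qquad |o^n_{2t}|\cos(\alpha_{2t}) = o^n_{2t}\cdot c = \tfrac{\sqrt{2}}{2}(0.9 - 0.6) \approx 0.21. \]

At Θ = π/4 the exploitative operator is thus preferred; lowering Θ toward 0 would progressively shift the preference to the explorative one.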
Projections are turned into positive values by subtracting the smallest one, and are divided by the execution time in order to reward faster operators (Fig. 1e):

\[ \delta_{it} = \frac{|o^n_{it}|\cos(\alpha_{it}) - \min_i\{|o^n_{it}|\cos(\alpha_{it})\}}{T_{it}} \]
Application rates are obtained proportionally to the values of δ_it plus a constant ξ_t, which ensures that the smallest rate equals a minimal rate P_min, preventing the disappearance of the corresponding operator (Fig. 1f):

\[ p_{it} = \frac{\delta_{it} + \xi_t}{\sum_{i=1}^{k} (\delta_{it} + \xi_t)} \]

Fig. 1. (a) Points (d_it, q_it) and the corresponding vectors o_it, (b) quality-based ranking, (c) diversity-based ranking, (d) proposed approach, (e) values of δ_it, (f) final probabilities.
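Putting the pieces of this section together, the following C++ sketch shows one possible form of the rate update. The paper only states that Compass is implemented as a C++ class, so the function signature, the bookkeeping of the sliding-window statistics (assumed to be done elsewhere) and the particular way of choosing ξ_t so that the smallest rate equals P_min are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch of the Compass rate update of Sect. 2.1.  d, q and T hold,
// for each operator, the mean diversity variation, mean quality variation and
// mean execution time over its last tau applications (bookkeeping not shown).
std::vector<double> compass_rates(const std::vector<double>& d,
                                  const std::vector<double>& q,
                                  const std::vector<double>& T,   // assumed strictly positive
                                  double theta,                   // in [0, pi/2], from the diversity axis
                                  double p_min) {                 // minimal application rate
    const std::size_t k = d.size();
    // 1. Normalize both measures so that they have comparable magnitudes.
    double dmax = 0.0, qmax = 0.0;
    for (std::size_t i = 0; i < k; ++i) {
        dmax = std::max(dmax, std::fabs(d[i]));
        qmax = std::max(qmax, std::fabs(q[i]));
    }
    // 2. Project each normalized vector (d_n, q_n) on the compass direction c.
    std::vector<double> proj(k);
    for (std::size_t i = 0; i < k; ++i) {
        double dn = (dmax > 0.0) ? d[i] / dmax : 0.0;
        double qn = (qmax > 0.0) ? q[i] / qmax : 0.0;
        proj[i] = dn * std::cos(theta) + qn * std::sin(theta);
    }
    // 3. Shift to positive values and divide by execution time (the delta_it values).
    double min_proj = *std::min_element(proj.begin(), proj.end());
    std::vector<double> delta(k);
    double sum = 0.0;
    for (std::size_t i = 0; i < k; ++i) {
        delta[i] = (proj[i] - min_proj) / T[i];
        sum += delta[i];
    }
    if (sum == 0.0) return std::vector<double>(k, 1.0 / k);   // degenerate case: uniform rates
    // 4. Add the constant xi so that the smallest rate equals p_min (one way of
    //    realising the paper's rule; here min(delta) is 0 by construction).
    double xi = p_min * sum / (1.0 - k * p_min);              // requires k * p_min < 1
    std::vector<double> p(k);
    for (std::size_t i = 0; i < k; ++i)
        p[i] = (delta[i] + xi) / (sum + k * xi);
    return p;
}
```

With P_min = 1/(3k), as used in Sect. 3.2, the term 1 - k·P_min equals 2/3, so the computation of ξ_t in this sketch is well defined for any number of operators.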
2.2 Operator Application
Operators' application rates are updated at every generation. An interesting phenomenon, observed during previous experiments, is the displacement of the points (d^n_it, q^n_it) in the graphic during execution. Consider for instance that Θ is set to π/4, so that equal importance is given to ΔQ and ΔD. At the beginning of the search, just after the population has been randomly created, population diversity is high and mean fitness is low, so it is easy for most operators to be situated in quadrant II. After some generations, the population starts to converge to some optimum, improvement becomes difficult, and the points in II corresponding to exploitative operators move toward the x-axis. When improvements in this zone are exhausted, exploitative operators obtain worse rewards than explorative ones, causing a shift of the search toward diversification and an escape from that optimum. Such a visualization tool could be useful for understanding the behavior of operators as well as for debugging purposes.
3 Experimentation
For our experiments, we focus on the use of GAs for the resolution of combinatorial problems. Among the numerous possible classes, we have chosen the Boolean satisfiability problem (SAT) [17], which consists in assigning values to binary variables in order to satisfy a Boolean formula.
The first reason is that this is probably the best-known combinatorial problem, since it was the first to be proven NP-complete, and it has therefore been used to encode and solve problems from many application areas. The second reason is that there exists an impressive library [18] of instances, and their difficulty has been deeply studied with several interesting theoretical results (e.g., the phase transition), which allows us to select different instances with various search landscape properties.
More formally, an instance of the SAT problem is defined by a set of Boolean variables X = {x_1, ..., x_n} and a Boolean formula F : {0,1}^n → {0,1}. The formula is said to be satisfiable if there exists an assignment v : X → {0,1} satisfying F, and unsatisfiable otherwise. Instances are classically formulated in conjunctive normal form (a conjunction of clauses), and therefore one has to satisfy all these clauses.
To solve this problem, we consider a GA with a binary population that applies one operator at each generation. The fitness function evaluates the number of clauses satisfied by an individual, and the associated problem is thus obviously a maximization one. The diversity is classically computed as the Hamming distance entropy (see [19]).
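For concreteness, the fitness function and one common entropy-based diversity measure for binary populations could be implemented as sketched below; the clause representation and the exact normalization of the entropy are our assumptions (the paper only refers to the Hamming distance entropy of [19]).

```cpp
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <vector>

using Assignment = std::vector<bool>;      // one truth value per variable
using Clause = std::vector<int>;           // DIMACS-style literals: +v or -v, 1-based

// Fitness as used in the paper: number of clauses satisfied by an individual.
int satisfied_clauses(const Assignment& a, const std::vector<Clause>& formula) {
    int count = 0;
    for (const Clause& c : formula) {
        bool sat = false;
        for (int lit : c)
            if ((lit > 0) == a[std::abs(lit) - 1]) { sat = true; break; }
        if (sat) ++count;
    }
    return count;
}

// One common bitwise-entropy formulation of population diversity for binary
// genotypes (an assumption; the paper only refers to the entropy measure of [19]).
// Returns 0 when all individuals are identical and 1 when every variable is
// split 50/50 across the population.
double entropy_diversity(const std::vector<Assignment>& pop) {
    if (pop.empty() || pop[0].empty()) return 0.0;
    const std::size_t n = pop[0].size();
    double h = 0.0;
    for (std::size_t j = 0; j < n; ++j) {
        double ones = 0.0;
        for (const Assignment& a : pop) ones += a[j] ? 1.0 : 0.0;
        double p1 = ones / pop.size(), p0 = 1.0 - p1;
        if (p1 > 0.0) h -= p1 * std::log2(p1);
        if (p0 > 0.0) h -= p0 * std::log2(p0);
    }
    return h / n;                           // average per-variable entropy, in [0,1]
}
```

Both routines operate on plain bit vectors, so they plug directly into the steady-state loop sketched in Sect. 2.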
In order to evaluate our control approach, we compare it with Adaptive Pursuit (AP) [7] and APGAIN [6]. As mentioned in Sect. 2, AP is representative of many controllers that consider fitness improvement as their guiding criterion, while APGAIN is representative of methods that try to learn and model the behavior of the operators. Additionally, we also include a uniform choice (UC) among operators as the baseline of the comparison. In order to check the robustness of our method (but restricted by the lack of space), we present 13 different instances from the SATLIB repository [18], mixing problems of different sizes and natures, including randomly generated instances, graph coloring, logistics planning and blocks world problems.
3.1 Operators
The goal of this work is to create an abstraction of operators,regardless of their
quality,and to compare controllers,and not to develop an efficient GA for SAT.
The idea is also to use non standard operators,whose effect over diversity and
quality is a priori unknown.Therefore,we propose six operators with different
features,more or less specialized with regards to the SAT problem.
One-point crossover chooses randomly two individuals and crosses them at a random position. For this operator exclusively, the best child replaces the worst parent.
Contagion chooses randomly two individuals, and the variables appearing in false clauses of the worse one are replaced with the corresponding values of the better individual (a sketch of this operator is given after the list).
Hill climbing checks all neighbors obtained by swapping one variable, moves to the better one and repeats while improvement is possible.
Tunneling swaps variables without decreasing the number of true clauses, according to a tabu list of length equal to 1/4 of the number of variables.
Badswap swaps all variables that appear in false clauses.
Wave chooses the variable that appears in the highest number of false clauses and in the minimum number of clauses supported only by it, and swaps it. It repeats this process at most 1/2 times the number of variables.
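To illustrate how these informal descriptions translate into code, here is a hedged C++ sketch of the Contagion operator; determining which parent is "worse" is left to the caller, and the two-pass structure (collect the false clauses first, then copy values) is our own reading of the description.

```cpp
#include <cstdlib>
#include <vector>

using Assignment = std::vector<bool>;
using Clause = std::vector<int>;           // DIMACS-style literals: +v or -v, 1-based

// Sketch of the Contagion operator: every variable occurring in a clause that is
// false for the worse parent is overwritten with the better parent's value.
void contagion(const Assignment& better, Assignment& worse,
               const std::vector<Clause>& formula) {
    // First collect the clauses that the worse parent leaves unsatisfied...
    std::vector<const Clause*> false_clauses;
    for (const Clause& c : formula) {
        bool sat = false;
        for (int lit : c)
            if ((lit > 0) == worse[std::abs(lit) - 1]) { sat = true; break; }
        if (!sat) false_clauses.push_back(&c);
    }
    // ...then copy the better parent's values for every variable they contain.
    for (const Clause* c : false_clauses)
        for (int lit : *c) {
            int var = std::abs(lit) - 1;
            worse[var] = better[var];
        }
}
```

The other operators can be written in the same style on top of the same clause representation.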
In order to observe the effect of population size on the performance of the controllers, we performed experiments with populations of 3, 5, 10 and 20 individuals. 10,000 generations were processed in order to observe the long-term behavior of the controllers.
3.2 Control strategy
Previous experiments have shown that values of Θ around 0.25π produce good results. To observe the sensitivity of this value, we ran experiments with values of 0.20π, 0.25π and 0.30π. Note that, even though the value of Θ remains fixed along the run, this does not mean that Compass falls into the category of parameter tuning. It is necessary to distinguish the parameters of the GA (the operators' application rates) from the parameter(s) of the control strategy (Θ in this case). Controller parameters provide an abstraction of the GA's parameters. It is pertinent to wonder whether it is worth replacing GA parameters by controller parameters. We think that this substitution is beneficial in two cases:
– When the effect of the controller parameters is less sensitive than that of the GA's parameters. Consider, for instance, the case of the mutation rate: small changes in this parameter have a drastic effect on GA performance; so it is interesting to use a controller that is able to wrap such parameters, providing a more stable operation, even at the cost of additional control parameters.
– When the controller provides a more comprehensible abstraction of the GA parameters. This is the case in our approach: it is easier for a human to think in terms of raising and lowering the EEB than of modifying multiple operators' parameters, especially when their behavior is not well known.
The parameter τ is set to 100, and P_min to 1/(3k) (see Sect. 2.1). Each run, consisting of a specific problem instance, population size, controller and Θ (for Compass only), was replicated 30 times for significant statistical comparisons. AP and APGAIN parameters were set to published values, or tuned to obtain good performance. According to the notations used in [7, 6], for AP: α = 0.8, β = 0.8, P_min = 1/(2k). For APGAIN: v_L = 0, v_U = 1, δ = 0.05, σ = 700, ρ = σ/4, ξ = 10, φ = 0.045 (about 10% of re-evaluations).
4 Results and discussion
The average number of false clauses obtained over 30 runs is shown in Table 1. Comparisons were done using a Student's t-test with a significance level of 5%. Values are boldfaced when Compass outperforms UC, and italicized when UC is better than Compass; no font modification means that the results are statistically indistinguishable. Cells are grey when Compass outperforms AP, and black when AP outperforms Compass; white cells mean indistinguishability. Finally, Compass outperformed APGAIN in all cases, except in those indicated with underlined values, where the results are indistinguishable. Average execution times of AP, APGAIN and Compass, relative to those of UC, are shown in the rightmost column of the table. The total number of clauses of each problem appears at the bottom of the table. From now on we will refer to Compass with Θ values of 0.20π, 0.25π and 0.30π as C.2, C.25 and C.3, respectively.
Table 1. Average number of false clauses and comparative execution times

popsize  control  4-blocks   aim  f1000  CBS  flat200  logistics  medium  par16  sw100-0  sw100-1  uf250  uuf250  time
3        UC           12.9   3.2   52.6  5.4     38.4       16.7     7.7  124.2     23.2      9.4    8.3    11.7  1.00
         AP            7.5   2.1   37.7  3.4     19.6        8.9     3.5   71.3     16.2      3.3    5.6     8.3  0.86
         APGAIN       11.8   3.3   51.6  5.0     27.5       14.0     5.4  109.2     20.6      6.0    8.8    10.8  0.97
         C.2           5.8   2.1   25.5  2.1     13.2        8.1     2.0   41.6     13.7      1.3    3.3     5.5  0.88
         C.25          6.4   1.6   26.7  2.3     11.7        8.1     2.0   38.4     13.4      1.8    3.6     5.9  0.89
         C.3           6.1   1.6   26.8  2.8     15.9        8.1     3.0   47.2     15.5      2.2    4.3     6.4  0.73
5        UC           13.8   3.3   61.4  7.6     34.6       16.5     7.9  126.2     24.6      8.3   11.2    14.0  1.00
         AP            8.9   3.0   47.4  4.8     23.3       11.3     4.9   88.2     18.4      4.5    8.3    10.2  0.88
         APGAIN       11.2   4.9   60.1  6.6     31.1       14.0     6.4  118.7     20.1      6.0    9.8    13.6  1.09
         C.2           6.2   2.2   27.5  3.0     15.5        9.1     2.6   45.0     15.0      2.8    4.6     6.7  0.89
         C.25          6.3   1.9   27.2  2.7     16.2        8.8     2.6   43.4     14.8      2.9    4.4     7.1  0.91
         C.3           7.8   2.5   36.5  4.2     20.0        9.0     3.5   66.7     16.8      3.4    5.9    10.3  0.80
10       UC           13.8   3.3   54.2  6.3     28.8       15.5     7.0  110.0     19.4      6.1   10.1    12.2  1.00
         AP            9.9   3.9   55.2  5.0     26.3       12.9     5.7   98.6     18.1      5.9    9.3    12.2  1.00
         APGAIN       11.7   4.8   66.0  5.8     30.4       16.3     6.2  120.4     18.8      6.8   11.3    13.9  0.62
         C.2           8.3   3.5   44.9  3.9     21.9       11.3     4.5   72.5     17.9      4.3    7.5    10.4  0.92
         C.25          8.0   2.9   42.7  4.4     23.1       11.2     4.2   72.6     17.8      4.6    7.2     9.5  0.83
         C.3           9.1   3.7   49.3  5.7     24.7       11.5     5.1   96.8     19.1      5.9    9.1    12.6  0.78
20       UC           13.8   2.8   42.0  4.5     23.1       13.8     5.2   88.5     16.5      3.4    6.4    10.3  1.00
         AP            9.1   2.4   54.1  4.9     26.0       14.6     5.3  102.6     17.2      5.7    8.4    11.3  1.28
         APGAIN       11.3   4.3   68.0  5.5     30.0       17.8     6.2  120.1     18.7      6.3   11.1    13.7  2.38
         C.2           9.1   3.5   55.2  4.8     26.0       13.3     5.2   90.4     18.9      6.2    8.1    12.5  0.92
         C.25          9.2   3.3   53.2  4.8     25.2       13.3     4.8   90.6     17.8      6.2    8.7    13.2  0.93
         C.3           9.4   3.9   58.6  4.9     27.5       13.4     6.2   99.2     18.7      6.1   10.4    12.4  0.88
Total clauses         47820  320   4250  449     2237       6718     953   3310     3100     3100   1065    1065
The mean number of generations required to reach the best values varies between 1000 and 7000; therefore, the 10,000 allowed generations seem sufficient for all controllers and ensure a fair comparison. Of course, the results are not competitive with specific SAT evolutionary solvers, since we do not use the best dedicated operators, nor do we try to optimize ours. Our purpose here is rather to highlight the differences between controllers. Better results for SAT using GAs were obtained by a hierarchical memetic algorithm [19]. However, given the early stage of this research, we preferred a simpler GA that applies one operator at a time, in order to facilitate understanding. Further research will consider more complex operator architectures.
The predominance of Compass, and especially of C.25, over UC, AP and APGAIN is noticeable, particularly for small populations. Something similar happens with C.2 and, to some extent, with C.3. As mentioned previously, a value Θ = 0.25π works well with all kinds of problems.
Small populations lose diversity easily, so controlling diversity is a critical issue. APGAIN does it by penalizing common individuals. However, when all individuals are the same, this penalization is not effective. It seems that, in practice, diversity is mostly induced by the first period of APGAIN (operator evaluation). AP controls diversity by defining a minimum application rate equal to 1/(2k). This value could be excessive if the operators are mostly exploitative. The smaller value of 1/(3k) used in Compass grants the controller a greater range to balance the EEB.
Small populations provide better results than larger ones. This is probably due to the operators, which were inspired by local search heuristics: they are applied more repetitively to the same individual in smaller populations than in large ones, thus producing better results. Surprisingly, UC becomes quite competitive as the population size increases. It seems that applying operators with both low d_it and low q_it produces "bad" individuals that are nevertheless able to escape from local optima. This practice is beneficial, however, only if the population is big enough to keep its good elements at the same time.
Execution times of AP and APGAIN are shorter than those of UC for the smallest populations. Compass has stable, short execution times across the different population sizes. This is interesting because it means that the effort spent in performing control induces savings in total execution time.
From an implementation point of view, we found that Compass and AP were more independent from the logic of the GA than APGAIN, which introduces its diversity control mechanism into the GA fitness function. Both AP and Compass provide a separate layer of control. Parameterization of Compass is quite intuitive. We have already discussed the effect of Θ and P_min. The last parameter, τ, is quite stable: we have replicated experiments with several values of this parameter without detecting a considerable influence on performance.
5 Conclusions
In this paper we have presented Compass, a GA controller that provides an abstraction of parameters and simplifies control by adjusting the level of exploration/exploitation along the search. The controller measures the operators' effects on the population's mean fitness, diversity and execution time. Compass is independent from the GA, providing an additional control layer that could be used by other population-based algorithms. Experiments were performed using a six-operator GA to solve instances of the SAT problem. Results compared favorably against a basic uniform choice and state-of-the-art controllers.
The twofold evaluation of operators (quality and diversity variation) is coherent with the guiding principles of population-based search algorithms, i.e., maximizing the quality of solutions while avoiding the concentration of the population, in order to benefit from their parallel nature. By considering both measures, we observed a natural mechanism to escape from local optima.
The search direction is easily apprehensible by observing a dynamic vectorial representation, so Compass could also be used as a tool for understanding the role of operators.
The management of non-standard, unknown operators also opens the perspective of using Compass to evaluate operators generated automatically, for example by means of Genetic Programming.
References
1. Eiben, A., Michalewicz, Z., Schoenauer, M., Smith, J.: Parameter Control in Evolutionary Algorithms. In [20] 19–46
2. Jong, K.D.: Parameter Setting in EAs: a 30 Year Perspective. In [20] 1–18
3. Meyer-Nieberg, S., Beyer, H.: Self-Adaptation in EAs. In [20] 47–75
4. Kee, E., Airey, S., Cyre, W.: An adaptive genetic algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Morgan Kaufmann (2001) 391–397
5. Maturana, J., Saubion, F.: Towards a generic control strategy for EAs: an adaptive fuzzy-learning approach. In: Proceedings of the IEEE International Conference on Evolutionary Computation (CEC) (2007) 4546–4553
6. Wong, Lee, Leung, Ho: A novel approach in parameter adaptation and diversity maintenance for GAs. Soft Computing 7(8) (2003) 506–515
7. Thierens, D.: Adaptive Strategies for Operator Allocation. In [20] 77–90
8. Igel, C., Kreutz, M.: Operator adaptation in structure optimization of neural networks. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Morgan Kaufmann (2001) 1094
9. Lobo, F., Goldberg, D.: Decision making in a hybrid genetic algorithm. In: Proc. of the IEEE Intl. Conference on Evolutionary Computation (CEC) (1997) 122–125
10. Whitacre, J., Pham, T., Sarker, R.: Use of statistical outlier detection method in adaptive evolutionary algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), ACM (2006) 1345–1352
11. Eiben, A., Marchiori, E., Valkó, V.: Evolutionary algorithms with on-the-fly population size adjustment. In: Proceedings of Parallel Problem Solving from Nature (PPSN). Volume 3242 of LNCS, Springer (2004) 41–50
12. Ursem, R.: Diversity-guided evolutionary algorithms. In: Proc. of Parallel Problem Solving from Nature (PPSN). Volume 2439 of LNCS, Springer (2002) 462–474
13. Eiben, A., Horvath, M., Kowalczyk, W., Schut, M.: Reinforcement learning for online control of evolutionary algorithms. In: Engineering Self-Organising Systems, 4th International Workshop. Volume 4335 of LNCS, Springer (2006) 151–160
14. Lis, J.: Parallel genetic algorithm with dynamic control parameter. In: Proc. of the IEEE Intl. Conference on Evolutionary Computation (CEC) (1996) 324–329
15. Tsutsui, S., Fujimoto, Y., Ghosh, A.: Forking GAs: GAs with search space division schemes. Evolutionary Computation 5(1) (1997) 61–80
16. Harik, G., Lobo, F.: A parameter-less GA. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO) (1999) 258–265
17. Cook, S.A.: The complexity of theorem-proving procedures. In: STOC '71: Proceedings of the Third Annual ACM Symposium on Theory of Computing, New York, USA, ACM (1971) 151–158
18. Hoos, H., Stützle, T.: SATLIB: An Online Resource for Research on SAT. IOS Press, www.satlib.org (2000) 283–292
19. Lardeux, F., Saubion, F., Hao, J.K.: GASAT: A genetic local search algorithm for the satisfiability problem. Evolutionary Computation 14(2) (2006) 223–253
20. Lobo, F., Lima, C., Michalewicz, Z., eds.: Parameter Setting in Evolutionary Algorithms. Volume 54 of Studies in Computational Intelligence. Springer (2007)