Hybrid Genetic Algorithms: Modeling and Application to
the Quadratic Assignment Problem
Patrick Copalu and Pataya Dangprasert*
Faculty of Business Administration, Assumption University
Bangkok, Thailand
Abstract
Presented in this paper are ‘hybrid genetic algorithms’ (HGA), which combine ‘genetic algorithms’ (GA) with local searches. Such algorithms are also called memetic algorithms. First, some of the local searches used in the hybrid algorithms, such as ‘simulated annealing’ (SA), steepest ascent, and an improvement strategy called ‘tabu search’ (TS), are briefly described.
Then, the hybrid algorithm models, combining the robust TS and CHC algorithms, are conceptually introduced, formulated and mathematically represented. Finally, the defined HGA are applied to different instances of the ‘quadratic assignment problem’ (QAP), and their results on the best-known QAPLIB problems from the QAP library are discussed.
The competitiveness of the HGA is demonstrated through experimental results, since 12 of the 16 considered QAP instances produce the best-known solutions. Factors behind that performance and competitiveness are also highlighted in this paper.
Keywords: Tabu search, steepest ascent, simulated annealing, genetic algorithms,
hybrid genetic algorithms, quadratic assignment problem.
Introduction
‘Hybrid genetic algorithms’ (HGA), also called ‘memetic algorithms’ in other literature, combine ‘genetic algorithms’ (GA) with a local search, e.g. steepest ascent, steepest descent, ‘tabu search’ (TS), or ‘simulated annealing’ (SA).
Traditional GA explore candidate solutions encoded in chromosomes and iteratively exploit those with better fitness until a solution is reached. A local search by itself explores the solution space using a specific move in its neighborhood. The HGA combine those two aspects by using the local search to improve both the (traditionally randomized) initial population of the GA and the chromosomes produced by the genetic operators (an N-point crossover operator with N ≥ 1 and a mutation operator with its associated mutation rate). The improved versions of the chromosomes are then carried forward by the traditional genetic algorithm, as shown in Fig. 1.
* Department of Computer Information System, Assumption University, Bangkok, Thailand.
The key idea of the HGA is to use a traditional GA to explore several regions of the search space in parallel while simultaneously incorporating a good mechanism like SA to intensify the search around some selected regions. In addition, an improvement strategy like TS may be used to keep track of states that should not be visited a second time.
In 1995, Whitley described how to model
a basic form of HGA in the context of the
existing models for GA (Whitley 1995). This
model takes into consideration a deterministic
local search called ‘steepest ascent’. With
this basic form of HGA, it is possible to modify the model by integrating other search techniques in place of steepest ascent.
Even though Whitley (1995) used a deterministic
Fig. 1. Hybrid genetic algorithm concept: an initial population is improved by a local search and then evolved by a traditional GA into the new population.
local search, the HGA can be reformulated by
considering the probabilistic moves. For
example, instead of testing all neighbors
orderly and choosing the best improvement, the
neighbors can be tested randomly to choose the
first improvement found. SA is another local
search that drives its moves randomly and can
be considered as a local partner in HGA.
Using QAP problems as test beds, Vásquez and Whitley (2000) have shown that the HGA produces better results than any of the individual heuristic techniques proposed to solve the QAP by Cela (1998). Those heuristic
approaches used by Cela included TS, SA, GA,
and ‘greedy randomized search procedure’
(GRASP). In the following section, the ‘steepest ascent’, SA and TS will be briefly described. The second section will emphasize the HGA. The third section will discuss the QAP along with the algorithms implemented and the results obtained. The last part will then offer a conclusion.
The Steepest Ascent
The ‘steepest ascent’ (or ‘steepest
descent’) technique uses a fundamental result
from calculus (that the gradient points in the
direction of the steepest increase of a function),
to determine how the initial settings of the
parameters should be changed to yield an
optimal value of the response variable. The
direction of movement is made proportional to
the estimated sensitivity of the performance of
each variable. This is a hill-climber that always takes the steepest route up from any point.
When ‘steepest ascent’ is applied to a binary
bit string, as shown in Table 1, the local
neighbors consist of strings with a single bit
changed at a time. The fitness of all neighbors
is evaluated and the search moves toward the
best improvement found. It is a deterministic
algorithm and very easily gets stuck on local
optima. Fig. 2 describes the steepest ascent
local search.
Simulated Annealing
‘Simulated annealing’ (SA) borrows its basic ideas from statistical mechanics. When a metal cools down, the electrons in the metal align themselves in an optimal pattern to ease the transfer of energy. In general, left to itself, a slowly cooling system eventually finds the arrangement of atoms that has the lowest energy. This is the behavior that motivates the
method of optimization by SA. In SA we
construct a model of a system and slowly
decrease the temperature of this theoretical
system until the system assumes a minimal
energy structure.
Table 1. A steepest ascent example

String  Fitness    String  Fitness
0000    1          1000    8
0001    3          1001    12
0010    5          1010    14
0011    10         1011    9
0100    8          1100    5
0101    5          1101    4
0110    2          1110    4
0111    6          1111    4

Example walk: choose 0110 (fitness 2); its neighbors are 1110 (4), 0010 (5), 0100 (8) and 0111 (6), so the search moves to the best improvement, 0100 (8). The neighbors of 0100 are 1100 (5), 0000 (1), 0110 (2) and 0101 (5); no neighbor increases the fitness, so 0100 (8) is the current hilltop and a new starting point must be chosen.
C(s): cost of a solution s
1. Choose a solution x
2. Choose a solution s in the neighborhood of x such that C(s) < C(x) (*)
   If no neighbor verifies (*), then x is a local minimum; stop.
3. x ← s
4. Go back to step 2
Fig. 2. Steepest ascent algorithm
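As an illustration, the climber of Fig. 2 applied to the bit strings of Table 1 can be sketched as follows (the Python rendering is ours, not the paper's):

```python
# Fitness table from Table 1: 4-bit strings mapped to their fitness values.
FITNESS = {format(i, "04b"): f for i, f in enumerate(
    [1, 3, 5, 10, 8, 5, 2, 6, 8, 12, 14, 9, 5, 4, 4, 4])}

def neighbors(s):
    # the local neighborhood: all strings with a single bit changed
    return [s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]
            for i in range(len(s))]

def steepest_ascent(s):
    # deterministically move to the best neighbor until none improves
    while True:
        best = max(neighbors(s), key=FITNESS.get)
        if FITNESS[best] <= FITNESS[s]:
            return s  # a local optimum: the "current hilltop"
        s = best

print(steepest_ascent("0110"))  # 0100, the hilltop reached in Table 1
```

Starting from 0110, the climber stops at the local optimum 0100 (fitness 8) even though the global optimum 1010 (fitness 14) exists, showing how easily the method gets stuck.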
SA as an optimization technique was first introduced to solve problems in discrete optimization, mainly combinatorial optimization.
Subsequently, this technique has been
successfully applied to solve optimization
problems over the space of continuous decision
variables. SA is a simulation optimization
technique that allows random ascent moves in
order to escape the local minima, but a price is
paid in terms of a large increase in the
computational time required. It can be proven
that the technique will find an approximated
optimum. The annealing schedule might
require a long time to reach a true optimum.
SA’s major advantage over other methods
is an ability to avoid becoming trapped at local
minima. The algorithm uses a random search,
which not only accepts changes that decrease
objective function f, but also some changes that
increase it.
The latter are accepted with a probability p = exp(−Δf/T), where Δf is the increase in f and T is a control parameter. The parameter T, by analogy with the original application, is known as the system ‘temperature’ irrespective of the objective function involved, as shown in Fig. 3. The advantages of SA can therefore be
stated in four points. First, it can deal with
arbitrary systems and cost functions. Second, it
statistically guarantees finding an optimal
solution. Third, it is relatively easy to code,
even for complex problems. Finally, it
generally gives a ‘good’ solution (Battiti et
al.1994a). This makes SA an attractive option
for combinatorial optimization problems.
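A minimal sketch of SA for a generic minimization problem, using the acceptance rule p = exp(−Δf/T) described above; the geometric cooling schedule and all parameter values are our own illustrative choices, not the paper's:

```python
import math
import random

def simulated_annealing(x, cost, neighbor, t=10.0, alpha=0.995,
                        steps=5000, rng=random):
    # Accept improving moves always; accept worsening moves with
    # probability exp(-delta/T), then slowly lower the temperature T.
    best = x
    for _ in range(steps):
        s = neighbor(x, rng)
        delta = cost(s) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = s
            if cost(x) < cost(best):
                best = x
        t *= alpha  # cooling: the system slowly "freezes"
    return best

# Toy usage: minimize (v - 3)^2 over the integers with +/-1 moves.
result = simulated_annealing(20, lambda v: (v - 3) ** 2,
                             lambda v, rng: v + rng.choice((-1, 1)),
                             rng=random.Random(0))
```

Early on, the high temperature lets the walk climb out of any local structure; as T decays, the search freezes onto the minimum it has found.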
The Tabu Search
‘Tabu search’ (TS) is an iterative procedure
designed for the solution of optimization
Compute a random initial state s
n = 0, x*_n = s
Repeat i = 1, 2, ...
   Repeat j = 1, 2, ..., m_i
      Compute a neighbor s' = N(s)
      if f(s') <= f(s) then
         s = s'
         if f(s') < f(x*_n) then
            x*_n = s
            n = n + 1
         endif
      else
         s = s' with probability p(T_i, s, s')
      endif
   EndRepeat
EndRepeat
Fig. 3. Simulated annealing pseudocode
problems. TS was invented by Glover (1986)
and has been used to solve a wide range of hard optimization problems such as job-shop scheduling, graph coloring problems, the ‘traveling salesman problem’ (TSP) and the capacitated arc routing problem.
TS can be considered as a ‘metastrategy’
for guiding known heuristics to overcome local
optimality. TS is based on the premise that
problem solving, in order to be qualified as
intelligent, must incorporate adaptive memory
and responsive exploration. The adaptive
memory feature of TS allows the
implementation of procedures that are capable
of searching the solution space economically
and effectively. The role of TS will most often
be to guide and orient the search of another
search procedure.
To avoid cycling, solutions that were
recently examined are declared ‘tabu’ (taboo)
for a certain number of iterations. Applying
intensification procedures can accentuate the
search in a promising region of the solution
space. In contrast, diversification can be used
to broaden the search to a less explored region.
Although still in its infancy, this metaheuristic has been reported in the literature during the last few years as a successful solution approach for a great variety of problems.
To prevent the search from endlessly
cycling between the same solutions, the
attribute-based memory of TS is structured at its first level to provide a short-term memory function, which may be visualized to operate as
follows. Imagine that the attributes of all
explored moves are stored in a list, named a
running list, representing the trajectory of
solutions encountered. Then, related to a sublist of the running list, a so-called tabu list may be introduced. Based on certain restrictions, the tabu list implicitly keeps track of moves, or more precisely of salient features of these moves, by recording attributes complementary to those of the running list. These attributes will be forbidden from being embodied in moves selected in at least one subsequent iteration, because their inclusion might lead back to a previously visited solution.
The goal is to permit ‘good’ moves for
each iteration without revisiting solutions
already encountered and therefore, the tabu list
restricts the search to a subset of admissible
moves consisting of admissible attributes or
combinations of attributes. A general outline of
TS procedure based on applying such a short
term memory function is described in Fig. 4
(for solving a minimization problem). Evidently
the key to this procedure lies in the tabu list
management, that is, updating the tabu list and
deciding on how many moves and which ones
have to be set tabu within any iteration of the
search. We may distinguish two main
approaches: static methods and dynamic
methods.
There are in fact several basic ways of carrying out this management, generally involving a recency-based record that can be maintained individually for different attributes or different classes of attributes. In addition,
many applications of TS introduce memory
structures based on frequency (modulated by a
notion of move influence), and the coordination
of these memory elements is made to vary as the preceding short-term memory component becomes integrated with a longer-term one.
The purpose of this integration is to
provide a balance between two types of
globally interacting strategies, called
intensification strategies and diversification
strategies. TS is concerned with finding new
and more effective ways of taking advantage of
the mechanisms associated with both adaptive
memory and responsive exploration.
Given a feasible solution x* with objective function
value z*, let x := x* with z(x) = z*.
Iteration:
while stopping criterion is not fulfilled do
begin
(1) select best admissible move that transforms
x into x' with objective function value z(x') and
add its attributes to the running list
(2) perform tabu list management: compute
moves (or attributes) to be set tabu, i.e.,
update the tabu list
(3) perform exchanges: x := x', z(x) = z(x');
if z(x) < z* then z* := z(x), x* := x
endif
endwhile
Result: x* is the best of all determined
solutions, with objective function value z*.
Fig. 4. Tabu search algorithm.
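The short-term-memory scheme of Fig. 4 can be sketched on a permutation problem with swap moves; the fixed tabu tenure and the cost function in the usage example are our own illustrative choices:

```python
def tabu_search(x, cost, iters=100, tenure=7):
    # Best-admissible-move search with a short-term tabu list keyed on
    # swap attributes, plus the usual best-solution aspiration criterion.
    best, best_cost = x[:], cost(x)
    tabu = {}  # swap (i, j) -> iteration until which it stays tabu
    n = len(x)
    for it in range(iters):
        candidates = []
        for i in range(n):
            for j in range(i + 1, n):
                y = x[:]
                y[i], y[j] = y[j], y[i]
                c = cost(y)
                # admissible if not tabu, or tabu but better than best
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, (i, j), y))
        if not candidates:
            break
        c, move, y = min(candidates)   # step (1): best admissible move
        x = y                          # step (3): perform the exchange
        tabu[move] = it + tenure       # step (2): tabu list management
        if c < best_cost:
            best, best_cost = x[:], c
    return best

# Toy usage: sort a permutation by minimizing total displacement.
displacement = lambda p: sum(abs(v - i) for i, v in enumerate(p))
```

Because the best move is taken even when it worsens the cost, the tabu list (rather than monotone improvement) is what prevents the search from cycling back.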
Much remains to be discovered about the range of problems for which TS is best suited. The development of new designs and strategy mixes makes TS a fertile area for research and empirical study.
Hybrid Genetic Algorithms
Overview of HGAs
A central goal of the research efforts in
GAs is to find a form of algorithms that is
robust and performs well across a variety of
problem types. GAs exploit only the coding and the objective function value to determine plausible future generations. Therefore, it may be advantageous to incorporate various specific search techniques, like SA or TS, to form a hybrid that combines the global perspective of GAs with the convergence of problem-specific techniques.
A promising approach to use domain
knowledge in GAs is the incorporation of local
search or other metaheuristics. The resulting
algorithms, often called ‘hybrid genetic
algorithms’ (HGA), memetic algorithms
(Moscato 1989) or ‘genetic local search
algorithms’ (GLS) (Yamada and Nakano 1996), have proven to be highly effective, since they combine the advantages of an efficient local search for exploiting the neighborhood of a single solution with those of population-based algorithms that effectively explore the search space.
The first use of the term ‘memetic
algorithms’ in the computing literature appeared
in 1989 in a paper written by Moscato (1989).
The method is gaining wide acceptance, in
particular in wellknown combinatorial
optimization problems where large instances
have been solved to optimality and where other
metaheuristics have failed. An open research issue is to understand which features of the chosen representation lead to characteristics of the objective function that are efficiently exploited by a hybrid approach.
Mathematical Representation: Models of HGA can also be represented mathematically using formal notations (Radcliffe and Surry 1994).
Let ρ(s) be the representation function that returns the chromosome of a solution s. The search space (i.e. the set of all phenotypes) will be denoted S and the set of chromosomes (i.e. the set of all genotypes) will be denoted C. The function can therefore be represented as:

ρ : S → C    (1)

Since not all chromosomes are solutions of the problem, it is sufficient for the function ρ to be injective; formally we get: ∀s ∈ S, ρ(s) ∈ C (injective) and ∃c ∈ C such that c is not the image of a solution (non-surjective).
The GA evaluates a fitness function for each chromosome, which can be regarded as a positive real number; the mapping can therefore be done from the set of genotypes onto the set of positive real numbers R+. Let us denote this function:

f : C → R+    (2)

to be maximized, and let the set of global maxima be denoted C*, such that C* ⊆ C.
Let Q be a stochastic unary move operator over C. The stochastic element of such an operator belongs to a control set K_Q, from which a control parameter will be drawn to determine which of the possible moves actually occurs (a binary mask might be used as the control parameter for mutation of binary strings). Q can therefore be denoted in functional form as:

Q : C × K_Q → C    (3)
With those notations, a chromosome x ∈ C is said to be locally optimal with respect to Q (or Q-opt) if and only if no chromosome of higher fitness than x can be generated from it by a single application of Q, i.e.

∀k ∈ K_Q, f(Q(x, k)) ≤ f(x)    (4)

Let us denote the set of those Q-opt chromosomes by

C_Q = {x ∈ C | x is Q-opt}    (5)

It is also true that if x ∈ C*, then x is Q-opt and x ∈ C_Q. Thus C* ⊆ C_Q, and therefore the search for the genetic algorithm to optimize the fitness function f over C can (by transitivity) be formulated over C_Q instead.
Traditional genetic algorithms combine crossover (recombination) and mutation to produce new generations. If the control sets for the crossover and mutation operators are K_X and K_M respectively, the crossover operator X can be functionally denoted as:

X : C × C × K_X → C    (6)

and the mutation operator M can be functionally denoted as:

M : C × K_M → C    (7)

Therefore the generic reproductive function R_g is the composition of the mutation and the crossover, that is:

R_g : C × C × K_X × K_M → C    (8)

defined by:

R_g(x, y, k_M, k_X) = M(X(x, y, k_X), k_M)    (9)
The HGA combines the above functional genetic algorithm with a local search. Let us represent that search by H with control set K_H and define the function:

H : C × K_H → C_Q    (10)

If we now compose R_g with H, we have the hybrid genetic reproduction function R_m such that:

R_m : C_Q × C_Q × K_X × K_M × K_H → C_Q    (11)

that is:

R_m(x, y, k_H, k_M, k_X) = H(M(X(x, y, k_X), k_M), k_H)    (12)
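To make the composition in Eq. (12) concrete, here is a toy rendering over bit strings; the concrete operators (uniform crossover, mask mutation, steepest ascent standing in for H) are our own illustrative choices, not the paper's:

```python
def crossover(x, y, k_x):
    # X: C x C x K_X -> C; k_x is a binary mask choosing the donor parent
    return [a if m else b for a, b, m in zip(x, y, k_x)]

def mutate(x, k_m):
    # M: C x K_M -> C; k_m is a binary mask of the bits to flip
    return [a ^ m for a, m in zip(x, k_m)]

def hill_climb(x, fitness):
    # H -> C_Q; steepest ascent is deterministic, so no control
    # parameter k_H is needed (Whitley's deterministic case)
    while True:
        best = max((x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x))),
                   key=fitness)
        if fitness(best) <= fitness(x):
            return x  # x is Q-opt: no single move improves it
        x = best

def r_m(x, y, k_x, k_m, fitness):
    # the hybrid reproduction of Eq. (12): H(M(X(x, y, k_X), k_M))
    return hill_climb(mutate(crossover(x, y, k_x), k_m), fitness)

# Usage on a ones-max fitness: the hybrid offspring is always locally optimal.
child = r_m([0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0], [0, 0, 0, 0], sum)
```

Whatever the genetic operators produce, the trailing local search maps it into C_Q, which is exactly what lets the GA search over C_Q instead of C.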
The Quadratic Assignment Problem
Overview of the QAP
The QAP introduced by Koopmans and
Beckmann (1957) is a combinatorial
optimization problem (COP). The objective of
COP is to minimize the cost function of a
finite, discrete system characterized by a large
number of possible solutions. The term
quadratic comes from the reformulation of the
problem as an optimization problem with a
quadratic objective function.
Cela (1998) pointed out several reasons why the QAP still gives rise to a lot of research. First, many real-world problems can be modeled as a QAP, such as VLSI module placement, location design of factories, scheduling problems, statistical data analysis and process-processor mapping in parallel
computing, etc. Second, many of the
combinatorial optimization problems can be
formulated as QAP, for example the traveling
salesman problem can be formulated as a set of
cities (associated with distances) and a set of
positions (facilities). Third, QAP are difficult
problems from a computational point of view.
As n grows large it becomes impossible to
enumerate all the possible assignments, even
by fast computers. For example, if n = 25 and a
computer were able to evaluate 10 billion
assignments per second, it would still take
nearly 50 million years to evaluate all
assignments! Last, QAP are considered to be NP-hard problems and can generally not be solved using exact optimization techniques.
In a standard problem in location theory, we are given a set of n locations and n facilities and told to assign each facility to a location; there are n! possible assignments. To measure the cost of an assignment, we multiply the prescribed flow between each pair of facilities by the distance between their assigned locations, and sum over all the pairs. Our aim is to find the assignment that minimizes this cost, and this problem is precisely a quadratic assignment problem. Fig. 5 shows one possible solution to a quadratic assignment problem/facility location problem with four facilities.
The permutation p corresponding to this
graphical solution is (2,1,4,3). This means that:
facility 2 has been assigned to
location 1,
facility 1 assigned to location 2,
facility 4 assigned to location 3,
facility 3 assigned to location 4.
The lines drawn between the facilities
imply that there is a required flow between the
facilities, and the thickness of the line denotes
the value of the required flow. The goal, in
some sense, is to try to get the ‘fat’ lines as
close together as possible. To make the idea
more concrete, let's say that the distances are:
d(1,2) = 22, d(1,3) = 53, d(2,3) = 40 and
d(3,4) = 55. Also, the required flows between
facilities are: f(2,4) = 1, f(1,4) = 2, f(1,2) = 3
and f(3,4) = 4. The assignment cost of the
permutation shown is:
[d(1,2)*f(1,2)] + [d(1,3)*f(2,4)] + [d(2,3)*f(1,4)] + [d(3,4)*f(3,4)]
= 22*3 + 53*1 + 40*2 + 55*4
= 419.
This is not the best possible permutation.
In this case the best answer is 395. As the
problems get larger, it becomes much, much
more difficult to find the optimal solution.
Fig. 5. A quadratic assignment problem
Mathematical Representation
Mathematically, we can formulate the
problem by defining two n by n matrices: a
flow matrix F whose (i,j) element represents
the flow between facilities i and j, and a
7
distance matrix D whose (i,j) element represents
the distance between locations i and j.
We represent an assignment by the vector
p, which is a permutation of the numbers {1, 2,
... ,n}. p(j) is the location to which facility j is
assigned.
With these definitions, the QAP can be written as:

min_p Σ_{i=1}^{n} Σ_{j=1}^{n} f_{ij} d_{p(i)p(j)}
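The objective above can be checked against the worked Fig. 5 example; the sparse dictionary encoding below is our own convenience (distances are taken as symmetric, and pairs with no prescribed flow contribute nothing):

```python
# p[j] = location to which facility j is assigned, i.e. the permutation (2,1,4,3)
p = {1: 2, 2: 1, 3: 4, 4: 3}
dist = {(1, 2): 22, (1, 3): 53, (2, 3): 40, (3, 4): 55}
flow = {(2, 4): 1, (1, 4): 2, (1, 2): 3, (3, 4): 4}

def d(a, b):
    # symmetric distance lookup; unspecified pairs never carry flow here
    return dist.get((min(a, b), max(a, b)), 0)

# sum of flow(i, j) * distance(p(i), p(j)) over the pairs with prescribed flow
cost = sum(f * d(p[i], p[j]) for (i, j), f in flow.items())
print(cost)  # 419, the assignment cost computed in the text
```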
The most effective algorithms for optimally solving QAPs are based on branch-and-bound. The branch-and-bound technique can be outlined in simple terms. An enumeration tree of continuous linear programs is formed, in which each problem has the same constraints and objective as the original except for some additional bounds on certain components of x. At the root of this tree is the problem with the integrality requirement x ∈ Z^n removed. The solution to this root problem will not, in general, have all integer components.
Heuristic Approaches on QAP
As already stated earlier in this section,
QAP are NP-hard problems. Therefore, heuristics or metaheuristics like those discussed in the first part of this paper, i.e. TS, SA, GA and HGA, are widely tested and compared in terms of solution accuracy (are the best-known solutions obtained or even improved?) and computational cost (how fast is the solution obtained, and how much memory is required?).
In this paper, we review a total of four
algorithms that have been implemented by
Vásquez and Whitley (2000) and compare
them for 16 instances of the QAP. Namely,
those algorithms are:
The first algorithm is a traditional SA
algorithm as described in part I.
The second algorithm is a ‘reactive tabu
search’ (RTS) algorithm, which is an
improvement derived from TS as described by
Battiti and Tecchiolli (1994b). The tabu
scheme based on a fixed list size is not strict
and therefore the possibility of cycles still
remains. The proper choice of the list size is
obviously a critical factor to the ‘good’
convergence of the algorithm. The RTS goes
further in the direction of robustness by
proposing a simple mechanism for adapting the
list size to the properties of the optimization
problem. The configurations visited during the
search and the corresponding iteration numbers
are stored in memory so that, after the last
movement is chosen, one can check for the
repetition of configurations and calculate the
interval between two visits. The basic fast
‘reaction’ mechanism increases the list size
when configurations are repeated. A slower
reduction mechanism is also present so that the
size is reduced in regions of the search space
that do not need large sizes. Compression techniques (e.g. hashing) can also be implemented when storing the entire configurations would require an excessive amount of memory.
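The reaction mechanism just described can be sketched as a small tenure-update rule; the growth and decay constants are our own illustrative values, not those of Battiti and Tecchiolli (1994b):

```python
def react(tenure, seen, config, iteration):
    # Fast reaction: grow the list size when a configuration repeats;
    # slow decay: shrink it while the search keeps finding new configurations.
    if config in seen:
        tenure *= 1.1                # repetition detected between two visits
    else:
        tenure = max(1.0, tenure * 0.9)
    seen[config] = iteration         # remember this visit and its iteration
    return tenure

# Usage: a repeated configuration pushes the tenure back up.
seen = {}
t = react(10.0, seen, (2, 1, 4, 3), 0)   # new configuration: tenure decays
t = react(t, seen, (2, 1, 4, 3), 5)      # repetition: tenure grows again
```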
The third algorithm is the CHC
algorithm. The CHC is a generational genetic
search algorithm that uses truncation selection.
The choice of CHC is made because it produces the best results among GAs. The CHC generational genetic search is similar to a traditional GA without regular mutation.
Instead, when a convergence is detected, the
search restarts with the mutation of the current
best individual (for example 35% mutation
rate) to regenerate the entire population.
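The cataclysmic restart described above can be sketched as follows; everything except the 35% mutation rate taken from the text is our own illustrative choice:

```python
import random

def restart(best, pop_size, rate=0.35, rng=random):
    # Keep the best individual intact and regenerate the rest of the
    # population by heavily mutating it (bit-flip at the given rate).
    population = [best[:]]
    for _ in range(pop_size - 1):
        population.append([1 - g if rng.random() < rate else g
                           for g in best])
    return population

pop = restart([1, 0, 1, 1, 0, 0, 1, 0], pop_size=10,
              rng=random.Random(42))
```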
In addition to this mutational difference, ‘uniform crossover’ (HUX), in which all pairs of parents are allowed to produce offspring, is changed into ‘distance preserving crossover’ (DPX). DPX can be viewed as a threshold crossover equipped with an ‘incest prevention strategy’, i.e. the parents are mated randomly, a threshold is set and only pairs exceeding that threshold are allowed to reproduce.
operator relies only on the notion of a distance
between solutions, it can be used for the QAP
problem. In general, the DPX is aimed at
producing an offspring, which has the same
distance to each of his parents, and this
distance is equal to the distance between the
parents themselves. The alleles that are
identical for the same genes in both parents
will be copied to the offspring. The alleles for
all other genes change. Thus, although the
crossover is highly disruptive, the local search
procedure subsequently applied is forced to explore a region of the search space that is
defined by the genes with different alleles in
the parents, which is the region where better
solutions are most likely to be found. The DPX
operator for the QAP works as follows. Suppose that two parents A and B (as shown in Fig. 6) are given. First, all assignments that are contained in both parents are copied to the offspring C. The remaining positions of the genome are then randomly filled with the yet unassigned facilities, taking care that no assignment found in just one parent is inserted into the child. After that, we end up with a child C for which the condition d(C,A) = d(C,B) = d(A,B) holds. In the example, d = 5.
A:            2 4 7 1 8 9 3 5 0
B:            7 4 5 3 8 9 2 1 0
C (common):   . 4 . . 8 9 . . 0
C (complete): 5 4 1 2 8 9 7 3 0
Fig. 6. The DPX crossover operator
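The DPX procedure just described can be sketched as follows; the retry loop is our own way of handling random fills that dead-end, and the sketch assumes the parents differ in at least three positions (otherwise no valid completion may exist):

```python
import random

def dpx(a, b, rng=random):
    # Step 1: copy every assignment common to both parents.
    fixed = [x if x == y else None for x, y in zip(a, b)]
    free = [x for x in a if x not in {f for f in fixed if f is not None}]
    # Step 2: fill the remaining positions randomly with the unassigned
    # facilities, never recreating an assignment found in just one parent.
    while True:
        child, rest = list(fixed), free[:]
        rng.shuffle(rest)
        for i, c in enumerate(child):
            if c is None:
                x = rest.pop()
                if x == a[i] or x == b[i]:
                    break          # dead end: retry with a fresh shuffle
                child[i] = x
        else:
            return child           # d(child, a) = d(child, b) = d(a, b)

# The parents of Fig. 6.
A = [2, 4, 7, 1, 8, 9, 3, 5, 0]
B = [7, 4, 5, 3, 8, 9, 2, 1, 0]
C = dpx(A, B, rng=random.Random(1))
```

By construction the child agrees with both parents exactly where they agree with each other, so all three pairwise distances are equal (here 5, as in the figure).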
The fourth and last algorithm is the HGA
(CHC+TS). Three different types of crossover
operators are implemented through the GA.
The first one is the OBC operator (orderbased
crossover) in which a number of elements are
selected from one parent and copied to the
offspring. The missing elements are taken from
the other parent in order. The second one is the
HUX operator, that is to say, the traditional
uniform crossover operator. The third and last
one is the DPX operator described previously.
For comparative reasons, two different versions of GA are implemented: a generational GA, for which mating individuals are selected in a fitness-biased fashion, the operators are applied and the best individuals are selected (from the old population and the new offspring), and a steady-state GA, for which two parents are selected randomly and two new offspring are created; the best offspring replaces the worst individual of the population. As for the parameters involved, the CHC with no hybridization restarts the population five times. The hybrid version restarts the population only three times and then runs the robust TS for n iterations, n being the problem size.
Heuristic vs Random Initialization
When performing search techniques in general, like simulated annealing, steepest ascent or genetic techniques (GT), the question of how to generate the initial solution often arises as a primary matter of concern.
Should initialization of the solution be
based on a heuristic rule or on a randomly
generated one? Theoretically, it should not
matter, but in practice this may depend on the
problem. In some cases, a pure random
solution systematically produces better final
results. On the other hand, a good initial
solution may lead to lower overall run times.
This can be important, for example, in cases
where each iteration takes a relatively long
time; therefore, one has to use some clever
termination rule, as simulation time is a crucial
bottleneck in an optimization process.
In many cases, a simulation is run several
times with different initial solutions. Such a
technique is most robust, but it requires the
maximum number of replications compared
with all other techniques. The pattern search
technique applied to small problems with no
constraints or qualitative input parameters
requires fewer replications than the GT.
However, GT can handle such constraints more easily and with lower computational complexity. Finally, SA can be embedded within the tabu search to construct a probabilistic technique for global optimization. The next
section will present results and discuss those
results obtained by the implemented HGA.
Results and Discussion
The results obtained by Vásquez and Whitley (2000) when implementing the HGA (CHC + TS) are compared with existing results found in the literature for SA, TS (both robust and reactive), CHC and ‘genetic hybrids’. These results are summarized in Table 2. A total of 16 problems have been selected for testing: ten are Taillard’s QAP instances and six are Skorin-Kapov’s QAP instances. Twenty different trials have been performed for each algorithm and compared to the best-known solutions from the literature.
During those experiments, TS and CHC produced good near-optimal solutions for the QAP. As for SA, OB and GLS, the results are not satisfying, and parameter tuning is needed to improve the performance. It is also fair to say that the Taillard instances of the QAP are harder than the Skorin-Kapov instances (TS can produce results within 0.07% above the best-known solution).
In order to improve those results, two types of tuning were implemented. The first consisted of adding a backtracking mechanism to the RTS; in that situation, all the Skorin-Kapov instances are consistently solved, and near-optimal solutions within 0.73% above the best-known solution are obtained for the Taillard instances. The second was to combine TS and CHC (the hybrid solution); in that case, 12 of the 16 problems are solved to the best-known solution.
According to the experimental results obtained and shown in Table 2, it is reasonable to say that the proposed HGA (TS+CHC) is very competitive and produces good near-optimal solutions. It was beaten in only three cases and found the best-known solution for 12 of the 16 chosen benchmark problems.
Conclusion
In this paper, a HGA for solving several
instances of the QAP was presented. The
ingredients of the ‘memetic algorithm’,
evolutionary operators and local neighborhood
search, were described. The performance of
the HGA was investigated on a set of QAP
instances and compared to the best-known results from the literature and to the
performance of some very good heuristic
approaches to the QAP like RTS, SA and
CHC.
Table 2. Percentage above the best-known solutions (Vásquez and Whitley 2000)
Tai: Taillard problems
Sko: Skorin-Kapov problems
‘*’: best-known solution has been reached.
The hybrid algorithm combining the RTS
and CHC algorithm was able to outperform
these alternative heuristics on all QAP
instances of practical interest. Furthermore, the approach proves to be very robust, since the best-known solutions can be found for 12 of the 16 instances considered. This derives from the properties of both RTS and CHC: the CHC explores several regions of the search space in parallel and the RTS intensifies the search around some selected regions.
There are at least two avenues for future
research. First, the algorithm should be applied
to larger instances of the QAP to investigate its
scalability. Second, the algorithm performance
with other parameter settings for population
size, operator rates and running times should be
investigated.
Problem RTS SA CHC CHC+TS
Tai10a 0.000 (*) 0.000 (*) 0.000 (*) 0.000 (*)
Tai20a 0.000 (*) 0.000 (*) 0.000 (*) 0.000 (*)
Tai30a 0.000 (*) 1.735 0.641 0.000 (*)
Tai40a 0.344 2.335 0.936 0.000 (*)
Tai50a 0.825 2.016 1.279 0.219
Tai60a 0.785 2.398 1.313 0.253
Tai80a 0.387 2.170 1.014 0.239
Tai100a 0.730 1.771 1.491 0.434
Tai150b 0.788 0.752 0.444 0.000 (*)
Tai256c 0.299 0.122 0.076 0.000 (*)
Sko100a 0.052 1.452 0.247 0.000 (*)
Sko100b 0.027 0.241 0.115 0.000 (*)
Sko100c 0.021 1.136 0.715 0.000 (*)
Sko100d 0.062 1.450 0.347 0.000 (*)
Sko100e 0.021 0.855 0.744 0.000 (*)
Sko100f 0.068 1.142 0.224 0.000 (*)
References
Battiti, R.; and Tecchiolli, G. 1994a. Simulated Annealing and Tabu Search in the Long Run: A Comparison on QAP Tasks. Computers and Mathematics with Applications 28(6): 1-8.
Battiti, R.; and Tecchiolli, G. 1994b. The Reactive Tabu Search. ORSA Journal on Computing 6(2): 61-83.
Cela, E. 1998. The Quadratic Assignment Problem: Theory and Algorithms. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Koopmans, T.C.; and Beckmann, M.J. 1957. Assignment Problems and the Location of Economic Activities. Econometrica 25: 53-76.
Moscato, P. 1989. Report No. 826 (C3P). Caltech Concurrent Computation Program, Pasadena, CA, USA.
Radcliffe, N.J.; and Surry, P.D. 1994. Formal Memetic Algorithms. In: Proc. Evolutionary Computing AISB Workshop, pp. 1-16. Springer-Verlag, New York.
Vásquez, M.; and Whitley, D. 2000. A Hybrid Genetic Algorithm for the Quadratic Assignment Problem. In: Proc. Genetic and Evolutionary Computation Conference (GECCO), July 2000, Las Vegas, Nevada, USA.
Whitley, D. 1995. Modeling Hybrid Genetic Algorithms. In: Genetic Algorithms in Engineering and Computer Science, pp. 191-201. John Wiley, New York.
Yamada, T.; and Nakano, R. 1996. Scheduling by Genetic Local Search with Multi-Step Crossover. In: Proc. 4th Conf. Parallel Problem Solving from Nature, pp. 960-969. Springer, Germany.