BULETINUL INSTITUTULUI POLITEHNIC DIN IAȘI
Publicat de
Universitatea Tehnică „Gheorghe Asachi” din Iași
Tomul LVI (LX), Fasc. 1, 2010
Secția
AUTOMATICĂ și CALCULATOARE

COMPARATIVE STUDY OF MULTIOBJECTIVE
GENETIC ALGORITHMS

BY

MIHAELA SIMONA CÎRCIU and FLORIN LEON

Abstract. The objective of this paper is to study three algorithms for solving
multiobjective optimization problems. The first algorithm is a weight-based genetic
algorithm, which consists in applying a scaling function to the weighted sum of the
objectives. This is the simplest extension of a simple genetic algorithm. The need for the
user to choose the weights and the impossibility of discovering all the solutions for
problems with a non-convex Pareto-optimal front are the main disadvantages of this
algorithm. The vector evaluated genetic algorithm (VEGA) was the first genetic algorithm
proposed for multiobjective optimization. It is a non-Pareto approach based on the
selection of relevant groups of individuals, each group being assigned an objective.
The method tends to “gather” results around the extremes of the solution space, producing
a sub-optimal convergence to the Pareto-optimal front. The third algorithm, NSGA-II, is based on
Pareto dominance to guide the search towards the optimal front. It is an efficient
algorithm used in many studies. The algorithms are implemented in an application that
allows solving any multiobjective problem with one of the three algorithms presented
above, and can therefore be used as a framework for a comparative study regarding the
performance of different optimization methods.

Key words: multiobjective optimization, genetic algorithms, VEGA, NSGA-II.

2000 Mathematics Subject Classification: 80M50, 58E17.


1. Introduction

“Evolutionary algorithms” is a generic term used to denote any
stochastic search algorithm that uses mechanisms inspired by biological
evolution and genetic operators such as mutation, crossover, selection and
survival of the fittest.
The solutions proposed for the optimization problem play the role of
individuals in a population. The solutions with high fitness have a higher
probability to survive and to reproduce, passing their superior traits on as the
evolutionary process continues.
There are three main types of Evolutionary Algorithms, among others:
Genetic Algorithms, Evolutionary Strategies, and Evolutionary Programming.
All three were developed independently, but the general framework is the same.
They evolve a population of candidate solutions to the given problem in order to
find optimal solutions, using operators inspired by natural genetic variation
(creating new individuals through crossover and mutation) and by natural selection
and survival of the fittest (competition among individuals for reproduction and resources).
However, each type has several variants, due to their different origins and
their targeting of specific domains. Genetic algorithms use crossover and
mutation and are suitable for optimization problems. Evolutionary strategies use
real-valued parameters. At first, evolutionary strategies did not use a crossover operator;
it was introduced later. Therefore, their framework is similar to that of
genetic algorithms with real-valued parameters.
Multiobjective problems are usually more difficult, but they find many
applications in real-life situations.
We organize our paper as follows. In Section 2, we present an overview
of genetic algorithms. In Section 3, we describe the three algorithms used for
comparison: the weighted genetic algorithm, VEGA and NSGA-II. In Section 4,
we illustrate some case studies with three especially devised multiobjective
optimization problems, used to compare the performance of the algorithms
under investigation. Finally, Section 5 draws the conclusions.
2. Genetic Algorithms

Many real-world problems involve the simultaneous optimization of
multiple, often conflicting objectives. In a multiobjective optimization
problem, it is not possible to find a single solution that is best with respect to all objectives.
A solution may be optimal with regard to one objective, but inferior with regard to
another. The goal is therefore to find a set of optimal solutions, known as the
Pareto-optimal set. These solutions are optimal in the sense that no other solutions
in the search space are better than them when all the objectives are considered. Since none of the
solutions in the set is better than any other solution with respect to all
objectives, any one of them is an acceptable solution.
A multiobjective genetic algorithm can be a modified version of a
simple genetic algorithm, designed to solve multiobjective optimization
problems. The possibility of using evolutionary algorithms to solve multiobjective
optimization problems was suggested by Rosenberg in his dissertation [1]. He
suggested using a genetic algorithm to find the chemistry of a population of
single-celled organisms with multiple properties or objectives. Since a genetic
algorithm works with a population of candidate solutions, the dominance
criterion can be used to bias the search toward the Pareto-optimal front, and
multiple Pareto-optimal solutions can be obtained in a single run. This is the
main reason that makes genetic algorithms suitable for multiobjective
optimization [2].
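As a minimal illustration of the dominance criterion (assuming minimization of all objectives and a simple list-of-objective-values representation, which are our conventions rather than details given in the paper), a dominance test can be sketched as follows:

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse than b in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# (1, 3) dominates (2, 3); (1, 3) and (2, 1) are mutually non-dominated.
assert dominates([1, 3], [2, 3])
assert not dominates([1, 3], [2, 1]) and not dominates([2, 1], [1, 3])
```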
Genetic algorithms operate with a collection of chromosomes, called the
population. The population is randomly initialized. As the search evolves, the
population includes better and better solutions and eventually converges, meaning
that it is dominated by a single solution [3]. However, a simple genetic algorithm
can only provide one solution for a given combination of weights, unlike a dedicated
multiobjective genetic algorithm (MOGA), which can provide a set of solutions simultaneously.

2.1. Genetic Algorithm Structure

It is considered that the length and number of the chromosomes used are
constant, and that the population P(t+1) at time t+1 is obtained by taking all the descendants
of the population P(t) and then deleting the chromosomes of the previous generation [4].
The structure of a genetic algorithm is the following:
Step 1: t ← 0;
Step 2: Initialize the population P(t) with random values;
Step 3: Evaluate the chromosomes in population P(t), using a problem-dependent performance function;
Step 4: While the termination condition is not satisfied, run the following steps:
Step 4.1: Select the chromosomes from P(t) which will be promoted to the new generation. Let P1 be the set of selected chromosomes (the intermediate population);
Step 4.2: Apply the genetic operators (crossover and mutation) to the chromosomes from the recombination pool of P(t). Let P2 be the resulting population, i.e. the offspring of population P(t). Delete from P1 the parents of these offspring; the chromosomes remaining in P1 are included in P2. Build the new generation: P(t+1) ← P2; delete all the chromosomes from P(t); t ← t+1; evaluate P(t).
A commonly used termination condition is reaching a specified number of generations, although
there are other termination conditions that test the convergence of the GA by
evaluating the fitness of the population. The result of the algorithm is given by
the best individual from the last generation. In reality, nothing guarantees that the
best individual will pass its genes to the next generation. Therefore, it may be
beneficial to retain the best individual in every generation, a process called elitism.
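A minimal sketch of this loop for a real-coded chromosome is given below; the tournament selection, arithmetic crossover, single-gene mutation and example fitness function are illustrative assumptions, not necessarily the operators of the paper's implementation.

```python
import random

def simple_ga(fitness, n_genes=2, pop_size=200, generations=100,
              p_c=0.9, p_m=0.03, bounds=(0.0, 1.0), elitism=True):
    """Generational GA sketch: evaluate, select, recombine, mutate, replace."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]                            # Step 2: random P(t)
    for _ in range(generations):                                # Step 4
        best = max(pop, key=fitness)[:]                         # Step 3: evaluation
        def tournament():                                       # Step 4.1: selection (P1)
            a, b = random.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        offspring = []                                          # Step 4.2: offspring (P2)
        while len(offspring) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < p_c:                           # crossover
                a = random.random()
                c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
                c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                                  # mutation
                if random.random() < p_m:
                    c[random.randrange(n_genes)] = random.uniform(lo, hi)
            offspring += [c1, c2]
        pop = offspring[:pop_size]                              # P(t+1) <- P2
        if elitism:
            pop[0] = best                                       # retain the best individual
    return max(pop, key=fitness)

# Usage with a hypothetical fitness function (maximization):
print(simple_ga(lambda x: -(x[0] ** 2 + x[1] ** 2)))
```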

2.2. Multiobjective Genetic Algorithms

In contrast to single-objective optimization, in multiobjective optimization both
fitness and selection must take several objectives into account. Therefore, multiobjective
algorithms differ from a simple genetic algorithm in the way fitness
assignment and selection work. Several different variants of multiobjective
genetic algorithms have been introduced, with different fitness assignment and selection
strategies. Based on these strategies, multiobjective genetic algorithms can be
classified into aggregation-based, population-based and Pareto-based approaches [2].

The first multiobjective genetic algorithm, the vector evaluated genetic algorithm
(VEGA), was proposed by Schaffer [8]. Afterwards, several multiobjective
evolutionary algorithms were developed, such as: the Multiobjective Genetic
Algorithm (MOGA), the Niched Pareto Genetic Algorithm (NPGA), the Weight-based
Genetic Algorithm (WBGA), the Random Weighted Genetic Algorithm (RWGA),
the Non-dominated Sorting Genetic Algorithm (NSGA), the Strength Pareto
Evolutionary Algorithm (SPEA), the Pareto-Archived Evolution Strategy (PAES)
and the Fast Non-dominated Sorting Genetic Algorithm (NSGA-II). Although there are
many variations of multiobjective genetic algorithms, those cited here are well-known
and credible algorithms that have been used in many applications, and their
performance has been tested in several comparative studies. Generally,
multiobjective genetic algorithms differ in their fitness assignment
procedure, elitism or diversification mechanisms [3].

In this study, three algorithms are chosen for discussion: the weighted
genetic algorithm, the vector evaluated genetic algorithm (VEGA), and the fast
non-dominated sorting genetic algorithm (NSGA-II). In the following we
describe each of the three algorithms.

3. Classes of Multiobjective Genetic Algorithms

3.1. Weighted Genetic Algorithm

Since the simple genetic algorithm is based on a scalar function to guide
the search, the most intuitive approach for using a genetic algorithm to solve a
multiobjective optimization problem is to combine all the objectives into a single-objective
problem using one of the traditional aggregating function methods.
A simple genetic algorithm is then used to solve the problem. An example of this
approach is the weighted linear sum, which consists of adding all the objective
functions together using a different weight for each one. Thus, the multiobjective
optimization problem is transformed into a scalar optimization problem of the
form:

$$F(x) = \sum_{i=1}^{k} w_i f_i(x),$$

where $w_i$ are the weights representing the relative importance of the objectives. Usually, $\sum_{i=1}^{k} w_i = 1$ [2].
Even if the idea of the algorithm is simple, it introduces a difficult
question. What values of weights must one use? Of course, there is no unique
answer to this question. The answer depends on the importance of each
objective in the context of the problem. The weight of an objective is usually
chosen in proportion to its importance.

This method does not require any changes to the mechanism of a simple
genetic algorithm. Therefore, it is simple, efficient and easy to implement. It can
be used to solve simple multiobjective optimization problems with few
objective functions and a convex search space. The method suffers from the following
difficulties [2]: a Pareto-optimal solution is specific to the preference parameters
used in converting the multiobjective optimization problem into a single-objective
one. In order to find different Pareto-optimal solutions, the
parameters must be changed and a new problem solved. Thus, in order to
find n different Pareto-optimal solutions, n different single-objective
optimization problems need to be formed and solved. Another disadvantage is
that the method is sensitive to the choice of weights and requires the user to
have knowledge about the problem in order to set correct parameters.
The main drawback is that for problems which exhibit a non-convex
Pareto-optimal front, this approach is unable to find all the solutions, no
matter how the weights are chosen.
However, this algorithm is easier to implement than dedicated
multiobjective optimization algorithms and, if the importance of each objective
is known, it may be a simple way of finding acceptable solutions.
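As a sketch of the weighted-sum transformation described above (the two objective functions below are placeholders, not the paper's test problems), the scalar fitness passed to a simple genetic algorithm could be computed as:

```python
def weighted_sum(objective_values, weights):
    """Scalarize several objective values into F = sum_i w_i * f_i."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("the weights are usually chosen so that they sum to 1")
    return sum(w * f for w, f in zip(weights, objective_values))

# Example with two hypothetical minimization objectives:
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2
x = [0.5]
print(weighted_sum([f1(x), f2(x)], weights=[0.7, 0.3]))
```

Each choice of weights defines a different scalar problem, which is why several runs with different weight vectors are needed to approximate several Pareto-optimal solutions.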

3.2. Vector Evaluated Genetic Algorithm (VEGA)

VEGA is the first genetic algorithm used to approximate the Pareto-optimal
set by a set of non-dominated solutions; it was implemented by
Schaffer [8]. The name is appropriate for multiobjective optimization, because
the algorithm evaluates an objective vector.

VEGA is a straightforward extension of a simple genetic algorithm for
multiobjective optimization. Since a number of objectives (M) have to be
handled, Schaffer proposed randomly dividing the genetic algorithm population into M
equal subpopulations in each iteration. Each subpopulation is
assigned a fitness based on a different objective function. In this way, each of
the M objective functions is used to evaluate some members of the population.
This algorithm emphasizes solutions which are good with respect to individual
objective functions. Schaffer uses the proportional selection operator. In order
to find intermediate solutions, Schaffer allowed crossover between any two
solutions in the entire population. In this way, a crossover between two good
solutions, each corresponding to a different objective, may produce offspring which
are good compromise solutions between the two objectives. Mutation is
applied to each individual as usual.
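A sketch of this subpopulation-based selection is shown below; binary tournaments on each objective stand in for Schaffer's proportional selection, so the details are illustrative rather than a faithful reproduction of VEGA.

```python
import random

def vega_select(population, objectives):
    """VEGA-style selection: split the population into M equal groups, select
    within each group according to a single objective, then merge and shuffle
    the mating pool (crossover and mutation are applied to the pool afterwards)."""
    M = len(objectives)
    random.shuffle(population)
    group_size = len(population) // M
    mating_pool = []
    for m, obj in enumerate(objectives):
        group = population[m * group_size:(m + 1) * group_size]
        for _ in range(group_size):
            a, b = random.choice(group), random.choice(group)
            mating_pool.append(a if obj(a) <= obj(b) else b)   # minimization
    random.shuffle(mating_pool)                                # mix the subpopulations
    return mating_pool

# Usage with two hypothetical objectives over 2-variable individuals:
pop = [[random.random(), random.random()] for _ in range(8)]
print(len(vega_select(pop, [lambda x: x[0], lambda x: 1.0 + x[1] ** 2])))
```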

Criticisms of VEGA [7] include the following arguments. VEGA is very
simple and easy to implement, since only the selection mechanism has to be
modified. One of its main advantages is that, despite its simplicity, it can generate
several solutions in one run of the algorithm. However, the shuffling and
merging of all the subpopulations that VEGA performs corresponds to
averaging the fitness components associated with each of the objectives. Since
Schaffer uses proportional fitness assignment, these fitness components are in
turn proportional to the objectives themselves. Therefore, the resulting expected
fitness corresponds to a linear combination of the objectives, where the weights
depend on the distribution of the population at each generation. This means that
VEGA has the same problems as the algorithm previously discussed.

3.3. Fast Non-dominated Sorting Genetic Algorithm II (NSGA-II)

The Non-dominated Sorting Genetic Algorithm (NSGA) was proposed by Srinivas
and Deb [9]. In this study we discuss an improved version of NSGA, called
NSGA-II, a fast multiobjective algorithm with elitism proposed by Deb
et al. [6] to remove the weaknesses of NSGA. The main features of NSGA-II,
namely low computational complexity, parameter-less diversity preservation,
elitism and real-valued representation, are described in the following [2].
Low computational complexity: NSGA-II requires at most O(mN²)
computations, which is lower than the O(mN³) complexity of NSGA. The
procedure for finding the non-dominated front used in NSGA-II is similar to the
non-dominated sorting procedure suggested by Goldberg [5], except that a better
bookkeeping strategy is used to make it more efficient. In this bookkeeping
strategy, every solution from the population is compared for domination with a partially filled
population, instead of with every other solution in the population,
as in NSGA. Initially, the first solution from the population is kept in a set
P'. Then, each solution i is compared with all the solutions in P', one by one. If
solution i dominates a solution j of P', then solution j is removed from
P'. Otherwise, if solution i is dominated by a solution j in P', then
solution i is ignored. If solution i is not dominated by any solution in P', then it
is added to P'. Therefore, the set P' grows with non-dominated solutions.
When all the solutions of the population have been checked, the solutions in P' constitute
the non-dominated set. To find the other fronts, the non-dominated solutions
in P' are removed from P and the procedure is repeated until all the solutions in
P are ranked.
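A sketch of this bookkeeping strategy is given below; the dominance test is repeated so that the fragment is self-contained, and the identifier names are ours.

```python
def dominates(a, b):
    """Minimization dominance: a is no worse everywhere and better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_front(vectors):
    """Indices of the non-dominated solutions, using the partially filled set P'."""
    p_prime = []                                   # solutions currently believed non-dominated
    for i, fi in enumerate(vectors):
        dominated = False
        for j in list(p_prime):
            if dominates(fi, vectors[j]):          # i dominates j: remove j from P'
                p_prime.remove(j)
            elif dominates(vectors[j], fi):        # i is dominated: ignore it
                dominated = True
                break
        if not dominated:
            p_prime.append(i)                      # i enters P'
    return p_prime

def nondominated_sort(vectors):
    """Extract fronts repeatedly until every solution is ranked."""
    remaining, fronts = list(range(len(vectors))), []
    while remaining:
        local = nondominated_front([vectors[i] for i in remaining])
        front = [remaining[k] for k in local]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Example: the first two points are mutually non-dominated, the third is dominated.
print(nondominated_sort([[1, 3], [2, 1], [3, 3]]))   # [[0, 1], [2]]
```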
Diversity preservation: To maintain diversity among solutions, NSGA-II replaces the fitness sharing approach of NSGA with a crowded
comparison approach, which does not require any user-defined parameter. In the
crowded comparison approach, every solution in the population has two
attributes: a non-domination rank (i_rank) and a crowding distance (i_distance). The
crowding distance of a solution i serves as an estimate of the size of the largest
cuboid enclosing the point i without including any other point in the population.
Fig. 1 [2] illustrates the crowding distance calculation for solution i in
its front, which is the average side length of the cuboid enclosing solution i
(shown with a dashed box).
The crowded tournament selection operator, which is used to guide the
search towards a spread-out Pareto-optimal front, is defined as follows [2]: a
solution i wins a tournament against another solution j if solution i has a better
rank (i_rank < j_rank), or if i and j have the same rank but solution i has a larger
crowding distance than solution j (i_rank = j_rank and i_distance > j_distance). If i and j have
the same rank and the same crowding distance, then one of them is randomly
chosen as the winner. The crowded comparison operator, denoted ≺_c, is thus formally
defined as: i ≺_c j if (i_rank < j_rank) or (i_rank = j_rank and i_distance > j_distance).



Fig. 1 − Crowding distance calculation.
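A sketch of the crowding distance computation for a single front follows the usual NSGA-II formulation, in which boundary solutions receive an infinite distance and interior contributions are normalized by the objective range; that normalization is assumed from the standard formulation rather than stated in the paper.

```python
def crowding_distance(front):
    """Crowding distances for a list of objective vectors belonging to one front."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    distance = [0.0] * n
    for k in range(m):                                    # accumulate one objective at a time
        order = sorted(range(n), key=lambda i: front[i][k])
        distance[order[0]] = distance[order[-1]] = float("inf")   # boundary solutions
        f_min, f_max = front[order[0]][k], front[order[-1]][k]
        if f_max == f_min:
            continue
        for pos in range(1, n - 1):                       # side length of the enclosing cuboid
            gap = front[order[pos + 1]][k] - front[order[pos - 1]][k]
            distance[order[pos]] += gap / (f_max - f_min)
    return distance

def crowded_compare(rank_i, dist_i, rank_j, dist_j):
    """Crowded comparison: does solution i win the tournament against solution j?"""
    return rank_i < rank_j or (rank_i == rank_j and dist_i > dist_j)

# Example front with three 2-objective points:
print(crowding_distance([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]]))   # [inf, 2.0, inf]
```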

Elitism in NSGA-II is ensured by comparing the current population
with the previously found best non-dominated solutions (kept in P') and by
combining the parent and the child populations to form a combined population
of size 2N. The combined population is then sorted according to non-domination.
The best solutions are those belonging to the first non-dominated front, F1.
If the size of F1 is smaller than N, then all the solutions in F1 are
selected for the new population. The remaining members of the new population
are selected from the subsequent non-dominated fronts in the order of their ranking:
F2, F3, and so on. This procedure is continued until N solutions are selected for
the new population. To choose exactly N members, the solutions in the last accepted front
are sorted in descending order using the crowded comparison operator.
Fig. 2 [2] illustrates this procedure. The new population is now used for
selection, crossover and mutation to create a new population of size N [2], [6].



Fig. 2 − Elitism in NSGA-II.
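The procedure of Fig. 2 can be sketched as the following survivor selection, assuming the nondominated_sort and crowding_distance helpers from the sketches above:

```python
def environmental_selection(combined_pop, vectors, N):
    """Keep N survivors out of the combined 2N parent + offspring population:
    fill the new population front by front, then truncate the last accepted
    front by crowding distance."""
    survivors = []
    for front in nondominated_sort(vectors):
        if len(survivors) + len(front) <= N:
            survivors.extend(front)                        # the whole front fits
        else:
            dist = crowding_distance([vectors[i] for i in front])
            by_crowding = sorted(range(len(front)),
                                 key=lambda k: dist[k], reverse=True)
            survivors.extend(front[k] for k in by_crowding[:N - len(survivors)])
            break
    return [combined_pop[i] for i in survivors]
```

The selected individuals then undergo crowded tournament selection, crossover and mutation to produce the next offspring population, as stated above.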
Real-valued representation: NSGA-II supports a real-valued representation,
in which a chromosome is represented as a vector of real-valued
variables. Using a real-valued representation, NSGA-II can achieve arbitrary
precision in the obtained solutions (the precision depends on the machine)
and it can handle problems with a continuous search space. In real-coded
genetic algorithms, the main challenge is how to create a new pair of offspring
vectors from a pair of real-valued parent vectors and how to mutate them [2].
NSGA-II uses arithmetic crossover, which linearly combines two parent
chromosomes to produce two offspring, as shown in the following equations,
where a is a real number between 0 and 1:
(1) offspring1 = a · parent1 + (1 − a) · parent2,
(2) offspring2 = (1 − a) · parent1 + a · parent2.

The mutation operator replaces the values of all the genes with uniformly
distributed random values in the range specified by the user.
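Eqs. (1)-(2) and the mutation just described translate directly into code; the gene bounds in the usage example are an illustrative choice.

```python
import random

def arithmetic_crossover(parent1, parent2):
    """Eqs. (1)-(2): linear combination of two real-valued parents, a in (0, 1)."""
    a = random.random()
    child1 = [a * x + (1 - a) * y for x, y in zip(parent1, parent2)]
    child2 = [(1 - a) * x + a * y for x, y in zip(parent1, parent2)]
    return child1, child2

def uniform_mutation(chromosome, bounds):
    """Replace every gene with a uniformly distributed value in its user-given range."""
    return [random.uniform(lo, hi) for (lo, hi), _ in zip(bounds, chromosome)]

# Usage on two hypothetical 2-gene parents:
c1, c2 = arithmetic_crossover([0.2, 0.8], [0.6, 0.4])
print(c1, c2, uniform_mutation(c1, bounds=[(0.0, 1.0), (0.0, 1.0)]))
```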
Diversity among the non-dominated solutions is maintained by the
crowding comparison procedure, which is used in the tournament selection
and during the population reduction phase. Since the solutions compete by their
crowding distance, no extra niching parameter is required. Although the
crowding distance is calculated in the objective function space, it can also be
implemented in the parameter space [6].
4. Case Studies

There are several features that may cause difficulties for
multiobjective genetic algorithms in converging to the Pareto-optimal front and
in maintaining diversity within the population. Multimodality, deception and
isolated optima are known problems in single-objective optimization. Certain
characteristics of the Pareto-optimal front may also prevent a multiobjective
algorithm from finding diverse Pareto-optimal solutions: non-convexity,
discreteness and non-uniformity. The test functions proposed in this
study reflect these problems.
Each test considers two objectives, each demanding the
minimization of a different objective function.
4.1. Convex Pareto-Optimal Front

The first set of test functions has a convex Pareto-optimal front, where
the goal is to simultaneously minimize the functions f and h:

$$f = x_1, \qquad g = 1 + x_2^2, \qquad h = \frac{g}{f}.$$
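Assuming the reconstruction of these formulas is accurate, the first test problem can be expressed as two objectives over two decision variables (the variable bounds below are an illustrative choice):

```python
def f(x):
    """First objective: f = x1."""
    return x[0]

def g(x):
    """Auxiliary function: g = 1 + x2^2."""
    return 1.0 + x[1] ** 2

def h(x):
    """Second objective: h = g / f."""
    return g(x) / f(x)

# For x1 in (0, 1] and x2 in [-1, 1], minimizing f and h simultaneously yields
# the convex trade-off curve f2 = 1 / f1, reached when x2 = 0.
print(f([0.5, 0.0]), h([0.5, 0.0]))
```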

Each algorithm used the following parameters: population size N = 200,
number of generations G = 100, mutation rate p_m = 0.03, and crossover rate p_c = 0.9.
The non-dominated solutions obtained by each algorithm are displayed
in Fig. 3. For the weight-based GA, the weights were chosen in the interval
[0, 1], and several runs were performed for each combination.



Fig. 3 − Non-dominated solutions obtained for the problem with
a convex Pareto-optimal front.

For a convex front, all the algorithms obtain a good approximation of
the Pareto-optimal front.

4.2. Concave Pareto-Optimal Front

The second set of test functions has a non-convex Pareto-optimal front:


$$f = x_1, \qquad g = 1 + \frac{1}{n}\sum_{i=1}^{n} x_i, \quad n = 30, \qquad h = g\left(1 - \frac{f^2}{g^2}\right).$$
The algorithms use the same parameters as before. The non-dominated
solutions obtained by each algorithm are visualized in Fig. 4.



Fig. 4 − Non-dominated solutions obtained for the problem
with a concave Pareto-optimal front.

NSGA-II and VEGA obtain a good approximation of the Pareto-optimal
front. However, the weighted genetic algorithm is unable to find the
intermediate solutions.

4.3. Discontinuous Pareto-Optimal Front

The third set of test functions has a discontinuous Pareto-optimal front:



$$f = x_1, \qquad g = 1 + x_2^2, \qquad h = 1 - \frac{f^2}{g^2} - \frac{f}{g}\,\sin(6\pi f).$$



The introduction of the sine function causes a discontinuity in the
Pareto-optimal front, although in the parameter space there is no discontinuity.
Each algorithm uses the same parameters as before. The non-dominated
solutions obtained by each algorithm are displayed in Fig. 5.


Fig. 5 − Non-dominated solutions obtained for the problem
with a discontinuous Pareto-optimal front.

For this problem, only NSGA-II approximates the correct Pareto-optimal
front. The weighted genetic algorithm can find some solution clusters in
the neighborhood of the true Pareto front. In this case, VEGA does not provide
good solutions.

5. Conclusions

The weighted genetic algorithm works well on the convex front, but has
difficulties on a non-uniform or concave front. Another difficulty is choosing
the weights, which requires the user to have knowledge about the problem. The
algorithm obtains a single final solution; thus, to find different Pareto-optimal
solutions, the algorithm has to be run with several sets of weights.
The vector evaluated genetic algorithm exceeds the performance of the
weighted genetic algorithm. The problem with this algorithm is that the final
solutions tend to converge to the optimal values of each individual objective and to “lose”
the intermediate solutions, because of the way fitness is assigned. We observed that
the final solutions include dominated ones, so a non-dominated sorting function
was implemented to remove these false solutions.
NSGA-II is the most efficient algorithm: it exceeds the performance of
both other algorithms and obtains a good approximation of the Pareto-optimal front
regardless of its shape. Real-world engineering problems involve simultaneously
optimizing multiple objectives where trade-offs are important. All such problems
require some customization of the genetic algorithm approach to handle the
objectives. It is envisaged that the Pareto solutions identified by genetic
algorithms would be reduced to a small representative set for the designer or
engineer to investigate further. Therefore, for most implementations it is not
vital to find every Pareto-optimal solution, but rather to efficiently and reliably
identify Pareto-optimal solutions across the range of interest for each objective
function.

A c k n o w l e d g e m e n t s. This work was supported in part by CNCSIS
grant 316/2008, contract no. 671/2009, Behavioural Patterns Library for Intelligent
Agents Used in Engineering and Management.

Received: November 22, 2009 “Gheorghe Asachi” Technical University of Iași,
Department of Computer Science and
Information Technology
e-mail: fleon@cs.tuiasi.ro


R E F E R E N C E S

1. Rosenberg R.S., Simulation of Genetic Populations with Biochemical Properties.
Ph.D. Diss., University of Michigan, 1967.
2. Tran K.D., An Improved Multi-Objective Evolutionary Algorithm with Adaptable
Parameters. International Journal of Intelligent Systems Technologies and
Applications, Vol. 7, 4, 347−369, 2009.
3. Konak A., Coit D.W., Smith A.E., Multi-Objective Optimization Using Genetic
Algorithms: A Tutorial. Reliability Engineering & System Safety, Vol. 91, 9,
992−1007, 2006.
4. Leon F., Inteligență artificială - principii, tehnici, aplicații. Tehnopress, Iași, 2006.
5. Goldberg D.E., Deb K., A Comparative Analysis of Selection Schemes Used in
Genetic Algorithms. Foundations of Genetic Algorithms, 1991.
6. Deb K., Pratap A., Agarwal S., Meyarivan T., A Fast and Elitist Multiobjective
Genetic Algorithm: NSGA-II. Proceedings of the Parallel Problem Solving
from Nature VI Conference, 2002.
7. Coello C.A., Lamont G.B., Van Veldhuizen D.A., Evolutionary Algorithms for
Solving Multi-Objective Problems. Second Edition, Springer, 2007.
8. Schaffer D.J., Multiple Objective Optimization with Vector Evaluated Genetic
Algorithms. Proceedings of the International Conference on Genetic Algorithms
and Their Applications, 1985.
9. Srinivas N., Deb K., Multiple Objective Optimization Using Nondominated Sorting
in Genetic Algorithms. Evolutionary Computation, 2, 3, 221−248, 1994.
10. Deb K., Multi-Objective Optimization Using Evolutionary Algorithms.
Wiley, 2001.
11. Zitzler E., Deb K., Thiele L., Comparison of Multiobjective Evolutionary
Algorithms: Empirical Results. Technical Report 70, Computer Engineering and
Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH)
Zurich, 2000.
A COMPARATIVE STUDY OF MULTIOBJECTIVE GENETIC ALGORITHMS

(Summary)

The paper presents a study of three algorithms for solving multiobjective problems. The first algorithm chosen for the study is a weight-based genetic algorithm, which is the simplest extension of a simple genetic algorithm. The multiobjective problem is transformed into a single-objective problem by combining all the objectives using a weighted sum, and the resulting problem is then solved with a simple genetic algorithm. Although simple, this algorithm has several disadvantages. The first is the choice of the weights, which are provided by the user and therefore require a good knowledge of the problem to be solved; the weights are usually chosen according to the importance of each objective. Another disadvantage is that the algorithm ultimately obtains a single solution, so in order to obtain several optimal solutions it has to be run with several sets of weights. The weight-based genetic algorithm is also sensitive to the shape of the Pareto front and fails to determine solutions on a concave front.

The second algorithm chosen for the study is the vector evaluated genetic algorithm, the first genetic algorithm proposed for multiobjective optimization. It is a non-Pareto approach based on the selection of relevant groups of individuals, each group being assigned an objective. If the number of objectives is M, the algorithm divides the population into M subpopulations and assigns an objective function to each of them. The subpopulations are then merged and the genetic operators are applied: proportional selection and crossover, the latter being applied between any two individuals. This allows two solutions that are good with respect to different objectives to produce offspring that represent good compromises. The algorithm is more efficient than the weight-based one, but it is also sensitive to the shape of the Pareto front.

The non-dominated sorting genetic algorithm is one of the first evolutionary algorithms. The present paper discusses an improved version of it, NSGA-II, an algorithm that uses Pareto dominance to guide the search towards the optimal front. It is the most efficient of the three studied algorithms, correctly approximating any Pareto front, regardless of its shape.

The three algorithms are implemented in an application that allows solving any multiobjective problem and that can be used as a platform for the comparative study of the performance of different optimization methods.