Optimization of Complex Industrial Processes Using Constrained Genetic Algorithms


Hamza Turabieh
Information Technology Department
Al-Balqa Applied University
Al-Salt, Jordan
hturabieh@yahoo.com
Alaa Sheta
Information Technology Department
Al-Balqa Applied University
Al-Salt, Jordan
asheta2@yahoo.com
ABSTRACT
Tuning the parameters of complex industrial processes to deliver an outstanding product with minimum losses represents a challenge for optimization theory. Many algorithms have been proposed in the past to handle this type of optimization problem. In this paper we explore the use of Evolutionary Computation techniques to handle such problems. We focus on the use of Genetic Algorithms (GAs) in solving constrained optimization problems for industrial processes. In doing this, we explore the sensitivity of the evolutionary process with respect to variations in the tuning parameters of the GAs (i.e., population size). Careful tuning of the evolutionary process parameters leads to fast convergence to the optimal solutions. A number of test cases and a reactor network design problem are solved.
Key Words: Industrial Processes, Genetic Algorithms, Constrained Optimization
1 Introduction
Most industrial manufacturing processes involve dynamic nonlinearity, uncertainty and constraints. Currently there is a growing interest in using Evolutionary Algorithms (EAs) to help provide reasonable solutions for physical nonlinear systems in industry. EA techniques are among the optimization techniques that have been used to solve a variety of optimization problems in industry. EAs include Genetic Algorithms (GAs) [1], Evolution Strategies (ESs) [2], Evolutionary Programming [3] and Genetic Programming (GP) [4]. In [5, 6], Genetic Algorithms and Evolution Strategies were used in the parameter identification process of nonlinear systems with various degrees of complexity. In [7], GAs were successfully used to provide an automatic methodology for generating model structures for nonlinear systems based on the Volterra time series, with least-squares estimation (LSE) used to identify the model parameters. Using this methodology, an efficient model structure was built to model the dynamics of the automotive engine. Modeling the dynamics of a winding machine in industry using genetic programming was presented in [8].
For any optimization problem, there is an optimization criterion (i.e., an evaluation function) that has to be minimized or maximized. The evaluation function represents a measure of the quality of the developed solution. Searching the space of all possible solutions is a challenging task. Additional constraints on the domain of search for the parameters make the problem quite difficult. The constraints might affect the performance of the evolutionary process, since some of the produced solutions (i.e., individuals) may be infeasible. An infeasible solution represents a waste of computational effort.
Although there is no general methodology to handle constraints, several methods have been introduced [9, 10, 11]. Evolution Strategies and Evolutionary Programming were modified to handle numerical optimization problems with constraints simply by rejecting infeasible individuals. Genetic Algorithms use an alternative approach, namely penalizing the infeasible individuals. Unfortunately, there is no generally adopted strategy for designing the penalty functions [9, 10]. In the following sections, we give an overview of GAs, formulate the constrained optimization problem using GAs, and finally provide a number of case studies in industry.
2 Why Genetic Algorithms?
GAs are the most famous among the EA algorithms. GAs have been employed as a tool that can handle multimodal functions and complex search spaces. They have the capability to search complex spaces with a high probability of success in finding the points of minimum or maximum on the search space (i.e., the landscape). Genetic Algorithms (GAs) are derivative-free stochastic search algorithms.
GAs apply the concept of natural selection. This idea was first introduced by John Holland at the University of Michigan in 1975 [1]. GAs have been successfully used in solving numerous applications in engineering and computer science [12, 13, 14, 15]. GAs have gained great popularity due to their well-known attributes. These attributes include:
• GAs can handle both continuous and discrete optimization problems. They require no derivative information about the fitness criterion [16, 17].
• GAs have an advantage over other search algorithms in that they are less likely to be trapped in a local minimum.
• GAs provide a more global solution. They are less likely to be trapped in a local optimum than Newton or gradient descent methods [18, 19].
• GAs have been shown to be less sensitive to the presence of noise and uncertainty in measurements [5, 20].
• GAs use probabilistic operators (i.e., crossover and mutation), not deterministic ones.
3 How GAs Code a Solution?
Genetic algorithms encode the candidate solutions of an optimization problem as strings of characters, usually binary digits [1]. In accordance with terminology borrowed from the field of genetics, this bit string is usually called a chromosome (i.e., an individual).
A number of chromosomes make up what is called a population. The structure of each individual can be represented as follows:
gene_1   gene_2   ...   gene_n
11101    00101    ...   11011

Such a chromosome has a number of genes equal to n. These genes are used in the evaluation function f. Thus, f(gene_1, gene_2, ..., gene_n) is the function to be minimized or maximized.
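As an illustrative sketch (the paper does not fix a particular decoding, so the linear bit-to-real mapping below is our assumption), a binary chromosome of n genes can be decoded into real parameters before evaluating f:

```python
def decode_gene(bits, lower, upper):
    """Map a binary gene (list of 0/1 bits) linearly onto [lower, upper]."""
    value = int("".join(map(str, bits)), 2)          # bits -> integer
    return lower + (upper - lower) * value / (2 ** len(bits) - 1)

# A chromosome with three 5-bit genes, as in the diagram above.
chromosome = [[1, 1, 0, 1, 0], [0, 0, 1, 0, 1], [1, 1, 0, 1, 1]]
params = [decode_gene(g, 0.0, 1.0) for g in chromosome]
```

With this decoding, the all-zeros gene maps to the lower bound and the all-ones gene to the upper bound of the parameter's domain.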
4 Evolutionary Process
The evolutionary process of a GA starts with the computation of the fitness of each individual in the initial population. While the stopping criterion has not been reached, we do the following:
• Select individuals for reproduction using some selection mechanism (e.g., tournament, rank, etc.).
• Create offspring using the crossover and mutation operators. The probabilities of crossover and mutation are selected based on the application.
• Compute the new generation of the GA. This process ends either when the optimal solution is found or when the maximum number of generations is reached.
A flowchart for a simple GA process is given [21] in Figure 1.
4.1 Selection Mechanism
Selection is the process which guides the evolutionary algorithm toward the optimal solution by preferring chromosomes with high fitness. The chromosomes evolve through successive iterations, called generations. During each generation, the chromosomes are evaluated using some measure of fitness. To create the next generation, new chromosomes, called offspring, are formed by applying the crossover and mutation operators to selected parents.
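As a concrete sketch of one common selection mechanism mentioned above (tournament selection; the function name and the minimization convention are ours, not from the paper):

```python
import random

def tournament_select(population, fitness, k=2):
    """Pick k distinct individuals at random and return the one with the
    best (here: lowest) fitness, for a minimization problem."""
    contenders = random.sample(range(len(population)), k)
    best = min(contenders, key=lambda i: fitness[i])
    return population[best]

pop = [[0, 1, 1], [1, 1, 1], [0, 0, 0]]
fit = [3.0, 1.0, 5.0]                        # fitness of each chromosome
winner = tournament_select(pop, fit, k=3)    # k = len(pop): the global best wins
```

Larger tournament sizes k increase selection pressure; with k equal to the population size the best individual always wins.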
4.4 Fitness Function
The GA evaluates the individuals in the population using a selected fitness function (criterion). This function indicates how good or bad a candidate solution is. The choice of fitness function is a very important issue in the design of genetic algorithms, since the solution of the optimization problem and the performance of the algorithm depend mainly on this function.
It is important to recognize that GAs differ from other optimization techniques like gradient descent in that they evaluate a set of solutions (the population) at each generation, which makes them more likely to find the optimum solution.
The fitness of the individuals within the population is evaluated, and new individuals are generated for the next generation using a selection mechanism. Although convergence to a global optimum is not guaranteed in many cases, these population-based approaches are much less likely to converge to local optima and are quite robust in the presence of noise [16, 17].
4.5 Genetic Algorithms Summary
Assume that Pop(k) and Offspring(k) are the parents and offspring in the current generation k; the general structure of a genetic algorithm procedure can be described by the following C-like pseudocode.

begin
  k = 0;
  initialize Pop(k);
  evaluate Pop(k);
  while (termination not reached) do
    recombine Pop(k) to generate Offspring(k);
    evaluate Offspring(k);
    select Pop(k+1) from Pop(k) and Offspring(k);
    k = k + 1;
  end while
end
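A runnable counterpart of this outline might look as follows. The operator choices (tournament selection of size 2, one-point crossover, bit-flip mutation) and the OneMax stand-in fitness are illustrative assumptions, not the settings used in the paper:

```python
import random

def run_ga(fitness, n_bits=16, pop_size=30, generations=50,
           p_cross=0.8, p_mut=0.02, seed=1):
    """Minimal generational GA following the begin/while/end outline above."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)                    # best-so-far individual
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            # tournament selection (size 2) of two parents
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < p_cross:              # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1 = p1[:cut] + p2[cut:]
                c2 = p2[:cut] + p1[cut:]
            for c in (c1, c2):                      # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
            offspring.extend([c1, c2])
        pop = offspring[:pop_size]
        best = max(pop + [best], key=fitness)       # keep the best so far
    return best

# OneMax stand-in fitness: maximize the number of 1-bits.
best = run_ga(sum)
```

Tracking the best-so-far individual outside the population is a simple form of elitism; it guarantees that the reported solution never gets worse between generations.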
5 Optimization Problem
A constrained optimization problem can be presented as follows:

Optimize f(X)    (1)

Subject to:
g_j(X) ≤ 0, ∀ j = 1, ..., q
h_j(X) = 0, ∀ j = q+1, ..., m

where X = (x_1, ..., x_n) represents the array of system variables. The search space S for the above problem is split into two domains: the feasible space S_f and the infeasible space S_unf. The function variables are defined on a specific domain given as:

l(i) ≤ x_i ≤ u(i), 1 ≤ i ≤ n    (2)

The feasible set S_f is defined by the m additional constraints (i.e., g_j(X), h_j(X)).
6 Constrained Handling
In [24], Michalewicz and Attia presented a methodology to deal with infeasible individuals. This methodology can be described in the following steps:
• The problem constraints are classified into four types: linear equalities (LE), linear inequalities (LI), nonlinear equalities (NE) and nonlinear inequalities (NI).
• A random start point is selected for the search. This initial random point should satisfy both the LE and LI constraints.
• Set the initial temperature λ = λ_0.
• Evaluate each individual in the population using the evaluation function eval:

eval(X, λ) = f(X) + (1/(2λ)) Σ_{j=1..m} f_j²(X)    (3)

where f_j(X) measures the violation of the j-th constraint.
• If λ < λ_f, stop; else
  – decrease λ,
  – use the best individual as an initial solution for the next generation,
  – repeat the previous steps of the algorithm.

This method requires an initial starting temperature λ_0 and a final freezing temperature λ_f. The recommended values, reported in [24], are λ_0 = 1, λ_{i+1} = 0.1 × λ_i, with λ_f = 10^{-6}.
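The cooling loop above can be sketched as follows. The inner random-walk search is our simplification standing in for the GA of [24], and the toy problem is a hypothetical example:

```python
import random

def annealed_penalty_search(f, constraints, x0, step=0.1,
                            lam0=1.0, lam_f=1e-6, iters_per_temp=2000, seed=0):
    """Sketch of the cooling loop: minimize f(X) + (1/(2*lam)) * sum_j f_j(X)**2,
    restarting each temperature from the best point found so far."""
    rng = random.Random(seed)

    def eval_pen(X, lam):
        return f(X) + sum(c(X) ** 2 for c in constraints) / (2.0 * lam)

    best = list(x0)
    lam = lam0
    while lam >= lam_f:
        best_v = eval_pen(best, lam)
        for _ in range(iters_per_temp):
            cand = [xi + rng.gauss(0.0, step) for xi in best]
            v = eval_pen(cand, lam)
            if v < best_v:                 # keep only improving moves
                best, best_v = cand, v
        lam *= 0.1                         # cooling schedule recommended in [24]
    return best

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1 (optimum (0.5, 0.5)).
sol = annealed_penalty_search(lambda X: X[0] ** 2 + X[1] ** 2,
                              [lambda X: X[0] + X[1] - 1.0], [2.0, 2.0])
```

As λ decreases, the penalty coefficient 1/(2λ) grows, so the search is progressively forced onto the feasible set while the best individual carries over between temperatures.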
7 Constrained Optimization Software
Solving constrained optimization problems was explored by Michalewicz and others [9, 25, 10, 24]. To develop our results we used the GENOCOP 5.0 software tool provided in [26, 27]. To run the GENOCOP software we need to specify a set of variables in an input file. These variables include the number of variables, the number of equalities, the number of inequalities and the domain specified for each variable. We also specify the population size and the total number of generations.
The proposed solution of the constrained optimization problem was compared with the Sequential Quadratic Programming (SQP) [28] solution. We used the Optimization Toolbox in Matlab to develop a solution for the cases under study based on SQP. The SQP algorithm is a powerful technique for solving nonlinear constrained optimization problems [29].
SQP closely mimics Newton's method for constrained optimization, just as is done for unconstrained optimization. At each major iteration, an approximation of the Hessian of the Lagrangian function is made using a quasi-Newton updating method. This is then used to generate a QP subproblem whose solution is used to form a search direction for a line search procedure [30, 31].
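SQP-family solvers are also available outside Matlab; as an illustration, SciPy's SLSQP method (a stand-in for the Matlab Optimization Toolbox routine used in the paper) solves a small hypothetical constrained problem:

```python
from scipy.optimize import minimize

# Hypothetical toy problem: minimize x^2 + y^2 subject to x + y >= 1.
res = minimize(lambda v: v[0] ** 2 + v[1] ** 2,
               x0=[2.0, 0.0],
               method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda v: v[0] + v[1] - 1.0}])
# The constrained optimum is (0.5, 0.5) with objective value 0.5.
```

Note that, like any gradient-based method, SLSQP returns a local solution that depends on the starting point x0, which is exactly the weakness the GA approach in this paper aims to avoid.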
8 Test Problem 1
A nonlinear constrained optimization problem described in [32] and extensively discussed in [33, 34, 11, 35] is presented in this section.
Min φ(x, y) = 2x + y    (4)

Subject to:
1.25 − x² − y ≤ 0
x + y ≤ 1.6

given that 0 ≤ x ≤ 1.6 and y ∈ {0, 1}. To optimize the above function, we generated the problem surface (i.e., landscape) defined within the given search space. The landscape is shown in Figure 4. To check the performance of the evolutionary process we varied the population size a number of times.
Our goal is to perform a sensitivity analysis to show that the GA converges every time we change the population size.

Table 1. Solution provided by GAs and SQP: Case 1

x      y    φ(x, y)   Technique
0.5    1    2         GAs
0.5    1    2         SQP

In Figure 5, we show the convergence of the evolutionary process with various population sizes. It can be seen that with the various population sizes the optimal value of the function reached the acceptable level.
A comparison between the results developed using the constrained GA and the Matlab Optimization Toolbox is provided in Table 1. The results show that the GA can provide the same result as the SQP technique. This means that both techniques are effective in this case.
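Because y is binary and x is box-bounded, the reported optimum can be verified independently by brute force over a fine grid (this check is ours, not part of the paper's experiments):

```python
# Exhaustive check of Test Problem 1:
# min 2x + y  s.t.  1.25 - x^2 - y <= 0,  x + y <= 1.6,  0 <= x <= 1.6,  y in {0, 1}.
best = None
for y in (0, 1):
    for i in range(1601):                  # x = 0.000, 0.001, ..., 1.600
        x = i * 0.001
        feasible = (1.25 - x * x - y <= 1e-9) and (x + y <= 1.6 + 1e-9)
        if feasible:
            phi = 2 * x + y
            if best is None or phi < best[0]:
                best = (phi, x, y)
```

The grid search finds the minimum φ = 2 at (x, y) = (0.5, 1), agreeing with both solvers in Table 1.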
9 Test Problem 2
This problem was presented in [36] and was studied in [11, 35, 25].
Min φ(x_1, x_2, y) = −y + 2x_1 + x_2    (5)

Subject to:
x_1 − 2e^{−x_2} = 0
−x_1 + x_2 + y ≤ 0

given that 0.5 ≤ x_1 ≤ 1.4 and y ∈ {0, 1}.
The above problem can be reformulated to eliminate the equality constraint, as shown in Equation 6.

Min φ(x_1, y) = −y + 2x_1 − ln(x_1/2)    (6)

Subject to:
−x_1 − ln(x_1/2) + y ≤ 0, y ∈ {0, 1}
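The elimination step can be checked numerically: substituting x_2 = −ln(x_1/2), which solves the equality constraint of Equation 5, makes the two objectives agree. The evaluation point below is the SQP solution reported later in Table 2:

```python
import math

def phi5(x1, x2, y):                 # objective of Equation 5
    return -y + 2.0 * x1 + x2

def phi6(x1, y):                     # reduced objective of Equation 6
    return -y + 2.0 * x1 - math.log(x1 / 2.0)

x1, y = 1.3748, 1                    # SQP solution reported in Table 2
x2 = -math.log(x1 / 2.0)             # from the equality x1 - 2*exp(-x2) = 0
val = phi6(x1, y)                    # about 2.1244
```

The agreement of phi5 and phi6 at any x_1 satisfying the equality confirms that the reformulation changes the search space but not the objective values.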
In this section, we explore the issue of selecting the optimal tuning parameters for the second test case using genetic algorithms with constraints. The problem landscape is presented in Figure 6. The landscape does not seem very complex, but the domain of the search space for each model parameter represents a challenge, since we have a nonlinear constraint.
Fig. 4. Search space of Test Problem 1
Fig. 5. Test Problem 1: Convergence of the evolutionary process with various population sizes (30, 40, 50, 60, 80, 100)
Fig. 6. Search space of Test Problem 2
Fig. 7. Test Problem 2: Convergence of the evolutionary process with various population sizes (30, 40, 50, 60, 80, 100)
Table 2. Solution provided by GAs and SQP: Case 2

x_1      y    φ(x_1, y)   Technique
1.375    1    2.124       GAs
1.3748   1    2.12452     SQP

In Figure 7, we show the best-so-far curves of the GA with various population sizes. As shown in Table 2, the GA provided a slightly better result than the SQP technique.
10 Reactor Network Design Problem
The reactor network consists of two CSTR reactors in which the sequence of reactions A → B → C takes place. The design objective is to maximize the concentration of product B in the exit stream. This can be achieved by finding the optimal values of the states x_1, x_2, x_3, x_4, x_5 and x_6. The optimization problem can be represented mathematically as given in Equation 7. The problem under study was fully described in [34].
Min φ = −x_4    (7)

Subject to:
x_1 + k_1 x_1 x_5 = 1
x_2 − x_1 + k_2 x_2 x_6 = 0
x_3 + x_1 + k_3 x_3 x_5 = 1
x_4 − x_3 + x_2 − x_1 + k_4 x_4 x_6 = 0
x_5^{0.5} + x_6^{0.5} ≤ 4
The domains of search for the states x_1, x_2, x_3, x_4, x_5 and x_6 are given as follows: 0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ 1, 0 ≤ x_3 ≤ 1, 0 ≤ x_4 ≤ 1, 10^{−5} ≤ x_5 ≤ 16, 10^{−5} ≤ x_6 ≤ 16. The values of the coefficients k_1, k_2, k_3 and k_4 are given as:

k_1 = 0.09755988
k_2 = 0.99 k_1
k_3 = 0.0391908
k_4 = 0.9 k_3
To deal with the above problem, we decided to transform it into a maximization problem by eliminating the equality constraints. The new mathematical description is given in Equation 8, with the boundary values of x_5 and x_6 being 10^{−5} ≤ x_5 ≤ 16 and 10^{−5} ≤ x_6 ≤ 16.

Table 3. Solution provided by GAs and SQP: Reactor Network Problem

x_5      x_6      φ(x_5, x_6)   Technique
3.038    5.096    0.3881        GAs
15.975   1e-005   0.3746        SQP
Max φ = [k_2 x_6 (1 + k_3 x_5) + k_1 x_5 (1 + k_2 x_6)] / [(1 + k_1 x_5)(1 + k_2 x_6)(1 + k_3 x_5)(1 + k_4 x_6)]    (8)

Subject to:
x_5^{0.5} + x_6^{0.5} ≤ 4
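The reduction can be checked numerically: reading the first equality of Equation 7 as x_1 + k_1 x_1 x_5 = 1 (the reading that makes the reduction consistent with Equation 8), the constraints form a triangular system that can be solved for x_1 through x_4 given x_5 and x_6, and the resulting x_4 matches the closed-form objective:

```python
k1 = 0.09755988
k2 = 0.99 * k1
k3 = 0.0391908
k4 = 0.9 * k3

def phi(x5, x6):
    """Closed-form objective of Equation 8."""
    num = k2 * x6 * (1 + k3 * x5) + k1 * x5 * (1 + k2 * x6)
    den = (1 + k1 * x5) * (1 + k2 * x6) * (1 + k3 * x5) * (1 + k4 * x6)
    return num / den

def x4_from_constraints(x5, x6):
    """Solve the equality constraints of Equation 7 sequentially for x4."""
    x1 = 1 / (1 + k1 * x5)                 # x1 + k1*x1*x5 = 1
    x2 = x1 / (1 + k2 * x6)                # x2 - x1 + k2*x2*x6 = 0
    x3 = (1 - x1) / (1 + k3 * x5)          # x3 + x1 + k3*x3*x5 = 1
    return (x3 - x2 + x1) / (1 + k4 * x6)  # x4 - x3 + x2 - x1 + k4*x4*x6 = 0

val = phi(3.038, 5.096)    # GA solution from Table 3, about 0.3888
```

Evaluating phi at the SQP solution (15.975, 1e-5) likewise reproduces the 0.3746 reported in Table 3, consistent with SQP stalling at an inferior local maximum.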
We ran the GA with population sizes 30, 40, 50, 60, 80 and 100 and computed the best solution after each generation. The landscape for the network design problem is shown in Figure 8. The results of each run are shown in Figure 9. In Figure 10, we show the structure of the network design problem. To maximize the function φ(x_5, x_6), we used both GAs and SQP. In this case, the GA outperforms SQP in providing a better maximum of the function φ(x_5, x_6). This is shown in Table 3.
11 Conclusions
In this paper, we used Genetic Algorithms (GAs) to solve constrained optimization problems for a number of processes. We explored the performance of the evolutionary process under variations in the population size. The results show that GAs are robust and can provide an optimal solution after each run. A practical example of an industrial process, the reactor network design problem, was studied with promising results.
References
[1] J. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press, 1975.
[2] T. Bäck and H.-P. Schwefel, “An overview of evolutionary algorithms for parameter optimization,” Evolutionary Computation, vol. 1, no. 1, pp. 1–24, 1993.
Fig. 8. Search space of reactor network design problem
Fig. 9. Reactor Network Problem: Convergence of the evolutionary process with various population sizes (30, 40, 50, 60, 80, 100)
Engineering Design and Control, pp. 128–133, UK, 1994.
[20] A. Sheta and K. De Jong, “Time-series forecasting using GA-tuned radial basis functions,” Information Sciences Journal (special issue), pp. 221–228, 2001.
[21] H. Al-Duwaish and W. Naeem, “Nonlinear model predictive control of Hammerstein and Wiener models using genetic algorithms,” in Proceedings of the IEEE International Conference on Control Applications, pp. 465–469, 2001.
[22] M. Gen and R. Cheng, Genetic Algorithms and Engineering Design. New York: John Wiley and Sons, Inc., 1997.
[23] G. Syswerda, “Uniform crossover in genetic algorithms,” in Proceedings of the Third International Conference on Genetic Algorithms, pp. 2–9, 1989.
[24] Z. Michalewicz and N. Attia, “Evolutionary optimization of constrained problems,” in Proceedings of the Third Annual Conference on Evolutionary Programming, A. V. Sebald and L. J. Fogel (Eds.), pp. 98–108, River Edge, NJ: World Scientific Publishing, 1994.
[25] R. L. Salcedo, “Solving nonconvex nonlinear programming problems with adaptive random search,” Industrial and Engineering Chemistry Research, vol. 31, p. 262, 1992.
[26] S. Koziel and Z. Michalewicz, “A decoder-based evolutionary algorithm for constrained parameter optimization problems,” in Proceedings of the 5th Parallel Problem Solving from Nature, A. E. Eiben, T. Bäck, M. Schoenauer and H.-P. Schwefel (Eds.), Lecture Notes in Computer Science, pp. 231–240, Springer-Verlag, 1998.
[27] S. Koziel and Z. Michalewicz, “Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization,” Evolutionary Computation, vol. 7, no. 1, pp. 19–44, 1999.
[28] M. Biggs, “Constrained minimization using recursive quadratic programming,” in Towards Global Optimization, pp. 341–349, North-Holland, 1975.
[29] P. Boggs and J. W. Tolle, “Sequential quadratic programming,” in Acta Numerica, pp. 1–15, Cambridge: Cambridge University Press, 1995.
[30] S. Han, “A globally convergent method for nonlinear programming,” Journal of Optimization Theory and Applications, vol. 22, p. 279, 1977.
[31] P. Boggs and J. W. Tolle, “Sequential quadratic programming for large-scale nonlinear optimization,” Journal of Computational and Applied Mathematics, pp. 123–137, 2000.
[32] G. R. Kocis and I. E. Grossmann, “Global optimization of nonconvex mixed-integer nonlinear programming (MINLP) problems in process synthesis,” Industrial and Engineering Chemistry Research, vol. 27, pp. 1407–1421, 1988.
[33] C. A. Floudas, A. Aggarwal, and A. R. Ciric, “Global optimum search for nonconvex NLP and MINLP problems,” Computers and Chemical Engineering, vol. 13, pp. 1117–1132, 1989.
[34] H. S. Ryoo and N. V. Sahinidis, “Global optimization of nonconvex NLPs and MINLPs with applications in process design,” Computers and Chemical Engineering, vol. 19, p. 551, 1995.
[35] R. Cardoso, R. L. Salcedo, S. F. de Azevedo, and D. Barbosa, “A simulated annealing approach to the solution of MINLP problems,” Computers and Chemical Engineering, vol. 21, pp. 1349–1364, 1997.
[36] G. R. Kocis and I. E. Grossmann, “Relaxation strategy for the structural optimization of process flow sheets,” Industrial and Engineering Chemistry Research, vol. 26, pp. 1869–1880, 1987.