A Comparative Study Using Genetic Algorithm and Particle Swarm Optimization for Lower Order System Modelling



International Journal of the Computer, the Internet and Management, Vol. 17, No. 3 (September-December, 2009), pp. 1-10

A Comparative Study Using Genetic Algorithm and Particle Swarm Optimization for Lower Order System Modelling

S.N. Sivanandam (Professor and Head) and S.N. Deepa (Research Scholar)
Department of Computer Science and Engineering
PSG College of Technology, Coimbatore 641 004, India
E-mail: snsivanandam@yahoo.co.in, deepapsg@rediffmail.com




Abstract

In recent years, Evolutionary Computation has grown considerably. Among the various Evolutionary Computation approaches, Genetic Algorithms and Particle Swarm Optimization are widely used for optimization problems. The two approaches find a solution to a given objective function through different procedures and computational techniques, so their performance can be evaluated and compared. The problem area chosen is lower order system modelling, as used in control systems engineering. Particle Swarm Optimization and Genetic Algorithm each obtain a lower order approximant that reflects the characteristics of the original higher order system, and the performance of the two methods is compared. Integral square error is used as the indicator for selecting the lower order model.

Keywords: Particle Swarm Optimization - Genetic Algorithm - Lower Order System Modelling - Integral Square Error.



I. Introduction

During the early 1950s, researchers studied evolutionary systems as an optimisation tool, introducing the basics of evolutionary computing. Until the 1960s, work on evolutionary systems proceeded in parallel with Genetic Algorithm (GA) research. At this stage, evolutionary programming was developed around the concepts of evolution, selection and mutation. John Holland [1] introduced the Genetic Algorithm, applying the principles of Charles Darwin's theory of evolution in natural biology. A genetic algorithm starts with a population of random chromosomes. The algorithm then evaluates these structures and allocates reproductive opportunities such that chromosomes which encode better solutions to the problem are given more chances to reproduce. As the best candidates are selected, new, fitter offspring are produced and reinserted, and the less fit are removed. The exchange of characteristics between chromosomes takes place using operators such as crossover and mutation. The solution is defined with respect to the current population. GA operation fundamentally rests on the Schema theorem. GAs are recognised as good function optimisers and are used broadly in pattern discovery, image processing, signal processing, and in training neural networks.



Particle swarm optimization (PSO) is a population based stochastic optimization technique developed by Eberhart and Kennedy [2] in 1995, inspired by the social behavior of bird flocking and fish schooling. The PSO method is a member of the wide category of Swarm Intelligence methods [3]. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA). The system is initialised with a population of random solutions and searches for optima by updating generations. However, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. PSO can be easily implemented and is computationally inexpensive, since its memory and CPU speed requirements are low. Also, it does not require gradient information of the objective function being considered, only its values. PSO has proved to be an efficient method for several optimization problems, and in certain cases it does not suffer from the problems encountered by other Evolutionary Computation techniques. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied. Even though PSO typically moves quickly towards the best general area of the solution space for a problem, it often has difficulty making the fine grained search required to find the absolute best point.



Many control system applications, such as satellite attitude control, fighter aircraft control, model-based predictive control, control of fuel injectors, and automobile spark timing, possess a mathematical model of the process with high order, which makes the system description complex. These higher order models are cumbersome to handle. As a result, lower order system modelling can be performed, which helps alleviate the computational complexity and implementation difficulties involved in designing controllers and compensators for higher order systems. Further, the use of microcontrollers and microprocessors in the design and implementation of control system components has increased the importance of lower order system modelling [4-6]. Thus, in this paper, Genetic Algorithm and Particle Swarm Optimization are applied independently to higher order systems and a suitable lower order system is modelled.




2. Problem Definition

Consider an nth order linear time invariant system with q inputs and r outputs, described in the time domain by the state space equations

    x'(t) = A x(t) + B u(t)
    y(t)  = C x(t)                                                    (1)

where x is the n dimensional state vector, u is the q dimensional control vector and y is the r dimensional output vector, with q <= n and r <= n. Also, A is the n x n system matrix, B is the n x q input matrix and C is the r x n output matrix.

Alternatively, equation (1) can be described in the frequency domain by the transfer function matrix of order r x q as

    G(s) = N(s)/D(s) = (A_0 + A_1 s + ... + A_{n-1} s^{n-1}) / (a_0 + a_1 s + ... + a_n s^n)        (2)

where N(s) is the numerator matrix polynomial and D(s) is the common denominator polynomial of the higher order system. Also, A_i and a_i are the constant coefficient matrices of the numerator and the constant coefficients of the denominator polynomial, respectively.


Irrespective of whether the original system G(s) is represented in the form of equation (1) or (2), the problem is to find an mth order lower model R_m(s), with m < n, in the form of equation (3), such that the reduced model retains the important characteristics of the original system and approximates its response as closely as possible for the same type of inputs, with minimum integral square error.

    R_m(s) = N_m(s)/D_m(s) = (B_0 + B_1 s + ... + B_{m-1} s^{m-1}) / (b_0 + b_1 s + ... + b_m s^m)        (3)

where N_m(s) and D_m(s) are the numerator matrix polynomial and the common denominator polynomial of the reduced order model, respectively. Also, B_i and b_i are the constant coefficient matrices of the numerator and the constant coefficients of the denominator polynomial of the reduced model, respectively.



Mathematically, the integral square error [7] can be expressed as

    E = sum over t of (Y_t - y_t)^2,  0 <= t <= tau                    (4)

where Y_t is the unit step time response of the given higher order system at the t-th instant in the time interval 0 <= t <= tau, where tau is to be chosen, and y_t is the unit step time response of the lower order system at the t-th time instant.


The objective is to model a system R_m(s) which closely approximates G(s) for a specified set of inputs.
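As a small illustration of the ISE in equation (4), the error can be computed directly from sampled step responses; the response values below are made-up numbers for illustration, not those of the paper's systems.

```python
# Illustrative computation of the integral square error (Eq. 4) between a
# sampled higher order step response Y_t and a lower order one y_t.
# The sample values are hypothetical, chosen only to show the formula.
Y = [0.0, 0.4, 0.7, 0.9, 1.0, 1.0]   # hypothetical higher order response samples
y = [0.0, 0.5, 0.8, 0.9, 1.0, 1.0]   # hypothetical lower order response samples

# Eq. (4): sum of squared differences over the sampled time instants
ise = sum((Yt - yt) ** 2 for Yt, yt in zip(Y, y))
print(ise)  # a value very close to 0.02
```

A smaller ISE indicates that the lower order model tracks the higher order response more closely, which is why it serves as the objective to be minimized.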



3. Genetic Algorithm Operation

To illustrate the working process of a genetic algorithm, the steps to realise a basic GA [8] are as follows:

Step 1: Represent the problem variable domain as a chromosome of fixed length; choose the size of the chromosome population, the crossover rate and the mutation rate.

Step 2: Define a fitness function to measure the performance of an individual chromosome in the problem domain.

Step 3: Generate an initial population randomly.

Step 4: Compute the fitness of each individual chromosome.

Step 5: Select a pair of chromosomes to mate from the current population. Parent chromosomes are selected with a probability related to their fitness: chromosomes with high fitness have a higher probability of being selected for mating than chromosomes with low fitness.

Step 6: Apply genetic operators such as crossover, mutation, inversion and diploidy to create a pair of new offspring chromosomes.

Step 7: Place the created offspring chromosomes in the new population.

Step 8: Repeat from Step 5 until the size of the new population equals that of the initial population.

Step 9: Replace the initial (parent) chromosome population with the new (offspring) population.

Step 10: Go to Step 4, and repeat the process until the stopping condition is satisfied.



GA is an iterative process in which each iteration is called a generation. A typical number of generations for a simple GA can vary from 50 to over 500. In general, a GA is terminated after a specified number of generations is reached, and the best chromosomes in the population are then examined.
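Steps 1 to 10 above can be sketched as a minimal GA loop. This is an illustrative sketch only: the fitness function, crossover and mutation rates, Gaussian mutation, and elitism are stand-ins, not the paper's ISE objective or parameter settings.

```python
import random

random.seed(0)

POP_SIZE, N_GENES, GENS = 40, 3, 100
CROSSOVER_RATE, MUTATION_RATE = 0.9, 0.05

def fitness(chrom):
    # Hypothetical fitness: higher is better, optimum at chrom == [1, 1, 1].
    return -sum((g - 1.0) ** 2 for g in chrom)

def roulette_select(pop, fits):
    # Roulette wheel selection: shift fitnesses to be positive weights.
    lo = min(fits)
    weights = [f - lo + 1e-9 for f in fits]
    return random.choices(pop, weights=weights, k=1)[0]

def crossover(a, b):
    # Single-point crossover, applied with probability CROSSOVER_RATE.
    if random.random() > CROSSOVER_RATE:
        return a[:], b[:]
    cut = random.randrange(1, N_GENES)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chrom):
    # Gaussian perturbation of each gene with probability MUTATION_RATE.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in chrom]

# Step 3: random initial population.
pop = [[random.uniform(-5, 5) for _ in range(N_GENES)] for _ in range(POP_SIZE)]
for _ in range(GENS):
    fits = [fitness(c) for c in pop]                  # Step 4
    new_pop = [max(pop, key=fitness)[:]]              # elitism (an added refinement)
    while len(new_pop) < POP_SIZE:
        p1 = roulette_select(pop, fits)               # Step 5
        p2 = roulette_select(pop, fits)
        c1, c2 = crossover(p1, p2)                    # Step 6
        new_pop.extend([mutate(c1), mutate(c2)])      # Steps 6-7
    pop = new_pop[:POP_SIZE]                          # Step 9

best = max(pop, key=fitness)
print(best, fitness(best))
```

In the paper's setting, the chromosome would encode the model parameters (B_0, b_1, b_0) and the fitness would be the negated ISE of equation (4).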




4. Particle Swarm Optimization Algorithm Operation

Particle Swarm Optimization [9-14] optimizes an objective function by undertaking a population based search. The population consists of potential solutions, named particles, which are a metaphor for birds in flocks. These particles are randomly initialized and fly freely across the multi dimensional search space. During flight, each particle updates its own velocity and position based on the best experience of its own and of the entire population. The various steps involved in the Particle Swarm Optimization algorithm [15] are as follows:


Step 1: The velocity and position of all particles are randomly set within pre-defined ranges.

Step 2: Velocity updating. At each iteration, the velocities of all particles are updated according to

    v_i = w v_i + c_1 R_1 (p_{i,best} - p_i) + c_2 R_2 (g_best - p_i)        (5)

where p_i and v_i are the position and velocity of particle i, respectively; p_{i,best} and g_best are the positions with the 'best' objective value found so far by particle i and by the entire population, respectively; w is a parameter controlling the dynamics of flying; R_1 and R_2 are random variables in the range [0,1]; and c_1 and c_2 are factors controlling the relative weighting of the corresponding terms. The random variables give the PSO its stochastic searching ability.


Step 3: Position updating. The positions of all particles are updated according to

    p_i = p_i + v_i                                                   (6)

After updating, p_i should be checked and limited to the allowed range.

Step 4: Memory updating. Update p_{i,best} and g_best when the condition is met:

    p_{i,best} = p_i   if f(p_i) < f(p_{i,best})
    g_best     = p_i   if f(p_i) < f(g_best)                          (7)

where f(x) is the objective function to be optimized.

Step 5: Stopping condition. The algorithm repeats Steps 2 to 4 until certain stopping conditions are met, such as a pre-defined number of iterations. Once stopped, the algorithm reports the values of g_best and f(g_best) as its solution.


PSO [16] utilizes several searching points, and the searching points gradually approach the global optimal point using their pbest and gbest. The initial positions of pbest and gbest are different; however, guided by the different directions of pbest and gbest, all agents gradually approach the global optimum.
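Steps 1 to 5 can likewise be sketched as a minimal PSO loop. This is an illustrative sketch: the sphere objective and the values of w, c_1 and c_2 are stand-ins, not the paper's ISE objective or reported settings.

```python
import random

random.seed(1)

DIM, N_PARTICLES, ITERS = 3, 40, 100
W, C1, C2 = 0.7, 0.5, 0.5        # illustrative inertia and weighting factors
LO, HI = -5.0, 5.0               # allowed position range

def f(x):
    # Hypothetical objective to minimize; optimum at the origin.
    return sum(v * v for v in x)

# Step 1: random positions and velocities within pre-defined ranges.
pos = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(N_PARTICLES)]
vel = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=f)

for _ in range(ITERS):
    for i in range(N_PARTICLES):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Step 2 / Eq. (5): inertia + cognitive + social terms.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            # Step 3 / Eq. (6): move, then clamp to the allowed range.
            pos[i][d] = min(max(pos[i][d] + vel[i][d], LO), HI)
        # Step 4 / Eq. (7): memory updating.
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

# Step 5: report gbest and f(gbest) after the fixed iteration budget.
print(gbest, f(gbest))
```

Note that, unlike the GA sketch, there are no crossover or mutation operators: each particle moves under its own velocity and shared memory, which is exactly the contrast the Discussion section draws.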



5. Lower Order System Modelling Results

Simulations were conducted on a Pentium 4, 2.8 GHz computer, in the MATLAB 7 environment. The flowchart depicting the entire lower order system modelling procedure using Genetic Algorithm and Particle Swarm Optimization is shown in Figure 1.

The higher order system considered for lower order modelling is given by [17]:

    G(s) = ...                                                        (8)

The transient and steady state gains for the given G(s) in the above equation are calculated as:

    ...                                                               (9)



A simple auxiliary scheme, discussed in the Appendix, is used to obtain a basic lower order model R(s) from the given G(s), whose coefficients are used as initial seed values for training in Genetic Algorithm and Particle Swarm Optimization. On applying the auxiliary scheme to G(s) in equation (8), the basic lower order model R(s) is given by

    R(s) = ...                                                        (10)

The above equation (10) should be scaled and tuned to satisfy the transient gain and steady state gain of the given G(s). Gain adjustments are performed to keep the characteristics of R(s) on par with those of G(s). Thus, on scaling and adjusting gains, R(s) becomes

    R(s) = B_0 / (s^2 + b_1 s + b_0)                                  (11)

where the algorithms had to identify B_0, b_1 and b_0. The proposed parameters (B_0 = 5.1887, b_1 = 0.7703, b_0 = 0.2561) are used as initial seed values for tuning in GA and PSO, with the Integral Square Error (ISE) as the objective function to be minimized. For both algorithms the population was set to 40 individuals with a maximum of 100 generations. The results of applying the GA and PSO to the lower order system modelling problem are provided in Table 1.

Table 1 Lower Order System Modelling Results using GA and PSO Approach

Algorithm                      B_0       b_1      b_0      No. of       Time taken      Integral
                                                           generations  for simulation  Square Error
Genetic Algorithm              66.2670   3.3571   3.2708   100          24 seconds      1.7519
Particle Swarm Optimization    61.9485   3.2467   3.0577   100          9 seconds       1.0069

For each parameter the final value determined by the respective algorithm is given, followed by the number of generations the simulation was run. The penultimate column reports the time in seconds required by the CPU for the complete simulation of 100 generations, and the final column presents the minimized fitness value of the objective function, the Integral Square Error. The time taken for the optimization process is considerably higher for the GA search than for the PSO search algorithm. The Integral Square Error (ISE) computed is also smaller for PSO than for GA.

[Figure 1 shows the flowchart: the coefficients of the numerator and denominator of the given higher order continuous system's transfer function are read; the transient gain (TG) and steady state gain (SSG) are calculated; the auxiliary scheme from the Appendix is applied to G(s) to obtain the approximate lower order model R(s), which is scaled and tuned to maintain the transient gain of G(s); the numerator and denominator coefficients of R(s) are then passed both to the Genetic Algorithm (initialize population, evaluate fitness, perform selection, create a new population via crossover, perform mutation, repeating until an optimal or good solution is found) and to Particle Swarm Optimization (initialize swarm, calculate velocities, calculate new positions, evaluate swarm, update the memory of each particle, repeating until an optimal or good solution is found); finally, the optimized lower order models obtained by invoking GA and PSO are returned.]

Figure 1 Flowchart for Lower order System Modelling using Genetic Algorithm and Particle Swarm Optimization



From Table 1, the lower second order models generated for the given higher order system G(s) can be written as

    R_GA(s)  = 66.2670 / (s^2 + 3.3571 s + 3.2708)                    (12)

    R_PSO(s) = 61.9485 / (s^2 + 3.2467 s + 3.0577)                    (13)

It is observed from Table 2 that the proposed scheme yields a better value of integral square error than the other methods considered for comparison.

The unit step responses of the given original higher order system represented in equation (8) and of the lower order models shown in equations (12) and (13), obtained from the GA and PSO algorithms respectively, are shown in Figure 2. The figures depict that the output produced using both algorithms appears close to the ideal, i.e., the lower order model maintains the original characteristics of the given higher order system.
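As a quick consistency check, assuming the second order form R(s) = B_0 / (s^2 + b_1 s + b_0), the steady-state (DC) gain of such a model is B_0/b_0. Computing this ratio for the seed parameters and for the GA- and PSO-tuned parameters from Table 1 shows that all three agree to about 20.26, i.e., both algorithms preserved the steady state gain while reshaping the transient behaviour.

```python
# Steady-state gain check for R(s) = B0 / (s^2 + b1*s + b0): at s = 0 the
# gain is B0/b0. Parameter values are taken from the seed model and Table 1.
models = {
    "seed": (5.1887, 0.2561),   # initial seed values
    "GA":   (66.2670, 3.2708),  # Table 1, Genetic Algorithm
    "PSO":  (61.9485, 3.0577),  # Table 1, Particle Swarm Optimization
}
for name, (B0, b0) in models.items():
    print(name, round(B0 / b0, 3))
```

The three ratios matching closely is expected, since the gain adjustment step was explicitly performed to keep the steady state gain of R(s) on par with that of G(s).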





Figure 2 Unit step response curves using Genetic Algorithm and Particle Swarm Optimization Approach



The parameters used during the GA [18] search are: crossover rate 0.1, mutation rate 0.001, and selection method Roulette Wheel Selection. The values of the parameters involved in the PSO [9] search are: c_1 and c_2 = 0.5, and number of particles 40.



6. Discussion

The GA's strength lies in the parallel nature of its search. The genetic operators used are central to the success of the search. All GAs require some form of recombination, as this allows the creation of new solutions that inherit components of their parents' success. In general, crossover is the principal genetic operator, whereas mutation is used less frequently. Crossover attempts to pass beneficial components to offspring solutions and to eliminate undesirable ones, while the random nature of mutation is more likely to degrade a strong offspring solution than to improve it. The algorithm's power lies in the implicit parallelism inherent in the evolutionary metaphor. By restricting the reproduction of weak offspring, a GA eliminates not only that solution but also all of its descendants. This makes the algorithm converge towards high quality solutions within a few generations.



Particle Swarm Optimization shares many similarities with Evolutionary Computation (EC) techniques in general and GAs in particular. All these techniques begin with a randomly generated population and use a fitness value to evaluate it. They all update the population and search for the optimum with random techniques. The main difference of the PSO approach compared to EC and GA is that PSO does not have genetic operators such as crossover and mutation. Particles update themselves with an internal velocity; they also have a memory, which is important to the algorithm. Also, in PSO only the 'best' particle gives out information to the others. It is a one-way information sharing mechanism: the evolution only looks for the best solution. Compared to GAs, the advantages of PSO are that PSO is easy to implement and there are few parameters to adjust.



7. Conclusion

In this paper a comparative study has been made using Particle Swarm Optimization and Genetic Algorithm for lower order system modelling. Overall, the simulation results indicate that both GA and PSO can be used in the search for parameters during system modelling. With respect to minimizing the objective function, the Integral Square Error, PSO determines a smaller value than the GA does. In terms of computational time, the PSO approach is faster than the GA, although it is noted that neither algorithm takes what can be considered an unacceptably long time to determine the results.



Algorithms like GA and PSO are inspired by nature and have proved to be effective solutions to optimization problems. These techniques possess apparent robustness. There are various control parameters involved in these meta-heuristics, and the appropriate setting of these parameters is a key point for success. In general, no meta-heuristic should be considered in isolation: the possibility of performing hybrid approaches should be considered. Additionally, for both approaches the major implementation issue is the selection of an appropriate objective function.



Acknowledgement

The authors wish to thank the All India Council for Technical Education (AICTE), India, for providing the grant to carry out this research work.



Appendix - Auxiliary Scheme

Consider an nth order linear time invariant continuous higher order system represented by its transfer function as:

    G(s) = N(s)/D(s) = (A_0 + A_1 s + ... + A_{n-1} s^{n-1}) / (a_0 + a_1 s + ... + a_n s^n)        (14)

    ...                                                               (15)

The auxiliary scheme for obtaining the approximated lower order models from the given higher order system is as follows:

First order:

    R_1(s) = ...                                                      (16)

Second order:

    R_2(s) = ...                                                      (17)

...

(n-1)th order:

    R_{n-1}(s) = ...                                                  (18)


Equations (16) through (18) give the lower order models formulated using the auxiliary scheme from the given higher order system G(s). Based on the requirement, a suitable lower order model can be selected and operated upon. It should be noted that for a higher order system of order n, (n-1) lower order models can be formulated. This method of selecting approximate lower order models helps to set the initial values of the operating parameters used in the genetic algorithm and particle swarm optimisation process.


References

[1] J. Holland (1975), Adaptation in Natural and Artificial Systems, University of Michigan Press.

[2] J. Kennedy and R.C. Eberhart (1995), "Particle Swarm Optimization," Proceedings IEEE International Conference on Neural Networks, Vol. 4, pp. 1942-1948, IEEE Service Centre, Piscataway, NJ.

[3] J. Kennedy and R.C. Eberhart (2001), Swarm Intelligence, Morgan Kaufmann Publishers, California.

[4] R. Prasad (2000), "Pade type model order reduction for multivariable systems using Routh approximation", Computers and Electrical Engineering, Vol. 26.

[5] R. Prasad, et al. (2003), "Improved Pade approximants for multivariable systems using stability equation method", Journal of Institution of Engineers (India), Vol. 84, pp. 161-165.

[6] R. Prasad, et al. (2003), "Linear model reduction using the advantages of Mikhailov criterion and factor division," Journal of Institution of Engineers (India), Vol. 84, pp. 9-10.

[7] M. Gopal (2003), Control System Principles and Design, Tata McGraw Hill Publishing Company Ltd, New Delhi.

[8] David E. Goldberg (2000), Genetic Algorithms in Search, Optimization and Machine Learning, Pearson Education Asia Ltd, New Delhi.

[9] R.C. Eberhart and Y. Shi (2001), "Particle Swarm Optimization: developments, applications and resources," Proceedings Congress on Evolutionary Computation, IEEE Service Centre, Piscataway, NJ, Seoul, Korea.

[10] Y. Shi and R.C. Eberhart (1998), "A modified particle swarm optimizer", Proc. IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 69-73.

[11] Y. Shi and R.C. Eberhart (1998), "A modified particle swarm optimiser," Proc. IEEE Intl. Conf. Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 69-73.

[12] P.J. Angeline (1998), "Using selection to improve particle swarm optimisation," Proc. IEEE Intl. Conf. Evolutionary Computation, pp. 84-89.

[13] Xiao-Feng, et al. (2002), "A dissipative particle swarm optimisation," Congress on Evolutionary Computation, Hawaii, USA, pp. 1456-1461.

[14] J. Kennedy (2000), "Stereotyping: Improving particle swarm performance with cluster analysis," Proc. Intl. Conf. on Evolutionary Computation, pp. 1507-1512.

[15] A.M. Abdelbar and S. Abdelshahid (2003), "Swarm optimization with instinct driven particles," Proc. IEEE Congress on Evolutionary Computation, pp. 777-782.

[16] U. Baumgartner, et al. (2004), "Pareto optimality and particle swarm optimisation," IEEE Trans. Magnetics, Vol. 40, pp. 1172-1175.

[17] V. Krishnamurthy and V. Seshadri (1978), "Model reduction using Routh Stability criterion," IEEE Transactions on Automatic Control, Vol. AC-23, pp. 729-731, August.

[18] Alden H. Wright (1991), Foundations of Genetic Algorithms, Morgan Kaufmann Publishers.