# Genetic Algorithms: Evolving Solutions to Problems

AI and Robotics

Oct 24, 2013

Todd Schmitt

Computer Science Department

University of Wisconsin-Platteville

schmittt@uwplatt.edu

Abstract

Evolution can create surprisingly efficient and effective methods to solve a problem without the need for a programmer to write code for every possible circumstance that may take place. Genetic algorithms rely on code that is originally randomly created, then select the best of the current programs to create "child" programs that attempt to combine successful features of their parents. By continuing this process of natural selection, an efficient program is created over several generations. I will discuss the history and accomplishments of genetic algorithms, as well as detail some of the current methods used to build them. These methods include how parent programs are selected, how the code of the parents is combined to create children, how a fitness rating is used to judge the programs, and the use of random mutation in a population. I will also discuss specific examples of how a genetic algorithm can be applied to a problem, along with their code structure.

Introduction

A genetic algorithm is an attempt to solve a problem by using a randomly created initial population and selecting the best of these current programs to create "child" programs that attempt to combine successful features of their parents. By continuing this process of natural selection, an efficient program is created over several generations. The problems this method works best for are those with no reasonably fast algorithm, or problems that must be solved in a complex, dynamic environment. Also, as stated in [1]: "Another important property of such genetic-based search methods is that they maintain a population of possible solutions". They are used to create "novel and robust" solutions because they are not constrained by trying to rationalize why they are doing something, as in human problem solving. [2] Common uses of genetic algorithms are optimization problems and problems involving small building blocks that can be combined to create a new system. Genetic algorithms are prized for their ability to explore a large search space quickly while still finding the best solution in that space. Genetic algorithms can be implemented in many different ways, but most share several common features.

Process of Genetic Algorithms

Most genetic algorithms find their solutions through the process shown in Figure 1. Applying a genetic algorithm to a problem requires careful consideration of how each step of the process should be implemented for the problem at hand; a method that works for one problem may not be able to find a solution for another. Randomly creating the initial population involves determining how the data will be stored and how to generate a random individual. Encoding the data may be as easy as creating a binary string of a given length, e.g. 101011, or it may be more complicated and store given input situations that lead to an encoded output. In order for genetic algorithms to move towards an ideal solution, they must have a heuristic or fitness function that gives an individual a fitness rating based on how close it is to an optimum solution.

Figure 1: Process of a Genetic Algorithm

Once each member of the population has been evaluated for fitness, the fitnesses can be checked to see whether they are close enough for the given situation. In an optimization problem, for example, a solution may be desired that is above 75% efficient. In many situations a quick solution may be better than a perfect solution, so the process can be terminated based on needs. If the process does not terminate, the members of the next generation must be selected. Ideally, members that will provide genetically superior material to the gene pool are selected along with members that will provide genetic diversity. Once the members of the new generation are selected, they can be crossed over and mutated to try to create new individuals that combine successful features of their parents. Crossover takes two individuals and chooses sections of each one's code to create a new individual. Mutation is a more subtle way of introducing change into the population, and generally changes only one item in an individual at a time, e.g. 1100110 becomes 0100110. After the new generation has been created, the cycle starts again by evaluating the new individuals to find their fitness levels.
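The cycle just described — create, evaluate, test for termination, select, cross over, mutate — can be sketched as a generic driver. This is a minimal sketch rather than code from the paper; the function and parameter names (`random_individual`, `good_enough`, and so on) are illustrative assumptions, and the selection, crossover, and mutation operators are supplied by the caller, as later sections describe.

```python
import random

def genetic_algorithm(random_individual, fitness, select, crossover, mutate,
                      pop_size=50, generations=100, good_enough=None):
    # Step 1: randomly create the initial population.
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: evaluate every member's fitness.
        scored = [(fitness(ind), ind) for ind in population]
        best_score, best = max(scored, key=lambda pair: pair[0])
        # Step 3: terminate early if a solution is "close enough".
        if good_enough is not None and best_score >= good_enough:
            return best
        # Step 4: select parents, then cross over and mutate to build
        # the next generation.
        parents = select(scored, pop_size)
        population = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            population.append(mutate(crossover(a, b)))
    return max(population, key=fitness)
```

Plugging in a bit-string encoding, a sum-of-ones fitness, and a simple truncation selection is enough to watch the population improve over generations.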

Encoding a Genetic Algorithm

One of the first steps in creating a solution using a genetic algorithm is encoding the information that individuals will use to try to solve the problem. A very simple case is trying to find the maximum of the equation G(x) = sin(x^3), where x is an integer between -15 and 15. Here a four-digit binary number can hold the magnitude, and an additional first digit can designate whether the value is positive or negative. So 10110 would be -6, and 01111 represents 15. Crossover and mutation are easily done with binary numbers, and the encoding scheme covers all possible values. The only drawback in this example is that the sign bit is at the beginning of the value, and therefore the first parent will always contribute the sign during crossover.
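A sketch of this sign-magnitude encoding, assuming the first character of the string is the sign bit as described above (the helper names are my own, not from the paper):

```python
import math

def decode(bits):
    # First bit is the sign (1 = negative); remaining four bits are the magnitude.
    sign = -1 if bits[0] == "1" else 1
    return sign * int(bits[1:], 2)

def encode(x):
    # One sign character plus a zero-padded four-bit magnitude.
    return ("1" if x < 0 else "0") + format(abs(x), "04b")

def g(bits):
    # The function being maximized: G(x) = sin(x^3).
    return math.sin(decode(bits) ** 3)
```

With this scheme, decode("10110") gives -6 and encode(15) gives "01111", matching the examples above.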

An example of a more complicated case is given by a game known as the Prisoner's Dilemma, in which the two players are prisoners being held separately who must each choose whether to defect and betray the other player or to cooperate. If both players defect, both are punished; if one cooperates and the other defects, the defector goes free and the cooperator is punished; and if both cooperate, both receive mild punishments. In this situation a simple string of binary numbers won't provide for a chance at intelligent play. Axelrod encoded his genetic algorithms to play the Prisoner's Dilemma with an emphasis on strategy. In order to do this, each individual is given a "memory" of the last three moves played against the opponent. Figure 2 gives an example of one of Axelrod's individuals. [3]

All possible move histories    Move to make this round
(CC)(CC)(CC)                   C
(CC)(CC)(CD)                   D
(CC)(CC)(DC)                   C
(CC)(CC)(DD)                   D
(CC)(CD)(CC)                   C
(CC)(DC)(CC)                   C
...and so on

C = Cooperate, D = Defect
Starting history: (DC)(CD)(CC)

Figure 2: Axelrod's Prisoner's Dilemma individual

Each individual contains a D or C for the move to make in each of the 64 possible histories that lead up to the current move, plus 6 characters that describe what the genetic algorithm should use as its history during the first move, when it has no data on which to base its decision. Individuals can thus be represented by a 70-character string of D's and C's, each of which has a special meaning based on its position. Again, this string of characters makes crossover and mutation easy, because it has a set length and only valid individuals can be made using this method.
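One plausible way to index such a 70-character individual is to treat each remembered round as a base-4 digit. The exact bit ordering Axelrod used is not given here, so the layout below is an assumption for illustration:

```python
def pd_move(individual, history):
    # individual: 70 chars of 'C'/'D'; the first 64 are moves, one per possible
    # 3-round history, and the last 6 encode the assumed starting history.
    # history: list of 3 (my_move, opponent_move) pairs, oldest round first.
    index = 0
    for my, opp in history:
        # Each round contributes one base-4 digit: CC=0, CD=1, DC=2, DD=3.
        index = index * 4 + (my == "D") * 2 + (opp == "D")
    return individual[index]

def starting_history(individual):
    # The trailing 6 characters pair up into the three assumed opening rounds.
    tail = individual[64:]
    return [(tail[i], tail[i + 1]) for i in (0, 2, 4)]
```

Under this layout, the all-cooperate history (CC)(CC)(CC) maps to index 0 and (CC)(CC)(CD) to index 1, matching the first two rows of Figure 2.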

Another encoding example, a game in which both a strategy and a sense of the world around the individual must be taken into account, is a simulation of an animal trying to eat plants in order to survive into the next generation. The animals are called Eaters, and each of them is aware of what is in the space directly in front of it and must base its decision of what to do on that data. Each Eater also stores a state as an integer between 1 and 16; each time the Eater makes a move, it also changes its state so that it can carry out more complex actions. For example, the Eater shown in Figure 3 is in state 1 and there is a plant in front of it. From its encoding it would move forward and then change to state 16. After moving forward, there is now a wall in front of it, so it follows its code, tries to move forward, and then goes to state 13. Due to their states, Eaters can make more complicated routines and avoid things like turning around endlessly while searching for plants. In one trial of this simulation all the plants grow in clumps, and the most successful Eaters are the ones that recognize this pattern and search for groups of plants to circle around and gather. In another trial all the plants grow in one small area, and Eaters must be intelligent enough to search the whole space, find the plants, and stay in that area afterwards. [4]

Figure 3: Encoding of an Eater
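The exact contents of the figure are not recoverable here, but an Eater genome of this shape can be sketched as a lookup table from (state, object in front) to (action, next state). The action and object names are hypothetical; only the 16-state structure comes from the text above:

```python
import random

ACTIONS = ["forward", "left", "right"]      # illustrative move set
OBJECTS = ["empty", "plant", "wall"]        # what can occupy the space ahead
STATES = range(1, 17)                       # the 16 internal states

def random_eater():
    # One (action, next_state) gene for every (state, object) situation,
    # giving a randomly created initial individual.
    return {(s, o): (random.choice(ACTIONS), random.choice(list(STATES)))
            for s in STATES for o in OBJECTS}

def eater_step(genome, state, in_front):
    # Look up what to do and which state to enter next.
    return genome[(state, in_front)]
```

Because every gene is drawn from the legal action and state sets, crossover over these tables can never produce an invalid Eater.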

Another issue when encoding an initial population is ensuring that all members of that population are valid. An example of this is the Knapsack Problem, in which the purpose is to fit as much value of items as possible into a given knapsack without going over its size constraint. Each individual is a binary string, with each one or zero declaring whether an item will be put in the knapsack or not. If we tried to randomly create a binary string, it could generate a solution that tries to put everything in the knapsack. This would be the highest-value solution, but it would be illegal. This does not just affect the initial population, either, since crossover afterwards could also make illegal individuals. There are three ways to deal with possible illegal individuals: penalties, repairs, and decoders. [5]

Penalties are the most straightforward way of dealing with illegal individuals: if an individual does not follow the rules, it is penalized in its fitness rating based on the level of infraction. These penalties can increase logarithmically, linearly, or exponentially as the bounds are exceeded, depending on the problem. In Michalewicz's use of penalties on the Knapsack Problem, he noted that penalties produced the best results in cases where the knapsack was made to fit about half the weight of the items, but could not come up with a legal result when the knapsack was made small enough to hold only one or two items. This stresses again the need to tailor the use of genetic algorithms to the particular problem at hand.
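A minimal linear-penalty fitness for the knapsack encoding might look like the following; the penalty rate is an illustrative parameter, not a value from Michalewicz:

```python
def knapsack_fitness(bits, values, weights, capacity, penalty_rate=2.0):
    # Sum the value and weight of every item whose bit is set.
    value = sum(v for b, v in zip(bits, values) if b)
    weight = sum(w for b, w in zip(bits, weights) if b)
    # Linear penalty: subtract penalty_rate for every unit over capacity,
    # so overfull knapsacks score progressively worse but are still comparable.
    overweight = max(0, weight - capacity)
    return value - penalty_rate * overweight
```

Swapping in `overweight ** 2` or a logarithm of the excess gives the exponential and logarithmic variants mentioned above.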

Repair follows the concept that individuals can be randomly made in the initial population and crossover can be done as normal, but that a repair will have to be done afterwards in order to make all the individuals legal again. A repair for the knapsack population could randomly drop items from the knapsack until everything left fits, or it could do additional calculation and drop items with a low value-to-weight ratio. Repair is very effective with knapsacks of all sizes, and should be considered in situations where illegal individuals may arise.
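A repair pass of the second kind — dropping the worst value-to-weight items until the contents fit — can be sketched as:

```python
def repair(bits, values, weights, capacity):
    bits = list(bits)
    chosen = [i for i, b in enumerate(bits) if b]
    # Order the chosen items so the lowest value-to-weight ratio comes first.
    chosen.sort(key=lambda i: values[i] / weights[i])
    total = sum(weights[i] for i in chosen)
    # Drop the least efficient items until the knapsack is legal again.
    while total > capacity and chosen:
        worst = chosen.pop(0)
        bits[worst] = 0
        total -= weights[worst]
    return bits
```

Replacing the sort with a random shuffle gives the simpler "randomly drop items" repair also described above.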

The decoder method interprets individuals in such a way that an illegal one cannot be created. In the knapsack example, instead of a binary representation this could be done with a list of numbers, each corresponding to an item in the list. For example, the list of items could be {A, B, C, D, E, F} and the individual could be {1, 3, 6, 2}. This individual would grab item A (the first item in the list), item D (the third item in the remaining list), item C (using wraparound, since only four items remain), and item E (the second item remaining). Before putting each item into the knapsack, the system checks whether that would overfill the sack. In this way the interpretation of the instructions changes so that no individual is illegal, whether created randomly or through crossover.
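The wraparound interpretation can be written out directly; this sketch reproduces the {1, 3, 6, 2} walk-through above:

```python
def decode_knapsack(indices, items, weights, capacity):
    remaining = list(items)
    packed, total = [], 0
    for k in indices:
        # 1-based index with wraparound over however many items remain.
        item = remaining.pop((k - 1) % len(remaining))
        w = weights[item]
        # Skip any item that would overfill the sack, so no choice is illegal.
        if total + w <= capacity:
            packed.append(item)
            total += w
    return packed
```

With items A-F and the individual {1, 3, 6, 2}, the decoder picks A, D, C, and E in that order, as in the example.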

Determining Fitness

Genetic algorithms rely on improving on the performance of their parents in order to move towards the optimal solution. For this to happen, there must be a rating of how much better an individual in the population is compared to the others. This rating is the fitness of an individual, and it tries to give a high score to solutions near the ideal solution and a low score to hopeless solutions. A fitness evaluation for a game might involve each of the algorithms playing one another and tracking their number of wins versus losses. For a maximization problem, each individual might yield a number, and the higher the number, the higher the fitness should be. If the difference in fitness between the average individuals and the highest individuals is too large, this can create "super individuals" [6] which are chosen frequently during the next generation and can wipe out diversity in the population of solutions. An example of this is an individual with a fitness of 5 when the average individual has a fitness of 1. In this case it may seem reasonable at first to bias the next generation towards the path of that individual, but if the optimal solution is in the tens of thousands, then the difference between 1 and 5 is insignificant and should not bias the population and lose the diversity of the other solutions.

In these situations, scaling the fitness may allow the population to more accurately reflect how much better one solution is than another. This scaling can be linear, where a constant is added to all fitness values, or it can restrict the fitness to be within a given range of the average or a number of standard deviations. In the case of fitnesses 1 and 5, a linear shift of 5000 would yield that 5005 is only 0.08% better than 5001, instead of 5 being five times better than 1. Using averages to help dictate fitness values can also help to keep them comparable, by making the maximum fitness two to three times the average fitness. Choosing a successful fitness evaluation system is a difficult choice between favoring algorithms that are currently good performers and favoring genetic diversity that may help in generations to come.

Other systems may make it a point to stress diversity by decreasing the fitness of individuals that are very similar. This allows solutions that are not looking in the same area for a solution to compete in fitness even though their performance may not currently be paying off.
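The shift in this example is simple to express; `relative_advantage` is a hypothetical helper for comparing two fitnesses:

```python
def scale_by_shift(fitnesses, shift):
    # Linear scaling: adding a constant to every fitness shrinks the
    # relative differences between individuals.
    return [f + shift for f in fitnesses]

def relative_advantage(a, b):
    # How much better fitness a is than fitness b, as a fraction of b.
    return (a - b) / b
```

Shifting the fitnesses 1 and 5 by 5000 turns a fivefold gap into a difference of well under a tenth of a percent, which keeps a moderately good individual from dominating selection.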

Selecting the Next Generation

The best individuals for current performance and genetic diversity must be determined and selected to compete in the next generation of the population. There are many ways to select the members of the next generation, and each affects diversity and selective pressure in its own way.

One of the oldest and most popular methods is known as roulette selection, in which there is a wheel with each genetic algorithm getting space on the wheel proportional to its fitness. The wheel is spun n times, where n is the size of the new population. If the wheel lands on an individual, it is chosen as one of the members of the next generation. This leads to having relatively more of the higher-ranking algorithms, and often a low-ranking algorithm may get no individuals in the next generation. This method is not healthy for genetic diversity, and it could generate all individuals from one super-individual. A closely related method is stochastic universal sampling, which uses a roulette wheel with n pointers evenly placed around it. The wheel is spun once, and each pointer landing in an individual's area gives that individual a member in the new population. This has the advantage of limiting the effects of chance. [6] [13]

Figure 4: Roulette Wheel (left) and Stochastic Universal Sampling (right)
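Both wheel-based schemes can be sketched as follows, assuming each population member arrives as a (fitness, individual) pair:

```python
import random

def roulette(scored, n):
    # Spin the wheel n times; each individual occupies an arc proportional
    # to its fitness, so high scorers are drawn more often.
    total = sum(f for f, _ in scored)
    picks = []
    for _ in range(n):
        spin = random.uniform(0, total)
        cumulative = 0.0
        for f, ind in scored:
            cumulative += f
            if spin <= cumulative:
                picks.append(ind)
                break
    return picks

def stochastic_universal_sampling(scored, n):
    # One spin, n evenly spaced pointers: the same wheel, but with far
    # less variance than n independent spins.
    total = sum(f for f, _ in scored)
    step = total / n
    start = random.uniform(0, step)
    picks, cumulative, i = [], scored[0][0], 0
    for pointer in (start + k * step for k in range(n)):
        while pointer > cumulative:
            i += 1
            cumulative += scored[i][0]
        picks.append(scored[i][1])
    return picks
```

With fitnesses 3 and 1 and four pointers, stochastic universal sampling reliably hands out three slots and one slot, where four independent spins could give the weaker individual anywhere from zero to four.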

Another simple model is ranking, where each individual is put in order by fitness and then given a predetermined number of children based on its rank. The top 25% may get 3 individuals, the next 25% may get 2, the next 25% 1, and the remaining 25% may get 0. This can help keep the population from being created from one individual, while keeping pressure on by dropping the worst 25% of performers each time.

Another popular method for selection is the tournament, in which, for each member of the new population, two members of the old population are selected and the one with the higher fitness gets that slot in the new population. This has a diversity advantage in that even low-ranked algorithms can still make it into the population if their competition is even worse than they are. Highly ranked individuals are likely to make it into the next generation, but should not completely overwhelm the other algorithms in numbers. This is considered to give a good balance of diversity and pressure.
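A two-way tournament is only a few lines; as above, individuals are assumed to arrive as (fitness, individual) pairs:

```python
import random

def tournament(scored, n):
    # For each slot in the new population, draw two distinct members at
    # random and keep the fitter of the pair.
    picks = []
    for _ in range(n):
        (fa, a), (fb, b) = random.sample(scored, 2)
        picks.append(a if fa >= fb else b)
    return picks
```

Note that the single worst individual can never win a pairing, so it is silently dropped, while every other individual keeps some chance of a slot.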

A radically different method of selection is known as aging, which gives an algorithm a certain number of generations to live based on its fitness and allows it to produce children for as long as it lives. At the end of the algorithm's time to live, it dies and must depend on its children to pass on any advances it may have made. The time to live is contrived so that there is a small initial population, a large intermediate population, and a small number of solutions that survive to the very end. [8]

Another model of selection is the crowding model, in which selection is combined with crossover and mutation, and similar children replace their parents in the next generation. This prevents exact copies of one individual from taking up many spaces in the new generation, as they are likely to do with the roulette wheel or other selection methods.

A method that seems contrary to diversity is called elitism, in which the top few of the population are always picked automatically and placed into the next generation without crossover and mutation, in order to preserve the best solution from this generation. Keeping one or two individuals from the last generation should not hurt diversity as much as other factors could, and if the best individual has already been found, this ensures that it is not lost through crossover and mutation later. [14]

Crossover and Mutation

Crossover tries to combine good portions of two individuals to create a successful hybrid of the two approaches. Mutation makes small random changes in an individual to see if a small change could result in a better solution. Together these two methods keep the population from growing stagnant.

One of the most basic crossovers is the single-point crossover, as shown in Figure 5. Single-point crossover is easy, but it can be very dangerous to good solutions because it always takes the beginning of one parent and the end of the other. If the ideal solution requires some code from the beginning and some code from the end of the sequence, single-point crossover will kill it. For this reason, using more than one crossover point is recommended, because it helps to keep these other solutions alive. There is also a method called uniform crossover, in which each pair of genes (two binary bits in this case) is randomly chosen as being from parent 1 or parent 2. Choosing between uniform crossover and multiple-point crossover depends on how much the genes that are near one another interact with each other. High gene interaction may mean that uniform crossover destroys unique solutions.

Figure 5: Example of Single-Point Crossover

During crossover there is also a choice of controlling which algorithms are allowed to breed with which other algorithms. The two opposite ways to control this are called speciation and incest prevention. Speciation allows only similar algorithms to breed with each other, to prevent super-individuals from destroying unique solutions. Incest prevention keeps similar algorithms from breeding, to prevent minor improvements from being lost by being crossed over again with a similar parent. [12]

Mutation is much friendlier towards diversity because it is generally set to change only one value in a sequence and is unlikely to destroy progress. If the frequency of change is too high, however, mutation becomes a source of chaos for the algorithms, because it randomly changes bits with no reasoning as to why it changes them. If mutation is done correctly, however, it can be used without crossover as a way to get a high-diversity group of algorithms to come to a solution.
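The crossover variants and bit-flip mutation described above can be sketched for bit-list individuals; the per-bit mutation rate is an illustrative default:

```python
import random

def single_point_crossover(a, b):
    # Cut both parents at one random point; splice the head of a to the tail of b.
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def two_point_crossover(a, b):
    # Two cut points let the middle of one parent land inside the other,
    # so solutions needing both ends of a single parent can survive.
    i, j = sorted(random.sample(range(1, len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def uniform_crossover(a, b):
    # Each gene is chosen independently from parent a or parent b.
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(bits, rate=0.01):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]
```

Crossing an all-zeros parent with an all-ones parent makes the cut structure visible: single-point gives a block of zeros followed by a block of ones, while two-point leaves a single block of ones in the middle.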

Uses of Genetic Algorithms

The real challenge in the field of genetic algorithms is finding new uses that they are well suited to handle. They have been used extensively in optimization problems, where the maximum value of something is found through a combination of variable values or components. The goal of the genetic algorithm in this situation is to find the maximum over the entire search space, and not to find a local maximum and assume that it is the global maximum. Other algorithms are also used for optimization, most notably hill-climbing and simulated annealing. Hill-climbing takes several samples and changes the input in order to move up the slope until a maximum is found. Simulated annealing is like hill-climbing, but it is willing to go downhill temporarily to search for a possible maximum solution. In many situations genetic algorithms are found to search more of the graph space without taking the time to try every possible value in that space. Figure 6 shows a graph where hill-climbing and simulated annealing would often go towards a local maximum and be unable to find the global maximum due to their prejudice towards going uphill.

Figure 6: Optimization Graph Favoring Genetic Algorithms

An example of this is an antenna developed by NASA that needed to have the best reception while still fitting in a 2.5 cm x 2.5 cm space. A fitness function could be based on the known physics of antennas and the type of transmissions to be received. After the algorithm was set up, it took 10 hours to come up with the optimal design given the space constraints. Once the framework is in place, it is relatively easy to adjust if the specifications change, as noted by the antenna design team: "Following the first design of the ST5 satellite antenna, NASA Ames scientists used the software to 're-invent' [...] specifications - a very quick turn-around in the space hardware redesign process." [10]

Genetic algorithms can also be used in conjunction with other methods, such as expert systems or neural networks.

An expert system is one in which a person or a system knows what decision should be made in a certain situation where there are no solid rules defining what should be done. An example would be a loan officer deciding whether a person's loan should be approved or denied. The expert system tries to capture all the possible variables that the expert uses to make a decision and automate the system. In this case genetic algorithms can be used to test the system by trying to evolve problems that cause the expert system to break down. There was an instance of an expert system, created by the Naval Research Laboratory, to control a coal-fired steam power plant. After making a genetic algorithm to test it, they found: "the GA successfully exposed an error in the expert system which had not previously been detected by designers or operators over three years of actual use". [11] These errors could usually be detected through the use of formal logic, but only at great expense of time for large systems.

A neural network is a system that tries to duplicate the learning and responses of the human brain, using artificial neurons with given weights that change during training and, after training, reflect what the network has learned. Genetic algorithms can be used to determine what these weights should be, more accurately than other classical training methods. [15]

Conclusion

Genetic algorithms are a valuable tool that requires careful deliberation during the creation of the framework. Once this framework is made, the genetic algorithm can be very helpful in finding a solution in a dynamic environment. Genetic algorithms are not always the easiest or best solution, but for the problems they are suited to, they can come up with very good answers.
References

[1] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 16

[2] Whitby, Blay (2003). Artificial Intelligence: A Beginner's Guide. Oxford, England: Oneworld Publications, pg. 62

[3] Axelrod, R. (1987). Genetic Algorithms for the Prisoner Dilemma Problem. pg. 32-41

[4] Eck, David (2001). A Demonstration of the Genetic Algorithm. Retrieved April 3rd, 2005, from Hobart and William Smith Colleges, Department of Mathematics and Computer Science Web site: http://math.hws.edu/xJava/GA/

[5] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 82-85

[6] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 58-59

[7] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 65-66

[8] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 73

[9] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 90

[10] Ames Research Center Press Release (2004). NASA Evolutionary Software Automatically Designs Antenna. Retrieved April 3rd, 2005, from http://www.spaceref.com/news/viewpr.html?pid=14394

[11] Wildburger, A. Martin (1999). Current and Future Applications of Intelligent Techniques in the Electric Power Industry. In Jain et al. (eds.). Boca Raton, Florida: CRC Press LLC

[12] Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Germany: Springer-Verlag Berlin Heidelberg, pg. 58

(2001). Intelligent Systems for Engineers and Scientists. Boca Raton, Florida: CRC Press LLC, pg. 179

(2001). Intelligent Systems for Engineers and Scientists. Boca Raton, Florida: CRC Press LLC, pg. 184