Game Design Algorithms

Contents

1) Pathfinding
   1.1) Preface
   2.1) Introduction
        Node/vertex
        Edge
        Weight
   3.1) Dijkstra's Algorithm

2) Genetic Algorithms
   4.1) Introduction

3) Genetic Algorithms: Implementation Tradeoffs
   5.1) Introduction
   6.1) Overview
   7.1) Chromosome Implementation
        Convenient Range Values
        More Interesting Range Values
   8.1) Genetic Operators
        Crossover
        Inversion
        Mutation
   9.1) Selection
        Roulette Wheel
        Tournament Selection
        Thresholding
   10.1) Fitness
   11.1) Definitions
   12.1) About the Author
   13.1) References

4) Finite State Machines
   14.1) Introduction
   15.1) Applications of FSM
   16.1) What is a FSM?
   17.1) Example - Robotics
   18.1) Example - Code
   19.1) Conclusion
   20.1) Glossary








1)


Pathfinding





Russell Long

16/07/2003


1.1)

Preface

This will be my first tutorial for Devmaster.net, and certainly not my last. Bear with my articles; they will have mistakes, but they will be rectified. Before I go on, I'd like to thank Gamedev.net and also Game Tutorials for their informative articles.

Note: These articles won't use the most efficient data structures, but rather the ones that are relatively easy to implement. Using the information and your brain, you should easily be able to improve the code!

2.1)

Introduction

I see algorithms expressed as mathematical formulas all the time, yet I always find them difficult to break down and digest, so these articles will veer away from that. In essence they'll be rather wordy and simplistic, but will gradually increase in complexity.

First off, some terminology you'll need to know:


Node/vertex

These are your destinations. If you've ever played Civilisation, each tile is a node; alternatively, a computer on a network can be seen as a node.

Edge

These are what connect nodes/vertices together. A single edge or several edges combine to make a path.

Weight

This is the cost of an edge that connects two nodes. Usually this is decided by the distance between the two nodes, but it can also be modified by other factors.

This can be pictured by looking at terrain. A node at the bottom of a mountain with an edge connecting to a node at the top of the mountain will have a large weight (hard to get there!), but two nodes on flat, even terrain will have an edge with a low weight. Other considerations: not all nodes will connect together, and a node can only connect to another once (i.e. they share only one edge). This article won't comment on how weighting should be made and considered; that's a design issue for you to ponder.

3.1)

Dijkstra's Algorithm

Dijkstra's algorithm finds the shortest path from the source/root node (this can be seen as a player's or unit's position) to all the nodes, using a greedy approach (i.e. it doesn't backtrack; it sees the best option, takes it and doesn't look back). Onto what we know so far.

We have a series of nodes, with edges. This is our first consideration: we need to know which nodes are connected before we start (a limitation of Dijkstra's algorithm). For this article we'll use a 2D array, in which we store the edge weights. The maximum number of edges that could possibly exist would be:


Number of Nodes * Number of Nodes

So we initialise our array size to:


EdgeWeights[MAX_NODES][MAX_NODES]

Next we populate the array, i.e.:

EdgeWeights[node1][node2] = 7; // nodes 1 and 2 are connected by an edge of weight 7

EdgeWeights[node2][node3] = 4; // nodes 2 and 3 are connected by an edge of weight 4

But what if no edge exists between two nodes? Then you'd use a very large number, treated as infinity, to indicate that no edge exists. That is:

EdgeWeights[node1][node3] = 999999999999999;

This data should be created automatically from within another part of your program, such as a heightmap (e.g. you have lots of canyons, and use this algorithm to find the quickest route to a point on the other side of the map).

Next we need another array, initialised to:

SpecialDistance[MAX_NODES] = 999999999999999;

IMPORTANT: At this point, set the SpecialDistance of our root node/vertex (the player's position) to 0! This is essential and will be explained later. And another:

Predecessor[MAX_NODES] = 0

This array stores which node we came from, i.e.:

Predecessor[4] = 2; // would represent that we are at node 4 and
                    // that we got to node 4 from node 2.

We also need a priority queue ordered by SpecialDistance[MAX_NODES]. The priority queue works like this: add all of the 'SpecialDistances' to a queue, then arrange it into a priority queue by moving those with the lowest values to the front (the lower the value, the higher the priority). Every time we process a node we remove it from the queue and update.

So this means we have the following arrays:


EdgeWeights[MAX_NODES][MAX_NODES];


SpecialDistance[MAX_NODES];


Predecessor[MAX_NODES];

The PriorityQueue of SpecialDistance[MAX_NODES] should have our start position at the front (since we set its value earlier to 0!). Now for the guts of the algorithm:

While (priority queue not empty)
    CurrentNode = node with highest priority
    Remove CurrentNode from the priority queue
    For every node with an edge to CurrentNode that is still in the priority queue
        If SpecialDistance[node] > SpecialDistance[CurrentNode] + EdgeWeights[CurrentNode][node] then
            SpecialDistance[node] = SpecialDistance[CurrentNode] + EdgeWeights[CurrentNode][node]
            Set Predecessor[node] = CurrentNode
    Reorder the priority queue

SpecialDistance is the shortest path distance found so far from the root/source to a particular node. If another path to that node is found with a lower value, then the lower value becomes the new special distance. Confused still? Here are some step by step diagrams to show what happens. In the diagrams we have 9 nodes in an obvious shape. Node number (1) is our root/source node. Follow the diagrams and try to relate them to the pseudocode:

Red node indicates current node,
Blue line indicates neighbours tested,
Red text indicates special value,
Black numbers indicate edge weight


We're at the root/source node; our neighbours are 2 and 4. We check their SpecialDistance value, which is infinity, and check if we have a better value:

If SpecialDistance[2] > SpecialDistance[1] + EdgeWeights[1][2]

In this code we know that SpecialDistance[2] = infinity and that SpecialDistance[1] = 0, so only the edge weight to the neighbour matters in this case. So SpecialDistance[2] = 0 + 1. The same approach applies to node 4:


The priority queue has set the current node to number 2, since it has the lowest SpecialDistance value. So

If SpecialDistance[3] > SpecialDistance[2] + EdgeWeights[3][2]

results in SpecialDistance[3] == infinity and SpecialDistance[2] == 1, giving us SpecialDistance[3] = 1 + 2.


Same approach as above; nothing fundamentally different.


This is where the greedy aspect of the algorithm kicks in, and why the SpecialDistance value is important. Node 5 has an edge both to node 4 AND node 2, but the path through node 2 has been dumped in favour of the edge connecting to node 4. Why? Let's look at the code and see. We know that node 5 already has a SpecialDistance value of 6 set, but we never moved to node 5 since its high SpecialDistance value gave it a low priority. Since we moved to node 4, however, this has happened:

If SpecialDistance[5] > SpecialDistance[4] + EdgeWeights[5][4]

6 > 3 + 2

Thus we reset the SpecialDistance value of 5 and set its predecessor to 4. Hopefully you can follow the rest of the diagrams yourself.
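
To tie the pieces together, here is a compact sketch of the whole algorithm in C++, using the arrays described above. The array names follow the article; the use of std::priority_queue (pushing updated entries and skipping stale ones, rather than reordering in place) is my own implementation choice:

    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    const int MAX_NODES = 9;                     // matches the 9-node diagrams
    const long long INF = 999999999999999LL;     // the "infinity" sentinel from above

    long long EdgeWeights[MAX_NODES][MAX_NODES]; // INF wherever no edge exists
    long long SpecialDistance[MAX_NODES];
    int Predecessor[MAX_NODES];

    void dijkstra(int root) {
        for (int i = 0; i < MAX_NODES; ++i) {
            SpecialDistance[i] = INF;
            Predecessor[i] = 0;
        }
        SpecialDistance[root] = 0;               // IMPORTANT: the root starts at 0

        // Min-priority queue of (distance, node): lowest value = highest priority.
        typedef std::pair<long long, int> Entry;
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > pq;
        pq.push(Entry(0, root));

        while (!pq.empty()) {
            long long dist = pq.top().first;
            int current = pq.top().second;
            pq.pop();                            // remove the node from the queue
            if (dist > SpecialDistance[current])
                continue;                        // stale entry; a better path was found

            for (int node = 0; node < MAX_NODES; ++node) {
                if (EdgeWeights[current][node] == INF)
                    continue;                    // no edge between current and node
                long long candidate = SpecialDistance[current] + EdgeWeights[current][node];
                if (candidate < SpecialDistance[node]) {
                    SpecialDistance[node] = candidate;
                    Predecessor[node] = current; // we reached node via current
                    pq.push(Entry(candidate, node));
                }
            }
        }
    }

To recover the actual route to any destination, walk Predecessor[] backwards from that node to the root.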






2)

Genetic Algorithms




TheCell

21/07/2003


4.1)

Introduction

Genetic algorithms... What are they used for? Well, for one interested in game development, they might help in creating better A.I. They can help you tweak your choice settings in a real-time strategy game so that a computer-controlled opponent might evolve, choose its actions better, learn and so on. This is my first article ever, so if you feel that something is missing or unclear, please contact me (TheCell61 at hotmail dot com) so I can update it and hopefully make it better. This article is intended for people who have little or no knowledge about genetic algorithms and genetic programming. So what is a genetic algorithm exactly? Well, it's simply doing things Mother Nature's way: only the fittest may survive. So, how can we translate this into computer language? Well, it's better if we first take a look at how Mother Earth does it. And the best way is to tell you a little story.

Once upon a time, there was a little land called Esras. In this land, there were two kinds of creatures: plants and herbivores. Herbivores wandered around, eating food as they found it. But someday, the plant supply started to run low, and so they had to travel further. Those who didn't travel died of starvation, and so only those who travelled survived and had a chance to reproduce. But since they travelled a lot more than their relatives, they developed leg muscles. Soon they were again running low on food, and thus they travelled again. But some were faster than the others, and they gained an advantage: they were the first to get to new plants, and so they were able to eat more than the others, and thus survive much longer. Those who were too slow to get to food eventually died. And so on...

So, Mother Nature's way is simply a way to get the best out of us, allowing us to survive and eventually mate. But how does this apply to computers? Well, it's (relatively) simple. When a human baby is born, he/she inherits the parents' chromosomes: 23 from the mother, 23 from the father. The chromosomes are a set of genes which define hair colour, eye colour, etc., and a gene is an array of data. So, a chromosome is an array of genes, and a gene is an array of data. A gene looks like:

(11100010)

and a chromosome looks like (in binary data):

Gene1      Gene2     Gene3     Gene4
(11000010, 00001110, 00111010, 10100011)

Each gene represents some data (eye colour, hair colour, sight, etc.). So, when a baby is born, it's a simple exchange of information from both the mother and the father. And sometimes, mutation occurs. A mutation is simply a reversed bit in a gene. This way, we ensure diversity in the gene pool. Let me give you a concrete example from a computer perspective. You have a pool of creatures that you want to evolve. You have a gene value for Sight, Speed and TotalEnergy. Once the chromosomes are decoded, each gene is read as follows:

1st Chromosome | 2nd Chromosome
Sight   Speed  | TotalEnergy
(0110   0110)  | (1100)

So now, we have a creature with defined attributes in the form of chromosomes. But how do you make it evolve to become better? Well, that's the trickiest part. You need rules to define what makes an animal the fittest. Usually, in artificial life, it's simple: it's fit if it's still alive. But sometimes it can be very complex. A very simple rule in a real-time strategy game might be that if a computer opponent takes you more time to beat than another, that opponent has a better chance to mate, and thus to produce a better opponent. But let's keep it simple, and just do it for survival: if a life-form survives, it must be fit. But if we only destroy the weak and don't reproduce, we'll only destroy our artificial life. Hence the need to reproduce. As in the little story mentioned above, we must do some exchange to ensure that we evolve, and get the best of each artificial life. And to do this, we must exchange information about each gene. To do so, we perform a cross-over on the chromosomes.


Cross-Over:

                 v
1st Chromosome | 2nd Chromosome
(Sight  Speed) | (TotalEnergy)
(0110   0110)  | (1100)

So, when two artificial life forms mate, we exchange information.

First parent's 2 chromosomes:

1stChromo  | 2ndChromo
(01100110) | (1100)

Second parent's 2 chromosomes:

1stChromo  | 2ndChromo
(00101111) | (0110)

When those two mate together, we create 2 children with a mixed set of chromosomes:

Baby1's 2 chromosomes:

1stParent  | 2ndParent
1stChromo  | 2ndChromo
(01100110) | (0110)

Baby2's 2 chromosomes:

2ndParent  | 1stParent
1stChromo  | 2ndChromo
(00101111) | (1100)

Mutation appears as a gene being slightly modified. So, Baby1 could get no mutation, but Baby2 could get a mutation. Baby2's set of chromosomes may look like this after mutation:

Baby2's 2 chromosomes:

1stChromo  | 2ndChromo
(00101111) | (1101)
                 ^
                 |

This mutation ensures diversity in the gene pool, and permits some new twists when the genes look too much alike.

What we've done here is simply take the first chromosome of each parent and mix it with the second chromosome of the other. Notice that the first chromosome is composed of 2 genes, while the second chromosome is composed of only one gene.
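
As a quick illustration, here is a minimal sketch of the whole-chromosome exchange and bit-flip mutation described above. The bit widths match the Sight/Speed/TotalEnergy example; the 1% mutation rate and the function names are my own assumptions:

    #include <bitset>
    #include <cstdlib>

    // A creature carries two chromosomes: (Sight, Speed) and (TotalEnergy).
    struct Creature {
        std::bitset<8> chromo1; // Sight (4 bits) + Speed (4 bits)
        std::bitset<4> chromo2; // TotalEnergy
    };

    // Mating: each baby takes chromo1 from one parent and chromo2 from the
    // other, exactly as in the Baby1/Baby2 example above.
    void mate(const Creature& p1, const Creature& p2,
              Creature& baby1, Creature& baby2) {
        baby1.chromo1 = p1.chromo1; baby1.chromo2 = p2.chromo2;
        baby2.chromo1 = p2.chromo1; baby2.chromo2 = p1.chromo2;
    }

    // Mutation: with a small probability (assumed 1% here), flip one random bit.
    template <size_t N>
    void mutate(std::bitset<N>& chromo) {
        if (std::rand() % 100 == 0)
            chromo.flip(std::rand() % N);
    }

After mate() runs, mutate() would be called on each chromosome of each baby, which is how Baby2 picked up its flipped bit.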

Now we put all of this together and apply it to artificial life. We have a pool of artificial life-form creatures (called AL from now on). We put them in an environment for which we define the rules as follows. Each creature needs food to survive. Eyesight permits it to see food farther away. Speed permits it to get to food faster. MaximumEnergy is the maximum energy a creature can store. Then we define that if a creature's current energy ever gets to 0, the creature dies. If 2 creatures meet and their CurrentEnergy is over 50%, they mate and create 2 children. We assign a gene for each characteristic (Eyesight, Speed, and MaximumEnergy). In addition, we group the first two genes together to form a chromosome, and the last gene consists of a chromosome in itself. What will happen is that, at first, many AL will mate. Then, many will die due to lack of food. But those who stay alive are the strongest (or luckiest) of all AL, because they can store more energy, or they get to food faster, or see it first. Hence, they get a better chance to mate, and produce better creatures, and so on. Sometimes, what happens is that a creature might survive due to luck. Genetic algorithms are meant for optimization, but it's hard to get an exact maximum.

Well, that's the end of this article. I hope it helped you to better understand genetic algorithms in general, as it is simply a basic article. Feel free to contact me if you have any questions or feel that something is missing or unclear.

TheCell (TheCell61 at hotmail dot com)






3)

Genetic Algorithms: Implementation Tradeoffs





Jeff Nye

19/05/2004



5.1)

Introduction

The class of stochastic search strategies commonly classified by the name genetic algorithms provides a very powerful and general mechanism for searching complex solution spaces. Genetic algorithms find application in many NP-Hard and NP-Complete optimization problems. Essentially these problems do not solve nicely using strictly analytic methods, and therefore search strategies are typically applied.

In context, for example, deriving the connection weights in a path-finding neural network would qualify as a complex solution space for non-trivial neural networks.

There is a certain amount of vocabulary which must be accepted to make the following discussion concise yet understandable; please forgive. I have supplied brief definitions of these terms in the Definitions section. For more formal definitions consult [1] and [2].

6.1)

Overview

Genetic algorithms (GA) mimic the biological processes underlying classic Darwinian evolution in order to find solutions to optimization or classification problems. GA implementations utilize a population of candidate solutions (or chromosomes). Each chromosome in the current generation is evaluated using a fitness function and ranked. From the ranking, candidates are selected from which the next generation is created. The process repeats until either the number of iterations is exceeded or an acceptable solution is found.

Details of implementation are varied, but a generic view of a GA would include the following steps (a code sketch of this loop appears at the end of this Overview):

1. Initialization of the initial population (either randomly or from a best guess or previous partial solution).

2. Evaluation of the fitness function on each chromosome to determine ranking.

3. Application of the selection method on the population to determine mating rights.

4. Application of the genetic operators on the chromosomes selected for mating.

5. Return to Step #2.

The fitness function is a measure of how well a candidate solves the problem. An example of an optimization problem would be: find the minimum/maximum y given y = F(x), with y = F(x) being the fitness function.

Implementations vary in the choice and practice of the selection method; suffice to say that the purpose of the selection method is to choose candidates whose genetic mix will tend to lead to improved candidate solutions in the next generation. Examples of common selection methods include random, elitist, roulette wheel, tournament, etc.

Genetic operators provide mixing of chromosome portions from the parent or parents to form the offspring of the next generation. Examples of genetic operators include crossover, mutation, inversion, etc.

In the following paragraphs I deliberately describe the steps in reverse order. It is my feeling that, for good software design, especially performance-critical software, it is important to grasp the low-level details of the larger processes. Generally the design of the basic chromosome structures has more performance impact than the fitness function. You are usually stuck with the fitness function, while the organization of the chromosome affects the performance of the genetic operators. This is a broad statement which should be qualified by your particular application constraints.
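
To make the five steps concrete before diving into the details, here is a toy but complete GA in C++. The problem ("OneMax": maximize the number of 1 bits), the population size, and the operator choices are all my own illustrative assumptions, not prescriptions:

    #include <algorithm>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    typedef std::vector<int> Chromosome;

    const int POP_SIZE = 20, CHROMO_LEN = 16, MAX_GENERATIONS = 100;

    int fitness(const Chromosome& c) {               // Step 2: fitness function
        int sum = 0;
        for (size_t i = 0; i < c.size(); ++i) sum += c[i];
        return sum;
    }

    int main() {
        std::vector<Chromosome> pop(POP_SIZE, Chromosome(CHROMO_LEN));
        for (int i = 0; i < POP_SIZE; ++i)           // Step 1: random initialization
            for (int j = 0; j < CHROMO_LEN; ++j) pop[i][j] = std::rand() % 2;

        for (int gen = 0; gen < MAX_GENERATIONS; ++gen) {   // Step 5: iterate
            std::vector<Chromosome> next;
            while ((int)next.size() < POP_SIZE) {
                // Step 3: selection (tournament: the fitter of two random picks)
                const Chromosome* parents[2];
                for (int p = 0; p < 2; ++p) {
                    const Chromosome& a = pop[std::rand() % POP_SIZE];
                    const Chromosome& b = pop[std::rand() % POP_SIZE];
                    parents[p] = (fitness(a) > fitness(b)) ? &a : &b;
                }
                // Step 4: genetic operators (single-point crossover + mutation)
                int cut = std::rand() % CHROMO_LEN;
                Chromosome child(*parents[0]);
                for (int j = cut; j < CHROMO_LEN; ++j) child[j] = (*parents[1])[j];
                if (std::rand() % 1000 == 0) child[std::rand() % CHROMO_LEN] ^= 1;
                next.push_back(child);
            }
            pop.swap(next);
        }

        int best = 0;
        for (int i = 0; i < POP_SIZE; ++i) best = std::max(best, fitness(pop[i]));
        std::cout << "best fitness: " << best << "\n";
        return 0;
    }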

7.1)

Chromosome Implementation

One of the first issues that must be resolved when crafting your GA engine is the form of the basic chromosome. This determines the requirements and complexity of your genetic operators and directly affects the performance of those operators, which in turn drives one vector of the overall performance of the GA engine.

In part, your chromosome implementation will be driven by the type of problem you are solving. Is it number based: integer or real, and what are the precision requirements? Is it symbolic: arbitrary symbol length or fixed, encoded or pre-decoded? Etc.

Convenient Range Values

Consider a problem to find the minimum value of y for the following equation:

y = 100(x1^2 - x2^2)^2 + (1 - x1)^2   for -2047 ≤ xi ≤ +2048

This would be the fitness function for the problem. A possible representation of the x values would be two 12-bit binary values concatenated, giving a chromosome similar to:

AAAAAAAAAAAABBBBBBBBBBBB

While this seems like a very straightforward encoding scheme, there are subtle aspects that make straightforward encoding not necessarily the best solution. For statistical reasons it is often more advantageous to select a binary encoding where adjacent values have a Hamming distance of 1. Consider the difference between the encoded values of 1023 and 1024:

1023: 001111111111
1024: 010000000000

The straightforward binary encoding causes 11 of the 12 encoding bits to flip for a single increment of xi at the 1023/1024 boundary. This is the worst case, but similar discontinuities exist at every power of two.

Informally, think of the GA as making iterative and somewhat random changes to the encoded values of candidate solutions through the genetic operator process. However, due to the encoding scheme, incremental improvements to the real value of xi require drastic changes to the encoded form along the boundaries of integer powers of two.

I will make the unsupported claim that these discontinuities in encoding make it more difficult for the GA to converge, and point the reader to [3] for a derivation of the mathematics to support this assertion.

An encoding which has a Hamming distance of 1 between adjacent values is often a better implementation. Grey code is one example.
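
For reference, the standard conversion between plain binary and Grey code is tiny; this is the classic textbook construction, not something specific to this article:

    #include <cstdio>

    // Grey code of n: adjacent integers differ in exactly one bit.
    unsigned binaryToGray(unsigned n) {
        return n ^ (n >> 1);
    }

    unsigned grayToBinary(unsigned g) {
        unsigned n = 0;
        for (; g; g >>= 1)
            n ^= g;            // XOR-fold the shifted copies back together
        return n;
    }

    int main() {
        // The 1023/1024 boundary from the text: in Grey code only one bit flips.
        std::printf("%03x %03x\n", binaryToGray(1023), binaryToGray(1024));
        return 0;
    }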

More Interesting Range Values

But suppose your xi range is not conveniently represented as a binary value, perhaps

-6.283 ≤ xi ≤ +6.283

Now you have a real decision to make concerning representation. Depending on the capabilities of the hardware which runs your GA engine, it may be preferable to implement a fixed-point representation of x rather than floating point, or in some cases a lookup table may make more sense if even fixed point is not suitable.

In the case of fixed point, three bits are required for the integer portion and 9 bits are required for the fractional portion. So your chromosome may look something like:

SaAAAaaaaaaaaaSbBBBbbbbbbbbb

Sa, Sb represent the sign bits of x1 and x2;
A, B represent the integer portions of x1 and x2;
a, b represent the fractional portions of x1 and x2.
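
A minimal sketch of decoding one such 13-bit sign/integer/fraction field back into a real value; the bit layout follows the text, while the function name and packing order are my assumptions:

    #include <cstdio>

    // Decode a 13-bit field laid out as S AAA aaaaaaaaa.
    // 9 fractional bits give a resolution of 1/512.
    double decodeFixedPoint(unsigned bits) {
        unsigned sign = (bits >> 12) & 0x1;
        unsigned intPart = (bits >> 9) & 0x7;    // 3 bits: 0..7
        unsigned fracPart = bits & 0x1FF;        // 9 bits: 0..511
        double value = intPart + fracPart / 512.0;
        return sign ? -value : value;
    }

    int main() {
        // 0 110 010010001 decodes to 6 + 145/512, roughly 6.283.
        std::printf("%f\n", decodeFixedPoint((6u << 9) | 145u));
        return 0;
    }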

The genetic operator phase of the GA will quasi-randomly mix bits from one parent to the other to form the offspring. Thus, without constraint, it is possible that the resulting offspring could contain 111 in AAA. This is outside the range of the x1 variable and results in an unusable offspring.

Illegal candidates can be handled in many ways. One possible implementation is to allow the genetic operator to form illegal results (this simplifies the operator, making it faster) and then evaluate them for legality (this creates variability in the time it takes to create a valid offspring, often a problem in real-time situations like games).

An alternative approach to either a fixed or floating point representation would be a lookup table function, where specific binary (or Grey code) values represent specific real values:

0111111111111 equates to 6.283
0111111111110 equates to 6.282
0111111111101 equates to 6.281
...



This has the advantage of simplifying the genetic operator implementation, but retains the illegal value problem. However, it is possible to adjust the dynamic range in the lookup table automatically by adjusting the encoding such that a legal value occurs more than once, smoothing out the probability distribution to account for the over-representation of the FLOOR or CEILING values.

Dynamic range also comes into play as the GA begins to converge on a minimum or maximum, where the contents of the population become very similar, making effective ranking and mating selection problematic. Methods for adjusting dynamic range are covered further in the Fitness section.

Alternatively, in the fitness evaluation phase it is often easy to check the range of the variable and penalize the resulting fitness value such that the offspring will not be selected for mating in the next generation. This represents a simple (and predictable) but computationally inefficient solution (a random number of illegal offspring are contained in each new population).

This represents an architectural decision; for me these are the more interesting challenges of software design, approaching an art form. Usually the right choice is implementation dependent, with a complex set of tradeoffs. All things being equal, if you require faster convergence then culling in the operator phase might be more suitable. If you require predictable execution speed then culling in the fitness evaluation might be more suitable.

A direct floating point representation, where x1 and x2 are natively floating point values, has a similar problem, but some hardware platforms allow efficient use of FLOOR and CEILING functions which clamp values inside a specified range.

This eliminates the creation of illegal values, but possibly creates a dynamic range problem. In these cases, rather than the offspring being a constrained but statistically even distribution of the random values expressed by the parents' encoding, there is now a higher probability that a given set of offspring will contain values forced to the FLOOR (or CEILING) of the valid range.

Consider a four-bit field where only the values 0000 through 1000 are valid. This represents a 7-in-16 chance that a 4-bit random number generator will have its result FLOOR-ed to 1000. Overall, this means that 50% of the random number generator output values will be 1000 (7 in 16 values FLOOR-ed to 1000, plus the naturally occurring 1000 value), when ideally each value should have a 1-in-9 probability of occurring.


8.1)

Genetic Operators

Below I cover the classic GA operators. Genetic operator construction and implementation is a hot topic of GA research. There are many papers describing improvements to classic operators as well as some very interesting novel operators. Consider this a prime opportunity to use your creativity to improve the following:

Crossover

The crossover operator is applied to mating pairs based on probability. If the probability limit is reached then crossover occurs; otherwise the resultant offspring are simply copies of the parents. A commonly used crossover rate is about 0.75.

The mechanics of the classic single-point crossover operator are simple. Using the chromosome structure from the integer example, with C[0:3],D[0:3] representing the two fields of Parent 1 and E[0:3],F[0:3] likewise for Parent 2, we have the following:

Parent 1:       C0 C1 C2 C3 D0 D1 D2 D3
Parent 2:       E0 E1 E2 E3 F0 F1 F2 F3
Crossover point ---------------^

The resulting offspring will look something like:

Offspring 1:    c0 c1 c2 c3 d0 f1 f2 f3
Offspring 2:    e0 e1 e2 e3 f0 d1 d2 d3

Offspring 1 shares the values of Parent 1 up to the crossover point and Parent 2 after the crossover point. Conversely, Offspring 2 shares the opposite values of the parent pair.

There are many variations on offspring production. In what are termed constant-population GAs, two parents create two offspring. In other implementations the more fit of the two parents is permitted to continue into the next generation unmodified, along with the offspring created by the mating. There is also interesting work being done on variable-population GAs, which allow the population to grow and shrink based on some external value. One example would be to allow the population to grow or shrink based on the slope of the population's average fitness for the previous N generations. Many other opportunities exist. Consult [5].

The classic dual-point crossover operator adds another crossover point, with the offspring either retaining the segment between the crossover points and exchanging the segments outside, or vice versa. An example of exchanging within the crossover range:

Parent 1:        C0 C1 C2 C3 D0 D1 D2 D3
Parent 2:        E0 E1 E2 E3 F0 F1 F2 F3
Crossover points       ^-----------^

Offspring 1:     c0 c1 e2 e3 f0 f1 d2 d3
Offspring 2:     e0 e1 c2 c3 d0 d1 f2 f3

There are many permutations of the single and dual-point crossover implementations and offspring generation found in existing implementations. [3] and [4] explore the tradeoffs and underlying theory of preferred implementations.
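
A sketch of the classic single-point crossover on arbitrary-length chromosomes (it assumes both parents have the same length of at least 2; the random cut choice is mine):

    #include <cstdlib>
    #include <vector>

    typedef std::vector<int> Chromosome;

    // Classic single-point crossover: each child copies one parent up to the
    // cut and the other parent after it.
    void crossover(const Chromosome& p1, const Chromosome& p2,
                   Chromosome& child1, Chromosome& child2) {
        size_t cut = 1 + std::rand() % (p1.size() - 1);  // never cut at the ends
        child1 = p1; child2 = p2;
        for (size_t i = cut; i < p1.size(); ++i) {
            child1[i] = p2[i];
            child2[i] = p1[i];
        }
    }

As noted above, this would only be applied with some probability (around 0.75) per mating pair; otherwise the children are plain copies of the parents.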

Inversion

The inversion operator simply reverses the order of a range of values within the chromosome, again based on some (usually fixed) probability. This probability is classically applied per chromosome rather than per gene. The purpose is to introduce new gene combinations into the population, increasing diversity. Remember, diversity is essentially a broadening of the population's search of the solution space.

Using the symbolic chromosome representation for clarity:

Offspring 1:     A B C D E F G
Inversion range      ^-----^

Result:          A B F E D C G

The inversion range points are also usually random values within the length of the chromosome, and like crossover, inversion is applied on a probability basis. I commonly use an inversion rate of between 0.1 and 0.25 of the crossover rate, depending on the problem I am trying to solve.
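
The operator itself is essentially one call to std::reverse; in this sketch the range endpoints are chosen at random, as the text describes:

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // Reverse a random range within a symbolic chromosome.
    void invert(std::vector<char>& c) {
        size_t a = std::rand() % c.size();
        size_t b = std::rand() % c.size();
        std::reverse(c.begin() + std::min(a, b),
                     c.begin() + std::max(a, b) + 1);
    }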

Mutation

The mutation operator changes individual fields within a chromosome based on some, usually fixed, probability. Mutation is classically applied after all other genetic operators but before selection.

The purpose of mutation in biological processes is not understood analytically, so it makes sense that it is also not well understood in GAs. The current general consensus is that while selection and crossover provide genetic mixing between parents to form new offspring, they also have a tendency to "wipe out" valuable genetic information just as readily as they replace less valuable genetic information.

In another context, imagine a GA nearing convergence on some value. As you might imagine, genetic mixing of high-quality candidates tends to form other high-quality candidates. In essence, this means that as a GA converges, the individual elements of a population have a tendency to look similar. Genetic divergence is reduced; after all, we are trying to converge on a solution. This is all well and good if the convergence point is some globally optimal minimum or maximum. However, if the convergence value is a local maximum or minimum, then the homogeneous nature of the population will not permit a diverse sampling of the solution space, and the GA fails to find the optimal solution.

Typical implementations of mutation operators are just as varied as crossover operators, but the original definition assumed a binary representation and used a very small probability that a given bit within a chromosome would be "flipped". Each bit in a chromosome is walked and the probability function is evaluated. If the probability function is true for that bit, the new bit value is the inverse of the current value.

In a symbolic chromosome representation, where each gene position represents a symbol, one possible implementation of mutation is to swap the positions of two adjacent genes.

Consider the travelling salesman problem, where the task is to determine the shortest path between cities [A through G] organized on a 2-dimensional grid, visiting each city only once. The fitness function would be the total path length. Each chromosome would then represent a possible path through those cities.

Offspring 1:    A B C D E F G
Mutation point      ^

Result:         A B D C E F G

The mutation probability function is applied at each gene position in each chromosome of the population. In a swapping implementation, care must be taken to adjust the probability of mutation downward to deal with the fact that each mutation candidate potentially affects two genes. Too much mutation causes problems in convergence, while too little mutation causes premature convergence on local minimums or maximums.

A common starting point for mutation probability is 1 value out of 1000.
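
Two small sketches of the variants just described: the classic per-bit flip at the 1-in-1000 rate mentioned above, and the adjacent-gene swap for symbolic chromosomes with the rate halved, since each swap touches two genes (the exact halving is my own simplification):

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // Classic mutation: walk every bit and flip it with probability 1/1000.
    void mutateBits(std::vector<int>& chromo) {
        for (size_t i = 0; i < chromo.size(); ++i)
            if (std::rand() % 1000 == 0)
                chromo[i] ^= 1;
    }

    // Symbolic variant: swap a gene with its right-hand neighbour, at a
    // reduced rate because each swap affects two genes.
    void mutateSwap(std::vector<char>& chromo) {
        for (size_t i = 0; i + 1 < chromo.size(); ++i)
            if (std::rand() % 2000 == 0)
                std::swap(chromo[i], chromo[i + 1]);
    }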

9.1)

Selection

Selection is the process of deciding which chromosomes in the current population will pass their solution information to the next generation. There is a large variety of selection schemes; I will cover a few of the more commonly applied ones.

Roulette Wheel

Classically, roulette selection is calculated on the normalized fitness values of a population. Normalization is the summing of the fitness for the entire population and then dividing each member's fitness by this sum.

Assume we have the following distribution of fitness in a population of 10 individual chromosomes. The sum of the fitness for the generation is 155. Each value in the Fitness column is divided by 155 to provide the Normalized Fitness. The Cumulative Norm column is the running sum of the Normalized Fitness values.

Chromosome   Fitness   Normalized   Cumulative    Probability
                       Fitness      Norm          of Selection
0            3         0.019        0.019         0.19
1            12        0.077        0.096         0.77
2            17        0.109        0.205         1.09
3            7         0.045        0.250         0.45
4            9         0.058        0.308         0.58
5            11        0.070        0.378         0.70
6            13        0.083        0.461         0.83
7            42        0.270        0.731         2.70
8            23        0.148        0.879         1.48
9            18        0.116        1.00 (.995)*  1.21

* We allow a slight inaccuracy here due to truncation effects; the final value is .995 but is set to 1.00. This will have little effect on the properties of the GA.

The Probability of Selection column is a measure of the number of available slots within a 10-slot wheel for each chromosome. I have added this column strictly to illustrate the properties of the roulette wheel and the proportionality of selection to fitness value; it is simply the difference between the cumulative norm values of adjacent chromosomes multiplied by 10. So out of 10 spins, chromosome 7 will likely be selected 2.7 times. Notice that the smallest fitness value has the smallest probability. From this it should be clear that this example is trying to maximize the fitness value.

The next step is to produce 10 random numbers between 0 and 1 (10 being equal to our population size). If a random number is between 0 and 0.019, chromosome 0 is added to the list of selected chromosomes. If it is between 0.019 and 0.096, then chromosome 1 is selected.

The key here is that the probability of a particular chromosome being selected is proportional to its fitness value. More fit chromosomes will tend to be replicated in the selection list more than less fit ones.
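
A sketch of one spin of the wheel; it works on the raw fitness sums, which is equivalent to normalizing first, and assumes non-negative fitness values:

    #include <cstdlib>
    #include <vector>

    // One spin: return an index with probability proportional to its fitness.
    size_t rouletteSelect(const std::vector<double>& fitness) {
        double total = 0.0;
        for (size_t i = 0; i < fitness.size(); ++i) total += fitness[i];

        double spin = total * (std::rand() / (RAND_MAX + 1.0));
        double cumulative = 0.0;
        for (size_t i = 0; i < fitness.size(); ++i) {
            cumulative += fitness[i];
            if (spin < cumulative) return i;
        }
        return fitness.size() - 1;   // guard against floating point round-off
    }

Calling this N times (10 times for the table above) produces the selection list.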

Tournament Selection

Tournament selection involves randomly choosing two candidates from the current population, comparing their fitness values and choosing the more fit for mating. A fixed-population GA of size N will require 2N tournaments.

This can be combined with roulette wheel selection, where the two candidates are chosen by roulette wheel probabilities and the tournament then proceeds. This is also known as Stochastic Tournament Selection. The effectiveness of this scheme is somewhat in question. The performance of a Stochastic Tournament Selection GA, with its additional calls to the random function, should be evaluated against the performance of a straight Tournament Selection GA for your particular problem.
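
For comparison, a plain size-2 tournament is only a few lines (a sketch; the names are mine):

    #include <cstdlib>
    #include <vector>

    // One tournament: pick two random candidates and return the fitter one.
    size_t tournamentSelect(const std::vector<double>& fitness) {
        size_t a = std::rand() % fitness.size();
        size_t b = std::rand() % fitness.size();
        return fitness[a] > fitness[b] ? a : b;
    }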

Thresholding

In this selection method a cut-off is chosen, typically a fixed value. All chromosomes below the threshold do not survive to the next generation, i.e. they die. Their replacements are formed by the random mating of the remaining individual chromosomes. In theory, if N replacements are required then N/2 mating pairs are selected through some random process (tournament, fully stochastic, etc.). Each mating pair generates two offspring. Odd values of N are handled by producing only one offspring from one pair. Thresholding is generally similar to elitism.

10.1)

Fitness

After the process of selection and mating, the entire population is checked for fitness. Each chromosome in the population is evaluated against the fitness function to determine its rank and probability for mating in the next generation.

It is at this stage that the iteration count and best-fit individual chromosome are checked against the iteration limit and fitness limit to determine if the GA needs to proceed to the next generation.

As previously mentioned, the fitness function usually defines the problem, and therefore may leave little room for optimization. If you are trying to solve a function, that function is your fitness equation. The previous example:

y = 100(x1^2 - x2^2)^2 + (1 - x1)^2   for -2047 ≤ xi ≤ +2048

involves 2 subtractions, 1 add, and 5 multiplies. Understanding the capabilities of your hardware will determine how this function should be implemented.

For other problems you may have more opportunity for optimization. Using a contrived example, consider a travelling salesman problem of dimension 5. I have 5 cities organized on a two-dimensional plane. The coordinates of the cities are:

City    X      Y
A       1.5    1.5
B       0.0    0.0
C       2.2    16.1
D       11.0   0.1
E       3.4    3.2

The task is to find the shortest tour which can start from any city, passing through the remaining cities only once and returning to the starting city. (These are very difficult problems as the number of cities increases. If interested, Google "traveling salesman genetic algorithm" and you will see the vast amount of research targeted at this NP-Hard problem.)

As I said, this is a contrived example, but it illustrates an important point. If we define the problem to be one of finding the ordered list of cities which results in the shortest relative path length, rather than defining the problem to be "what's the length of the shortest path", then we can make a simple optimization which will improve the overall performance of the GA engine.

Given this new definition, we can simply multiply the coordinates of each city by 10; this moves your internal representation of the fitness function from a floating point calculation to an integer calculation without affecting the result.
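
A sketch of that tour-length fitness with the cities scaled by 10, so A becomes (15, 15), C becomes (22, 161), and so on. The coordinate deltas and squares stay in exact integer arithmetic; only the final square root is floating point:

    #include <cmath>
    #include <vector>

    struct City { int x, y; };   // coordinates pre-multiplied by 10

    // Total length of a tour; order holds city indices, and the tour
    // returns to its starting city.
    double tourLength(const std::vector<City>& cities,
                      const std::vector<int>& order) {
        double total = 0.0;
        for (size_t i = 0; i < order.size(); ++i) {
            const City& from = cities[order[i]];
            const City& to = cities[order[(i + 1) % order.size()]];
            int dx = from.x - to.x, dy = from.y - to.y;  // exact integer math
            total += std::sqrt(double(dx * dx + dy * dy));
        }
        return total;
    }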

Another factor to consider with fitness functions is dynamic range. GAs tend to converge asymptotically to a particular value, which may or may not be the global optimum. Asymptotically implies that the engine becomes less and less able to find new individuals which dramatically improve the current best-fit solution. This slowing of improvement in turn implies that the relative difference between any two randomly selected individuals in the population will also decrease. The upshot of this is that it will become increasingly more difficult for your selection mechanism to choose the "best of the best". This reduction in the differences between individuals is called a reduction in dynamic range.

Scaling is a technique which improves dynamic range. This can take several forms; the simplest is to raise the fitness value of each individual to some power.

A more complex form of scaling, which keeps track of the performance history of the GA, has been shown to be effective. Suppose you are trying to determine the minimum tour in the travelling salesman problem. In this form of scaling, your GA would track the worst solution found in each of the previous N generations and use these N values to derive a moving average of the worst solutions. The current generation would then be evaluated by its improvement over the average worst solution. Now we are measuring the improvement value of each chromosome, not its absolute fitness value. The improvement value is now the metric used to determine the ranking of an individual chromosome, and of course a high ranking increases the probability of selection for mating.
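
A sketch of that history-based scaling for a minimization problem such as the travelling salesman tour; the class name and window handling are my own assumptions:

    #include <algorithm>
    #include <deque>
    #include <numeric>
    #include <vector>

    // Tracks the worst (largest) tour length of the last N generations and
    // scores candidates by their improvement over the moving average.
    class WorstHistoryScaler {
    public:
        explicit WorstHistoryScaler(size_t window) : window_(window) {}

        void recordGeneration(const std::vector<double>& tourLengths) {
            double worst = *std::max_element(tourLengths.begin(), tourLengths.end());
            history_.push_back(worst);
            if (history_.size() > window_) history_.pop_front();
        }

        // Improvement over the average worst solution; bigger is better.
        // Call recordGeneration at least once before using this.
        double scaledFitness(double tourLength) const {
            double avg = std::accumulate(history_.begin(), history_.end(), 0.0)
                         / history_.size();
            return avg - tourLength;
        }

    private:
        size_t window_;
        std::deque<double> history_;
    };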

11.1)

Definitions

Alleles

The set of valid values for each locus in a chromosome. For a chromosome formed of binary bits, the allele set for each bit position would be {0,1}. For a chromosome formed of single characters, there would be 26 possible alleles for each position in the chromosome.

Chromosome

Typically implemented as a string of symbols, characters, bits, etc., but can also be a set of real numbers. A chromosome is considered a candidate solution for the fitness function. It may or may not meet the criteria for a solution. In the research literature, the set of all chromosomes in a generation is considered the genotype.

Locus

An individual position within a chromosome. A chromosome formed of 8-bit values would have 8 loci.

Gene

The combination of locus and the current value of the allele for that locus.

Generation

GAs are iterative. Each generation is one iteration of the GA engine. A generation consists of N chromosomes.

Non-deterministic Turing machine

A Turing machine with more than one next state for a given current state. Informally, a Turing machine with conditional branch capability is non-deterministic relative to its current memory contents.

Polynomial Time (also known as Big-O Notation)

O(g(n)) is a measure of the time required to execute an algorithm f based on the problem size n, i.e. f(n) = O(g(n)). Informally this means that the execution time of the function f(n) is less than some constant multiple of g(n); there must exist constants c and k that satisfy 0 <= f(n) <= c(g(n)) for all n >= k.

NP

Non-deterministic Polynomial time: the class of problems accepted by a non-deterministic Turing machine in polynomial time.

12.1)

About the Author

Jeff Nye is a microprocessor designer and software developer for a large semiconductor company, based in Austin, Texas. He designs processors as well as writing CAD software. Many of his tools utilize genetic algorithms for optimization issues.

13.1)

References

[1] National Institute of Standards and Technology, "Dictionary of Algorithms and Data Structures", http://www.nist.gov/dads

[2] State University of New York, "Stony Brook Algorithm Repository", http://www.cs.sunysb.edu/~algorit

[3] Holland, J. H., 1962, "Information processing in adaptive systems"

[4] Kennedy, J., Eberhart, R., 2001, Swarm Intelligence, Morgan Kaufmann

[5] Annunziato, Mauro, et al., "Adaptive Parameterization of Evolutionary Algorithms and Chaotic Artificial Populations", SOPIN s.p.a., http://erg87067.casaccia.enea.it/stefano/papers/soco01.pdf

[6] De Jong, K., 1975, "An analysis of the behavior of a class of genetic adaptive systems", Doctoral Dissertation, Univ. Michigan




4)

Finite State Machines







Nathaniel Meyer

29/08/2004



14.1)

Introduction

This tutorial is aimed at explaining, providing tips and techniques for, and giving an example to demonstrate the use and purpose of Finite State Machines (which will from now on be abbreviated FSM). The illustrations and example make use of the Unified Modeling Language (abbreviated UML) because it is a language well suited for this. Although understanding UML is not a requirement for this tutorial or FSMs in general, it is definitely a language you should get to know.

Feel free to review the glossary of terms used within this tutorial. It is best that you know what I am talking about before or while you read the content.

15.1)

Applications of FSM

FSMs have been used broadly in the video game industry. Since ID Software released the source code to the Quake and Quake 2 projects, people have noticed that the movement, defensive, and offensive strategies of the bots were controlled by a simple FSM. ID is not the only company to take advantage of this either. The latest games, like Warcraft III, take advantage of complex FSM systems to control the AI. Chat dialogs where the user is prompted with choices can also be run by FSMs.

Aside from controlling NPCs, bots, dialog, and environmental conditions within video games, FSMs also have a large role outside of the video game industry. For example, cars, airplanes, and robotics (machinery) have complex FSMs. You could even say websites have a FSM. Websites that offer menus for you to traverse other detailed sections of the website act much like a FSM does, with transitions between states.

16.1)

What is a FSM?

If you have ever seen a flowchart before, you can think of a FSM as the same kind of thing. It has a set of states that follow a certain path. A state has transitions to other states, which are caused by events or actions within a state. Here is a real world example.

You are in the kitchen (state) and you feel a sudden urge to drink water. You decide to walk over to the cupboard and grab a glass. This is an event (walk to cupboard) and you are now in a new state with new options available to you. You then open the cupboard (action) and grab a glass (action). With the glass, you walk over to the sink (event) and turn on the tap (action). Water is poured into the glass (event) and you then drink it (action).

So in a nutshell, an event leads you to a state in which you perform an action or several actions (though you do not always have to perform an action). So when you walk to the cupboard, that leads you to the state "At Cupboard", where the typical action is "Open Cupboard". To better demonstrate a FSM, a robot example will be used throughout this tutorial.

17.1)

Example - Robotics

With all things in life, careful planning is a must, especially with a FSM. The first step to developing a FSM is to list all the known entities in your problem. Let us start with the robot.

Figure 1.1: Bender, our robot

Here is Bender, as you may recognize him from Futurama. Bender would like to turn on and off, walk, run, raise and lower his arms, turn his head, and talk. There are many other features left to be desired, like ducking, jumping, repairing himself, reacting to environments, giving him emotions, etc... As you can see, the list goes on, and this is quite normal in a FSM. Keep in mind, however, as the term finite implies, you will need to limit the scope of the problem to a finite set of entities. For the purpose of this tutorial, a subset of functions is chosen. Given these entities, we should construct a table listing all the possible events and states associated with them.

Event           State

turnOn          Activated
turnOff         Deactivated (Idle)
stop            Stopped
walk            Walking
run             Running
raiseLeftArm    LeftArmRaised
lowerLeftArm    LeftArmLowered
raiseRightArm   RightArmRaised
lowerRightArm   RightArmLowered
turnHead        HeadTurned(direction)
speak           Talking(text)

Table 1.1: Small list of known events with Bender

When developing FSMs, you should expect to see an even greater list of events and states to support. As you will see later, just as logic errors are common in programming, event errors are common in FSMs.

Just before we move on, I would like to make a few notes about the selected states above. First, states do not have multiple roles. Every time Bender is turned on, the same flow of direction will always occur. If at any time in your projects there is a breach in that flow, it would need to be fixed. This is one of the problems with FSMs, because sooner or later someone is going to pick up the pattern. Ways around this would be to add more events to a given situation or randomize the actions performed.

Secondly, states are unique. No two states should have the same reaction to a specified event. For example, having Bender "speak" and "shout" are synonymous in terms of speech; therefore we eliminate one while keeping the other, and supplement arguments to dictate how a certain text is pronounced (be it calm, yelling, stuttered, etc...).

Thirdly, state transitions do not normally branch. By branch I mean through the use of conditional statements. For example, Bender has a set of movement options. Such movements could be decided via IF/THEN statements; however it is best to avoid that, since the purpose behind states is to replace the need for IF/THEN statements with events. Although states are flexible and do allow such conditional operations, it is always best to avoid them. It reduces complexity and lessens the chance of logic errors.

Now that we have a list of events and states, it is best to draw them out. When you visually see your model, it is easier to pick out where things may go wrong or where things could use improvements.

Figure 1.2: Sample state diagram of Bender


This model representation is actually wrong. You should note that in a state you perform a certain action, and when you leave that state the action is no longer performed. In this model, we can only have one state at a time with no transition between them. It is possible for Bender to walk and then run, so there should be a link between those two states. Since there are only a few movement states, the number of transitions won't be all that bad. If you were to support a much larger base of states, you would notice a massive set of transitions between them, undoubtedly creating confusion. Unfortunately there is no way around this. Diagrammatically speaking, you could specify a cleaner set of transitions, but ultimately, when you program the states, you will still have a complex number of transitions to support. Here is another way to represent multiple states a bit more cleanly.


Figure 1.3. Simplified State Model


Here we assign a new state, Activity, that holds all the activities Bender can do, with transitions to itself. This also allows multiple events to run in parallel, so you could walk and talk at the same time. Inevitably, the programming will still be as complex. Take note that for events like talk, you would set up timers to display the text for X amount of seconds. When the timer expires, you would stop talking.

Now that we have a state diagram, we need to examine it further and construct a state table. A state table is nothing more than a table listing all the states, the actions to get to them, and the reaction performed by them. For example, when Bender turns on, he puts himself in the turned-on state. From here, he is allowed to conduct various movements and shut himself down. When in the activity state, he cannot directly shut himself down. So the state table illustrates to us what actions Bender can perform, and when.

#    Event      Actions       Comment                             Caused By   Will Effect

1    turnOn     bootUp        Bender is turning on                -           2-10
2    bootUp     Activity      Allow various activities            1           3-8
3    walk       goWalk        Bender will begin to walk           -           -
4    run        goRun         Bender will begin to run            -           -
5    talk       say(text)     Bender will say "text"              -           -
6    turnHead   turningHead   Bender rotates head                 -           -
7    raiseArm   raisingArm    Bender raises arm (left or right)   -           -
8    lowerArm   loweringArm   Bender lowers arm (left or right)   -           -
9    stop       powerDown     Bender stops current action         3-8         -
10   turnOff    shutDown      Bender will shut off                1-9         -

Table 1.2: State Table for Random Scenarios


As you see, defining a complete FSM is an extremely long process, and we have not even done any coding yet! The state table is left incomplete, as the number of events and actions is quite large, but there should be enough presented to illustrate the idea.

Putting it all together: the state diagram helps illustrate what the functionality is like, and the state table defines the types of inputs to those functions and the expected outputs. So, with a complete state diagram and state table, you have the blueprints to develop your FSM.

18.1)

Example - Code

Knowing how to set yourself up for a FSM is one thing; going about programming it is another. There are many ways to program a FSM, so the method I present here is simply one that I prefer. I tried to comment the code as best as possible, so hopefully my coding style won't be a strain on your eyes =)

How The Code Works:

There are two major classes.

1) State:

The State class is responsible for setting transitions, specifying actions, and pointing to the right function that represents the action. In this code, a state supports the 3 preliminary specifications (OnEntry, Do, and OnExit) and supports an unlimited number of OnEvent specifications. These specifications are used to define your actions. This should help you practice with designing much larger FSMs.



2) FSM:

The FSM class is a collection of states. Its sole purpose is to function as a controller between states and execute the necessary state actions based on the events passed to it.


The code does not make use of any conventional programming style. There are no switch statements and there are no IF/THEN expressions. Everything in the FSM is controlled by sending the correct events down the pipeline. After that, the FSM will choose the correct course based on how you configured your states.

Also, I have not placed any fancy reactions in the code (such as guard conditions). I simply want to demonstrate how state transitions occur and provide a clean and elegant interface for handling FSMs. Feel free to add new features and reactions to them.

Click here to download the code (.zip)

Code was developed in Visual Studio .NET 2003
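
The original download is not reproduced here, but a minimal sketch of the same idea, transitions stored in a per-state event map so that no switch statements or IF/THEN transition logic are needed, might look like this. All names are my own, and the author's actual classes (with their OnEntry/Do/OnExit and OnEvent specifications) are richer than this:

    #include <iostream>
    #include <map>
    #include <string>

    // A state announces itself on entry and maps events to successor states.
    class State {
    public:
        explicit State(const std::string& name) : name_(name) {}
        void addTransition(const std::string& event, State* target) {
            transitions_[event] = target;
        }
        State* next(const std::string& event) {
            std::map<std::string, State*>::iterator it = transitions_.find(event);
            return it == transitions_.end() ? this : it->second; // unknown event: stay
        }
        void onEntry() { std::cout << "entering " << name_ << "\n"; }
    private:
        std::string name_;
        std::map<std::string, State*> transitions_;
    };

    // The FSM is only a controller: it forwards events to the current state.
    class FSM {
    public:
        explicit FSM(State* initial) : current_(initial) { current_->onEntry(); }
        void handleEvent(const std::string& event) {
            State* target = current_->next(event);
            if (target != current_) { current_ = target; current_->onEntry(); }
        }
    private:
        State* current_;
    };

    int main() {
        State off("Deactivated"), activity("Activity");
        off.addTransition("turnOn", &activity);
        activity.addTransition("turnOff", &off);

        FSM bender(&off);
        bender.handleEvent("turnOn");    // -> Activity
        bender.handleEvent("turnOff");   // -> Deactivated
        return 0;
    }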

19.1)

Conclusion

Finite State Machines are widely used in video games to supplement AI, given their easy structure and maintenance. When given a problem that requires a finite number of solutions, FSMs are definitely an easy method to approach it. Just remember the format to help you design and develop FSMs:

1) Review the problem. Write down all the entities involved.

2) Design a state diagram from the work you did in 1).

3) Review the state diagram. Make sure it meets all the requirements and does not fail under simple transitions.

4) Develop a state table to clarify the state diagram and correlate it to the code.

5) Design the code.

20.1)

Glossary

Here is a list of terminology used throughout the tutorial. If any of these seem unfamiliar to you, it would be helpful to know what they mean before continuing on with the tutorial. They are as follows:

1) FSM

A collection of states and transitions that outline a path of actions that may occur.

2) State

A state is a position in time. For example, when you are at the bus stop, you are currently in a waiting state.

3) Event

An event is something that happens in time. For example, the bus has arrived.

4) Action

A task performed given a certain event that occurred. For example, you enter the bus.

5) Transition

A link between 2 states. May be unidirectional or bidirectional.