
Swarm Intelligence and
Automatic Development of
Algorithms Through Evolution



Master's thesis in informatics
Carl-Erik J. Herheim

29.09.05
Høgskolen i Østfold
Department of Information Technology

Acknowledgements

I would like to thank my supervisor, Roland Olsson, for all his help and motivation, and for having a genuine interest in the subject. Thanks also to Åge Eide for giving me access to his Matlab ant system. Last but not least, I wish to thank my wife and my parents.
Carl-Erik J. Herheim
Voss, Norway
September 2005
Abstract


In addition to a general introduction to Artificial Life and Collective Intelligence, this paper gives a broad description of the topic of Swarm Intelligence: an exciting new field in which the mimicking of (insect) swarms is used to solve computational problems. Several methods, all part of the swarm intelligence field, are described, along with examples of their application. In addition, an attempt to combine machine learning and swarm intelligence is described through a series of experiments in generating code for clustering.
Contents


A. Introduction

B. History of Collective Intelligence and Artificial Life
   1. Collective Intelligence
   2. Artificial Life
      Tierra
      Avida

C. Swarm Intelligence
   Stigmergy
   Self-organization
   1. Behavior of swarms
      Ant Foraging
      Cemetery Organization and Brood Sorting
      Nest-Building and Self-Assembly
   2. Applications
   3. Optimization Problems
      Traveling Salesman Problem
      Quadratic Assignment Problem
      Routing in telecommunication networks
   4. Clustering and Brood Sorting
   5. Self-assembly
      Wasp nests
      Swarm Bots

D. Machine-learned ant-clustering
   ADATE
   1. Ant clustering system in Standard-ML
      Implementation – explaining the code
      Results
   2. Experiments with synthetically created ant-brain
      Implementation – explaining the ant.spec code
      Test Cases
      Conclusions
   3. Future work

E. Final summary

Appendix A – Program code, the initial version of ant.sml
Appendix B – Program code, ant.spec
Appendix C – Complete output Test Case 1
Appendix D – Grid generator program

References
A. Introduction

Much of the focus in this paper is on the topic of swarm intelligence. Swarm intelligence is inspired by the workings of insect swarms seen in nature, and is closely related to Artificial Intelligence (AI), Artificial Life, and Collective Intelligence. The essential idea is that a large number of simple, robust, autonomous agents are together able to perform complex tasks.

There does not seem to be an absolute definition of Swarm Intelligence today. The expression "swarm intelligence" was first used in the context of cellular robotic systems by Beni, Hackwood, and Wang [1, 2]. However, after Bonabeau, Dorigo and Théraulaz's work [3] on modeling social insects as self-organizing^I artificial systems, their definition of Swarm Intelligence as "the emergent collective intelligence of groups of simple agents" has become more widely known. Kennedy and Eberhart [4] essentially agree with this definition in their book "Swarm Intelligence", but with certain reservations:

"We agree to the spirit of this definition, but prefer not to tie swarm intelligence to the concept of 'agents.' Members of a swarm seem to us to fall short of the usual qualifications of something to be called an 'agent,' notably autonomy and specialization. Swarm members tend to be homogenous and follow their programs explicitly."

Whether a swarm member can be defined as an agent or not seems to be open to discussion. An agent must be autonomous, but even an autonomous entity must obey the rules of its environment. Kennedy and Eberhart do, however, have a different approach to the subject, focusing more on Particle Swarm Optimization as opposed to modeling insect swarm behavior.



^I Self-organization is explained in section 3.

This paper will begin with some background information on artificial life and collective
intelligence as a whole. In the subsequent sections, the underlying principles of swarm
intelligence will be explained, along with examples of swarm intelligence in nature. A
short introduction is given to the various applications of swarm intelligence in computer
science, before more detailed descriptions of various methods are presented. The main
uses of artificial swarm intelligence presented in this paper are optimization problems,
clustering, and self-assembly. We will then go on to present some experiments with
automatic generation of swarm agents for clustering using a system called ADATE
(Automatic Development of Algorithms Through Evolution). Finally, this paper will be
concluded with some thoughts on what the future may hold for the work in this field.

B. History of Collective Intelligence and Artificial Life

1. Collective Intelligence
An important aspect of Swarm Intelligence, one could almost say its precursor, is collective intelligence. The work on collective intelligence can be traced back to Eugène Marais (1872-1936). Marais made ground-breaking studies of societies of wild apes sixty years before anyone else. He also studied termites, known at the time as white ants, and published articles on them as early as 1925. In 1937, his book The Soul of the White Ant [5] was published posthumously, in which he described in painstaking detail the resemblance between the processes at work within the termite society and the workings of the human body. He regarded red and white termite soldiers as analogous to blood cells, the queen as the brain, and the termites' mating flight, in which individuals from separate termitaries leave to produce new colonies, as exactly equivalent to the movement of sperm and ova.

Further developments in the field were made by the French biologist Pierre-Paul Grassé in his 1959 study of termites [6]. Grassé documented how the termites, tiny, short-sighted, simple individuals, are able to create grand termite mounds up to six meters high. Grassé noted that termites follow very simple rules when constructing their nests. First they move around at random, dropping pellets of chewed earth and saliva on any slightly elevated patch of ground they encounter. Over time, small mounds of moist earth are formed, leading the termites to concentrate their efforts on the taller mounds. The biggest heaps eventually turn into columns, which in turn are connected by building diagonally towards nearby columns. Later, other building techniques are performed, but the point remains that the actions of the termites are not coordinated by any collective plan; the termites simply perform the actions that their surrounding environment and conditions demand. Individually, each termite is not very intelligent, but collectively they are able to construct complex mounds that certainly seem to be the result of some intelligent behavior.


2. Artificial Life
The term artificial life (also known as aLife) was first coined by Christopher Langton in the late 1980s at the first "International Conference on the Synthesis and Simulation of Living Systems." This is his definition:

"The study of synthetic systems which behave like natural living systems in some way.
Artificial Life complements the traditional biological sciences concerned with the
analysis of living organisms by attempting to create lifelike behaviours within computers
and other artificial media. Artificial Life can contribute to theoretical biology by
modeling forms of life other than those which exist in nature. It has applications in
environmental and financial modeling and network communications."

There are, however, several examples of the idea of artificial life existing before Langton put the name to it. In the late 1940s, John von Neumann delivered a lecture titled "The General and Logical Theory of Automata." He believed that natural organisms generally follow simple rules, and defined an "automaton" as any machine whose behavior proceeds logically from step to step by combining information from the environment and its own programming. He later developed the first cellular automaton^I. It was extremely complicated, with hundreds of thousands of cells which could each exist in one of twenty-nine states.

^I A discrete model consisting of an infinite, regular grid (of any dimension) of cells, each in one of a finite number of states. Time is also discrete, and for each time unit, each cell updates its state based on the states of neighbouring cells. See the description of Conway's Game of Life.

The most famous example of cellular automata is the mathematician John Conway's "Game of Life," published in Scientific American in October 1970 [7]. The game^II runs on a two-dimensional grid that is infinitely large in both directions^III, and is built up of squares, or cells. Each cell has two states, alive or dead (or on and off). Each cell's state depends on the states of its neighboring cells, and the game runs by three simple rules:

• A dead cell with exactly three live neighbors becomes alive.
• A live cell with more than three live neighbors dies of overcrowding.
• A live cell with fewer than two live neighbors dies of loneliness.

(Implicitly, a live cell with two or three live neighbors simply survives to the next step.)

^II The term "game" is generally used even though it may seem a little misleading, as it has no players, and after its initial state runs by itself without human input.
^III In practice this is usually shown as a torus grid, meaning that the cells on the edges are neighbors of the cells on the opposite side of the grid.

The game runs in time steps, where each cell is simultaneously updated based on its neighboring cells at each time step. As the game evolves, several interesting patterns may appear and, depending on the initial pattern, may grow indefinitely. Many patterns have their own names, for example blinkers, gliders and guns. Below is an illustration of the glider pattern, which moves diagonally across the grid indefinitely.

Fig. 1 - Game of Life, glider pattern (steps 1 to 5).
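To make the rules concrete, here is a minimal sketch of one Game of Life time step in Standard ML, the language used for the experiments later in this paper. It assumes a finite n × n grid with torus wrap-around (footnote III above); the function names are illustrative and not taken from any code in this thesis.

  (* one time step on an n x n grid; true = alive *)
  fun step (grid : bool list list) : bool list list =
    let
      val n = length grid
      (* torus wrap-around indexing, so the grid has no edges *)
      fun at (r, c) =
        List.nth (List.nth (grid, (r + n) mod n), (c + n) mod n)
      fun liveNeighbors (r, c) =
        length (List.filter (fn x => x)
          [at (r-1, c-1), at (r-1, c), at (r-1, c+1),
           at (r,   c-1),              at (r,   c+1),
           at (r+1, c-1), at (r+1, c), at (r+1, c+1)])
      fun next (r, c) =
        case (at (r, c), liveNeighbors (r, c)) of
          (false, 3) => true    (* birth: exactly three live neighbors *)
        | (true, 2)  => true    (* survival *)
        | (true, 3)  => true    (* survival *)
        | _          => false   (* loneliness or overcrowding *)
    in
      List.tabulate (n, fn r => List.tabulate (n, fn c => next (r, c)))
    end

Seeding a grid with the five glider cells and iterating step reproduces the diagonal movement illustrated in Fig. 1.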

What is most fascinating about the Game of Life is its unpredictability combined with its sense of structure and purpose, giving associations to actual life-forms. The program never defines patterns like blinkers or gliders; they simply emerge from the underlying rules of the system [8]. As we will see, the concept of complex patterns emerging from simple rules is integral to aLife systems. Besides being an interesting programming challenge and being visually entertaining, the Game of Life raised several interesting philosophical questions about life and existence, and enjoyed a cult following in the 1970s and 1980s.

Its emergent properties are perhaps the most interesting aspect of cellular automata. Emergence is a vital part of both aLife and life in general; in fact, one could say that life is an emergent property in itself. The easiest way to describe emergence is to say that it is the result of something being more than the sum of its parts. The human brain is a large collection of neurons that follow simple rules, yet the results of their simple interactions are incredibly sophisticated. On a lower level as well, the combination of atoms in a certain way leads to the existence of living cells, as opposed to inanimate objects.

Christopher Langton, inspired by Conway's Game of Life, actualized von Neumann's cellular automata and succeeded in creating the first self-replicating computer organism in October 1979. This is important since reproduction is an essential feature of any living species, and one step on the way to creating actual artificial life.


Tierra

Jumping ahead over a decade from Langton's breakthrough, there was another important development in the field of aLife. Tom Ray, an ecologist and evolutionary biologist, was frustrated by the fact that he could only observe the products of evolution and not the process of evolution itself. He decided to make a model in which self-replicating computer programs could evolve according to Darwinian principles. The system was named Tierra, and the first version was completed in 1990 [9, 10]. Tierra is written in C, and runs on a virtual machine within the computer.

Initially there is only one program in the system, consisting of 80 machine instructions, but this program multiplies and evolves through mutation and recombination. The mutations are done by flipping bits (1/0) at random. Recombination is an exchange of pieces of program code between programs. Approximately 80% of the mutated or recombined programs become faulty and inactive, so in order to stop them from taking up too much memory space, a reaper function (as in the Grim Reaper) clears away old and defective programs.

Evolution in Tierra occurs through natural selection as the programs compete over CPU
time (equivalent to an energy resource) and memory space (material resource). Memory
is organized into informational patterns that exploit CPU time for self-replication.

Ray himself considers the programs that evolve in Tierra actual life forms. He justifies this from the fact that they can reproduce themselves and are capable of what he calls "open ended evolution": there are no a priori restrictions with regard to the life forms that could develop [11].

An interesting feature of the programs/organisms is that they slowly shrink the size of their genomes, making themselves easier to copy and reproduce. Some organisms take this to the extreme, removing critical parts of their own genomes, only to use those parts from other organisms. Naturally, Ray named these organisms parasites: since they become unable to reproduce on their own, they use the reproductive modules of other programs to reproduce themselves. Virtual arms-races sometimes develop between parasites and the hosts, where the hosts develop immunity, only for the parasites to work a way around it. In some cases, hosts not only develop immunity to parasites, they even become hyper-parasites that deceive the parasites into devoting their energetic resources to replication of the hyper-parasite genome. This can drive the parasites to extinction. Besides parasitism, there is also the emergence of commensalism^I and of programs which cohabit within a social context and cannot multiply independently.

The similarities between Tierra and evolution in nature are quite striking in other aspects as well. For example, the frequency at which new programs develop can vary greatly, with periods in which no, or hardly any, new programs develop, alternating with surge periods in which a great many different new programs develop rapidly within a short time. This is also found in natural evolution [12].



^I Commensalism, def.: "The relation between two different kinds of organisms when one receives benefits from the other without damaging it" – WordNet, Princeton University.

Avida

Chris Adami, currently a professor at the Keck Graduate Institute of Applied Life Sciences, attempted using Tierra to get the digital organisms to evolve solutions to specific mathematical problems, without forcing them to use a pre-defined approach [13]. For
example, if he wanted a population of organisms to evolve the ability to add two numbers
together, he would monitor the input and output of the organisms. If an output was the
sum of two inputs, the successful organism would receive extra CPU cycles as a reward.
As long as the number of extra cycles was greater than the time it took the organism to
perform the computation, the leftover cycles could be applied toward the replication
process, giving the organism a competitive edge. This way, Adami was able to get the
organisms to evolve some simple tasks, but he still found too many limitations in trying
to use Tierra to study the evolutionary process.

Inspired by Tierra, Chris Adami, Charles Ofria and C. Titus Brown started developing a new digital life system, Avida [14, 15]. It was designed to have extensive and flexible configuration capabilities, as well as detailed measurements for recording all aspects of a population.

As opposed to the sequential execution of organisms in Tierra, Avida executes all organisms simultaneously by simulating a parallel computer. Each organism lives in its own protected region of memory and is executed by its own virtual CPU. Since the organisms can't access each other's memory space, neither for reading nor for writing, and cannot execute code that is not in their own memory space, there are no Tierra-style parasites in Avida. Another type of parasitism can occur, but in this case the parasites don't exist as individual organisms, but as an internal part of the host. Depending on the type of parasite, it can either take all of the host's CPU cycles and use them to spread the infection, or it can spread more slowly, avoiding killing the host and thereby itself.

The virtual CPUs of the various organisms are also able to run at different speeds, making it possible for one organism to execute, for example, twice as many instructions in the same time interval as another organism. The organisms are rewarded with faster CPU speed for making correct computations; thus the organisms that perform better have more CPU power available for reproduction than others.

Each organism in the Avida system is a self-contained computing automaton with the ability to construct new automata. As in Tierra, evolution happens mainly as a result of mutation, and the primary form of mutation is copy-mutations: random mutations that happen as instructions are erroneously copied. In addition there are other types of mutations, like point (or cosmic-ray) mutations, which affect all living organisms, not just organisms in the moment of creation.

Each organism has a phenotype^I associated with it. The organisms interact with their environment by inputting numbers, performing computations on them, and outputting the results. These computations are tracked by the phenotype. The phenotype also monitors the organism's age, mutations, gestation time^II, interactions with other organisms, and its overall fitness. The data collected by the phenotype are used both for statistical purposes and for determining how many CPU cycles should be given to the organism.

Experiments with Avida have provided many interesting and impressive results. At times the researchers working with it have found themselves outwitted by the organisms. Charles Ofria decided to see what would happen if he stopped the digital organisms from adapting. He did this by running the organisms through a test whenever they mutated, and then killing off the ones that had a beneficial mutation. Surprisingly, this did not keep them from evolving. In fact, they evolved a way to tell when Ofria was testing them by looking at the input data used in the tests. As soon as they recognized they were being tested, they stopped processing numbers, in effect "playing dead".



^I A phenotype is any detectable characteristic of an organism determined by an interaction between its genotype and environment.
^II The number of instructions the organism executes to produce an offspring.

Avida is still being developed as a joint project of the Digital Life Laboratory (headed by
Chris Adami) at the California Institute of Technology and Charles Ofria & Richard
Lenski's Microbial Evolution laboratory at Michigan State University.

C. Swarm Intelligence

As mentioned earlier, Swarm Intelligence is based on simple agents cooperating (unknowingly) to perform complicated tasks. It is a good example of the whole being greater than the sum of its parts. This works because of a couple of general principles that will be elaborated on here: Stigmergy and Self-Organization (SO).

Five basic principles of swarm intelligence, according to Mark Millonas at the Santa Fe Institute [16], are:

1. The proximity principle: The population should be able to carry out simple space
and time computations.

2. The quality principle: The population should be able to respond to quality factors
in the environment.

3. The principle of diverse response: The population should not commit its activity
along excessively narrow channels.

4. The principle of stability: The population should not change its mode of
behaviour every time the environment changes.

5. The principle of adaptability: The population must be able to change behaviour
mode when it's worth the computational price.

Stigmergy

The term stigmergy was introduced by Grassé [6], and is used to describe the indirect communication between agents in a swarm. In addition to communicating directly through physical or visual contact, insects in swarms communicate indirectly by modifying their surroundings. Stigmergy is an essential concept in a self-organized system. Unlike in a centralized system, where agents receive direct orders on which tasks to perform, agents in a self-organized system, like ants in an ant-hill, perform tasks based on their situation and surroundings. Simply put: individual behavior modifies the environment, which in turn modifies the behavior of other individuals [3]. Two main types of stigmergy can be identified: quantitative (continuous) stigmergy and qualitative (discrete) stigmergy. Quantitative stigmergy is at work in the case of column-building termites and trail-making ants, where the amount (quantity) of stimulus (in this case pheromone) present determines the outcome of the individuals' actions. With qualitative stigmergy, it is not the amount of stimulus, but the type (quality) of stimulus that determines the actions: the insect will respond to stimulus 1 with action A, and to stimulus 2 with action B. Qualitative stigmergy is more difficult to identify, but the nest-building of wasps seems to be one example of this type of stigmergy. In either case, the insect receives a form of positive feedback, encouraging it to perform some type of action.

Applying the principle of stigmergy to artificial systems, thereby reducing the need for agent-to-agent communication (especially in distributed systems), has turned out to be a great advantage and is one of the main reasons swarm intelligence has proven to be useful.

Self-organization

Self-organization is a process where the organization of a system increases automatically without being guided or managed by an outside source. The concept has its origins in physics, but is also central to the description of biological systems, and is relevant in many other disciplines, in both the natural and social sciences. Five characteristics of self-organization are as follows:

1. Multiple Interactive Agents (each agent is simple, behavior is often rule-based)
2. Positive Feedback (Positive behavior is reinforced)
3. Negative Feedback (Distribution of work, surplus work-force re-routed to
neglected activities)
4. Amplification of fluctuations (Random walks, errors, random task-switching etc.)
5. Multiple interactions (Agents make use of the result of their own actions as well
as that of others)

Self-organized systems often show emergent properties. Emergence is the appearance of
unexpected results from the interaction between simple components. The patterns that
develop in the Game of Life are emergent patterns, as they are not explicitly
programmed, but they emerge as a result of the game's rules.
1. Behavior of swarms

Ant Foraging

In an experiment performed by Jean-Louis Deneubourg and his colleagues at the Free University of Brussels [17], it was shown how ants forage for food, and how they manage to find the shortest path to nearby food sources.

Deneubourg connected the ant nest to a food source using two branches, one twice as long as the other. After a few minutes, the ants had chosen the shortest path to the food source. The way they do this is by laying trails of pheromone. The ant that follows the shortest path to the food source is the first one to return to the nest, and is thus the first to leave a double trail of pheromone. This attracts other ants, and soon the other ants will follow this trail instead of longer routes. In addition, pheromone evaporates over time; thus, less pheromone will have evaporated from a short trail when the ant returns to the nest than from a long trail. This trail-laying behavior of ants is an example of quantitative stigmergy, where an increased amount of stimulus works as positive reinforcement for an action.

It was, however, shown that if the shorter branch was added after the long one had been taken into use, the ants would continue using the long one, since it had already been marked with pheromone. In a computer system this can be worked around by introducing faster pheromone decay. If the pheromone evaporates quickly it is harder to maintain stable trails on long paths, thus increasing the chance of shorter paths being discovered later.

Fig. 2 - Illustration demonstrating the movement of ants between the nest and a food source, after a) 5 minutes and b) 10 minutes.
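The feedback loop just described fits in a few lines of code. Below is a toy, deterministic Standard ML sketch of the two-branch experiment: ants choose a branch in proportion to its pheromone level, the short branch is credited with deposits at twice the rate of the long one (its ants complete round trips twice as fast), and a fraction rho of the pheromone evaporates each time step. All rates and constants are illustrative assumptions, not measurements from the experiment.

  fun simulate (rho : real) (steps : int) =
    let
      val depositShort = 1.0   (* deposit rate on the short branch *)
      val depositLong  = 0.5   (* half the rate: the branch is twice as long *)
      fun step (tauShort, tauLong) =
        let
          (* ants choose a branch in proportion to its pheromone level *)
          val pShort = tauShort / (tauShort + tauLong)
          val pLong  = 1.0 - pShort
        in
          ((1.0 - rho) * tauShort + pShort * depositShort,
           (1.0 - rho) * tauLong  + pLong  * depositLong)
        end
      fun loop (0, state) = state
        | loop (k, state) = loop (k - 1, step state)
    in
      loop (steps, (1.0, 1.0))   (* both branches start equally marked *)
    end

Running, say, simulate 0.1 100 leaves almost all the remaining pheromone on the short branch, and raising rho makes old trails fade faster, which is precisely the work-around for late-appearing short branches mentioned above.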


Cemetery Organization and Brood Sorting

Another aspect of ant behavior is the way they handle their dead and organize their brood^I. Several ant species organize their dead in what can literally be called cemeteries. This has been demonstrated in experiments [18] where worker ants have been presented with a large number of randomly scattered ant corpses. Within hours the ants gather the dead in clusters, or cemeteries. The way this works is similar to how termites build their nests by depositing pellets of dirt in heaps. When an ant comes across a corpse lying by itself, it is likely to pick it up. Subsequently it moves around randomly for some time until it finds more corpses, where it will put down the one it is carrying. If it does not come across any such pile, the corpse is put down after a certain amount of time.

^I The ant's young, i.e. larvae.
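This behavior is commonly modeled (after Deneubourg et al.) with two simple response functions, sketched here in Standard ML; f is the perceived fraction of nearby sites occupied by corpses, and the threshold constants k1 and k2 are illustrative values often quoted in the literature, not parameters taken from this thesis.

  val k1 = 0.1    (* pick-up threshold constant, illustrative *)
  val k2 = 0.15   (* drop threshold constant, illustrative *)

  fun square (x : real) = x * x

  (* an unladen ant is likely to pick up an isolated corpse (small f) *)
  fun pPick (f : real) = square (k1 / (k1 + f))

  (* a laden ant is likely to drop its load where corpses are dense (large f) *)
  fun pDrop (f : real) = square (f / (k2 + f))

For an isolated corpse (f near 0), pPick approaches 1 while pDrop stays near 0; in the middle of a large pile the situation reverses. This asymmetry alone is enough to make piles grow and isolated corpses disappear.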

Ants also sort their brood in the nest. Workers move the larvae around, positioning the smaller ones along with the eggs in the center, and the larger ones in the periphery. Size is, however, not the only determining factor, as the pupae and prepupae (as large as the largest larvae) are placed between the larger and smaller larvae. It is likely that the brood is organized according to the care it requires, with "items" (eggs, larvae or (pre)pupae) that require similar care being placed close to each other, allowing the worker ants to be more efficient.

Clustering and sorting methods inspired by this behavior are presented in section 4 of this
chapter.


Nest-Building and Self-Assembly

Most social insect societies build nests of some type, usually very complex and intricate
constructions. The building of termite mounds has already been mentioned, ant-hills are
another obvious example. Social wasps also have the ability to build nests, ranging from
simple to highly complex. The wasps build the nests by chewing plant fibers, and
cementing them together with oral secretion. This is then shaped by the wasps to create
the different parts of the nest (pedicel, combs of cells, and external envelope). The initial
phase of nest construction is making a pedicel which attaches the nest to a branch or other
already existing structure. Then, two cells are built on either side of a flat extrusion of the
pedicel. More cells are added in an evolving fashion, eventually forming closely packed
parallel rows of cells. A row of cells is generally finished before a new one is started, and
rows are initiated by the construction of a centrally located first cell. The great majority
of building locations in the nest have two adjacent walls, but the wasps have a much
greater probability of adding new cells to a corner where three adjacent walls are present.
The alternative would be to start a new row by adding a cell on the side of an existing
row. Below is an image of the early stage of nest construction.

Fig. 3 – Early building phase of a wasp's nest [19].

This is an example of qualitative stigmergy, as it is the qualitative characteristic (the
current state) of the wasp-nest that determines the wasp’s actions. The individual wasp
does not have a comprehensive plan of how to build the nest, it simply responds with the
appropriate action to the current situation. It can also be associated with self-assembly,
but this will be explained more clearly in section 5.

A more obvious example of self-assembly in nature is seen in some species of ants [20]. In order to cross gaps or even small streams, they attach themselves to each other, forming a living bridge. This behavior has been replicated in the field of robotics, as shown in section 5.

2. Applications

Swarm Intelligence is a relatively new field in computer science, but it has already been applied to numerous tasks. The algorithmic technique of ant colony optimization (ACO) has proven effective in solving the classic traveling salesman problem and the quadratic assignment problem, and in communications network routing. This will be shown in some detail in the following section.

Algorithms based on the way ants cluster their colony's dead and organize their larvae can be used in analyzing banking data [21], as shown by Erik Lumer of University College London and Baldo Faieta of Interval Research in Palo Alto, California. They developed a method for analyzing a large banking database. By sorting the bank's customers using brood sorting, they were for example able to classify which customers would be more likely to repay a loan. This was not based on previous loan history, but on arbitrary information which could still indicate whether they were likely to repay the loan. This type of cluster analysis had already been done with other methods; the strength of the ant-based approach, however, is that the data can be easily visualized and that, unlike with most other methods, groups need not be pre-defined, but emerge automatically from the ants' sorting.

The flexible way in which honeybees assign work tasks can be used to find better ways of scheduling jobs in a factory. For example, Eric Bonabeau and Guy Théraulaz worked with Michael Campos of Northwestern University to make a technique for scheduling paint booths in a truck factory. The exact nature of how honeybees divide their work tasks is not known, but the general principle is that each bee works on a specific task, yet is still flexible enough to perform other tasks should the need arise. The paint booths in the truck factory paint trucks coming out of an assembly line, and each booth specializes in one color. A booth can change its color, but this is time consuming, and thus costly. However, if the queue of yellow trucks is moving slowly and a delivery of yellow trucks is required, it might be useful for a red booth to change its color. Using the honeybee system, the paint booths were able to schedule their tasks more efficiently than they were in the previously used centralized system [21].


3. Optimization Problems

Traveling Salesman Problem

As the name implies, the traveling salesman problem (TSP) is based on the idea that a salesman has to visit several cities. The problem is choosing the most efficient route that visits all the various destinations and returns to the starting point. Mathematicians worked on similar problems as early as the 1800s, but the TSP in the form we know it today seems to have first been studied in the 1930s by Karl Menger in Vienna and at Harvard [22].


Fig. 4 - TSP graph with five nodes. Optimal route in bold.

The goal in the TSP is to find the shortest path connecting n given cities, where each city is only visited once. With d_ij being the distance between two cities i and j, the problem can be defined as:

d_ij = [(x_i − x_j)² + (y_i − y_j)²]^(1/2)

where x_i and y_i are the coordinates of city i. It can also be defined on a graph (N, E), where the nodes N are cities and the edges of the graph E are the connections between the cities.
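As a concrete illustration of these definitions, the following Standard ML sketch computes the Euclidean distance above and the total length of a closed tour given as a list of (x, y) coordinates; the function names are illustrative.

  fun dist ((x1, y1), (x2, y2)) : real =
    Math.sqrt ((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2))

  (* total length of the closed tour visiting the cities in list order *)
  fun tourLength [] = 0.0
    | tourLength (first :: rest) =
        let
          fun go (prev, [], acc) = acc + dist (prev, first)  (* back to start *)
            | go (prev, c :: cs, acc) = go (c, cs, acc + dist (prev, c))
        in
          go (first, rest, 0.0)
        end

For example, tourLength [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)] evaluates to 12.0 (sides of lengths 3, 4 and 5).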

The TSP is a difficult problem to solve, as the number of possible tours is (n − 1)!, i.e. factorial in the number of cities ((n − 1)!/2 for the symmetric TSP) [23]. With a small number of cities it is possible to try all combinations to find the most effective, but as the number of cities increases, this rapidly becomes impractical. The most common version of the TSP is the Euclidean TSP, where the distances between the nodes are
Euclidean distances (what we think of as the "ordinary", straight line distance between
two points, as one would measure with a ruler). TSP can be symmetric or asymmetric. In
the symmetric TSP, the distance between two nodes is the same in both directions. In
asymmetric TSP, the distance from A to B is not equal to the distance from B to A.

A good TSP tour never crosses itself (if it did, it would be sub-optimal); it therefore forms a closed curve that can be divided into an "inside" and an "outside". When filling in the "inside" on sufficiently large tours, this can create quite interesting-looking images like the ones seen below.



Fig. 5 - Optimal D15112 Tour and Optimal PCB3038 Tour, shown as solid curves. [24]

When Dorigo, in collaboration with Colorni and Maniezzo, started work on making an algorithm based on the behaviour of foraging ants [25, 26, 27], the TSP was a logical place to start for several reasons. As described earlier, ants are very effective at finding the shortest path between the nest and a food source. It is reasonable to think that an algorithm based on their method could prove efficient in solving shortest-path problems like the TSP. The TSP is considered to be the benchmark problem in combinatorial optimization^I, which means it has been studied extensively and there are several other methods with which one can compare the results. While being a very difficult problem (NP-hard^II), the general principle is easy to understand.

^I Combinatorial optimization problems are concerned with the efficient allocation of limited resources to meet desired objectives when the values of some or all of the variables are restricted to be integral. Combinatorial optimization is the process of finding one or more best (optimal) solutions in a well-defined discrete problem space.
^II A problem is NP-hard if an algorithm for solving it can be translated into one for solving any other NP (Nondeterministic Polynomial time) problem.



Ant Colony System

The first ant colony based algorithm was called Ant System (AS). It was able to find good solutions within a reasonable time for small problems, but it did not scale up well. An improved algorithm was developed, titled the Ant Colony System (ACS). The following is a description of this algorithm [28].

ACS builds solutions to the TSP by moving on the problem graph from city to city until a tour is completed. During an iteration, each ant k, k = 1, ..., m, builds a tour executing n = |N| steps in which a probabilistic transition rule is applied. Iterations are indexed by t, 1 ≤ t ≤ t_max, where t_max is the user-defined maximum number of iterations to run.

For each ant, the transition from city i to city j at iteration t depends on:

1. Whether there are unvisited cities in the candidate list; if there are, the choice is constrained to these cities.

2. Whether or not the city has already been visited. Each ant maintains a tabu list which grows within each tour, and is emptied between tours. This memory is used to define, for each ant k, the set J_i^k of cities the ant still has to visit when it is in city i (in the beginning, J_i^k contains all cities except i).

3. The inverse of the distance, η_ij = 1/d_ij, called visibility. Visibility is based on strictly local information and represents the heuristic^III desirability of choosing city j when in city i. Visibility can be used to direct the ants' search, although a constructive method based solely on it would produce very low quality solutions. The heuristic information is static; in other words, it is not changed during problem solution.

4. The amount of virtual pheromone trail τ_ij(t) on the edge that connects city i to city j. The pheromone trail is updated on-line and is intended to represent the learned desirability of choosing city j when in city i. As opposed to distance, the pheromone trail is a more global type of information, and it is changed during problem solution to reflect the experience acquired by the ants.

^III Heuristic: a rule of thumb or guideline (as opposed to an invariant procedure). Heuristics may not always achieve the desired outcome, but are extremely valuable to problem-solving processes.

Use of a candidate list
A candidate list is a data structure commonly used when trying to solve large TSP
instances. The candidate list is a list of preferred cities to be visited from a given city.
Instead of examining all the possibilities from a city, unvisited cities in the candidate list
are visited first, and only when all cities in the candidate list have been visited are other
cities examined. The candidate list contains cl cities, which are the cl closest cities. The
cities in the list are ordered by increasing distance, and the list is scanned sequentially.
Transition rule
The probability of an ant k going from city i to city j during its t'th tour is determined by the following rule:

j = argmax_{u ∈ J_i^k} { τ_iu(t) · [η_iu]^β }   if q ≤ q_0

j = J                                           if q > q_0

where q is a random variable uniformly distributed over [0, 1], q_0 is a tunable parameter (0 ≤ q_0 ≤ 1), and J ∈ J_i^k is a city randomly selected according to the probability

p_iJ^k(t) = ( τ_iJ(t) · [η_iJ]^β ) / ( Σ_{l ∈ J_i^k} τ_il(t) · [η_il]^β )

The lower q_0 is set, the more often q > q_0 holds, and the more exploration is favored. When q ≤ q_0, heuristic knowledge about the distances between cities and learned knowledge memorized as pheromone trails lead to less exploration and more focus on the best solutions. β is an adjustable parameter that controls the relative weight of the visibility. If β = 0, the distances between cities are disregarded and only pheromone amplification is at work. This will lead to rapid selection of tours that may not be optimal.
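The rule is straightforward to express in code. The Standard ML sketch below assumes tau and eta are given as functions over the candidate cities, that the candidate list is non-empty, and that the two uniform random draws q and u on [0, 1) are supplied by the caller; names and structure are illustrative, not taken from any published ACS implementation.

  (* weight of candidate city j from the current city: tau j is the
     pheromone on the connecting edge, eta j its visibility *)
  fun weight (tau, eta, beta) j : real = tau j * Math.pow (eta j, beta)

  (* roulette-wheel selection: probability of j proportional to w j *)
  fun roulette w (cities : int list) (u : real) =
    let
      val total = foldl (fn (j, s) => s + w j) 0.0 cities
      fun pick ((j :: js), acc) =
            let val acc' = acc + w j / total
            in if u < acc' orelse null js then j else pick (js, acc') end
        | pick ([], _) = raise Empty
    in
      pick (cities, 0.0)
    end

  fun nextCity (tau, eta, beta, q0 : real) (candidates : int list)
               (q : real, u : real) =
    if q <= q0 then
      (* exploitation: the candidate maximizing tau * eta^beta *)
      #1 (foldl (fn (j, (best, wb)) =>
                   let val wj = weight (tau, eta, beta) j
                   in if wj > wb then (j, wj) else (best, wb) end)
                (hd candidates, weight (tau, eta, beta) (hd candidates))
                (tl candidates))
    else
      (* biased exploration: roulette wheel over the same weights *)
      roulette (weight (tau, eta, beta)) candidates u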

Pheromone trail update rule
Unlike in Ant System (the earlier version of this algorithm), where all ants were allowed to deposit pheromone after completing their tours, in ACS only the ant that generated the best tour since the beginning of the trial is allowed to globally update the concentrations of pheromone on the branches. This leads to exploration being more directed. The update rule is:

τ_ij(t) ← (1 − ρ) · τ_ij(t) + ρ · Δτ_ij(t)

where the (i, j)'s are the edges belonging to T+, the best tour since the beginning of the trial, ρ is a parameter governing pheromone decay, and

Δτ_ij(t) = 1 / L+

where L+ is the length of T+.

Local updates of pheromone trail
The local update rule makes the edge pheromone level diminish when an ant visits the edge. This makes visited edges less and less attractive as more ants visit them, indirectly favoring the exploration of not yet visited edges, and prevents ants from converging on the same sub-optimal tour. The local update is performed as follows: when, while performing a tour, ant k is in city i and selects city j ∈ J_i^k, the pheromone concentration of edge (i, j) is updated by the following:

τ_ij(t) ← (1 − ρ) · τ_ij(t) + ρ · τ_0
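Both update rules smooth the pheromone level toward a reinforcement term, differing only in what that term is and when they are applied. A brief Standard ML sketch, assuming the pheromone matrix is held in a mutable Array2.array, that tau0 is the constant the local rule decays toward (the initial pheromone level in standard ACS descriptions), and that the best tour T+ is represented as a list of edges; all names are illustrative.

  (* local update: applied to edge (i, j) each time an ant crosses it *)
  fun localUpdate (tau, rho, tau0) (i, j) =
    Array2.update (tau, i, j,
      (1.0 - rho) * Array2.sub (tau, i, j) + rho * tau0)

  (* global update: reinforce only the edges of T+, the best tour so far,
     by an amount 1/L+ *)
  fun globalUpdate (tau, rho) (bestTour : (int * int) list, bestLen : real) =
    app (fn (i, j) =>
           Array2.update (tau, i, j,
             (1.0 - rho) * Array2.sub (tau, i, j)
             + rho * (1.0 / bestLen)))
        bestTour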
Results
ACS-TSP was tested and compared to several other algorithms, including the elastic net algorithm (EN), self-organizing maps (SOM), simulated annealing (SA), genetic algorithms (GA), and evolutionary programming (EP).

In the first table, the algorithm is tested against SA, EN and SOM on randomly generated problems.




             ACS-TSP   SA      EN      SOM
City set 1   5,88*     5,88*   5,98    6,06
City set 2   6,05      6,01*   6,03    6,25
City set 3   5,58*     5,65    5,70    5,83
City set 4   5,74*     5,81    5,86    5,87
City set 5   6,18*     6,33    6,49    6,70

Table 1 - 50-city problems. ACS-TSP was run for 2500 iterations with 10 ants. Best results are marked with *. [3]

Below is a table comparing ACS-TSP with GA, EP, and SA on three test problems (available at TSPLIB [29]). Each algorithm has two columns: the first gives the best integer tour length (in parentheses, the best tour length when distances are given as real numbers), and the second gives the number of tours that were run before the best integer tour was discovered. n/a means the results were not available.



Tour                   ACS-TSP best       # iter.   GA best      # iter.   EP best        # iter.   SA best    # iter.
Eil50 (50 cities)      425* (427,96)      1830      428 (n/a)    25000     426 (427,86)   100000    443 (n/a)  68512
Eil75 (75 cities)      535* (542,37)      3480      545 (n/a)    80000     542 (549,18)   325000    580 (n/a)  173250
KroA100 (100 cities)   21282* (21285,44)  4820      21761 (n/a)  103000    n/a            n/a       n/a        n/a

Table 2 - Comparison of algorithms on three different TSP problems. Best results are marked with *. [3]

A local search procedure to be used together with ACS-TSP was introduced by Dorigo and Gambardella in order to make the algorithm more effective on larger problems. They used a local search procedure called 3-opt, which can be used for both the symmetric and asymmetric versions of the TSP. 3-opt cuts a tour into three segments, reorders them, and if the new tour is shorter it becomes the current tour [30]. The results are compared with the results of the genetic algorithm (STSP) that won the First International Contest on Evolutionary Optimization. The STSP operators were finely tuned to the TSP application, and a local optimizer took the solutions generated by STSP to the local optimum.


Tour                 ACS-3-opt best   ACS-3-opt average   STSP best   STSP average
d198 (198 cities)    15780*           15781,7             15780*      15780
lin318 (318 cities)  42029*           42029               42029*      42029
att532 (532 cities)  27693            27718,2             27686*      27693,7
rat783 (783 cities)  8818             8837,9              8806*       8807,3

Table 3 - Results from 10 runs on problems of various sizes. Best results are marked with *. [3]

It is not clear why the algorithm has not been tested on larger tours, as this would have exposed the performance of the algorithms more clearly, separating the good ones from the bad. The largest TSP instance solved to date (meaning an optimal solution was found) is the 24,978-city tour of Sweden. It was solved in May 2004 [31] with the LKH code [32] by Keld Helsgaun.
Quadratic Assignment Problem

The QAP can be understood by picturing the following: at a number of locations, an equal number of activities have to be performed. Between each pair of activities there is a flow of resources (data, materials, humans, etc.). The objective is to find the placement of activities that minimizes the distance resources need to be moved. For instance, two activities with a high flow of resources between them should preferably be placed close to each other. With a high number of activities/locations, this problem becomes very complex, and like the TSP it has been shown to be NP-hard [33].
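Written out, the objective is to minimize the sum, over all pairs of activities, of the flow between them multiplied by the distance between their assigned locations. A small Standard ML sketch of this cost function, with the flow and distance matrices supplied as functions and the assignment as a function p from activities to locations; everything here is an illustrative formulation, not code from any QAP solver.

  fun qapCost (n : int)
              (flow : int * int -> real)   (* flow between activities     *)
              (dist : int * int -> real)   (* distance between locations  *)
              (p : int -> int)             (* p i = location of activity i *)
              : real =
    let
      val idx = List.tabulate (n, fn i => i)
    in
      foldl op+ 0.0
        (List.concat
           (map (fn i => map (fn j => flow (i, j) * dist (p i, p j)) idx)
                idx))
    end

A solver, ant-based or otherwise, then searches the space of assignments p for one minimizing qapCost.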

Maniezzo et al. [34] were the first to apply an ant colony algorithm to the QAP, namely AS-QAP. The general principle was similar to solving the TSP: ants are placed in each location and leave pheromone trails as they move from location to location, coupling an activity with each location along the way. The results of this initial algorithm were good, but not extraordinary; it was outperformed by simulated annealing, tabu search, and sampling and clustering.

A new algorithm, Hybrid Ant System (HAS-QAP), was proposed by Gambardella et al. [35] in 1999. It differs from the previous ACO algorithms in that the ants do not create solutions, but modify them. Consequently, pheromone trails are used to guide modifications, as opposed to aiding in their construction. The algorithm consists of three procedures: pheromone-trail based modification, local search, and pheromone trail updating. The algorithm was found to have top-of-the-line performance on "structured" problems, but did not perform well on "regular" problems, which lack structure. Structured problems include most real-world problems and are characterized by highly irregular attributes; in regular problems, the attributes follow the same statistical distribution. The reason for the difference in performance is that in structured problems, the majority of good solutions are in the close vicinity of the best solution. Reinforcing a good solution will in such a case most likely move the algorithm toward the optimum. In regular problems, the good solutions are scattered all over the set of feasible solutions, which means that reinforcing a good solution will not necessarily bring the algorithm closer to an optimal solution.

A different version by Maniezzo and Colorni [36], which among other things includes a local search procedure, has, according to Maniezzo [37], outperformed GRASP^I (a well-known heuristic for the QAP) both in terms of the quality of the best and the average solutions produced.

^I Greedy Randomized Adaptive Search Procedure


Routing in telecommunication networks

Another optimization problem where swarm-based algorithms have proven useful is telecommunication network routing. Routing is a core concept and an integral part of any network, such as the internet or a telephone network. Routing is the mechanism that finds paths through the network, allowing information (in a computer network, usually packets) to be sent from its source to its destination. This is necessary because not all nodes in a network are directly connected to each other, meaning that data has to pass through several nodes before it reaches its final destination.

What makes a good routing algorithm is its ability to maximize network performance, such as throughput^I, while minimizing costs such as packet delay and hop count^II [3, pp. 80]. The algorithm needs to be able to handle the dynamic nature of the network: traffic conditions are constantly changing, nodes or links may fail, and the complexity of the network may increase as it expands in size. Static routing systems also exist, where routing remains fixed regardless of changes in the network, but these systems are ineffective and hardly ever implemented. Demands on routing techniques are ever-growing, driven by the increase in diverse and heterogeneous networks [38]. As a result, researchers have looked to swarm intelligence in search of more effective algorithms. A brief description of a few of these will be given in this section.

^I Transfer speed, correctly delivered bits/sec.
^II The number of legs traversed by a packet between its source and destination.
ABC: Ant-Based Control

ABC is a swarm intelligence based routing algorithm for telephone networks, developed by Schoonderwoerd et al. [39]. ABC uses many simple agents (ants) that move from node to node in the telephone network. The ants can be launched from any node in the network at any time, and the destination node is chosen randomly.

As the ants move around the network, each ant leaves an amount of simulated pheromone at each node it encounters on its way to its destination. The pheromone is a function of the congestion of the node and the distance the ant has traveled from the source node. The ant chooses which node to move to next based on the local pheromone distribution. Once an ant reaches its destination, it is eliminated.

Each node in the network has a routing table, where each row corresponds to a neighboring node and each column corresponds to a destination node. The illustration below shows an example of a routing table for one node in a five-node network.



                      Destination nodes
                      1      3      4      5
Neighbor nodes   1    0,3    0,4    0,1    0,2
                 3    0,1    0,5    0,2    0,2
                 4    0,1    0,2    0,5    0,2

Fig. 6 - Routing table for node 2; the original figure also shows a diagram of the network's five nodes.

An ant going from node 1 to node 4, currently located at node 2, will update the column for its source node (destination node 1), while using the column for destination node 4 to probabilistically choose which node to move to next. The ant's influence on a routing table depends on the ant's performance: the amount of reinforcement deposited by an ant is reduced if it has spent a long time in the network since leaving its source.
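An ant's next hop is thus one weighted random draw over a routing-table column. A minimal Standard ML sketch, assuming the relevant column is given as a list of (neighbor, probability) pairs summing to one, and that the uniform draw u on [0, 1) comes from the caller's random number generator; the structure is illustrative, not ABC's actual code.

  (* one routing-table column: (neighbor, probability) pairs, summing to 1 *)
  fun chooseNextHop (column : (int * real) list) (u : real) : int =
    let
      fun pick ((node, p) :: rest, acc) =
            if u < acc + p orelse null rest then node
            else pick (rest, acc + p)
        | pick ([], _) = raise Empty
    in
      pick (column, 0.0)
    end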

To avoid stagnation in the exploration of new routes, Schoonderwoerd et al. [39] included "noise" (an "exploration factor") in the algorithm, leading to some randomization in the ants' node-to-node movement. Also, routes used both recently and frequently by ants are favoured when building paths to route new calls.

When a call is made in the network, a route is set up, going from node to node until the destination is reached. The choice of the next node is based on the probabilistic values in the current node's routing table (the pheromone distribution). Once the call is set up, the capacity of each node along the route is reduced by a fixed value. If any of the nodes along the route run out of spare capacity, the call is rejected.

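The call-admission rule can be sketched in Standard-ML directly from this description: a call is accepted only if every node on the route has spare capacity, in which case each node's capacity is reduced by the fixed amount. Representing node capacities as an array indexed by node identifier is an assumption for illustration.

(* Accept the call and reserve capacity along the route, or reject it. *)
fun admitCall (route : int list, capacity : real array, cost : real) =
    if List.all (fn n => Array.sub (capacity, n) >= cost) route
    then ( List.app (fn n => Array.update (capacity, n,
                                 Array.sub (capacity, n) - cost)) route
         ; true )                      (* every node had spare capacity *)
    else false                         (* some node is out of spare capacity *)
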
ABC was tested against several other routing algorithms with very good results. The algorithm was tested on a 30-node interconnection network of the kind used by British Telecom. The performance of the algorithm can be measured by the number of call failures. Below is a table showing the results of Schoonderwoerd et al.'s simulations [39].


Algorithm                        Average percentage of call failures    Standard deviation
Shortest path                    12.57%                                 2.16
Mobile agents                     9.19%                                 0.78
Improved mobile agents            4.22%                                 0.77
Ant-Based Control (0% noise)      1.79%                                 0.54
Ant-Based Control (5% noise)      1.99%                                 0.54

Table 4 - Average percentage of call failures and its standard deviation for five different algorithms [39].


AntNet

AntNet is a routing algorithm somewhat similar to ABC, but with a few differences that enable it to be used in data communications networks. AntNet was introduced by Di Caro and Dorigo in 1997, and presented in a number of papers in the following years [40, 41, 42]. In 2002 Di Caro published a thesis [43], supervised by Dorigo, presenting a revised version of the algorithm, AntNet++.

As with ABC, AntNet is based on ants moving around the network from node to node. However, there are two types of ants in AntNet: forward ants and backward ants. The only purpose of forward ants is to collect network delay data. This raw data is inherited by the backward ants, which use it to update the routing tables of the nodes. Also, in addition to a routing table, each node has an array of local traffic statistics.

The following is a step-by-step description of how AntNet works (a code sketch follows the list):

1. At regular time intervals, each network node launches a forward ant to a
destination node. Forward ants share the same queues as data packets, thus
experiencing the same traffic loads.
2. The ants find a path to the destination based on the current routing tables and
some random behavior.
3. As each node is reached, the identifier of the node and the time it took to reach it are pushed onto a memory stack.
4. When the destination is reached, the forward ant creates a backward ant which
inherits the memory stack, and the forward ant dies.
5. The backward ant follows the forward ant's path in reverse by popping the stack
entries as it goes along. Backward ants do not share the same link queues as data
packets but use higher priority queues since their task is to quickly update the
nodes with information gathered by the forward ants.
6. The routing table and local traffic statistics of each visited node are updated based
on trip times.
7. Once the final node is reached (the forward ant's starting node) the backward ant
dies.

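The mechanics of this scheme can be summarized in a short Standard-ML sketch. Treating node identifiers as integers, passing in hop and clock functions, and abstracting the routing-table update into a reinforce function are simplifying assumptions made for illustration; this is not Di Caro and Dorigo's implementation.

(* Forward ant: walks toward dest, pushing (node, arrival time) onto its stack. *)
fun forward (current : int, dest : int, hop, clock, stack) =
    if current = dest
    then stack                          (* hand the stack over to a backward ant *)
    else
        let
            val next = hop current dest (* probabilistic routing-table choice *)
            val t = clock ()            (* time at which the next node is reached *)
        in
            forward (next, dest, hop, clock, (next, t) :: stack)
        end

(* Backward ant: retraces the path in reverse by popping the stack. *)
fun backward (reinforce, []) = ()       (* starting node reached: the ant dies *)
  | backward (reinforce, (node, time) :: rest) =
        ( reinforce (node, time)        (* update routing table and traffic statistics *)
        ; backward (reinforce, rest) )
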
AntNet was tested on a set of model networks: SimpleNet (a simple network created to study some aspects of the algorithm in detail), NSFNET (the old US T1 backbone), NTTnet (a major Japanese backbone), and two random networks with a larger number of nodes than the previously mentioned networks. The algorithm was compared to several other relevant routing algorithms, among them OSPF (Open Shortest Path First), the routing algorithm currently used in the Internet. The results were measured by throughput, delay distribution for data packets, and network capacity usage I, expressed as the sum of the used link capacities divided by the total available link capacity. The results of these tests (too extensive to present here) show that AntNet gave the best results [43].

An improved version of AntNet, AntNet-FA (Flying Ants), has one major difference: forward ants use the same high-priority link queues as backward ants. This is to avoid routing tables being updated with out-of-date data, which could easily occur if a forward ant spent a long time traveling from origin to destination. Backward ants then update the routing tables using a model that estimates trip times. Comprehensive test results for this algorithm do not seem to be available at this time, but preliminary results [43] on random networks have been better than for the standard AntNet algorithm.

Despite these promising results, Di Caro and Dorigo have identified several aspects of the algorithm that need to be addressed:
• Adaptivity in the scheduling of new ants, and in the definition of the values for
their internal parameters.
• Adaptivity in the decision and updating policy of the single ant in order to cope
with the specificity of local situations.
• Explicit support for the management and allocation of network resources, as
required in the context of connection-oriented and quality-of-service networks.

To deal with these issues, a new version of the algorithm, AntNet++, is currently being developed. AntNet++ is presented to some extent in Di Caro's thesis [43], but test results from comparisons with other algorithms are apparently not yet available.

I Ratio between the bandwidth occupied by the routing packets and the total available network bandwidth.
AntNet++ is quite different from the previous AntNet algorithms; most notable is the fact that it uses three different types of ants:

• Node managers, or colony controllers, control the activities of the colony.
• Active perceptions are quite similar to regular ACO ants, and work as scout ants.
• Effector agents, or worker ants, carry out the routine jobs for the colony as issued by the node managers.

Details of AntNet++ will not be presented here, as it is apparently a work in progress; no further developments have been presented since Di Caro's thesis.


4. Clustering and Brood Sorting

Deneubourg et al. [44] have developed models to simulate the behavior of ants that cluster their dead in cemeteries or sort their brood; this is called the Basic Model (BM). The essential concept of the clustering model is that agents (ants) move randomly around a grid on which items (corpses) are scattered randomly. The agents pick up items and drop them at some other location where more items are located. In this first model it is assumed that there is only one type of item. The probability p_p of an ant-agent picking up an item is given by:

p_p = (k_1 / (k_1 + f))^2

with f being the perceived fraction of items in the neighborhood of the agent, and k_1 a threshold constant. When f is much smaller than k_1, p_p is close to 1, which means that the probability of picking up an item is high when there are few items in the neighboring area. When p_p is close to 0 (f >> k_1), the chance of removing an item from a dense cluster is low. The probability p_d of an agent dropping an item is given by:

p_d = (f / (k_2 + f))^2

where k_2 is another threshold constant. This works in the opposite way of the pick-up behavior: when an ant is close to a cluster of items, f >> k_2, which means p_d is close to 1, making it probable that the ant will drop the corpse. The actual value of f is based on the number of items discovered within a certain number of time units. By making some small adjustments, the same model can be used as a sorting algorithm (comparable to how ants sort their brood): by replacing f with f_A and f_B, the ants will sort A's and B's into different clusters.

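As a concrete illustration, the two probability rules can be written directly in Standard-ML. The constants k1 = 0.1 and k2 = 0.1 below are illustrative assumptions, not values taken from Deneubourg et al. [44].

val k1 = 0.1                            (* pick-up threshold constant *)
val k2 = 0.1                            (* drop threshold constant *)

fun square (x : real) = x * x

(* high when few items are perceived nearby (f << k1) *)
fun pPick f = square (k1 / (k1 + f))

(* high when many items are perceived nearby (f >> k2) *)
fun pDrop f = square (f / (k2 + f))

For example, pPick 0.0 = 1.0, so an isolated item is always picked up, while pDrop 0.9 is roughly 0.81, so an item carried into a dense cluster is very likely to be dropped.
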
Naturally, the previously mentioned algorithm can only deal with objects that are either similar or different, either A or B (the distance is binary). A more generalized algorithm was developed by Lumer and Faieta [45] (generally referred to as the LF model) to deal with objects that have a larger number of attributes, meaning it can be used for numerical data analysis. The details of the algorithm will not be explained here, but it works by projecting the space of attributes onto a two-dimensional grid and clustering objects according to average similarity. As mentioned earlier, this algorithm was successfully used on banking data, but performed poorly with respect to computation time.

A simplified version of the BM, intended to be more easily implemented in hardware, has been developed by Åge Eide et al. [46], and recently presented at the IPSI Stockholm Conference. In this model, the agents work without the "memory" they have in the BM (represented there by the f variable), and the probability evaluations used for picking up or dropping items are replaced by simple rules. An agent picks up the first item it comes across, provided it does not belong to a cluster, and drops the item as soon as it reaches a cluster, or when a "step-limit" is reached. This step-limit was shown to be the determining factor for the number of clusters created. As shown in the illustrations below, an increase in step-limit led to a decrease in the number of clusters. Also, giving each ant the same step-limit leads to more clusters being formed than if each ant has a different step-limit. (Fig. 7-10 are taken from Åge Eide et al. [46].)

Fig. 7 – Distribution of items at startup.
Fig. 8 – Clustering results from one agent with a step limit of 1,000 steps.



Fig. 9 – Clustering results from one agent with a step limit of 10,000 steps.
Fig. 10 – Clustering results from one agent with a step limit of 100,000 steps.

Unlike in the BM, each item can only be picked up once. Once it is dropped, it becomes a "stacked item", or a cluster, and cannot be moved again. This is why more clusters are created if all the ants have the same step-limit: in the beginning there are no clusters for ants to run into, so once the ants reach their step-limit, each of them makes a cluster. If one ant can make a cluster before the others, there is a chance the others will run into this cluster before reaching their own step-limits, so fewer clusters are created. The number of agents used only had an impact on processing time. A fascinating aspect of this method is the way it embraces the core principle of swarm intelligence: advanced results through simple means.
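
How simple these rules are can be seen from the following Standard-ML sketch of the agent's decision step. The cell and action types, and the exact behavior when dropping next to a cluster, are assumptions made for illustration; this is not Eide et al.'s implementation.

datatype cell = empty | cluster | dead  (* state of the grid cell under the agent *)
datatype action = PickUp | Drop | Walk

fun decide (here : cell, carrying : bool, steps : int, stepMax : int) =
    case (here, carrying) of
        (dead, false)   => PickUp       (* first loose item found: pick it up *)
      | (cluster, true) => Drop         (* drop in the first empty cell by the cluster *)
      | (_, true)       => if steps >= stepMax
                           then Drop    (* step-limit reached: start a new cluster *)
                           else Walk
      | _               => Walk         (* otherwise, keep moving randomly *)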

Another ant clustering algorithm, ACLUSTER [47], has been used in combination with Linear Genetic Programming (LGP) by Abraham and Ramos [48] to perform data mining operations on web usage data. ACLUSTER is, according to Ramos, a much simpler algorithm than, for instance, the LF model, as it avoids short-term memory strategies and behavioral switches. In Abraham and Ramos' paper, the ACLUSTER algorithm was used on pre-processed data from the log files to find data clusters. These clusters were then fed to the LGP model for analysis of the usage trends. The same operations were performed using other methods and the results were compared. The overall best method turned out to be i-Miner, a "hybrid evolutionary fuzzy clustering-fuzzy inference system." However, the ant clustering algorithm proved to be more effective than self-organizing maps in improving the performance of the LGP model.

A more thorough analysis of the performance of ant-based clustering was recently done by Handl, Knowles and Dorigo [49]. They compare an ant-clustering algorithm developed by Handl and Meyer [50] to two other clustering algorithms, k-means and average link. The algorithms were tested on three synthetic test sets and three real data collections from the Machine Learning Repository [51]. The real sets were "Iris" (150 items, 4 attributes), "Breast Cancer Wisconsin" (699 items, 10 attributes) and "Yeast" (1484 items, 8 attributes). The ant-based algorithm performed well on all test cases. On the synthetic data sets it performed second best with respect to solution quality, and on the real sets it demonstrated the highest performance on three out of four measures. On the "Yeast" data set it detected far too few clusters, because the structure within the data is not very pronounced; however, poor performance was noted for all the algorithms on this data set. On the smaller test cases it suffered from high runtimes, but the algorithm scales linearly, which allowed it to outperform the others on large data sets. Along with its linear scalability, its other strong points are its capacity to work with any kind of data that can be described in terms of symmetric dissimilarities, and its ability to automatically determine the number of clusters within the data. All in all, the results were quite promising, indicating that ant-based clustering is a viable alternative as a clustering technique.

There have also been many implementations of swarm clustering in a robotic context. One of the first projects was done by Beckers et al. [52]. They designed simple robots that were able to move pucks around using a C-shaped grabber. The grabber was also equipped with a resistance sensor, allowing the robot to measure the resistance of the object it was pushing. By setting a resistance threshold, the robot could be forced to stop pushing once it had a certain number of pucks in its grabber (to avoid already existing clusters being moved around).

Fig. 11 – Robots clustering frisbees. From Beckers et al. [52].

Moving around in an enclosure, these robots were able to perform the same clustering operations as artificial ants. A slightly more advanced version of decentralized robotic clustering was done by Melhuish et al. [53], where the robots were able to distinguish between red and yellow frisbees, which were moved around and clustered according to color. Neither of these two robotic implementations is necessarily directly useful, but they do provide an example of how simple robots can work individually in a decentralized way and yet accomplish a common goal.

5. Self-assembly I

I Not to be confused with molecular self-assembly or self-assembly in nanotechnology.

Wasp nests

In the book Swarm Intelligence: From Natural to Artificial Systems, a model of self-assembly inspired by the nest-building of wasps is presented [3, ch. 6]. The model, developed by Théraulaz and Bonabeau [19], is able to construct complex architectures, presented as 3D models (see Fig. 12 below), with many attributes similar to natural wasp nests.

Fig. 12 – 3D model of structure made through self-assembly

The model, or algorithm, is unfortunately not explained in detail, so only a simple outline will be presented here. Agents move in a three-dimensional discrete space and create structures by placing building blocks, referred to as bricks. The placement of these bricks is based purely on the pre-existing structure; that is, the status of the cells surrounding the agent determines whether the agent will place a brick at its current location. A configuration of bricks that makes the agent place a brick is referred to as a microrule, and a collection of compatible microrules is referred to as an algorithm. All simulations start with one brick, and no bricks can be removed once they have been placed. From the viewpoint of the brick, this method of construction can be called self-assembly, because it is the configuration of the bricks themselves that determines the growth of the structure.
However, an algorithm must be at the base of this, and this is where Bonabeau et al. are a little unclear: "An algorithm can therefore be characterized by its microrule table, a lookup table comprising all its microrules, that is, all stimulating configurations and associated actions." [3, p. 212] Unfortunately, the book at this point refers to an algorithm that is not printed anywhere. Several advanced and interesting-looking structures are, however, presented, a simple case of which is illustrated in Fig. 12.

Swarm Bots

Self-assembly has also been an inspiration in the world of robotics. Early research was done by Fukuda et al. [54], and more recently by Pamecha et al. [55] and Hosokawa et al. [56]. Swarm Bots [57] is an interesting ongoing project that has been quite successful in creating ant-like robots with self-organizing and self-assembling abilities. A swarm bot is "an artefact composed of a number of simpler, insect-like, robots (s-bots), built out of relatively cheap components, capable of self-assembling and self-organizing to adapt to its environment." The swarm bots consist of three main elements: s-bots, a simulator, and swarm-intelligence-based control mechanisms. Each s-bot is a completely autonomous entity, capable of navigating, perceiving its surrounding environment, and grasping objects. The s-bot is also capable of communicating with other s-bots and joining them either rigidly or flexibly. The s-bot moves around on a combination of tracks and wheels. When rigidly connected, the swarm-bot can cross gaps, as shown in Fig. 13.

Fig. 13 – Swarm-Bot crossing a gap [57].

The s-bots are also equipped with a 'light ring' that can glow in different colors and blink at various frequencies; this can be used for communication.

In order to develop and test the control software for the s-bots without being dependent on the physical robots, a simulator, Swarmbot3D, has been developed. Using this software the developers are able to:
• Accurately predict both the kinematics and dynamics of a single s-bot and of a swarm-bot in 3D.
• Evaluate hardware design options for different components.
• Design swarmbot experiments in 3D worlds.
• Investigate different distributed control algorithms.

Information about the ongoing swarm bots project is available at the project homepage:
http://www.swarm-bots.org/
D. Machine-learned ant-clustering

As we have seen, there are many different ways of simulating systems and behaviors found in nature. Sometimes we do not know how the natural systems actually work; we can only observe their behaviors and actions, and try to design systems that seemingly act in the same way. When trying to imitate nature like this, things can easily get too complex and intricate, so it is a good idea to keep in mind that "simple is beautiful".

ADATE

A new way of developing an algorithm that could be used in a swarm-based system would be through automatic programming. Roland Olsson at Østfold College (HiØf) has developed a unique system for Automatic Design of Algorithms Through Evolution (ADATE) [58]. This system is capable of automatically generating algorithms through a large-scale combinatorial search that employs sophisticated program transformations. Using ADATE, one should be able to generate an algorithm that could work as, for example, the brain of an ant in an ant swarm.

ADATE is a very powerful and promising tool that has previously been able to generate algorithms that are more effective than "man-made" ones. The main drawback of ADATE is that it requires a large amount of processing power. This means that the complexity and size of the problem to be solved must be limited. However, as we know, smaller is often better, and keeping in mind that swarm agents are preferably quite simple little programs, ADATE seems like the right tool for the job.

When deciding what kind of swarm-related problem to work with, the choice soon fell on ant-clustering, because of the simplicity and purity demonstrated by the ant-clustering algorithm developed by Åge Eide et al. [46] (see chapter C-4). Since the concept and algorithm used there are so simple, they seemed well suited as a base for further development with ADATE.
1. Ant clustering system in Standard-ML

Åge Eide's ant-clustering algorithm was implemented in Matlab. The language used with ADATE is a purely functional subset of Standard-ML (SML), called ADATE-ML. Standard-ML is a functional programming language that is both safe and fast, and it is one of several languages in the ML (Meta Language) family, which also includes Lazy ML, CAML, CAML Light, and OCaml. In order to get a full understanding of the ant-clustering algorithm, it was first implemented in regular Standard-ML, using Standard ML of New Jersey (SML/NJ), a free compiler and programming environment, since the author was initially unfamiliar with ADATE-ML.

As explained in chapter C-4, ant-clustering works by simulating an ant that moves around collecting ant corpses and placing them in clusters. The ant moves around in a two-dimensional grid, represented in this program as an array of arrays. Each cell in the array is defined as empty, dead, or cluster. The size of the grid and the status of each cell are predefined, and the array is given to the program as a parameter. The grid is not a torus grid, so one can consider the edges of the grid to be walls. Making it this way was not a conscious decision, but the difference between a torus grid and a regular grid is not believed to have a significant impact on the results. Each "turn", the ant moves randomly to one of the surrounding squares (including diagonals, as shown in Fig. 14), or remains in the current square. There is no advantage in not moving, but for technical reasons it remains a possibility. This leaves the ant with nine possible options for movement.

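A minimal sketch of such a movement step is shown below, using the nine-option scheme of Fig. 14. Clamping the coordinates at the walls and the function names are assumptions made for illustration; the full implementation is presented later in this chapter.

(* Keep a coordinate inside the grid; the edges act as walls. *)
fun clamp (v, lo, hi) = Int.max (lo, Int.min (v, hi))

(* One random step: dx and dy are each -1, 0, or 1, giving nine options. *)
fun step (R, (x, y), rows, cols) =
    let
        val dx = Random.randRange (~1, 1) R
        val dy = Random.randRange (~1, 1) R
    in
        (clamp (x + dx, 0, rows - 1), clamp (y + dy, 0, cols - 1))
    end
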
If the ant reaches an empty cell, it will move on. If the cell contains a dead ant, the ant will pick it up, unless it is already carrying one. The ant will then keep moving around until it reaches a cluster. Once it does, it takes one step, checks if the cell is empty, and drops the dead ant there if it is. The ant keeps picking up dead ants and clustering them until there are no dead ants left.
Fig. 14 – The nine available movement options for an ant located in cell 5 (a 3x3 neighborhood of cells numbered 1-9).

A step limit is defined to keep the ant from moving around too long before making a drop: once the step limit is reached, the ant drops the dead ant and makes a new cluster. Steps are counted while the ant is carrying, and reset to zero when a drop is made. As will be seen in the examples below, the step limit has an impact on the number of clusters created. In Åge Eide's program there are initially no clusters, so the first cluster is made once the ant reaches its step limit after picking up its first dead ant. In the experiments carried out here, there is one clustered ant placed in the grid when the program starts. The difference is not significant; primarily it just eliminates the ant's initial wandering before it reaches the step limit. There is also a total step count, which includes all steps the ant has made, as a measure of how efficient it is.

The goal of an ant-clustering program is naturally to have all the ant corpses placed in clusters, or groups. Although it may seem obvious that the ideal solution is to be left with a single cluster, this is not necessarily correct in all situations. Minimizing the number of clusters is nevertheless used as the measure of effectiveness in these experiments, since it in many ways can be seen as the opposite of a random scattering of single ants.

Implementation – explaining the code

The following code is the implementation of the ant-clustering algorithm in Standard-ML. In order to make the code more easily adaptable to ADATE-ML, a version was made that isolates the "brain" of the ant in a single function f(); this is the version presented here. The initial version, which differs slightly, is included in appendix A.

First of all, variables are initialized. Cells can be either empty, cluster, or dead. The grid, or world, is a list of lists, which is converted to an array. Each sub-list represents one row in the grid. In the example below the grid consists of three rows and three columns, with only empty cells. R is a random number generator used later when taking a step in a random direction. The stepMax variable determines the ant's step-limit.


datatype cell = empty | cluster | dead;   (* the possible states of a grid cell *)
val world = Array2.fromList[
    [empty,empty,empty],
    [empty,empty,empty],
    [empty,empty,empty]
];                                        (* a 3x3 grid of empty cells *)
val R = Random.rand(0,99);                (* seeded random number generator *)
val stepMax = 1000;                       (* the ant's step-limit *)



The result function writes the end result of the clustering to a file. The method is quite