Evolving Neural Networks


Learning and Evolution: Their secret
conspiracy to take over the world.


There are two forms of adaptation:

Learning

Training on a set of examples.

Fitting your behavior to the training data.

Minimizing error.

Evolution

Population-based search.

Random mutation.

Fitness selection.
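As a rough sketch of the second form, population-based search boils down to a mutate-and-select loop. The fitness function, population size, and mutation scale below are all hypothetical choices for illustration:

```python
import random

random.seed(0)  # seeded only so the sketch is reproducible

def evolve(fitness, genome_len=8, pop_size=20, generations=50):
    """Minimal population-based search: random mutation + fitness selection."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness selection: keep the better half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Random mutation: refill the population with perturbed copies.
        children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: be as close as possible to the all-ones genome.
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

Note there is no error gradient anywhere; the only feedback is the fitness ranking.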

Combining The Two

Combining the strategies of evolutionary
algorithms with learning produces
“evolutionary artificial neural networks”
or EANNs.

This increases the adaptability of both in a way that neither system could achieve on its own.

This can also give rise to extremely complex relationships between the two.

What Can an EANN Do?

Adjust the network weights.

Learning rules.


Build an architecture to fit the problem at hand.
Does it need hidden layers?

Is the propagation delay too large?

Is the environment dynamic?

Perhaps the loftiest goal is evolving a learning rule for the network.

Evolving The Weights

Why evolve the weights? What’s wrong with backpropagation?

Backpropagation is a gradient descent algorithm. Such algorithms can get stuck in a local minimum (or, for gradient ascent, on a local maximum).

(Figure: an error surface showing the initial weights, a local maximum, and the optimal solution.)

Overcoming Local Min/Max

Since evolutionary algorithms are population based, they have no need for gradient information.

Consequently, they do better in noisy or complex environments.

Though getting stuck on local maxima can be avoided, there is no guarantee that the global maximum will be found.
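To make that concrete, here is a minimal sketch of evolving the weights of a tiny fixed 2-2-1 network on XOR using nothing but mutation and selection, with no gradients anywhere. The architecture, population size, and mutation scale are arbitrary choices for illustration, not anyone's published setup:

```python
import math
import random

random.seed(1)  # seeded only so the sketch is reproducible

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """2-2-1 network; w holds 9 weights (two hidden units plus an output unit,
    each with a bias)."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def neg_error(w):
    # Fitness = negated squared error over the training set.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=neg_error, reverse=True)
    elite = pop[:10]                       # fitness selection
    pop = elite + [[g + random.gauss(0, 0.3) for g in random.choice(elite)]
                   for _ in range(20)]     # random mutation
best = max(pop, key=neg_error)
```

Because the elite survive unchanged, the best error never gets worse; but as the slide warns, nothing guarantees the global optimum is reached.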

Population Samples

The Permutation Problem

A gigantic problem that reflects the noisy real world is the permutation, or competing conventions, problem.

It is caused by a many-to-one mapping from genotype to phenotype within neural nets.

This problem kills the efficiency and effectiveness of crossover, because effective parents can produce slacker offspring… sorry, Mom and Dad.
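A toy demonstration of why: swapping a network's two hidden units (a permutation of the genotype) leaves its input-output behavior untouched, yet crossover between the two equivalent genotypes can duplicate one hidden unit and lose the other. The 2-2-1 network and weight values below are made up purely for illustration:

```python
import math

def forward(w, x):
    """Tiny 2-2-1 net: w = hidden unit 1 (2 weights + bias), hidden unit 2
    (2 weights + bias), output unit (2 weights + bias)."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

parent_a = [0.5, -1.0, 0.2, 1.5, 0.3, -0.7, 2.0, -2.0, 0.1]
# parent_b is parent_a with the hidden units swapped (and the output weights
# swapped to match): a different genotype, the exact same phenotype.
parent_b = parent_a[3:6] + parent_a[0:3] + [parent_a[7], parent_a[6], parent_a[8]]

x = (1.0, 0.5)
assert abs(forward(parent_a, x) - forward(parent_b, x)) < 1e-12

# One-point crossover at the hidden-layer boundary gives a child with two
# copies of hidden unit 1 and no hidden unit 2: fit parents, broken offspring.
child = parent_a[:3] + parent_b[3:]
```

Both parents compute the identical function, but the child does not.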



Does Evolution Beat Backpropagation?


D.L. Prados claims that GA-based training algorithms completed the tests in 3 hours 40 minutes, while networks using the generalized delta rule finished in 23 hours 40 minutes.

No way man!

H. Kitano claims that when testing a genetic algorithm and backpropagation combination, it was at best as efficient as other backpropagation variants in small networks, but rather crappy in large networks.

It just goes to show you that the best algorithm
is always problem dependent.

Evolving Network Architectures

The architecture is very important to the
information processing capabilities of the
neural network.

Small networks without a hidden layer can’t solve problems such as XOR that are not linearly separable.

Large networks can easily overfit a problem to match the training data, restricting their ability to generalize.
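The XOR claim is easy to check by brute force: no single threshold unit reproduces XOR, while a linearly separable function like OR is found immediately. The coarse weight grid below is just an illustrative search that agrees with the well-known result, not a proof technique from any of the papers cited here:

```python
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def separable(truth_table):
    """Brute-force check: does any single threshold unit over a coarse
    weight grid realize the given truth table?"""
    grid = [i / 2 for i in range(-4, 5)]  # weights and bias in -2.0 .. 2.0
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in truth_table.items()):
            return True
    return False

assert not separable(XOR)  # no hidden layer -> no XOR
assert separable({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1})  # OR is fine
```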

Constructing the Network

Constructive algorithms take a minimal network and build up new layers, nodes, and connections during training.

Destructive algorithms take a maximal network and prune unnecessary layers, nodes, and connections during training.

The network is evaluated based on specific
performance criteria, for example, lowest
training error or lowest network complexity.
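A single destructive step might look like the sketch below, which simply zeroes out the weakest connections by magnitude. The pruning fraction and the magnitude criterion are illustrative stand-ins; a real destructive algorithm would retrain and re-evaluate against its performance criteria after each cut:

```python
def prune_smallest(weights, fraction=0.5):
    """Destructive step: zero out the smallest-magnitude connections,
    keeping at least one weight alive."""
    n_keep = max(1, int(len(weights) * (1 - fraction)))
    keep = set(sorted(range(len(weights)),
                      key=lambda i: abs(weights[i]), reverse=True)[:n_keep])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

w = [0.9, -0.05, 1.7, 0.01, -1.2, 0.3]
pruned = prune_smallest(w)
# The three weakest connections (-0.05, 0.01, 0.3) are removed.
```

A constructive algorithm would run the same loop in reverse: add a node or connection, retrain, and keep it only if the performance criterion improves.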

The Difficulties Involved

The architectural search space is infinitely large

Changes in nodes and connections can have a discontinuous effect on network performance.

Mapping from network architecture to behavior
is indirect, dependent on evaluation and
epistatic…don’t ask.

Similar architectures may have highly divergent performance.

Similar performance may be attained by very different architectures.

Evolving The Learning Rule

The optimal learning rule is highly dependent on the architecture of the network.

Designing an optimal learning rule is very hard
when little is known about the architecture,
which is generally the case.

Different rules apply to different problems.

Certain rules make it easier to learn patterns and in
this regard are more efficient.

Less efficient learning rules can learn exceptions to
the patterns.

Which one is better? Depends on who you ask.
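One common way to make a learning rule evolvable is to write the weight update as a small parameterized function of local quantities, so that evolution searches over the rule's coefficients instead of the weights themselves. The four-term form below is a typical illustrative parameterization, not the specific rule from any paper cited here:

```python
def delta_w(theta, pre, post, eta=0.1):
    """A parameterized local learning rule.

    theta weighs the correlation, presynaptic, postsynaptic, and constant
    terms; plain Hebbian learning corresponds to theta = (1, 0, 0, 0).
    Evolving the rule means searching over theta rather than hand-designing
    the update.
    """
    t1, t2, t3, t4 = theta
    return eta * (t1 * pre * post + t2 * pre + t3 * post + t4)

hebbian = (1.0, 0.0, 0.0, 0.0)
w_change_correlated = delta_w(hebbian, pre=1.0, post=1.0)    # strengthens
w_change_uncorrelated = delta_w(hebbian, pre=1.0, post=0.0)  # no change
```

Different settings of theta yield rules that favor learning broad patterns or memorizing exceptions, which is exactly the trade-off on this slide.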

Facing Reality

One can see the advantage of having a network that is capable of learning to learn. Combined with the ability to adjust its architecture, this sort of neural net would seem to be approaching true artificial intelligence.

The reality is that the relationship between learning and evolution is extremely complex and still poorly understood.

Consequently, research into evolving learning rules is really in its infancy.

Lamarckian Evolution.

On a side note, Belew, McInerney & Schraudolph state that their findings suggest a reason why Lamarckian inheritance cannot be possible.

Due to issues like the permutation problem, it is impossible to transcribe network behavior back into a genomic encoding, since infinitely many encodings can produce the same phenotype.

Can It Be Implemented?

Yes it can.

That’s all I got for this slide. It really seems like a waste, but at least it’s not paper.

Go-Playing Neural Nets

Alex Lubberts and Risto Miikkulainen have created a co-evolving Go-playing neural network.

No program plays Go at any significant level of skill.

They decided that a host-parasite relationship would foster a competitive environment for evolution.

The Fight to Survive

The hosts

Attempt to find an optimal solution for winning on a scaled-down 5×5 Go board.

The parasites

Attempt to find and exploit weaknesses
within the host population.

Populations are evaluated one at a time
so each population takes turns being a
host or a parasite.

Tricks of the Trade

Competitive Fitness Sharing

Unusual or special individuals are rewarded.

If an individual whups an opponent that is very tough, that individual may still be rewarded even if it lost most of its other games.

Shared Sampling

To cut down on the number of games played, each individual is pitted against a representative sample of opponents rather than the entire population.

Hall of Fame

Old-timers that have competed well are put into a steel-cage match with the tyros to ensure that new generations are improving.
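Competitive fitness sharing can be sketched as follows: a win over an opponent is worth one divided by the number of hosts that also beat that opponent, so a rare victory over a tough parasite outweighs several easy wins. The win/loss table below is invented for illustration:

```python
def shared_fitness(results):
    """Competitive fitness sharing over a host-vs-parasite tournament.

    results[i][j] is True if host i beat parasite j. A win is worth
    1 / (number of hosts that beat that parasite), so beating an opponent
    few others can beat counts for more.
    """
    n_hosts, n_opponents = len(results), len(results[0])
    beaten_by = [sum(results[i][j] for i in range(n_hosts))
                 for j in range(n_opponents)]
    return [sum(1.0 / beaten_by[j] for j in range(n_opponents) if results[i][j])
            for i in range(n_hosts)]

# Hosts 0-2 each beat the same two easy parasites; host 3 wins a single
# game, but against a parasite nobody else can handle.
results = [[True, True, False],
           [True, True, False],
           [True, True, False],
           [False, False, True]]
scores = shared_fitness(results)
```

Host 3 ends up with the highest score despite winning the fewest games, which is exactly the "unusual individuals are rewarded" behavior described above.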


They found that co-evolution did in fact increase the playing ability of their networks.

After 40 generations using their tournament-style selection, the co-evolved networks had nearly tripled the number of wins against a similar network that was evolved without a host-parasite relationship.

EANNs Used to Classify Galaxies

Erick Cantú-Paz & Chandrika Kamath

Attempting to bring automation to the classification of galaxies using neural networks.

The learning algorithm must be carefully
tuned to the data.

The relevant features of a classification
problem may not be known before building
the network or tuning the learning rule.

How It Worked

Six combinations of GA and NN were tested.

The interesting part is the evolution of feature selection.

GAs were used to select which features are important enough that the NN needs to know them in order to classify a galaxy.

GAs consistently selected half the features, and supposedly most of the selected half were relevant to classification, such as symmetry measures and angles.

Two-point crossover was used, along with a fitness bias towards networks that learn quickly.
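As an illustration of that setup, two-point crossover on a feature-selection mask swaps the middle segment between two parents. The bitstrings and the meaning assigned to each bit here are invented for illustration:

```python
import random

random.seed(3)  # seeded only so the sketch is reproducible

def two_point_crossover(a, b):
    """Two-point crossover on feature-selection bitstrings: the child takes
    the middle segment from parent b and the flanks from parent a."""
    assert len(a) == len(b)
    i, j = sorted(random.sample(range(len(a) + 1), 2))
    return a[:i] + b[i:j] + a[j:]

# 1 = feed this galaxy feature to the network, 0 = drop it.
parent_a = [1, 1, 1, 1, 0, 0, 0, 0]
parent_b = [0, 0, 0, 0, 1, 1, 1, 1]
child = two_point_crossover(parent_a, parent_b)
```

The fitness of each mask would then be how quickly (and how well) a network trained on just those features learns to classify, matching the bias described above.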


Several of the evolved networks were competitive with human-designed networks.

The best results came from evolving the feature selection:

Identifying bent-double galaxies: 92.99% accuracy.

Identifying non-bent galaxies: 83.65% accuracy.

Alright! He’s About to Shut Up

So in conclusion there is a lot of room for
improvement in EANNs and there is a lot
to explore.

So quit what you’re doing and build an evolving learning program. Tell your friends that you have already given it a body and that soon you will make it capable of replicating itself. If they are gullible enough, you might get to watch them panic.

References, the lazy way.

Richard Belew, John McInerney, Nicol Schraudolph: Evolving Networks: Using the Genetic Algorithm with Connectionist Learning

Erick Cantú-Paz, Chandrika Kamath: Evolving Neural Networks for the Classification of Galaxies

Xin Yao: Evolving Artificial Neural Networks

Alex Lubberts, Risto Miikkulainen: Co-Evolving a Go-Playing Neural Network