Evolving Neural Networks




Learning and Evolution: Their secret
conspiracy to take over the world.

Adaptation


There are two forms of adaptation


Learning


Training on a set of examples.


Fitting your behavior to training data.


Minimizing error.


Evolution


Population based search.


Random Mutation


Reproduction


Fitness Selection
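The three ingredients above fit in a few lines. This is a toy sketch (the bit-string genome, population size, and "one-max" fitness are all invented for the illustration, not from the slides):

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Minimal evolutionary loop: fitness selection keeps the better half,
    reproduction copies the survivors, and random mutation flips one bit
    in each child."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # fitness selection
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:                   # reproduction
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count the 1 bits ("one-max").
best = evolve(fitness=sum)
print(best)
```

Because the survivors are carried over unchanged, the best genome found so far is never lost (elitism).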

Combining The Two


Combining the strategies of evolutionary
algorithms with learning produces
“evolutionary artificial neural networks”
or EANNs.


This increases the adaptability of both in
a way that neither system could achieve
on its own.


This also can give rise to extremely
complex relationships between the two.

What Can an EANN Do?


Adjust the network weights.


Learning rules.


Evolution


Build an architecture to fit the problem at
hand


Does it need hidden layers?


Is the propagation delay too large?


Is the environment dynamic?


Perhaps the most lofty goal is evolving
a learning rule for the network.

Evolving The Weights


Why evolve the weights? What’s wrong
with backpropagation?


Backpropagation is a gradient descent
algorithm. Gradient-following algorithms can
get stuck at a local minimum (or, viewed as
ascent on fitness, a local maximum) instead
of the optimal solution.

[Figure: a fitness surface showing the path from the initial weights up to a local max, far from the optimal solution]

Overcoming Local Min/Max


Since evolutionary algorithms are population
based they have no need for gradient
information.


Consequently, they tend to do better in
noisy or complex environments.


Though getting stuck at a local max can be
avoided, there is no guarantee that any maximum
will be found at all.
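To make the "no gradient needed" point concrete, here is a sketch that evolves the nine weights of a tiny 2-2-1 tanh network to fit XOR using only mutation and selection. The network shape, population size, and mutation scale are all made up for the illustration:

```python
import math
import random

def forward(w, x1, x2):
    """Tiny 2-2-1 tanh network; w is a flat list of 9 weights and biases."""
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def error(w):
    return sum((forward(w, *x) - y) ** 2 for x, y in XOR)

def evolve_weights(pop_size=30, generations=300, sigma=0.5, seed=1):
    """Population-based search over weight vectors: keep the best third,
    refill with Gaussian-mutated copies. No gradient information used."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 3]
        pop = parents + [[g + rng.gauss(0, sigma) for g in p]
                         for p in parents for _ in range(2)]
    return min(pop, key=error)

best = evolve_weights()
print(error(best))
```

Because error is never differentiated, the same loop works even when the fitness surface is noisy or discontinuous.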

Population Samples

The Permutation Problem


A gigantic problem, reflecting the noisy real
world, is the permutation (or competing
conventions) problem.


Caused by a many-to-one mapping from genotype to
phenotype within neural nets.


This problem kills the efficiency and effectiveness of
crossover, because effective parents can produce
slacker offspring…sorry mom and dad.
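The many-to-one mapping is easy to demonstrate: swap the two hidden units of a small network (and their outgoing weights) and the genotype changes while the computed function does not; crossover between the two conventions can then duplicate one unit and lose the other. All the numbers below are invented for the illustration:

```python
import math

def net(weights, x1, x2):
    """2-2-1 linear-output net; weights = (hidden unit 1, hidden unit 2,
    output weights), each a pair."""
    (a1, a2), (b1, b2), (oa, ob) = weights
    h1 = math.tanh(a1 * x1 + a2 * x2)
    h2 = math.tanh(b1 * x1 + b2 * x2)
    return oa * h1 + ob * h2

parent1 = ((1.0, -2.0), (0.5, 3.0), (2.0, -1.0))
# Same phenotype, different genotype: hidden units in the opposite order.
parent2 = ((0.5, 3.0), (1.0, -2.0), (-1.0, 2.0))
print(net(parent1, 0.3, 0.7) == net(parent2, 0.3, 0.7))  # True

# Crossover taking hidden unit 1 from parent1, hidden unit 2 from parent2,
# and one output weight from each: the child carries the same hidden unit
# twice and loses the other entirely, so it behaves like neither parent.
child = (parent1[0], parent2[1], (parent1[2][0], parent2[2][1]))
print(net(child, 0.3, 0.7))
```

Two fit parents, one slacker offspring: exactly the failure mode the slide describes.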











Does Evolution Beat
Backpropagation?



Absolutely!


D.L. Prados claims that GA-based training algorithms
completed the tests in 3 hours 40 minutes, while
networks using the generalized delta rule finished in
23 hours 40 minutes.


No way man!


H. Kitano claims that, when testing a genetic
algorithm/backpropagation combination, it was at best as
efficient as other backpropagation variants in small
networks, but rather crappy in large networks.


It just goes to show you that the best algorithm
is always problem dependent.

Evolving Network
Architectures



The architecture is very important to the
information processing capabilities of the
neural network.


Small networks without a hidden layer can’t solve
problems, such as XOR, that are not linearly
separable.


Large networks can easily overfit a problem,
matching the training data so closely that their
ability to generalize is restricted.

Constructing the Network



Constructive algorithms take a minimal
network and build up new layers, nodes, and
connections during training.


Destructive algorithms take a maximal
network and prune unnecessary layers, nodes,
and connections during training.



The network is evaluated based on specific
performance criteria, for example, lowest
training error or lowest network complexity.
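As a sketch of the constructive direction (the greedy matching-pursuit-style growth, random tanh candidate units, and error threshold are all assumptions for this example, not a specific published algorithm), the loop below starts with an empty hidden layer and adds whichever candidate node most reduces the remaining error, until the performance criterion is met:

```python
import math
import random

def activations(unit, X):
    w1, w2, b = unit
    return [math.tanh(w1 * x1 + w2 * x2 + b) for x1, x2 in X]

def constructive_fit(X, y, threshold=0.05, max_nodes=20, candidates=50, seed=0):
    """Greedy constructive training: start with a minimal (empty) hidden
    layer and repeatedly add the random candidate node, with its
    closed-form output weight, that most reduces the residual error."""
    rng = random.Random(seed)
    residual = list(y)
    hidden = []                     # list of (unit, output_weight)
    while sum(r * r for r in residual) > threshold and len(hidden) < max_nodes:
        best = None
        for _ in range(candidates):
            unit = tuple(rng.uniform(-4, 4) for _ in range(3))
            h = activations(unit, X)
            hh = sum(v * v for v in h)
            if hh == 0.0:
                continue
            a = sum(v * r for v, r in zip(h, residual)) / hh  # least squares
            drop = a * a * hh       # reduction in squared error
            if best is None or drop > best[0]:
                best = (drop, unit, a, h)
        drop, unit, a, h = best
        hidden.append((unit, a))
        residual = [r - a * v for r, v in zip(residual, h)]
    return hidden, sum(r * r for r in residual)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]                    # XOR: impossible with no hidden nodes
hidden, err = constructive_fit(X, y)
print(len(hidden), err)
```

A destructive algorithm would run the same evaluation in reverse: start at `max_nodes` and delete whichever node hurts the criterion least.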


The Difficulties Involved



The architectural search space is infinitely large.


Changes in nodes and connections can have a
discontinuous effect on network performance.


Mapping from network architecture to behavior
is indirect, dependent on the evaluation method,
and epistatic…don’t ask.



Similar architectures may have highly divergent
performance.


Similar performance may be attained by diverse
architectures.


Evolving The Learning Rule


The optimal learning rule is highly dependent
on the architecture of the network.


Designing an optimal learning rule is very hard
when little is known about the architecture,
which is generally the case.


Different rules apply to different problems.


Certain rules make it easier to learn patterns and in
this regard are more efficient.


Less efficient learning rules can learn exceptions to
the patterns.


Which one is better? Depends on who you ask.
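One classic way to evolve the rule itself is to write the weight update as a linear combination of local terms and evolve the coefficients. The basis below, the AND task, and every parameter are stand-ins chosen for this sketch:

```python
import random

def train_with_rule(rule, data, epochs=20, seed=0):
    """Train a threshold unit with a parameterized update rule:
    dw_i = k0*x_i*err + k1*x_i + k2*err + k3.
    The classic delta rule is the special case (lr, 0, 0, 0)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(data[0][0]))]
    k0, k1, k2, k3 = rule
    for _ in range(epochs):
        for x, y in data:
            pred = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0.0
            err = y - pred
            w = [wi + k0 * xi * err + k1 * xi + k2 * err + k3
                 for wi, xi in zip(w, x)]
    return sum(1 for x, y in data   # mistakes remaining after training
               if (sum(wi * xi for wi, xi in zip(w, x)) > 0) != bool(y))

# Linearly separable AND problem, with a constant bias input.
DATA = [((x1, x2, 1.0), float(x1 and x2)) for x1 in (0, 1) for x2 in (0, 1)]

def evolve_rule(pop=20, gens=30, seed=2):
    """Evolve the four rule coefficients; fitness = fewest mistakes."""
    rng = random.Random(seed)
    rules = [tuple(rng.uniform(-1, 1) for _ in range(4)) for _ in range(pop)]
    for _ in range(gens):
        rules.sort(key=lambda r: train_with_rule(r, DATA))
        parents = rules[: pop // 2]
        rules = parents + [tuple(k + rng.gauss(0, 0.2) for k in p)
                           for p in parents]
    return min(rules, key=lambda r: train_with_rule(r, DATA))

best_rule = evolve_rule()
print(best_rule, train_with_rule(best_rule, DATA))
```

Fitness here rewards rules that leave few mistakes after a fixed training budget, so evolution is effectively selecting for how well the network learns, not for any particular set of weights.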

Facing Reality


One can see the advantage of having a network
that is capable of learning to learn. Combined
with the ability to adjust its architecture, this sort
of neural net would seem to be approaching
intelligence.


The reality is that the relationship between
learning and evolution is extremely complex.


Consequently, research into evolving learning
rules is still in its infancy.


Lamarckian Evolution.


On a side note, Belew, McInerney &
Schraudolph state that their findings
suggest a reason why Lamarckian
inheritance is not possible.


Due to issues like the permutation
problem, it is impossible to transcribe
network behavior back into a genomic
encoding, since there may be infinitely
many encodings that produce the same
phenotype.

Can It Be Implemented?


Yes it can.


That’s all I got for this slide. It really seems
like a waste, but at least it’s not paper…

Go-Playing Neural Nets


Alex Lubberts and Risto Miikkulainen
have created a co-evolving Go-playing
neural network.


No program plays Go at any significant
level of expertise.


They decided that a host-parasite
relationship would foster a competitive
environment for evolution.

The Fight to Survive


The hosts


Attempt to find an optimal solution for
winning on a scaled-down 5x5 Go board.


The parasites


Attempt to find and exploit weaknesses
within the host population.


Populations are evaluated one at a time
so each population takes turns being a
host or a parasite.

Tricks of the Trade


Competitive Fitness Sharing


Unusual or special individuals are rewarded.


If an individual whups a very tough opponent,
that individual may still be rewarded even if it
lost most of its other games.


Shared Sampling


To cut down on the number of games, each
individual is pitted against a shared sample of
opponents rather than the whole opposing population.


Hall of Fame


Old timers that have competed well are put into a
steel cage match with the tyros to ensure that new
generations are improving.
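A sketch of competitive fitness sharing (the exact formula Lubberts & Miikkulainen use may differ; this is the common variant where a win over opponent j is worth 1 divided by the number of players that beat j):

```python
def shared_fitness(results):
    """Competitive fitness sharing: a win over opponent j is worth
    1 / (number of players who beat j), so beating a tough opponent
    counts for more than piling up wins over pushovers.
    results[i][j] is 1 if player i beat opponent j, else 0."""
    n_players, n_opponents = len(results), len(results[0])
    beat_count = [sum(results[i][j] for i in range(n_players))
                  for j in range(n_opponents)]
    return [sum(1.0 / beat_count[j] for j in range(n_opponents)
                if results[i][j])
            for i in range(n_players)]

results = [
    [1, 1, 0],  # player 0: two easy wins
    [0, 0, 1],  # player 1: one win nobody else achieves
    [1, 1, 0],  # player 2
    [1, 1, 0],  # player 3
]
print(shared_fitness(results))  # player 1 now outscores player 0
```

By raw win count player 0 looks better, but sharing rewards player 1's unusual victory, keeping rare specialists alive in the population.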

Conclusions


They found that co-evolution did in fact
increase the playing ability of their networks.


After 40 generations using their tournament-style
selection, the co-evolved networks had
nearly tripled the number of wins against a
similar network that was evolved without a
host-parasite relationship.

EANN Used to Classify
Galaxies


Erick Cantu-Paz & Chandrika Kamath


Attempting to automate the classification
of galaxies using neural networks.


The learning algorithm must be carefully
tuned to the data.



The relevant features of a classification
problem may not be known before building
the network or tuning the learning rule.

How It Worked


Six combinations of GA and NN were
compared.


The interesting part is the evolution of feature
selection.


GAs were used to select which features are important
enough that the NN needs to know them in order to
classify a galaxy.


GAs consistently selected about half the features, and
reportedly most of the selected features were relevant
to classification, such as symmetry measures and
angles.


Two point crossover was used along with a
fitness bias towards networks that learn quickly.
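To show the flavor of GA feature selection with two-point crossover, here is a toy version. The nearest-neighbor scorer, synthetic data, and all parameters are stand-ins, not the paper's actual setup:

```python
import random

def accuracy(mask, X, y):
    """Score a feature subset with leave-one-out 1-nearest-neighbor,
    measuring distance only along the selected features."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in feats)
    correct = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        j = min((j for j in range(len(X)) if j != i),
                key=lambda j: dist(xi, X[j]))
        correct += (y[j] == yi)
    return correct / len(X)

def ga_feature_select(X, y, pop=20, gens=25, seed=3):
    """GA over feature bitmasks with two-point crossover and a
    bit-flip mutation; fitness is classification accuracy."""
    rng = random.Random(seed)
    n = len(X[0])
    masks = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        masks.sort(key=lambda m: accuracy(m, X, y), reverse=True)
        parents = masks[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            i, j = sorted(rng.sample(range(n), 2))
            child = a[:i] + b[i:j] + a[j:]       # two-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n)] ^= 1     # bit-flip mutation
            children.append(child)
        masks = parents + children
    return max(masks, key=lambda m: accuracy(m, X, y))

# Toy data: features 0-1 carry the class signal, features 2-5 are noise.
rng = random.Random(0)
X, y = [], []
for label in (0, 1):
    for _ in range(15):
        X.append([label + rng.gauss(0, 0.3), label + rng.gauss(0, 0.3)]
                 + [rng.gauss(0, 1) for _ in range(4)])
        y.append(label)
mask = ga_feature_select(X, y)
print(mask)
```

The GA has no notion of which columns are "real"; it keeps the informative features simply because masks containing them score higher.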

Findings


Several of the evolved networks were
competitive with human-designed
networks.


The best were those evolving feature selection:


Identifying bent-double galaxies: 92.99%
accuracy.


Identifying non-bent galaxies: 83.65% accuracy.

Alright! He’s About to Shut
Up


So in conclusion there is a lot of room for
improvement in EANNs and there is a lot
to explore.


So quit what you’re doing and build an
evolving learning program. Tell your
friends that you have already given it a
body and soon you will make it capable
of replicating itself. If they are gullible
enough you might get to watch them
squirm.

References, The lazy way.


Richard Belew, John McInerney, Nicol Schraudolph:
Evolving Networks
http://www-cse.ucsd.edu/users/rik/papers/alife91/evol-net.ps


Erick Cantu-Paz, C. Kamath:
Evolving Neural Networks for the
Classification of Galaxies
http://www.llnl.gov/CASC/sapphire/pubs/147020.pdf


Xin Yao:
Evolving Artificial Neural Networks
http://www.cs.bham.ac.uk/~xin/papers/published_iproc_sep99.pdf


Alex Lubberts, Risto Miikkulainen:
Co-Evolving a Go-Playing Neural Network
http://nn.cs.utexas.edu/downloads/papers/lubberts.coevolution-gecco01.pdf