
Evolving Connectionist Systems

Scott Settembre

December 10, 2007

CSE 575 : Introduction to Cognitive Science


Connectionist systems implemented on computers usually use artificial neural networks (ANNs). These systems are limited to modeling how our mind processes information, focusing on mathematical techniques that are abstractions of the actual signals and physical interconnections of the construction of our brain. However, what if the design of our brain is but one of many different models on which we could base a connectionist system? What other possible permutations of processing unit types, network structure, local and global message passing, and network learning strategies could have behavior similar to an ANN or even our own brain?

Using information from the neurosciences as inspiration and evolutionary biology as a framework for search, I have created a genetic algorithm and cellular automata that can evolve various parameters in search of an alternative implementation strategy distinctly different from an artificial or biological neural network. The argument is that if an independently designed and constructed brain can solve problems that other cognitive creatures on this planet have solved, yet have a very different implementation, then that implementation is quite irrelevant to the problem of cognition. Through a series of three experiments I attempt to give strength to this argument.



Cognition and Implementation


At a purely computational level of cognition, the idea that cognition is dependent on any particular implementation is difficult for me to accept. If something is computable, then it can be computed on a Turing machine. Therefore, if we can prove that a particular machine is Turing machine equivalent, it should be able to compute any cognition (or portion of cognition) that is computable. Certainly then, computational cognition will not be dependent on the device or organ that is used to perform the

computation. But what physical examples of cognitive machines do we have available to decide this? We have been basing our ideas on our own experience with cognition, or on that of the animals around us, all of which use a neuron-based brain. Brains have for the longest time been our only models, our only machines, that we could examine. Granted, the brain is constructed slightly differently between species, but in general it varies only in physical structure and composition, within a framework that is recognizably a brain.

Since the brain is our only working model, it is natural for us to proceed to attempt to deconstruct it to figure out how it works. This has been done at an abstract level with the original “perceptron” (Rosenblatt 1958:179) and artificial neural networks (ANNs), and has provided us with many insights on how a mind can emerge from such complicated neural interconnections (Holland 1998:86).

Now let us consider another approach. What if we had a different type of brain as a working model of cognition? An ‘X’ brain of some sort that exhibited traits usually attributed to cognitive creatures, but bore little resemblance to our own brain. This X might be as intricate, as complex, and as mysterious in operation as our own brains, yet have very clear cognitive abilities. What would this discovery tell us? Would we be forced to conclude that implementation is irrelevant, or would we discover commonalities in structure that are necessary for the implementation of a mind?

It is just this type of discovery I intend to seek out in this programming project, but instead of searching the universe for intelligences implemented in various forms of matter or using various physical structures, I will endeavor to create one right in my computer! Through the process of evolution, using genetic algorithms and the controlled chaos of cellular automata, I have attempted to vary the structure that cognition will need to use to solve some simple computer experiments. These experiments, well known in computer science, range from simple stimulus-response to a more complex artificial-life environment.



When initially designing the proposal for this project, I had envisioned a multi-step development strategy for an X brain. All the off-the-shelf components in my programming toolbox seemed to shout at me to use them. The more “mish” I could put in the “mash”, the more I thought I could match the mystery of cognition. Perhaps I felt that I could come up with a mystery just as intense, and working just as well, as a human brain if only I spent enough computational time and threw in enough mysterious components.

The original idea, to use genetic algorithms to evolve cellular automata which would then grow a neural network of varying properties and operation, may sound impressive on paper (and be very close to a run-on sentence), but it was not completely necessary to grow all that complexity to get to the point I needed to make. In fact, as I will show, at least one of these steps is unnecessary and, if done, would actually counter my goal.

It is not that this approach would be worthless; it may actually be useful in designing better-performing neural networks, or even neural networks that are capable of novel behavior not considered before. But my goal is not to make a massive search for other neural networks that are similar in structure and operation; it is to make a search for an X brain whose implementation is fundamentally different, in both structure and operation, from that of the biological brains on this planet.


Artificial Neural Networks


The modeling of the neuron in computers had its infancy with the model of the perceptron. Networks of artificial neurons have since been shown to be able to model many things, from logic gates (McCulloch 1943:351) and association tables, to pattern classifiers (Duda 2001:284) and pattern predictors. No doubt they are capable of doing more, and in sufficient quantity and interconnectedness, real biological neural networks are seemingly capable of human cognition and consciousness.

The artificial neuron is an abstraction of our actual brain cells. Our brain is made of around 100 different types of neurons, whereas in an ANN we usually employ only one. Biological neurons follow neurochemical rules that propagate voltage spikes from one neuron to another after being sufficiently excited over a threshold. Supporting structures of cells, which are believed to play no part in cognition, outnumber the neurons themselves, and we conveniently leave them out of any ANN that I have ever read about. Any unseen activity in the biological neuron is lost, thereby leaving a gap (if that activity was important) in any model we could make.

The artificial neural network is a further abstraction of what goes on in our brain (Churchland 1990:211). There are parts of the brain where one neuron is directly connected to over 100,000 other neurons, in contrast to an ANN, where interconnectedness is measured in the tens or hundreds. Activation between cells and cell groups relies on timing, in terms of propagation of the signal as well as refractory time between pulses, but in ANNs any subtlety in the signal is lost through the structure itself, which summarizes the signal in terms of rates and weights (Thagard 2005:151). Feedback in an ANN usually takes the form of a learning algorithm that operates over the network, but in our brain, feedback is the interconnectedness between the layers of neurons themselves, providing both inhibitory and excitatory firings.

Artificial neural networks, unlike real biological neural networks, are now understood well enough mathematically to wipe away their mystery. It has been proven that ANNs can be configured to categorize any series of input that can be linearly separated, if learning speed is not an issue (Duda 2001:312). The backward-propagation learning algorithm has been shown to be able to do this in a finite amount of time. It is not certain that this is how our brain learns; this learning algorithm may just be an abstraction of what really happens.

Abstraction after abstraction, we come up with a mathematical model that can be implemented on any Turing machine and can perform a certain subset of cognition; surely this is proof enough of the X brain? Why have we as scientists not agreed that cognition does not require a specific implementation? Perhaps the ANN form of the X brain is just too similar to our own. Even with the continued abstractions of the ANN model, it looks and operates using principles that were derived from the real thing. If these are the reasons for continued disagreement, or at least counters to the reasons for designing an X brain, then perhaps I should discard the ANN model and seek out a new design for the X brain in cellular automata.



Cellular Automata

Proponents of Artificial Life (ALife) would claim the moving pixels on the screen, or the ever-shifting bits in their computer memory chips, are life (Levy 1992:340). Life to them consists of basic rules of existence in some form, perhaps reproduction, and some recognizable behavior. But looking at a series of bits shifting around in an array while the addition operation is being applied, and looking at a pattern in Conway’s Game of Life, seem no different to me. No matter how many commonalities there are between observations of bacteria in a Petri dish and pixels on the screen writhing in recognizable patterns, I find it unlikely that the pixels are alive and more likely that they are reproducing mathematical patterns that have been seen in nature. These patterns exist in the abstract system of mathematics, and have been realized in the biophysical world, but only the romantic in me can see much more than that.

Cellular automata, though used in ALife experiments routinely, do not belong to ALife. Cellular automata can open up a view of visible phenomena in the realm of chaotic systems (Wolfram 2002:42, 363). Indeed, seemingly simple rules, applied over and over, can create complex systems that can bring a chaotic system to a single repeatable state that can be difficult to break out of; but then, in the blink of an eye, the system can transition to a new stable state. This phase transition can happen abruptly or take much time, requiring either a little “push”, a small change in the state of the system, or a major change in the inputs to effect change.


In Stephen Wolfram’s book, A New Kind of Science, he illustrates these points over and over again in many of his experiments. An understanding comes to his readers as he shows how sets of rules and initial conditions can create four classes of behavior in cellular automata, and perhaps in any dynamic system (Wolfram 2002:231). From a cognitive science standpoint, this may be extremely interesting, since these four classes of behavior arise in many of our disciplines, and perhaps they can be traced to a simple set of rules as well. Though it is beyond the scope of this paper to examine this from the perspective of all the disciplines, I will address each class of behavior from a general cognitive science standpoint.
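Wolfram’s elementary one-dimensional cellular automata are simple enough to sketch in a few lines of code. The following is an illustrative simulation, not taken from the paper; the rule number 110 (a rule Wolfram places in class 4) and the grid width are my own choices:

```python
# Minimal 1-D elementary cellular automaton of the kind Wolfram classifies.
# Rule 110 and the width/step counts are illustrative choices only.

def step(cells, rule):
    """Apply an elementary CA rule to one row of cells (wrapping at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        # Each cell's next state depends on left neighbor, itself, and right neighbor.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        new.append((rule >> neighborhood) & 1)
    return new

def run(rule, width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1          # a single "on" cell as the initial condition
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

for row in run(110):
    print("".join("#" if c else "." for c in row))
```

Running this for different rule numbers (0–255) reproduces the four classes of behavior: some rules die out, some cycle, some look random, and a few (like 110) mix ordered and chaotic regions.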

Class 1 behavior is where the system settles to a stable state. Our mind, when coming to a decision, seems to settle into a stable state. This phenomenon can be seen in PET scans in various experiments where subjects are asked to make a decision, such as categorization. Before the subject makes a decision, there is a flash of activity across the relevant brain areas, and after the decision is made the activity reduces. Is this the brain settling into a stable state?

In class 2 behavior there may be several final states, or sets of repeating states, whereby a system transitions between each state or settles into a series of stable states. Could simple measurements we have been taking from the brain be evidence of this? Transitions between alpha, beta, and delta brain waves, or the constant beating pattern of our hearts, may be governed by a class 2 system. The movement of our emotions from one state to another may be due to a series of simple rules.


Class 3 systems have a more “random” behavior, exhibiting stable structures in no discernible pattern, yet the structures are there. From a biologist’s standpoint, most class 3 biological patterns would be readily recognizable. From the seemingly random branching of the bronchial tubes in our lungs, to the branches on a tree, to the folds in our brains, class 3 systems are reproducible and seemingly essential for life as we know it. Weather patterns arising from the well-popularized “Butterfly Effect”, and perhaps cognition itself (Wolfram 2002:620-631, 733), are not immune to requiring class 3 behavior.

Finally, class 4 systems are a mix of all the other classes. The behavior generated by these systems has qualities of order, randomness, and transitions between the states. This sort of system can be readily found in all the major sciences because it centers around the idea that a dynamic system has transitions between “steady states” and “chaotic regions” (Gleick 1987). Everything from population dynamics to chemical interactions exhibits the behaviors of a class 4 system, appearing to act like a chaotic system with a strange attractor (Gleick 1987:140).

Now, it is unlikely that these patterns of pixels over time are alive, cognitive, or constitute anything that could be considered a mind. In fact, this sort of repetitive, simple, syntactic rule-following behavior is what we would call an automaton and not a living creature. To this I say, “Great!” If we agree on this, then we would have to agree that a brain X that is built from cellular automata is not alive. But if that brain X can perform the same cognitive tasks that a biological brain can perform, then perhaps we can also agree that computational cognition is independent of implementation.


Genetic Algorithms

For a long time I was mystified at the apparent effectiveness of Genetic Algorithms (GAs), which seem to be able to take an array of bits and bring order and structure to them to produce an algorithmic result. After looking into the science of GAs, it is clear that there is no magic involved, and the process is similar to a giant statistical search. Simply put, the stochastic search technique is not guaranteed a result; instead, based on the size of the bit array and the evaluation function (which I will explain below), we can come up with a probability that a specific type of problem will generate a result after performing a certain number of searches (Koza 1993:30, 191).

Why are GAs useful, then? If we are not guaranteed a result, then perhaps we should use other methods that do guarantee a result. The answer is simple: “time”. A 64-bit array has 2^64 different possible solutions, and using a brute-force technique to check each one of those solutions takes a considerable amount of time. A hill-climbing search technique has the unfortunate consequence of producing solutions that rest at local maxima; although this can be ameliorated using techniques like simulated annealing, it still may not be likely to come up with a satisfactory solution. And randomly generating solutions, or using simple mutations (random bit flipping) on the bit array, can take as long as a brute-force search.
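A quick back-of-the-envelope calculation shows why brute force is hopeless here. The evaluation rate of one billion candidates per second is a hypothetical figure chosen purely for illustration:

```python
# Back-of-the-envelope check: how long would brute force over a 64-bit
# genome take? The evaluation rate is a made-up assumption for illustration.
candidates = 2 ** 64                     # every possible 64-bit array
evals_per_second = 1_000_000_000         # hypothetical: one billion per second
seconds = candidates / evals_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{candidates:.3e} candidates -> about {years:,.0f} years")  # roughly 585 years
```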


Genetic algorithms provide a way to capitalize on the power of evolution and afford us a very efficient search technique. Imagine a landscape of... well, no need to imagine, just look out of your window. Each point on the ground can be represented by an X,Y coordinate. The various heights of the different features of the land represent how well a solution works for a problem, so Z = Height(X, Y) is the equation we need to maximize, and “Height” is equivalent to the GA evaluation function. But we have an additional problem: just as in real life, we can only see so far into the horizon.
To find the highest point on the landscape in all of NY (thus finding the global maximum in the search space), we could use different techniques. Instead of using a brute-force search (thereby being required to test every X,Y coordinate in all of NY) or a hill-climbing search (thereby perhaps getting stuck at the top of a high, but not really high, mountain, the equivalent of a local maximum), we can use this stochastic search technique. First we randomly generate as many X,Y coordinates as we can; then we evaluate them by measuring the height at each point. Based on the heights, we evaluate the overall fitness of the population of coordinates and then calculate the proportion that each of the coordinates contributed to the final value. Based on their “proportionate fitness”, we allow each of these coordinates to combine “in some way” and produce two additional coordinates for use in the next iteration of the algorithm. This reproduction operation is called “crossover” and is key to the effectiveness of the GA. Additional reproduction operators, like “mutation”, where we take a coordinate and modify a small bit of it randomly, or “replication”, where we just copy the coordinate into the new set of coordinates to test, can be used, but it has been shown that they are not required; they are a few of what Koza labels “secondary operations” (Koza 1993:105).

Over time, each population (or in this case, each set of coordinates) will get fitter and fitter (average height will be higher and higher). If we consider each portion of an individual’s bit array a “trait”, then we find that over time, traits that tend to perform well together tend to remain together. But what prevents the traits of a local maximum from clumping together, thereby giving us a non-optimal solution, as a hill-climbing method might? The crossover operation is essential to preventing this issue. By recombining individuals at random locations in their bit arrays (or in this case, by recombining coordinates), the population will “always” be seeded with seemingly random locations in the search space.
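The selection-and-crossover loop described above can be sketched as follows. This is a minimal illustration, not the paper’s code: the `height` terrain function, the population size, and the generation count are all invented stand-ins for the real evaluation function and GA parameters:

```python
import random

# A sketch of the landscape search described above. The "terrain" is a
# hypothetical stand-in: a tall peak near (70, 70) and a low hill near (20, 20).
def height(x, y):
    peak = max(0.0, 100 - ((x - 70) ** 2 + (y - 70) ** 2) ** 0.5)
    hill = max(0.0, 40 - 0.5 * ((x - 20) ** 2 + (y - 20) ** 2) ** 0.5)
    return peak + hill

def proportionate_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    return random.choices(population, weights=fitnesses, k=1)[0]

def crossover(a, b):
    """Recombine two (x, y) 'surveyors' into two offspring coordinates."""
    return (a[0], b[1]), (b[0], a[1])

random.seed(1)
population = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
for generation in range(40):
    # Proportionate fitness: each coordinate's share of the total height.
    fitnesses = [height(x, y) + 1e-9 for x, y in population]
    next_population = []
    while len(next_population) < len(population):
        p1 = proportionate_select(population, fitnesses)
        p2 = proportionate_select(population, fitnesses)
        c1, c2 = crossover(p1, p2)
        next_population.extend([c1, c2])
    population = next_population

best = max(population, key=lambda p: height(*p))
print(best, height(*best))
```

Note that with crossover alone (no mutation), the search can only recombine coordinate values present in the initial random population, which is exactly why the text treats mutation as a useful secondary operation.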

To continue the analogy, it would be like creating a topological map of NY by initially randomizing the locations of surveyors to start the mapping, then, minute after minute, changing the surveyor locations based on how well the locations are contributing to the overall effort of finding the highest point. Areas where there are evidently mountains will be concentrated on, and areas that are flat will lose nearly all of their surveyors over time. Could the surveyor army then miss a high point in a flat land, since they are concentrating on mountainous regions? Yes, but since there is a crossover function being applied to their locations, the probability of this is greatly reduced.


In general, a single run of a GA does not give a good result (Koza 1993). However, the probability that a good result is produced increases with the number of runs that are done, unlike a random search, which has the same probability of producing a good result on each run. To increase the probability of producing a good solution, we can modify different procedures in the genetic algorithm.

The biggest effect on increasing the probability of a solution being found can be made by programming an effective evaluation function, one that is able to evaluate a solution either linearly or logarithmically. This can be the hardest task in setting up a GA and still remains a bit of an art. The more gradual the evaluation curve can be made, the more likely the population will not prematurely converge on a local maximum.

Another way to increase the probability of success is to modify the way members of the population are selected for reproduction. To prevent premature convergence on a local maximum, which denies the population the ability to jump to a flatter land, techniques of selection like proportionate fitness (discussed above) or “tournament selection”, where the best of a randomly selected subset of individuals is put into the crossover selection set, can help.
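Tournament selection itself is only a few lines. A minimal sketch, with an arbitrary tournament size of 3 and toy integer individuals standing in for bit arrays:

```python
import random

# "Tournament selection" as described above: the best individual from a small
# random subset of the population enters the crossover pool. Tournament size 3
# and the integer stand-in individuals are arbitrary illustrative choices.
def tournament_select(population, fitness, k=3):
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)

random.seed(0)
population = list(range(100))    # stand-in individuals; fitness = the value itself
winner = tournament_select(population, fitness=lambda v: v)
print(winner)   # the largest of the 3 sampled individuals
```

Because only the winner of each small tournament reproduces, weak individuals still occasionally make it through (when a whole tournament is weak), which keeps diversity higher than always picking the global best.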

There are many other techniques to prevent premature convergence, the bane of any search algorithm. Genetic algorithms have a lot of them, all based on what we see in biology and anthropology, and they are applied quite frequently. By increasing the population size, using population subgroups (as if the populations were separated by an ocean), using mutation (like the natural mutation rate that we have found affects all species of animals), including mating rituals (like tournament selection), simulating mating fitness (instead of the strongest, we pick the best-evaluated individual), and finally using the technique of evolution itself (a big stochastic search), we can generate individuals that can navigate the search space of mediocre solutions and focus on the better solutions (Koza 1993:91).

The Experiments

These experiments I have designed center on the goal of producing an X brain that can perform some cognitive function, but be implemented in a way that is not like our brain. My hope is that creating such an X brain will be equivalent to finding a cognitive creature on another planet, so completely alien in form and design that we would be impelled to accept that the specific implementation (of a Turing-equivalent machine) is irrelevant to computational cognition.

Since I have decided to stay as far away as possible from the design of our own brains, I have discarded the idea of using ANNs in these experiments. Instead, I use cellular automata as the gears of these artificial creatures’ minds. Through the use of genetic algorithms, I harness the search power of evolution to guide the development of each artificial creature.


I have three experiments designed to test three different features of cognitive creatures. Each experiment has a purpose, but no ultimate answer. There may be a large number of solutions to each problem, bounded only by the size of the bit array I give to each creature. Therefore, a satisfactory solution can be considered a success. Not every monkey in the wild can catch every branch perfectly, but then again, that is not the goal; since the goal is merely to stay alive and reproduce, a missed branch here or there may not prevent that from happening. Thus it is the same for these artificial creatures in these experiments: though the solution may not be optimal, it may be sufficient.


Experiment Design and X Brain Design

The experiment designs have some relevance to the design of the brain. Instead of letting the GA evolve these necessary parameters, I will define them here, and they will be used during the initialization phase of each X brain, depending on the values given by the problem space.

Each experiment has a series of sensors and a series of effectors. For simplicity, this input will be binary: 0 for sensor or effector off, and 1 for sensor or effector on. Since the number of these sensors or effectors is important, each experiment will need to define the following set {S, E, EvaluationFunction}:

S: sensor count (from 1 to n)

E: effector count (from 1 to n)

EvaluationFunction: function used to evaluate an individual for the GA.


The X brain is composed of a bit array. It is broken into fields, and each field can contain a range of values. There will always be 8 rules; each rule has three inputs and 1 output, for a total of 32 bits. The cell count will be the number of cells in the first row of the cellular automaton, and the iteration count will be the number of times the rules are applied to the bit array before a response effector is considered triggered.

Cell count: must be 0 to 15 (representing 1 to 16)

Iteration count: must be 0 to 7 (representing 0 to 7)
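One way the genome above might be decoded and run can be sketched as follows. The paper fixes the field sizes (8 rules of 4 bits each, a 4-bit cell count, a 3-bit iteration count, 39 bits total), but the field order, how sensor bits seed the first row, and what happens when no rule matches a neighborhood are all my assumptions:

```python
import random

# A sketch of decoding and running an X brain genome. Field layout (rules
# first, then cell count, then iteration count) is an assumption; the paper
# specifies only the field sizes.
def decode_genome(bits):
    """bits: 39 ints (0/1): 8 rules x 4 bits, 4-bit cell count, 3-bit iteration count."""
    assert len(bits) == 39
    rules = []
    for r in range(8):
        chunk = bits[r * 4:(r + 1) * 4]
        pattern = (chunk[0], chunk[1], chunk[2])   # three input bits
        rules.append((pattern, chunk[3]))           # -> one output bit
    cell_count = int("".join(map(str, bits[32:36])), 2) + 1    # 0..15 -> 1..16
    iterations = int("".join(map(str, bits[36:39])), 2)        # 0..7
    return rules, cell_count, iterations

def next_state(rules, row, i):
    n = len(row)
    neighborhood = (row[(i - 1) % n], row[i], row[(i + 1) % n])
    # First matching rule wins; an unmatched neighborhood leaves the cell
    # unchanged (another assumption -- the paper does not specify this case).
    for pattern, output in rules:
        if pattern == neighborhood:
            return output
    return row[i]

def run_brain(bits, sensors):
    """Apply the 8 evolved rules for the evolved number of iterations."""
    rules, cell_count, iterations = decode_genome(bits)
    row = (sensors + [0] * cell_count)[:cell_count]   # seed first row with sensor bits
    for _ in range(iterations):
        row = [next_state(rules, row, i) for i in range(cell_count)]
    return row

random.seed(42)
genome = [random.randint(0, 1) for _ in range(39)]
print(run_brain(genome, sensors=[1, 0, 1]))
```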


Experiment #1: Cat and mouse

In this experiment, I wished to evolve a solution to a stimulus/response test, one that is often performed on animals and humans. In this case, it is in the form of a cat and a mouse. We are developing the X brain for the cat; the mouse is computer controlled and exhibits varying paths. There is a visual field for the cat; each pixel in the visual field is considered a sensor. There is also a single effector, which controls the “pounce” action. A pounce will capture the mouse if it is in the center pixel, or in the pixel to the right or the left of the center pixel. Each pounce incurs a penalty if no mouse is captured, or a reward if the pounce gets the mouse, and these penalties are determined by the average random hit and miss rates that could occur.


The problem space for experiment #1 is:

S: 7

E: 1

Evaluation Function: +1 for a pounce that hits; a penalty for a pounce that misses
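The evaluation function for this experiment might look like the following sketch. The paper derives the miss penalty from average random hit/miss rates; the flat -1 used here is only an illustrative stand-in:

```python
# A sketch of experiment #1's evaluation function. Each trial is a 7-pixel
# visual field plus the cat's pounce decision. The miss penalty of -1 is an
# illustrative stand-in for the paper's statistically derived penalty.
CENTER = 3   # middle of the 7-pixel visual field (indices 0..6)

def evaluate(trials):
    """trials: list of (visual_field, pounced) pairs; returns total fitness."""
    score = 0
    for field, pounced in trials:
        if not pounced:
            continue
        # A pounce captures the mouse if it is in the center pixel or
        # one pixel to either side of center.
        hit = any(field[i] for i in (CENTER - 1, CENTER, CENTER + 1))
        score += 1 if hit else -1
    return score

trials = [
    ([0, 0, 0, 1, 0, 0, 0], True),    # mouse dead center, pounce -> +1
    ([1, 0, 0, 0, 0, 0, 0], True),    # mouse at far edge, pounce -> -1
    ([0, 0, 0, 0, 0, 1, 0], False),   # no pounce -> no score change
]
print(evaluate(trials))   # 0
```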

Here is a typical screen shot of an unsuccessful pounce and a view into the CA that produced it. Note that the rules have duplicates, based on the recombination of the bit string. This may make the search space larger, but it allows for better offspring, because the crossover function may place portions of the DNA in other locations, thus minimizing the chance of converging to a local minimum prematurely.


This experiment produced a success in the first run of the GA. Ideally, every mouse would be caught, but in this case only a small percentage of the mice were. Yet this was enough to allow the “cat” to survive and reproduce.


Experiment #2: “Santa Fe” artificial ant

In this experiment, I wished to evolve a solution to a problem often applied to learning systems or algorithm-generating programs. It was originally called the “John Muir Trail” (Levy 1992:166), but the problem’s complexity was increased over time and it eventually was called the Santa Fe trail (Koza 1992:54). The idea behind the problem is to generate a control system for an artificial ant, to enable it to follow a trail of food. However, this trail has gaps in it and requires additional search so the ant does not stray too far from the food.

The board consists of 32 x 32 spaces with 89 food locations laid out in a winding path (see figure). The ant starts in the upper left corner of the trail and has the following sensors and effectors:

Sensor 1: a one-pixel view of the location directly in front of the ant

Effector 1: turn left

Effector 2: move forward

Effector 3: turn right

The problem space for experiment #2 is:

S: 1

E: 3

Evaluation Function: number of food spaces landed on
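A minimal sketch of the ant’s sensor and effectors, under my own assumptions about the world representation (a wrapping 32 x 32 grid with food stored as a set of coordinates); the real 89-cell trail layout is omitted and replaced by a toy two-cell trail:

```python
# A sketch of the ant world update for experiment #2: one sensor (food
# directly ahead) and three effectors (turn left, move forward, turn right).
# The wrapping grid and set-of-coordinates food store are my assumptions.
DIRECTIONS = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # N, E, S, W

def sense(x, y, heading, food):
    """Sensor 1: is there food in the cell directly in front of the ant?"""
    dx, dy = DIRECTIONS[heading]
    return 1 if ((x + dx) % 32, (y + dy) % 32) in food else 0

def act(x, y, heading, effector, food, eaten):
    """Apply one effector: 0 = turn left, 1 = move forward, 2 = turn right."""
    if effector == 0:
        heading = (heading - 1) % 4
    elif effector == 2:
        heading = (heading + 1) % 4
    else:
        dx, dy = DIRECTIONS[heading]
        x, y = (x + dx) % 32, (y + dy) % 32
        if (x, y) in food:
            food.discard((x, y))
            eaten += 1    # the evaluation function: food spaces landed on
    return x, y, heading, eaten

food = {(1, 0), (2, 0)}            # a toy two-cell "trail" for illustration
x, y, heading, eaten = 0, 0, 1, 0  # start in the corner, facing east
for _ in range(2):
    if sense(x, y, heading, food):
        x, y, heading, eaten = act(x, y, heading, 1, food, eaten)
print(eaten)   # 2
```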


Here is a typical screen shot of a virtual ant and a view into the CA that produced it. Note the complex behavior that can arise using only 3 cells. The boards below are the start and stop states of this particular ant’s run. Board 1 is the start board state, with the green dots representing the Santa Fe trail. Board 2 is the finalized board state, with the red dot representing the ant and the grey dots representing the path it followed.

This experiment produced an ant that was capable of traversing the trail only to a small extent, reverting mainly to a sweeping motion across the grid to gain food instead of using the cues it was given. This problem’s solution seems to converge to a local maximum far too early in the generations, giving me a clue that my evaluation function curve may be too steep. Perhaps reworking the evaluation function, or allowing the CA to rearrange the effectors/sensors, or making them signal or be signaled by more than one cell location, would provide the problem space with more opportunity. There is a third possibility: that the actual solution space does not contain an optimal solution. That may be a limitation of the design of the X brain, or a failure to consider that such a small subset of the world (in terms of effectors/sensors) creates a very steep hill to climb, and so it is missed quite easily.


Experiment #3: Artificial life ecosystem (ALE)

This experiment is designed around an idea I got from another ALife experiment called “AL” (Levy 1992:263). In this world of 100 x 100 squares, plants, herbivores, and carnivores roamed the grid. Living, dying, and reproducing took place as the ecosystem evolved, and not because of any predefined evaluation function. The original AL used ANNs for the creatures, and so, based on my reasons for avoiding ANNs, I will not be reproducing the exact experiment.

Instead, in a variation of “AL”, which I will call “ALE”, I am creating a grid of 100 x 100 squares and populating the squares with individuals from the population. Each individual has 8 sensors, two for each cardinal direction, signaling if there is food or another creature beside it. There are also 8 effectors for each creature, two for each cardinal direction, which will perform either a move/eating action or a mating action. Food is randomly generated across the board each time step, and if a creature should run out of food in its food store, then that creature is no longer allowed to be in the environment, but is still allowed to mate after each generation.

Sensor 1, 3, 5, 7: signals if food is in the N, E, S, W cell, respectively

Sensor 2, 4, 6, 8: signals if an animal is in the N, E, S, W cell, respectively

Effector 1, 3, 5, 7: moves/eats in the N, E, S, W direction, respectively

Effector 2, 4, 6, 8: mates in the N, E, S, W direction, respectively

The problem space for experiment #3 is:

S: 8

E: 8

Evaluation Function: number of successful mating actions
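The sensor layout above might be assembled each time step as in the following sketch; the grid representation (sets of occupied (x, y) cells, wrapping at the 100 x 100 edges) is my own assumption:

```python
# A sketch of how one ALE creature's 8 sensor bits might be read each time
# step. Sets of occupied (x, y) cells and edge wrapping are assumptions.
CARDINALS = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # N, E, S, W

def read_sensors(x, y, food, animals):
    """Sensors 1,3,5,7: food N/E/S/W; sensors 2,4,6,8: animal N/E/S/W."""
    sensors = []
    for dx, dy in CARDINALS:
        cell = ((x + dx) % 100, (y + dy) % 100)
        sensors.append(1 if cell in food else 0)      # odd sensors: food
        sensors.append(1 if cell in animals else 0)   # even sensors: animals
    return sensors

food = {(5, 4)}        # food to the north of (5, 5)
animals = {(6, 5)}     # another creature to the east
print(read_sensors(5, 5, food, animals))   # [1, 0, 0, 1, 0, 0, 0, 0]
```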

Here is a typical screen shot of an ecosystem after several generations. Note the clustering of the red creatures, no doubt due to the heavy influence of the mating function’s dependence on proximity. The green dots are randomly generated in abundance and represent food.

This experiment produced an ecosystem which supported creatures using the X brain. There was a lot of movement, a lot of bumping around, and a great deal of death. Each ecosystem was repopulated every two minutes, using a GA, and in one of the two runs I made, after an hour there “appeared” to be a lot of directed actions. I often attributed various goals to some of the creatures, especially the ones that seemed to move from place to place. Near the end of every run, there was never a creature alive that sat in one place. Did these creatures have behaviors, or was it just a finite state machine clicking from one state to the next? I am unsure, but it haunts me that I may be dismissing these life-like behaviors due only to their simplicity.



From my perspective, there were no failures in the experiments. Each experiment performed as expected; that is, there seemed to be some behavior developed. One could argue that the quality of the solution, in this case the behavior of the creature, was not optimal, but then again, there was no guarantee that it would be. In fact, optimal solutions to the behaviors were not expected, based on the search algorithm (GA) that I had used.

So the question remains: do these brains that were developed in the three experiments constitute real cognitive function, or even a subset of human cognitive function, or are they just automata, no better than a finite state machine? I think the answer may lie obscured by the simplicity of these experiments, and of other experiments that we may perform in the future.


While running each experiment, it came to me that there may be an additional conclusion I could draw. Should these creatures from the experiments be considered to have some subset of cognitive function, then, since we know them to be capable of being implemented on a Turing machine, they are at least Turing machine equivalent. This being the case, it is further evidence that other cognitive functions may be computational.

Many of the techniques and ideas in this paper came from the field of computer science, but let us not forget the various disciplines which inspired those techniques. Genetic algorithms are based in the biological sciences (like biology and genetics), with various influences from anthropology in developing ideas for increasing search efficiency and effectiveness (like tournament selection). Cellular automata are championed by Stephen Wolfram, who draws much of his inspiration from biology and mathematics and performs experiments in his book dealing with language and vision. My own experiments were inspired by previous computer science implementations of psychology issues such as planning and stimulus-response, and loosely have a Behaviorist bent (since I do not care what is going on inside the CA black box). So I would claim that this entire effort could be situated quite clearly in a cognitive science journal, with a few more revisions.

I had set out to create what I called an “X” brain: a brain that can exhibit a behavior and not be designed in the fashion of our own. My hope was to prove that other implementations of cognition are possible, thereby lending credence to the idea that cognition can be viewed as a computational abstraction. Though the cognition exhibited by the evolved creatures in my experiments may only be at the capability of an ant, it does still represent some level of cognition. Showing that cognition can be implemented in something other than a neural net may help bolster the computational view of cognition, and cause us to view certain types of programs that exhibit cognitive abilities as being just as alive as you and me.




References

Koza, John (1992), Genetic Programming: On the Programming of Computers by Means of Natural Selection (Cambridge, Massachusetts: The MIT Press).

Levy, Steven (1992), Artificial Life: The Quest for a New Creation (New York: Pantheon Books).

Holland, John H. (1998), Emergence: From Chaos to Order (Massachusetts: Helix Books).

Wolfram, Stephen (2002), A New Kind of Science (Canada: Wolfram Media).

Gleick, James (1987), Chaos: Making a New Science (New York, New York: Penguin Books).

Duda, Richard O., Hart, Peter E., and Stork, David G. (2001), Pattern Classification (New York: John Wiley & Sons, Inc.).

Thagard, Paul (2005), Mind: Introduction to Cognitive Science (Massachusetts: The MIT Press).

Rosenblatt, F. (1958), “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain”, in Cummins, Robert and Cummins, Denise Dellarosa (eds.) (2000), Minds, Brains, and Computers: The Foundations of Cognitive Science, An Anthology (Oxford: Blackwell Publishers): 179.

McCulloch, Warren S. and Pitts, Walter (1943), “A Logical Calculus of the Ideas Immanent in Nervous Activity”, in Cummins, Robert and Cummins, Denise Dellarosa (eds.) (2000), Minds, Brains, and Computers: The Foundations of Cognitive Science, An Anthology (Oxford: Blackwell Publishers): 351.

Churchland, Paul M. (1990), “Cognitive Activity in Artificial Neural Networks”, in Cummins, Robert and Cummins, Denise Dellarosa (eds.) (2000), Minds, Brains, and Computers: The Foundations of Cognitive Science, An Anthology (Oxford: Blackwell Publishers): 198.