Opening the black box of connectionist nets: Some lessons from cognitive science


From Computer Standards & Interfaces. Impact factor 0.894.

Visualization of neural network states is an important aspect of understanding how the network operates. One important benefit of being able to understand neural networks comes from their biological basis: understanding how an artificial neural network operates can help with understanding how an organic neural network operates.

The basic function of a neural network is to transform an input into a specific output. It achieves this through a learning scheme: over a series of learning trials, the network learns the associations necessary to perform the transformation correctly. There are many different ways to set up an artificial neural network, along with many different learning algorithms, but all neural networks share the same goal of transforming an input into the appropriate output.

The details of how the network performs the transformation are abstract and can be described using very high-dimensional mathematics.
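To make the learning scheme concrete, here is a minimal sketch, not taken from the paper, of a tiny two-layer network adjusting its weights over repeated learning trials; the task (XOR), architecture, and parameter values are illustrative assumptions of mine.

```python
import numpy as np

# Minimal sketch: a tiny two-layer network learns an input-to-output
# mapping over repeated learning trials. The task and all parameter
# values are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for trial in range(10000):                # repeated learning trials
    h = sigmoid(X @ W1 + b1)              # hidden-layer activations
    out = sigmoid(h @ W2 + b2)            # network output

    # Backpropagate the error and nudge the weights (gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # after enough trials, outputs typically approach the targets
```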

This paper represents an early attempt at understanding the internal workings of a neural network using visualization techniques. It was published in 1994, when the state of computer hardware required networks to remain relatively simple; the available hardware likewise limited the complexity of the visualizations. Although this is an early paper, it gives an important historical perspective on the development of visualization techniques for neural networks and serves as a useful starting point.

Neural networking technology had advanced to the point that off-the-shelf software had been developed which could be used to build and train neural networks. This allowed groups to focus more on the functionality of a network than on understanding its operation. The black-box behavior of these off-the-shelf networks made it possible to ignore the internal activity of the network entirely, which may have contributed to the lack of visualization work being done around this time.


This figure is a representation of a hierarchy of types that can be found in a neural network. The example shows how inputs A and B can be represented in the hidden layer as R1, while inputs A and C can be represented in the hidden layer as R2. The two hidden representations can then be combined into a single unit, Rtop. In this way the four binary inputs are transformed into a single output, which is represented by Rtop.
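A rough sketch of that hierarchy with hand-picked threshold units is shown below; only the unit names R1, R2, and Rtop come from the figure, while the weights and the fourth input D (added to match the count of four binary inputs) are my own assumptions.

```python
import numpy as np

# Illustrative sketch of the hierarchy of types described above.
# Hidden unit R1 responds when inputs A and B are both active,
# R2 when A and C are both active, and Rtop fires when either
# hidden representation is active.

def step(z):
    return (z > 0).astype(float)

# Input order: [A, B, C, D], four binary inputs.
W_hidden = np.array([[1.0, 1.0, 0.0, 0.0],   # R1 detects A AND B
                     [1.0, 0.0, 1.0, 0.0]])  # R2 detects A AND C
b_hidden = np.array([-1.5, -1.5])            # threshold of 1.5: needs both inputs on

w_top = np.array([1.0, 1.0])                 # Rtop responds to R1 OR R2
b_top = -0.5

for A, B, C, D in [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 1)]:
    x = np.array([A, B, C, D], dtype=float)
    r = step(W_hidden @ x + b_hidden)        # hidden representations R1, R2
    r_top = step(w_top @ r + b_top)          # single output unit Rtop
    print(f"input={x.astype(int)}  R1,R2={r.astype(int)}  Rtop={int(r_top)}")
```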



The figure on the left shows a visualization of three different networks and the activation strengths of the connections required to produce the given output. These networks are about as simple as a network can possibly be, and with just a few more nodes and connections this representation would become far too cluttered.

The figure on the right shows a decision space representation of the networks. The vertices of the square show all of the possible inputs, and the line through the box shows the location of the correct output (+1) in the input space.
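As a small illustration of the decision space idea, the snippet below classifies the four corners of the input square with a single threshold unit and reports the boundary line; the weights implement logical OR and are an assumed example of mine, not the networks in the figure.

```python
import numpy as np

# Decision-space sketch for a single-layer network with two binary inputs.
# The four corners of the unit square are the possible inputs; the line
# w1*x1 + w2*x2 + b = 0 is the boundary separating inputs mapped to +1
# from the rest.

w = np.array([1.0, 1.0])
b = -0.5

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
    print(f"input=({x1},{x2}) -> output {out:+d}")

# The boundary line drawn through the input square: x2 = -(w1/w2)*x1 - b/w2
print(f"decision line: x2 = {-w[0]/w[1]:.2f}*x1 + {-b/w[1]:.2f}")
```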


This figure shows a three-dimensional decision space representation for a network with three binary inputs and no hidden layer. The inputs are Bill, laughed, and Jean, and the function of the network is to check for valid sentences. With those three inputs there are only two possible correct outputs: “Bill laughed” or “Jean laughed.” What this representation is supposed to demonstrate is that the top plane, which represents the “laughed” inputs, cannot distinguish between the different sentences that contain the word laughed. No matter how the vectors are set up, the resulting solution surface cannot separate the valid sentences from the invalid ones.
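That separability claim can be checked with a brute-force search over a single threshold unit; this is my own illustration rather than the paper's method, but it finds no weight setting that outputs +1 on exactly the two valid sentences, which is consistent with the point above.

```python
import itertools

# Inputs are (Bill, laughed, Jean); only "Bill laughed" and "Jean laughed"
# count as valid sentences. Search small integer weights for a single
# threshold unit (no hidden layer) that fires on exactly the valid inputs.

inputs = list(itertools.product([0, 1], repeat=3))   # (Bill, laughed, Jean)
valid = {(1, 1, 0), (0, 1, 1)}                       # the two valid sentences

def separates(w1, w2, w3, b):
    for x in inputs:
        fires = w1 * x[0] + w2 * x[1] + w3 * x[2] + b > 0
        if fires != (x in valid):
            return False
    return True

weight_range = range(-5, 6)
solutions = [ws for ws in itertools.product(weight_range, repeat=4) if separates(*ws)]
print("separating weight settings found:", len(solutions))   # expect 0
```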

The main contribution of this paper is presented in this figure. It shows a decision space diagram of two networks that learned in different ways, visually revealing how each network organized its internal solution. As far as performance is concerned, the two networks produce the same outputs, so the only way to see the internal difference between them is through this decision space diagram. The actual contents of the visualization are not very clear, but the key point is that a difference can be seen at all.
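To illustrate the idea that two networks can behave identically while differing internally, here is a toy example of mine, not the networks from the paper: both networks below compute XOR and produce identical outputs on every input, yet their hidden-layer representations differ, which is exactly the kind of difference a decision space diagram is meant to expose.

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)

def forward(x, W1, b1, w2, b2):
    h = step(W1 @ x + b1)        # hidden-layer representation
    return h, step(w2 @ h + b2)  # output unit

# Network A: hidden units compute OR and AND of the inputs.
A = (np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([-0.5, -1.5]),
     np.array([1.0, -2.0]), -0.5)
# Network B: hidden units compute "x1 and not x2" and "x2 and not x1".
B = (np.array([[1.0, -1.0], [-1.0, 1.0]]), np.array([-0.5, -0.5]),
     np.array([1.0, 1.0]), -0.5)

for x in [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]:
    hA, outA = forward(x, *A)
    hB, outB = forward(x, *B)
    print(f"input={x.astype(int)}  outputs A={int(outA)} B={int(outB)}  "
          f"hidden A={hA.astype(int)} hidden B={hB.astype(int)}")
```

Looking only at the outputs, the two networks are indistinguishable; the difference shows up only when the hidden representations (or, in the paper, the decision space) are inspected.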


Neural networks are complicated mathematical structures that have become very useful tools for understanding human cognition. The visualizations presented in this paper represent some of the first steps toward making the internal workings of a network comprehensible. They have the advantage of being very simple, but they often fail to convey much useful information. These visualizations can be summed up as better than nothing, with plenty of room for improvement.