Clustering with Spiking Neurons




A semester project in the Intelligent Systems Laboratory

Author:

Yaron Goren

Instructor:

Prof. Yoram Baram








Abstract

This work shows that a network of spiking neurons, which encodes information in spike times, is capable of computing radial basis functions by storing the relevant information in the neurons' delays. By using a Hebbian learning rule, the network can find clusters in realistic data. Using overlapping receptive fields to encode the data patterns increases the network's clustering capacity. The clustering algorithm that is shown is biologically plausible and is therefore interesting from a neuroscience point of view as well as from a computer science perspective.


Introduction

A spiking neuron is a simplified model of the biological neuron; however, it is more realistic than a threshold gate (perceptron) or a sigmoidal gate. One reason for this is that in a network of spiking neurons the input, the output and the internal representation of information, which is the relative timing of individual spikes, are more closely related to those of a biological network. This representation allows time to be used as a computational resource. It has been shown in [Maass 97] that such a network is computationally more powerful than a network of threshold gates or sigmoidal gates; however, learning algorithms for spiking neural networks are still lacking. This work shows a biologically plausible unsupervised learning rule for clustering data with spiking neurons. Evidence of neurobiological processes which resemble those described in this work can be found in [O'Keefe & Recce 93] and in [Markram & Tsodyks 96], but this issue will not be discussed here. The idea of using spiking neurons to realize RBFs was first proposed in [Hopfield 95], and the algorithms used in this work were developed in [Natschläger & Ruf 98] and [Bohte et al. 00].


A mathematical model for spiking neurons

Below is a model for a simple spiking neuron. For simplicity, features that are not used in this work are omitted. A more detailed description is available in [Maass 96].

The state of neuron j is described by the variable $h_j(t)$, which models the neuron's membrane potential at time t. A spike is generated whenever $h_j(t)$ crosses the threshold.

$F_j = \{t_j^1, t_j^2, t_j^3, \ldots\}$ is the set of firing times of neuron j as defined above.

The effect of an incoming spike on the membrane potential (the EPSP) is modeled by:

$$\varepsilon(\Delta t) = \frac{\Delta t}{\tau} \exp\!\left(1 - \frac{\Delta t}{\tau}\right) \text{ when } \Delta t > 0, \qquad \varepsilon(\Delta t) = 0 \text{ when } \Delta t \le 0$$

(see figure 1).

$\Delta t$ is the time that passed since the beginning of the spike's effect.

$\tau$ is the time to the peak of the post-synaptic reaction.
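For illustration, here is a minimal C++ sketch of this kernel (the function name and the default value τ = 3 ms, taken from figure 1, are my choices):

    #include <cmath>

    // EPSP kernel: eps(dt) = (dt/tau) * exp(1 - dt/tau) for dt > 0, else 0.
    // It rises to a peak of 1 at dt == tau and then decays.
    double epsilon(double dt, double tau = 3.0 /* ms, as in figure 1 */) {
        if (dt <= 0.0) return 0.0;
        return (dt / tau) * std::exp(1.0 - dt / tau);
    }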


Neuron i receives input from a set of neurons marked as $\Gamma_i$.

The membrane potential is a weighted sum of EPSPs caused by neurons from $\Gamma_i$ and is calculated (at time t) using the following function:

$$h_i(t) = \sum_{j \in \Gamma_i} \; \sum_{t_j^k \in F_j} w_{ij}\, \varepsilon\!\left(t - (t_j^k + d_{ij})\right)$$

$w_{ij}$ models the synaptic efficacy between neurons i and j.

$d_{ij}$ models the delay between the occurrence of a spike in neuron j and the beginning of its effect on neuron i. $(t_j^k + d_{ij})$ is therefore the time when the k'th spike from neuron j started affecting neuron i.

The synaptic efficacy is the only learnable parameter in this work.


The network architecture and temporal encoding of data

m-dimensional input patterns $(x_1, \ldots, x_m)$ are encoded with sensory neurons $u_1 \ldots u_m$, which are the network's inputs (see figure 2). Each sensory neuron emits exactly one spike when the pattern is presented to the network. The spike from $u_j$ is delayed by $x_j$ ms. For example, the 3-dimensional input (3,7,1) will be coded by $u_3$ firing after 1 ms, $u_1$ firing after 3 ms and $u_2$ firing after 7 ms.
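A minimal C++ sketch of this encoding (the container layout and the sorted event list are my choices; only the one-spike-per-neuron delay rule comes from the text):

    #include <algorithm>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Encode an m-dimensional pattern: sensory neuron u_j emits one spike
    // x_j ms after the pattern is presented. Returns (time, neuron) pairs
    // sorted so the earliest spike comes first.
    std::vector<std::pair<double, int>> encode(const std::vector<double>& x) {
        std::vector<std::pair<double, int>> spikes;
        for (int j = 0; j < (int)x.size(); ++j)
            spikes.push_back({x[j], j + 1});
        std::sort(spikes.begin(), spikes.end());
        return spikes;
    }

    int main() {
        // The example from the text: (3,7,1) -> u_3 at 1 ms, u_1 at 3 ms, u_2 at 7 ms.
        for (const auto& s : encode({3.0, 7.0, 1.0}))
            std::printf("u_%d fires after %g ms\n", s.second, s.first);
    }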














Figure 1. An epsilon function which models the Excitatory Post-Synaptic Potential (EPSP), here with the time to peak ($\tau$) set to 3 ms.


Figure 2. The network architecture: each connection from a sensory neuron to an RBF neuron consists of D synapses with delays 1...D and weights $w_{ij}^{(1)}, \ldots, w_{ij}^{(D)}$. One such connection (the marked area between $u_i$ and $v_j$) is shown in an enlargement below.

The sensory neurons are connected to a single layer of n spiking neurons ($v_1 \ldots v_n$), which will be referred to as RBF neurons. Between each pair of sensory neuron $u_i$ and RBF neuron $v_j$ there is a set of D independent synapses. The weights of these synapses are $w_{ij}^{(1)}, w_{ij}^{(2)}, \ldots, w_{ij}^{(D)}$, and the delays of these synapses are 1 ms, 2 ms, ... and D ms respectively. Taking into account the multiple synapses and constant delays, $h_i(t)$ is redefined as:

$$h_i(t) = \sum_{j \in \Gamma_i} \; \sum_{t_j^k \in F_j} \; \sum_{d=1}^{D} w_{ij}^{(d)}\, \varepsilon\!\left(t - (t_j^k + d)\right)$$

Inhibiting synapses between all of the RBF neurons (activated when they fire) implement a winner-take-all mechanism, allowing only one RBF neuron to respond to each pattern.
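As a concrete C++ sketch of this redefined sum: the layout, with w[j][d-1] holding $w_{ij}^{(d)}$, is my assumption, and the encoding above guarantees exactly one spike per sensory neuron, which collapses the sum over $F_j$ to a single term:

    #include <cstddef>
    #include <vector>

    double epsilon(double dt, double tau);  // the EPSP kernel defined earlier

    // Membrane potential of RBF neuron i at time t. spikeTime[j] is the single
    // firing time of sensory neuron u_j, and w[j][d-1] is the weight of the
    // synapse from u_j with delay d ms (d = 1..D).
    double membranePotential(double t,
                             const std::vector<std::vector<double>>& w,
                             const std::vector<double>& spikeTime) {
        double h = 0.0;
        for (std::size_t j = 0; j < spikeTime.size(); ++j)
            for (std::size_t d = 1; d <= w[j].size(); ++d)
                h += w[j][d - 1] * epsilon(t - (spikeTime[j] + d), 3.0);
        return h;
    }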


Realizing RBFs with spiking neurons

A radial basis function is a function which is symmetric around a center. Functions of this type have been shown to be useful in various neural networks. We'll see how the spiking neurons $v_1 \ldots v_n$ (called RBF neurons) can realize radial basis functions. The RBF neuron $v_j$ is associated with an m-dimensional vector $\bar{c}_j = (c_j^1, c_j^2, \ldots, c_j^m)$. This is achieved by having only one active (non-zero) synapse from each sensory neuron. The active synapse from $u_i$ will be the one whose delay is $c_j^i$. Formally: $w_{ij}^{(d)} = w_{max}$ when $d = c_j^i$, and $w_{ij}^{(d)} = 0$ for every other d.

The center of such a neuron is defined to be the "mirror image" of $\bar{c}_j$. For example, if $\bar{c}_j = (1,4,7)$, $v_j$'s center will be (6,3,0). The delays between spikes in such an input pattern will be evened out by the delays in the synapses, causing $v_j$ to react to all the incoming spikes at the same time. When comparing the reaction of $v_j$ to an input pattern which is at its center and an input pattern which is slightly off its center, we can see that the first pattern causes a higher peak in $v_j$'s membrane potential and the membrane potential crosses the threshold earlier (see figure 3). The farther away the input pattern is from the center, the later the neuron will reach the threshold. When a pattern is very different from the center, the peak will be lower than the threshold and no spike will be generated. The neuron realizes a function whose input is an m-dimensional pattern and whose output is the time until it emits a spike (if at all); this function is symmetric around the center defined above and is therefore a radial basis function.




The distance-from-center vs. firing-time function of the RBF neurons was tested. A single spiking neuron with only one active synapse from each sensory neuron was stimulated with different input patterns, and the time of firing was measured and compared to the distance of the patterns from the neuron's center. Figure 4 shows the results of such an experiment.


Figure 3. Membrane potential as a result of four 15-dimensional input patterns. The thick solid line is the response to a pattern directly at the center of the RBF, the thin dashed line is the response to an input slightly off the center, and the other two patterns are farther away. If the threshold is set to 10, the neuron will react fastest to the pattern which is at the center. When presented with the pattern farthest from the center (thick dashed line) it will not reach the threshold, and therefore not fire at all.

Figure 4. Dependence of the firing time of the RBF neurons on the distance of the input patterns from the center of the RBF (spike time [ms] vs. distance norm of pattern from center [ms]). The spike time is measured relative to the minimal response time. Spike time -1 indicates that the neuron did not fire as a result of that pattern. This test incorporated receptive fields; however, similar results were achieved without them.



In the actual model, RBF neurons have more than one active synapse from each sensory neuron. $\bar{c}_j = (c_j^1, c_j^2, \ldots, c_j^m)$ is defined by the synaptic weights of the neuron. Each coordinate in $\bar{c}_j$ is a weighted average of the delays from the matching input coordinate. Formally: $c_j^i = \sum_d d \cdot w_{ij}^{(d)}$. The center of such a neuron is (as in the case of one active synapse) a mirror image of $\bar{c}_j$, and the response time to input patterns is symmetric around that center, so this neuron realizes an RBF.
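For illustration, computing $c_j^i$ from the weights in C++; the division by the total weight is my normalization assumption, so that the expression behaves as a weighted average even when the weights do not sum to one:

    #include <cstddef>
    #include <vector>

    // c_j^i as the weighted average of the synaptic delays 1..D, where
    // w[d-1] holds w_ij^(d). The normalization is an assumption (see above).
    double centerCoordinate(const std::vector<double>& w) {
        double num = 0.0, den = 0.0;
        for (std::size_t d = 1; d <= w.size(); ++d) {
            num += d * w[d - 1];
            den += w[d - 1];
        }
        return den > 0.0 ? num / den : 0.0;
    }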


Learning RBFs and clustering with spiking neurons

The learning goal of the network is to have one RBF neuron related to each cluster, so that when an input pattern from that cluster is presented to the network, only the related neuron will fire. Since we have inhibitory synapses between the neurons, it is enough that the correct neuron fires first. In order to achieve that, the weights of the spiking neurons are shifted during learning so that they realize RBFs (in the manner described above) whose centers are the centers of the clusters.

The learning rule is a variant of the Hebb law. It is applied to the synaptic weights of the neuron that fired when the input was presented to the network. The weights are changed in a way that moves the RBF neuron's center closer to the input pattern. Synapses which contributed to the neuron's firing are strengthened, and synapses which did not contribute are weakened. The synapses that contributed are those which started affecting the postsynaptic neuron (taking into account the synaptic delay) slightly before the neuron actually crossed the threshold. The ones which affected the membrane potential much earlier or much later did not contribute to the neuron's firing. Formally, the change in the synaptic weight is given by the following learning rule:

$$\Delta w_{ij}^{(d)} = \eta\, L\!\left((t_i - d) - t_j\right)$$

$\eta$ determines the learning rate. $t_i$ is the time since neuron i fired, $t_j$ is the time since neuron j fired, and d is the synaptic delay; therefore $(t_i - d) - t_j$ is the time that passed from the occurrence of a spike in neuron j to the occurrence of a spike in neuron i. The learning function $L(\Delta t)$ is shown in figure 5.
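A minimal C++ sketch of this update, using the learning-function parameters given in figure 5 just below (the function names are mine):

    #include <cmath>

    // Learning window L(dt) = (1 - b) * exp(-(dt - c)^2 / beta^2) + b,
    // with the figure-5 parameters: minimum b, peak location c, width beta.
    double learningWindow(double dt, double b = -0.2, double c = -2.0,
                          double beta = 1.5) {
        return (1.0 - b) * std::exp(-((dt - c) * (dt - c)) / (beta * beta)) + b;
    }

    // Weight change for the synapse with delay d (ms) between presynaptic
    // neuron j and the winning neuron i: dw_ij^(d) = eta * L((t_i - d) - t_j).
    double weightChange(double eta, double t_i, double t_j, double d) {
        return eta * learningWindow((t_i - d) - t_j);
    }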



Figure 5. The learning function: $L(\Delta t) = (1-b)\exp\!\left(-(\Delta t - c)^2/\beta^2\right) + b$. b is the minimal value (in this case -0.2), c is the location of the peak (-2) and $\beta$ is the width (1.5). A time difference of exactly c ms between the spikes causes maximal strengthening of the synaptic weight; up to 1.5 ms higher or lower causes a smaller strengthening, and more than that weakens the synaptic weight.

Using the above model, data was successfully clustered with results similar to [Natschläger & Ruf 98]. However, it was impossible to cluster data with significantly more clusters than the input dimension (when it was attempted, several clusters were identified by a single neuron or some clusters were not identified at all). This is, of course, a severe drawback for a clustering algorithm.


Encoding sensory input with overlapping receptive fields

A solution to the above problem lies in efficient encoding of the input patterns. Sensory input in living organisms is often encoded with overlapping receptive fields; for example, touching the skin at a certain area may cause several sensory neurons to fire at different rates. This technique is used here to encode input patterns so that it is possible to successfully cluster more complicated data sets.

When using receptive fields, input is not encoded as described above using just one sensory neuron for each data coordinate. Instead, for each coordinate, several receptive field neurons are used to encode the data. Each of the receptive fields has a center: it fires with a short delay if the value of that coordinate is close to its center, with a longer delay for values farther from the center, and does not fire at all if the value is too far from the center. The centers of the receptive fields are evenly distributed within the possible range of values.

When a pattern is shown to the system, each receptive field calculates the value of a gaussian function on the value of the input at the coordinate it is related to. The gaussian for each receptive field is given by: $\exp\!\left(-(x_i - c_j)^2 / w^2\right)$

$x_i$ is the input value at the coordinate.

$c_j$ is the center of the receptive field.

$w$ is the width of the receptive field and is calculated as $w = \beta_{rf} \cdot M/(m-2)$, with M being the maximal value of the input, m the number of receptive fields and $\beta_{rf}$ a parameter.

The value of the gaussian (between 0 and 1) is subtracted from 1 and the result is multiplied by the maximal delay; this is the delay of the receptive field's spike for that input. If the value of the gaussian is too low, no spike will be emitted by that receptive field (see figure 6).
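A C++ sketch of this receptive-field encoding; maxDelay and the minimal activation needed to emit a spike are illustrative parameters, since the text only states that the delay is (1 - gaussian) times the maximal delay and that weak activations produce no spike:

    #include <cmath>

    // Width of each receptive field: w = beta_rf * M / (m - 2), where M is the
    // maximal input value and m the number of receptive fields per coordinate.
    double fieldWidth(double betaRf, double M, int m) {
        return betaRf * M / (m - 2);
    }

    // Firing delay of one receptive field with center c and width w for input
    // value x. Returns a negative value when the field does not fire at all.
    double receptiveFieldDelay(double x, double c, double w,
                               double maxDelay, double minActivation) {
        double g = std::exp(-((x - c) * (x - c)) / (w * w)); // in (0, 1]
        if (g < minActivation) return -1.0;  // too far from center: no spike
        return (1.0 - g) * maxDelay;         // close to center -> short delay
    }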



Implementation details:

The model was implemented in C++ using three classes, ReceptiveField, RBFneuron and Network, and a few auxiliary functions. The code is included in this work and technical details appear in the internal documentation.

When running the program there are three options:

1. Creating a data set file

2. Testing a previously created data set

3. Running a distance-from-center vs. response time test (as in figure 4)

A parameters.txt file with simulation parameters should be created (in a folder called io) for option 2. Results will appear in the io folder in the results.xls file, and the weights of the RBF neurons before and after learning will appear in the weights.xls file.



Figure 6. The gaussian receptive fields. The vertical line indicates an input which causes three receptive fields to fire; the central receptive field will fire almost immediately, and the ones to its left and right will fire after a longer delay (80% and 90% of the maximal delay).

Some important issues (details appear in the internal documentation):

• Maximal weight value: the weights must have a maximal value, otherwise they would be increased indefinitely. The maximal weight is calculated so that after learning, the active synapses that are left have enough weight to cause the RBF neurons to fire. There is also a minimal weight, which is set to zero (no inhibitory synapses are allowed).

• Saturation function: in order to keep the weights within the legal range $[0, w_{max}]$, a saturation function is used. This function causes the synaptic weight to change at a slower rate when close to 0 or $w_{max}$ (a sketch of this idea appears after this list).

• Initial weight value: weights are initiated randomly, but they must be high enough to cause some neuron to fire for every pattern, and low enough so that all the input spikes are necessary in order to cause an RBF neuron to fire. If not all input spikes are necessary, a partial pattern will be learned and several clusters (with a common sub-pattern) will be identified by one neuron.

• The inhibitory connections between the RBF neurons are not implemented directly. Instead, the simulation stops after the first neuron fires; this achieves the same purpose.
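The exact saturation function appears in the internal documentation; the following C++ sketch only illustrates the stated idea, under the assumption that updates are damped linearly as the weight approaches either bound:

    // Hedged sketch: positive updates shrink as w approaches wMax and negative
    // updates shrink as w approaches 0, so that (for small update steps) the
    // weight stays within [0, wMax] and changes slowly near both bounds.
    double saturatedUpdate(double w, double dw, double wMax) {
        double scale = (dw > 0.0) ? (wMax - w) / wMax : w / wMax;
        return w + dw * scale;
    }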


Simulation results:

The above network successfully clustered a variety of data sets. Among them are:

• 2-dimensional, 15 clusters, range 0-20 ms, std 0.4 ms, 1500 patterns (see figure 7)

• 5-dimensional, 15 clusters, range 0-20 ms, std 0.5 ms, 1500 patterns

• 20-dimensional, 15 clusters, range 0-20 ms, std 0.5 ms, 1500 patterns

• the iris data set, a realistic 4-dimensional data set with 150 patterns divided into 3 clusters.

Figure 7. A 2-dimensional data set used to test the network.

In all the simulations, 80% of the data was used for learning. Testing was performed on the full data sets.

In the artificial data sets, 100% correct classification was achieved. In the iris data set, over 90% success was achieved.


References

[Bohte et al. 00] S. M. Bohte, H. La Poutré and J. N. Kok. "Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks." Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, 2000.


[Hopfield 95] J. J. Hopfield. "Pattern recognition computation using action potential timing for stimulus representation." Nature, 376:33-36, 1995.


[Maass 96] W. Maass. "Lower bounds for the computational power of networks of spiking neurons." Neural Computation, 8(1):1-40, 1996.


[Maass 97] W. Maass. "Networks of spiking neurons: the third generation of neural network models." Neural Networks, 10(9):1659-1671, 1997.


[Natschläger & Ruf 98] T. Natschläger and B. Ruf. "Spatial and temporal pattern analysis via spiking neurons." Network: Computation in Neural Systems, 9(3):319-332, 1998.
