
Mimicking synaptic plasticity in memristive neuromorphic systems

S.W. Keemink¹*

1. Life Sciences Graduate School, University of Utrecht, Utrecht, Netherlands

* Correspondence: Amorijstraat 6, 6815 GJ Arnhem, Netherlands; sanderkeemink@gmail.com

Date: 7th of August 2012
Abstract

Development of artificial intelligence has been disappointing in many aspects, and has been severely limited by the basic architecture of computers. The new field of neuromorphic engineering tries to tackle this problem by basing circuit design on brain architecture. Two features of the brain are of particular interest for such implementations: massive parallelism and plasticity. Synapse implementations, however, have proven difficult, due to a lack of inherently plastic circuit elements. This leads to overly complex circuits to mimic any kind of plasticity. Recent developments in nanotechnology provide an exciting new opportunity: the memristor. The memristor is essentially a resistor whose resistance depends on the amount of current that has passed through it: effectively, it is a plastic resistor. It is the first element of its kind and could potentially revolutionize the field of neuromorphic engineering. This paper studies the viability of the memristor as a plastic synapse by reviewing recent developments in memristive technologies, both separately and in combination with known theories of plasticity in the brain. Memristors turn out to be very powerful for mimicking synaptic plasticity, but current research has focused too heavily on spike-based learning mechanisms, and too little experimental work has been done. Memristor-based learning rules could also potentially improve our understanding of the underlying neuroscience, but little work has been done on this. Finally, despite promises of memristor-based circuitry being able to match the complexity and scale of the brain, current memristors would use too much energy. Future research should focus on these three issues.

Keywords: neuromorphic engineering, memristance, synapses, plasticity




Table of Contents

Abstract
1. Introduction
2. Synaptic Plasticity
3. Memristance
4. Synaptic Plasticity and Memristance
5. Applying memristive synaptic plasticity
6. Discussion
7. Bibliography

1. Introduction

During the ascent of silicon computers, wild predictions were made: artificial intelligence would soon surpass the capabilities of our own brain, presenting superior intelligence. Now, several decades later, this has not yet been realized. The basic architecture all silicon computers used turned out to have fundamental restrictions. Albeit powerful at sequential computing, and in that aspect certainly surpassing humans, the von Neumann architecture was always limited by having to pass everything through a central processor: the von Neumann bottleneck (Backus, 1978). This architecture is substantially different from what we understand of the workings of the brain: it is non-parallel and has no inherent plasticity. For this reason much effort has gone into developing alternative technologies, most notably neuromorphic engineering: circuitry with architecture inspired by and mimicking that of the brain.

Mimicking the brain is a daunting task. It is, after all, arguably the most complex system in existence, while also being extremely efficient. Its storage and computational abilities are simply astounding. The brain contains billions of neurons working in parallel, and the number of plastic synapses connecting them is several orders of magnitude larger. With over 10^10 synapses per cm^3, it is a daunting task to manufacture any circuit system that works similarly at the same scale and efficiency. This is the challenge the field of neuromorphic engineering tries to tackle.


Neuromorphic engineering is interesting for two important reasons. First, by designing circuits using knowledge about brain architecture and function, it is possible to develop genuinely new kinds of computers. Second, it might actually serve as a tool to gain more understanding of the very brain architecture it tries to mimic. By using an engineering approach to build structures similar to the brain, we might learn about its restrictions and advantages in ways impossible to study with classic neuroscience. One might argue that this kind of approach is already accomplished by analog modeling of the brain, in which brain structures are modeled on existing computer architectures. However, actually building the circuits has at least two advantages. First, models will never be able to replicate all the complexities of the real world, and the imperfections of physical circuitry might actually be important (after all, a given biological neuron is also not a perfect cell, but the brain still manages). Second, implementing models using actual circuits is more energy efficient than analog modeling (Poon & Zhou, 2011; Snider et al., 2011).


One of the defining features of the nervous system is the plasticity of its synapses, the inner workings of which we have only recently begun to truly understand. It not only underlies long and short term memory (Abbott & Nelson, 2000), but is also extremely important for computation (Abbott & Regehr, 2004). The functional and molecular underpinnings are only beginning to be understood, but we need only concern ourselves with the former: when trying to make circuitry mimicking brain function, the goal is not to reproduce an exact neural synapse with all its molecular complexity, but to reproduce the functional workings. The most well known functional mechanism of synaptic plasticity is Hebbian plasticity (Hebb, 1949), often summarized as "Cells that fire together, wire together" (attributed to Carla Shatz). Since the formulation of Hebbian plasticity, both computational and experimental neuroscience have provided us with many more specific learning mechanisms, which can roughly be categorized as either spike based learning or firing rate based learning. Neuromorphic engineering aims to apply this knowledge to actual circuits.


A problem in reproducing functional plasticity rules in circuit-based synapses has been that the basic elements used in electronic circuits are not inherently (or at least not controllably) plastic. The only basic element capable of inherent plasticity, the memristor, long existed only in theory (Chua, 1971). However, this element has finally been realized in practice in recent years by several labs (Jo et al., 2010; Strukov, Snider, Stewart, & Williams, 2008). The memristor is basically a resistor whose resistance depends on how much current has passed through the element in the past. As such it has memory and is plastic. It is an obvious question whether or not this element can be used to mimic a plastic synapse on the circuit level, and how this can be done. The use of memristors as synapses is an emerging and exciting field, and in this paper I will review the advances and study whether the memristor really is as promising as it sounds.


First I will summarize our current understanding of synaptic plasticity by explaining two of the best known and most successful learning rules: one which changes the connection strength between two neurons based on their relative firing rates, known as the BCM model (Bienenstock, Cooper, & Munro, 1982), and one which depends on the relative timing of fired spikes, known as the Spike-Timing-Dependent Plasticity (STDP) rule (Gerstner, Ritz, & van Hemmen, 1993). Next I will discuss the basic memristor workings: how the memristor was theorized (Chua, 1971) and how it was finally realized (Jo et al., 2010; Strukov et al., 2008).


Then, having discussed some of the basics of both synaptic plasticity and memristance, I will show how the two are related. Soon after the realization of the memristor, several groups began researching the possibilities of using memristors as synapses. Using memristors in surprisingly simple circuits automatically leads to associative memory (Pershin & Di Ventra, 2010), and it was quickly realized that memristors can be implemented en masse in cross-bar circuits (Borghetti et al., 2009; Jo, Kim, & Lu, 2009). Most importantly, it was found that if you assume two spiking neurons with specific kinds of spike shapes, connected by a memristor, various kinds of STDP automatically follow (Linares-Barranco & Serrano-Gotarredona, 2009a, 2009b).



Having considered the basic theory of using memristors as plastic synapses, I will summarize various more complex applications, such as maze solving (Di Ventra & Pershin, 2011) and modeling part of the visual cortex (Zamarreño-Ramos et al., 2011). I will finally discuss some of the current shortcomings: the extreme focus on using neuron-like spiking elements, the relative lack of realized circuits, the problems of trying to mimic a structure we do not fully understand yet, the energy use of currently proposed circuits, and the lack of learning rules other than STDP.



Nevertheless, considering the relative youth of the field, incredible progress has already been made, and memristors are definitely more than just a promising technology: there is real evidence that they could act as plastic synapses and could potentially revolutionize neuromorphic engineering, improving both the way we understand the role of plasticity in the brain and how we can apply this knowledge to actual circuits.




2. Synaptic Plasticity

Here I will briefly discuss the current state of knowledge about plasticity mechanisms. I will focus on the functional plasticity rules between pairs of neurons, rather than the molecular underpinnings, as we are not trying to build actual biological synapses. On the single synapse level I will divide the learning rules into two classes: firing rate based learning and spike based learning. For an excellent and more extensive recap of plasticity rules see Abbott & Nelson (2000).


Although the theory of synaptic plasticity has become more and more detailed, the basics remain much the same as first postulated by Hebb:

"Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (Hebb, 1949)

Or more simply and intuitively stated by Carla Shatz: "Cells that fire together, wire together" (Markram, Gerstner & Sjöström, 2011).




Figure 1. Example of two connected cells. The pre-synaptic cell sends an axon to the post-synaptic cell and connects to one or more of its dendrites. This connection is a synapse, and is thought to be plastic according to a Hebbian rule.


This learning principle is called Hebbian learning, and is at the root of most plasticity theories. Hebbian learning requires some kind of plasticity of the connection between two cells which depends on the activity of both cells. Before we get into details, a few definitions need to be made. For every connected cell pair there is a pre-synaptic cell and a post-synaptic cell. The pre-synaptic cell sends an axon to the post-synaptic cell's dendrites, and is connected to them by a synapse (or several synapses). The effectiveness of a particular synapse we will call the "weight" of the synapse; how this weight changes is the kind of plasticity studied in this paper. See Figure 1 for an illustration. Several molecular mechanisms have been suggested, such as changes in the numbers of messenger molecules in a single synapse or increases in the number of synapses. However, in this report we are concerned with replicating synaptic plasticity in computer chips, and as such will primarily deal with the functional rules of learning rather than with the underlying mechanisms. For a more in depth explanation of the neuroscience and molecular underpinnings see an up to date neuroscience textbook (e.g. Purves et al., 2012).


There are of course a large number of possible functional learning rules that we can define, so we should first ask: what requirements should these rules meet to give a network the capability of memory formation and/or computation? It is hard to define these precisely, but in this review we will consider the following. In a network of connected cells, two prime features are thought to be necessary (Dayan & Abbott, 2003). First, a Hebbian type of learning rule: if two connected cells activate together, their connection should grow stronger. Second, there has to be some kind of competition between connections: if several pre-synaptic cells are connected to a post-synaptic cell, and a particular pre-synaptic cell's connection grows stronger, future activity of the other pre-synaptic cells should influence their connections less. Now, what learning rules have been proposed, and how do they follow these requirements?


I will first consider firing rate based learning. The assumption here is that the individual spikes don't matter so much, but rather the sustained activity over time. A big advantage is that there is no need to model a spiking mechanism; only the firing rate is needed. In most firing rate models, how the post-synaptic cell's activity depends on the pre-synaptic activity, and how the connection weights depend on pre- and post-synaptic activity, can be written in its most general form as:

$$\frac{dv}{dt} = F(v, \mathbf{u}, \mathbf{w}), \qquad \frac{d\mathbf{w}}{dt} = G(\mathbf{w}, \mathbf{u}, v)$$
Here $\mathbf{u}$ is a vector of pre-synaptic firing rates, $v$ the post-synaptic firing rate, and $\mathbf{w}$ a vector with all synaptic weights. The function $F$ describes how the post-synaptic activity depends on this activity itself, the pre-synaptic firing rates and the synaptic weights. In this paper we will not concern ourselves too much with this function, and assume that the post-synaptic firing rate is directly dependent on the pre-synaptic firing rates as follows:

$$v = \mathbf{w} \cdot \mathbf{u},$$
which is simply the sum of all presynaptic firing rates multiplied by their weights. The function $G$ describes how the synaptic weights change over time, based on the current weights and the firing rates of the pre- and post-synaptic neurons. This is the function which could inform how memristors could act as plastic synapses. To implement Hebbian learning, the first and simplest assumption one can make is the following function:

$$\frac{d\mathbf{w}}{dt} = \eta\, v\, \mathbf{u},$$

where $\eta$ is a constant learning rate.
This simple rule satisfies Hebbian learning, as the connection between two cells grows stronger the more the two cells fire together. However, this system of learning is unstable, as all weights grow stronger independently of the other connection strengths. In fact, the higher the post-synaptic firing rate is, the stronger all the weights get.
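To make the instability concrete, here is a minimal numerical sketch of the rule above (my own illustration, not from any cited work; the learning rate, time step and firing rates are arbitrary assumptions):

```python
import numpy as np

# Euler integration of the naive Hebbian rule dw/dt = eta*v*u with v = w.u.
# All constants are illustrative assumptions.
eta, dt = 0.01, 0.1              # assumed learning rate and time step
u = np.array([1.0, 0.5, 0.2])    # fixed pre-synaptic firing rates
w = np.array([0.1, 0.1, 0.1])    # initial synaptic weights

for step in range(100):
    v = w @ u                    # post-synaptic rate: weighted sum of inputs
    w += dt * eta * v * u        # Hebbian update: grows with joint activity

print(w)  # every weight has grown, and growth accelerates with v: unstable
```

Running it shows all weights increasing ever faster, since a larger $v$ feeds back into a larger weight update.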


Several adjustments have been made to this simple model, the details of which I will skip, but they have ultimately led to the BCM model (Bienenstock et al., 1982). For a detailed treatise see also Dayan & Abbott (2003). First of all, to prevent excessive growth of the weights, a threshold mechanism is added as follows:

$$\frac{d\mathbf{w}}{dt} = \eta\, v\, \mathbf{u}\, (v - \theta)$$
Here we've added the term $(v - \theta)$. This decreases the rate of change of the weights as the post-synaptic activity comes closer to some threshold $\theta$: for activity smaller than $\theta$ synapses are weakened, for activity larger than $\theta$ synapses are strengthened.

The threshold itself also evolves, according to:

$$\tau_\theta \frac{d\theta}{dt} = v^2 - \theta$$
where $\tau_\theta$ is some time constant. As $v$ grows, the threshold shifts along with it. As long as the threshold grows faster than $v$, the system remains stable (i.e. the weight values don't explode). The sliding threshold also results in competition between synapses. When particular pre-synaptic cells contribute most to the post-synaptic firing rate, the threshold slides up accordingly. When less contributing pre-synaptic cells are active, however, the post-synaptic firing rate is lower, and the heightened threshold then results in their weights being decreased. BCM learning has been shown to lead to several features of learning observed in the brain, such as orientation selectivity (Blais & Cooper, 2008).
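The stabilizing effect of the sliding threshold is easy to verify numerically. The following sketch (my own; all constants and the two input patterns are assumptions chosen for illustration) integrates the two BCM equations above:

```python
import numpy as np

# BCM rule: dw/dt = eta*v*u*(v - theta), tau_theta*dtheta/dt = v^2 - theta.
eta, tau_theta, dt = 0.05, 5.0, 0.1            # assumed constants
u_patterns = [np.array([1.0, 0.1]),            # two alternating input
              np.array([0.1, 1.0])]            # patterns
w, theta = np.array([0.55, 0.45]), 0.0         # slightly asymmetric start

for step in range(2000):
    u = u_patterns[step % 2]                   # present the next pattern
    v = max(w @ u, 0.0)                        # rectified post-synaptic rate
    w += dt * eta * v * u * (v - theta)        # BCM weight update
    w = np.clip(w, 0.0, None)                  # weights stay non-negative
    theta += dt / tau_theta * (v**2 - theta)   # sliding threshold tracks v^2

print(w, theta)  # weights stay bounded; the stronger input tends to win
```

Because $\theta$ tracks $v^2$, any runaway growth in $v$ pushes $(v - \theta)$ negative and drives the weights back down.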


Firing rate models, albeit very powerful, miss a potentially crucial feature of actual neuronal systems: a spiking mechanism. In the last decade, experimental work (Bi & Poo, 1998, 2001) and theoretical work (Gerstner et al., 1993) has suggested a strong dependence of synaptic plasticity on the relative spike timing of the pre- and post-synaptic cells. The resulting learning mechanism is called Spike Timing Dependent Plasticity (STDP). In this model, each spike pair fired at the two cells changes the synaptic weight between them, and the timing of the spikes specifies exactly how the weight changes. Several forms of this dependence have been measured in different places in the brain, but for now I will only consider the most studied and well known form, which is shown in Figure 2. In this particular STDP learning function, small timing differences result in large changes in plasticity, and vice versa for large timing differences. Furthermore, the order of the spikes is extremely important. If the pre-synaptic cell fires first, a positive change in weight is the result; if the post-synaptic cell fires first, the weight is decreased. This reflects the fact that spikes fired at the post-synaptic cell first could never have been caused by activity at the pre-synaptic cell.
synaptic cell.






This particular learning rule is often approximated as:

$$\Delta w(\Delta t) = \begin{cases} A_+ \, e^{-\Delta t / \tau_+} & \text{if } \Delta t > 0 \\ -A_- \, e^{\Delta t / \tau_-} & \text{if } \Delta t < 0 \end{cases}$$
where $\Delta t = t_{post} - t_{pre}$ is the timing of a given post-synaptic spike relative to a pre-synaptic spike, $A_+$ and $A_-$ describe the heights, and $\tau_+$ and $\tau_-$ the time constants. These two branches describe the change in weight due to a positive and a negative timing difference respectively, and are plotted in Figure 2.




Figure 2. Experimental and theoretical STDP curves. On the x-axis the timing difference between a pre- and a post-synaptic spike is shown; on the y-axis the resulting change in synaptic weight. The dots are experimental data points, the lines exponential fitted curves. Reproduced from Zamarreño-Ramos et al. (2011).


The powerful thing about STDP is that this simple mechanism automatically results in both competition and stable Hebbian learning (Song, Miller, & Abbott, 2000; van Rossum, Bi, & Turrigiano, 2000), as well as various specific types of learning and computation in networks, such as timing/coincidence detection (Gerstner et al., 1993), sequence learning (Abbott & Blum, 1996; Minai & Levy, 1993; Roberts, 1999), path learning/navigation (Blum & Abbott, 1996; Mehta, Quirk, & Wilson, 2000) and direction selectivity in visual responses (Mehta et al., 2000; Rao & Sejnowski, 2000). If STDP can be replicated using memristors, all of these could theoretically also be achieved by circuits using memristors.
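As a concrete reference for the memristor implementations discussed later, the pair-based STDP window can be written down in a few lines. This is my own minimal sketch; the amplitudes and time constants are arbitrary assumptions, not measured values:

```python
import numpy as np

# Pair-based STDP window: potentiation if the pre-synaptic spike comes
# first (delta_t > 0), depression otherwise. Parameter values are assumed.
A_plus, A_minus = 0.1, 0.12          # heights of the two branches
tau_plus, tau_minus = 20.0, 20.0     # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for one spike pair, delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:                  # pre fired first: potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    if delta_t < 0:                  # post fired first: depression
        return -A_minus * np.exp(delta_t / tau_minus)
    return 0.0

print([round(stdp_dw(d), 4) for d in (-40, -10, 10, 40)])
```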




3. Memristance

This section will briefly recap how memristors were theorized and how they have been realized. I will briefly go into the theory, show what the basic learning rules for memristors are, and describe how they have been realized in practice.


Electrical circuits contain three passive basic elements: resistors, capacitors and inductors. These elements have fixed values, meaning they are non-plastic. However, in 1971 Leon Chua theorized an extra element with plasticity: the memristor (Chua, 1971). The memristor acts like a resistor, relating the voltage over the element and the current through it as follows:

$$v(t) = M(x)\, i(t)$$
The memristance $M(x)$ thus acts the same as a resistance, except that it depends on a parameter $x$, which in Chua's derivations was either the charge $q$ or the flux $\varphi$. In what follows I will consider the charge case. Since the charge and current are related as

$$\frac{dq}{dt} = i(t),$$

$M(q)$ depends on the complete history of current that has passed through the element, which makes the memristor act like a resistor with memory for current.
Chua later showed that memristors are part of a broader class of systems called memristive systems (Chua & Kang, 1976), described by:

$$v = R(x, i)\, i, \qquad \frac{dx}{dt} = f(x, i)$$
where $x$ can be any controllable property and $f$ is some function. The function $f$ can be called the equivalent learning rule of the memristor, analogous to the learning rules in the synapse models discussed earlier.



Figure 3. Illustration of the basic mechanism of the memristor realizations. When current flows through the device, the boundary between the regions of different resistivities shifts, changing the overall resistance. Reproduced from Zamarreño-Ramos et al. (2011).

The memristor has only been realized as an actual circuit element during the last few years. One of the reasons it took this long is that it only works at the nano scale (Strukov et al., 2008). The technique involves building an element with two regions of different resistances, $R_{on}$ and $R_{off}$. If constructed correctly, the boundary between the regions will shift due to applied voltages or currents, resulting in a net change of resistance; see Figure 3 for an illustration. In these systems the memristance is described by:

$$M(w) = R_{on} \frac{w}{D} + R_{off} \left(1 - \frac{w}{D}\right),$$

where $w$ is the position of the boundary and $D$ is the device length,


and the learning rule for $w$ is given, to a linear approximation, by:

$$\frac{dw}{dt} = \eta\, i(t)$$
where $\eta$ is some constant dependent on device properties. How the memristor works is illustrated in Figure 4. Here a sinusoidal voltage is applied (blue), resulting in a current passing through the memristor (green). The initially applied voltage results in a low current, but also in a shift in $w$. The shift in $w$ changes the memristance, and during the next voltage period the current is larger. This is illustrated by the labels 1, 2 and 3 in the top and bottom graphs, and vice versa for negative voltages, as illustrated by labels 4, 5 and 6.
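The pinched hysteresis behavior of Figure 4 can be reproduced with a few lines of simulation. This is a sketch of the linear drift model given above, with invented device constants (the real parameters in Strukov et al. (2008) differ):

```python
import numpy as np

# Linear drift memristor: M(w) = R_on*w/D + R_off*(1 - w/D), dw/dt = mu*R_on/D*i.
R_on, R_off, D, mu = 100.0, 16e3, 10e-9, 1e-14  # assumed device parameters
w, dt = 0.1 * D, 1e-4                            # initial boundary position
vs = np.sin(2 * np.pi * np.linspace(0.0, 2.0, 2000))  # two voltage periods
current = []

for v in vs:
    M = R_on * (w / D) + R_off * (1 - w / D)  # state-dependent resistance
    i = v / M                                  # Ohm's law through the device
    w += dt * mu * (R_on / D) * i              # boundary drifts with current
    w = min(max(w, 0.0), D)                    # boundary stays inside device
    current.append(i)

print(min(current), max(current))  # asymmetric response: memory of past drive
```

Plotting `current` against `vs` would trace the hysteresis loops of the bottom panel of Figure 4.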



Figure 4. Memristor workings illustrated. Top: applied sinusoidal voltage (blue) and resulting current (green) plotted over time. Middle: plastic parameter w plotted over time; D is a scaling factor dependent on the device length. Bottom: voltage vs. current plot with hysteresis loops; the numbers in the top plot correspond to specific loops in the bottom plot. As positive voltage is applied, w shifts, changing the memristance, so that later applied voltages have a larger effect. Negative voltages then have the opposite effect. Reproduced from Strukov et al. (2008).



Two labs have actually realized an element like this: one using a region with oxygen vacancies which move due to an applied electric field (Strukov et al., 2008), and one using Ag-rich versus Ag-poor regions which shift in much the same way (Jo et al., 2010). These elements behave according to the ideal equations written above in a linear region of operation, and as memristive systems outside this region. Furthermore, it was found that a certain threshold voltage is needed before any change in memristance occurs.


A theoretical simplification of the whole memristor system, implementing the threshold and the nonlinear region, was proposed by Linares-Barranco & Serrano-Gotarredona (2009b). In this model there is a dead zone in which nothing changes, while $w$ changes exponentially outside this region:

$$\frac{dw}{dt} = \begin{cases} 0 & \text{if } |v| \le v_{th} \\ I_0\, \mathrm{sign}(v) \left( e^{|v|/v_0} - e^{v_{th}/v_0} \right) & \text{if } |v| > v_{th} \end{cases}$$
where $v_{th}$ describes the functioning threshold, and $I_0$ and $v_0$ are parameters determining the slope. This learning rule is illustrated in Figure 5.
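In code, the dead-zone rule is a one-line branch. The sketch below uses invented values for $I_0$, $v_0$ and $v_{th}$, purely for illustration:

```python
import numpy as np

# Dead-zone memristor learning rule: no change for |v| <= v_th,
# exponential change outside. I0, v0 and v_th are assumed values.
I0, v0, v_th = 1e-6, 0.2, 1.0

def dw_dt(v):
    """Rate of change of w as a function of the voltage over the device."""
    if abs(v) <= v_th:
        return 0.0
    return I0 * np.sign(v) * (np.exp(abs(v) / v0) - np.exp(v_th / v0))

print([dw_dt(v) for v in (-1.5, -0.5, 0.5, 1.5)])  # zero inside the dead zone
```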



Figure 5. Learning rule for the proposed memristors. Reproduced from Zamarreño-Ramos et al. (2011).

The memristor offers many new advantages. Among other things, it allows for analog data storage rather than storage in 0's and 1's. Furthermore, it can easily be implemented in existing circuits based on the basic elements. But how relevant is it for neuromorphic engineering, especially as a synaptic device? It definitely has inherent plasticity, with clear learning rules based on the current that has passed through the device, and connecting two nodes in a network with a memristor results in plasticity automatically. But how do memristors compare to the plastic synapses found in the brain? Would the plasticity mechanism meet the Hebbian learning and competition requirements?



4. Synaptic Plasticity and Memristance

In this section I will explore how we can combine synaptic plasticity and memristance on the lowest level. That is, given two connected nodes, how can we relate the two? Research into this has focused on reproducing the STDP mechanism, which I will recap here.


Soon after the development of actual memristors, several labs started looking into using them in neuromorphic systems, focusing on spike-based systems. The most central result was achieved by Linares-Barranco and Serrano-Gotarredona (2009a, 2009b). They showed that if you connect two spiking neuron-like systems with a memristor, and assume a few basic things, STDP-based plasticity automatically follows. The basic system is illustrated in Figure 6.



Figure 6. Two spiking neurons connected by a memristor.

Vpre refers to the pre-synaptic voltage, and Vpost to the post-synaptic voltage. For the purpose of the proof no exact spiking mechanism is needed, only a specific spike shape: if a spike happens at either node, the voltage at that node follows the spike shape. The spike shape is assumed to be of the following form (Linares-Barranco & Serrano-Gotarredona, 2009b):

$$V_{spk}(t) = \begin{cases} A^+ e^{t/\tau^+} & \text{if } t \le 0 \\ -A^- e^{-t/\tau^-} & \text{if } t > 0 \end{cases}$$
Here $A^+$ and $A^-$ describe the positive and negative heights respectively, and $\tau^+$ and $\tau^-$ the exponential rise and decay time scales. The shape is illustrated in Figure 7 and replicates the main features of actual spikes: a sharp initial peak, followed by a slow recovery to equilibrium.
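A spike waveform with these qualitative properties is easy to generate. The sketch below is my own approximation of such a shape (the exact parameterization in Linares-Barranco & Serrano-Gotarredona (2009b) differs in detail; all constants are assumptions):

```python
import numpy as np

# Spike waveform: sharp positive peak at t = 0, slow negative recovery tail.
A_plus, A_minus, tau_plus, tau_minus = 1.0, 0.25, 0.5, 5.0  # assumed (ms)

def v_spike(t):
    """Voltage at a node due to a spike fired at t = 0."""
    t = np.asarray(t, dtype=float)
    rise = A_plus * np.exp(t / tau_plus) * (t <= 0)      # fast ramp to peak
    tail = -A_minus * np.exp(-t / tau_minus) * (t > 0)   # slow recovery
    return rise + tail

print(v_spike([-2.0, -0.1, 0.1, 5.0]))
```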




Figure 7. Basic spike shape assumed for the STDP memristor analysis. Reproduced from Linares-Barranco & Serrano-Gotarredona (2009b).

Furthermore, they used a learning rule as in Figure 5 for their theoretical memristor. If these equations are used to find the voltage over the memristor for different spike timing combinations, the resulting changes in memristance (or "synaptic weight") can be found. The voltage over the memristor in Figure 6 is given by:

$$v_{MR}(t) = V_{post}(t) - V_{pre}(t)$$
where $V_{post}$ and $V_{pre}$ are the post- and pre-synaptic voltages respectively. Now, given a known learning rule

$$\frac{dw}{dt} = f(v_{MR}),$$

where $f$ is as illustrated in Figure 5, it is possible to find the weight change due to a spike time difference:

$$\xi(\Delta T) = \Delta w(\Delta T) = \int_{-\infty}^{\infty} f\big(v_{MR}(t)\big)\, dt$$
where $\Delta T$ is the spike timing difference. By using a range of different spike time combinations, this function can be mapped out. The process is illustrated in Figure 8: the red colored area shows which voltages are above threshold and so contribute to $dw/dt$, and thus to the change in $w$. For positive $\Delta T$, $w$ increases; for negative $\Delta T$, $w$ decreases. For spikes too far apart, or for spikes at exactly the same time, nothing changes. Since $f$ is an exponential function, the increase in $w$ is bigger the smaller $\Delta T$ is (unless it is 0).
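This derivation can also be reproduced numerically: sweep $\Delta T$, build the two spike waveforms, take their difference over the memristor, push it through the learning rule, and integrate. The sketch below reuses the illustrative (assumed) constants from the earlier sketches and recovers an STDP-shaped curve with the correct signs:

```python
import numpy as np

# Recover the STDP curve implied by the memristor rule. All constants are
# the illustrative assumptions used in the earlier sketches.
A_p, A_m, tau_p, tau_m = 1.0, 0.25, 0.5, 5.0   # spike shape parameters
I0, v0, v_th = 1e-6, 0.2, 1.0                  # dead-zone rule parameters
t = np.linspace(-30.0, 30.0, 60001)
dt = t[1] - t[0]

def v_spike(t):
    """Spike waveform: sharp positive peak at t = 0, slow negative tail."""
    return A_p * np.exp(t / tau_p) * (t <= 0) - A_m * np.exp(-t / tau_m) * (t > 0)

def f(v):
    """Dead-zone learning rule dw/dt = f(v_MR), vectorized."""
    out = np.sign(v) * I0 * (np.exp(np.abs(v) / v0) - np.exp(v_th / v0))
    return np.where(np.abs(v) > v_th, out, 0.0)

def xi(dT):
    """Total weight change for a pre spike at t = 0 and a post spike at dT."""
    v_mr = v_spike(t - dT) - v_spike(t)   # post minus pre voltage over device
    return np.sum(f(v_mr)) * dt           # integrate dw/dt over the window

print([f"{xi(d):+.2e}" for d in (-5.0, -1.0, 1.0, 5.0)])
```

Positive $\Delta T$ gives potentiation, negative gives depression, and the magnitude decays as the spikes move further apart, just as in Figure 9.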


Figure 8. Finding the weight change for different spike pairs, illustrated. Reproduced from Linares-Barranco & Serrano-Gotarredona (2009b).

In Figure 9 we can see the resulting function $\xi(\Delta T)$. The curve is equivalent to the original STDP curve, showing that the memristor learning rule, given two connected spiking mechanisms, automatically leads to an STDP-like learning rule. It should be noted that the "weight" $w$ is not actually what is used as the synaptic weight in the above model; the conductance is. The latter is a function of $w$, so rather than using $w$ for the learning function, one related to the memristance $M(w)$ should be used. This results in multiplicative STDP of the form (Zamarreño-Ramos et al., 2011):

$$\xi(\Delta T) = \frac{\Delta G(\Delta T)}{G},$$
where $G = 1/M(w)$ is the memristive conductance. This is a different kind of learning, in which the change in synaptic strength depends on the current synaptic strength as well.





Figure 9. Original STDP function (left) and memristance-derived STDP function (right). Reproduced from Linares-Barranco & Serrano-Gotarredona (2009a).


Since STDP-like learning automatically follows from the described setup, one can thus expect many of the learning and computational advantages STDP offers in a network of memristor-connected spiking nodes, including competition between synapses and Hebbian learning.




5. Applying memristive synaptic plasticity

Until now we have mostly discussed the theoretical side of synaptic plasticity and memristance, and how they are related. In this section we will go further into applying the technologies discussed so far, in two ways: theoretical ideas of how to build large circuits for specific purposes, and various actually realized circuits. I will start with the demonstration of some simple operations and learning in simple circuits, followed by larger scale network learning, and finally two proposed circuits for actual tasks: maze solving and a visual cortex inspired imaging network.


Simple arithmetic operations

The possible computational power of circuits using memristors was demonstrated by Merrikh-Bayat & Shouraki (2011a). They showed how simple circuits are capable of basic arithmetic operations using just a few elements. In conventional circuits the voltages and currents are used for the calculation, but they instead proposed to use the memristance value itself, resulting in simpler and faster operations using less chip area, in particular for multiplication and division. Their proposed circuits are shown in Figure 10. While not directly a neuromorphic application, the simplicity of these circuits could allow them to be combined with neuromorphic structures, and they are worth mentioning.


Figure 10. Simple memristor based circuit example capable of basic arithmetic operations. Reproduced from Merrikh-Bayat & Shouraki (2011b).





Associative learning

Memristor circuits have successfully been used to model many forms of learning, including experimentally found features of amoeba learning (Pershin, Fontaine, & Di Ventra, 2009). Surprisingly small circuits of spiking neurons can perform associative memory, as shown by Pershin & Di Ventra (2010). The proposed circuit is shown in Figure 11A. The circuit was actually built, but with memristors constructed from regular components and digital controllers. In this setup, circuits with a spiking mechanism representing pre-synaptic neurons (N1 and N2) receive different kinds of inputs (in this case "sight of food" and "sound"). These neurons are connected to a post-synaptic neuron N3 by memristors S1 and S2. S1 initially has low resistance and S2 high resistance, so that only the "sight of food" (N1) signal is passed on successfully, leading to N3 spiking.



Figure 11. A. Simple associative memory circuit. N1, N2 and N3 are spiking neuron circuits, connected by memristors S1 and S2. B. Associative memory demonstrated. At first the output neuron only fires if there is a "sight of food" signal, not if there is only a "sound" signal. However, after receiving both the "sight of food" signal and the "sound" signal at the same time, the system has learned to associate the two, and the output neuron fires both when there is a "sound" signal and when there is a "sight of food" signal. Reproduced from Pershin & Di Ventra (2010).

The learning mechanism is similar to that explained in Section 4. When N2 fires, no current flows between N2 and N3, due to the high memristance of S2. If N3 fires spikes at the same time, however, the memristance is lowered, resulting in N2 also being more strongly connected to N3. N3 only fires if N1 is firing, so the connection only strengthens if N1 and N2 fire at the same time. As such, the circuit portrays associative learning.
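The logic of this circuit can be captured in a toy simulation. The sketch below is my own abstraction (the threshold, the Hebbian increment and all input values are invented), not the actual circuit equations of Pershin & Di Ventra (2010):

```python
import numpy as np

# Inputs: index 0 = "sight of food" (via S1), index 1 = "sound" (via S2).
s = np.array([1.0, 0.1])       # synaptic conductances: S1 strong, S2 weak
threshold, lr = 0.5, 0.6       # N3 firing threshold, Hebbian increment

def trial(inputs, s):
    """One stimulation trial; returns whether N3 fires, updating s in place."""
    x = np.array(inputs, dtype=float)
    fires = (s @ x) > threshold      # N3 fires if the summed drive suffices
    if fires:                        # Hebbian step: inputs active while N3
        s += lr * x * (1.0 - s)      # fires are strengthened (saturating)
    return fires

print(trial([0, 1], s))   # sound alone: N3 stays silent -> False
print(trial([1, 1], s))   # pairing trial: both inputs together -> True
print(trial([0, 1], s))   # sound alone now drives N3 -> True: associated
```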



How well the circuit works is illustrated in Figure 11B. Here the two input signals are shown (green and red curves) along with the output spikes (black). Before a learning period during which N1 and N2 spike at the same time, only N1 spikes result in N3 spiking; N2 spikes have no effect. After such a learning period, either N1 or N2 spikes result in N3 spiking.


Network learning

Learning on the level of just a few neurons is conceptually and computationally simple, but does this principle easily translate to larger networks? After all, we originally set out to find out whether memristors can be used to reproduce brain-like structures, which involves numbers like 10^10 synapses per cm^3. Can we build large networks of neuron-like structures connected by memristors that are still tractable and useful?


Several networks have been proposed theoretically, and they all have the cross-bar architecture in common (e.g. Jo et al., 2010, 2009; Zamarreño-Ramos et al., 2011), illustrated in Figure 12. The proposed structure consists of pre- and post-synaptic neurons, each with their own electrode. These electrodes are arranged in a cross-bar structure, as illustrated in Figure 12A, where every pair of crossing electrodes is connected by a memristor synapse, as illustrated in Figure 12B. All pre-neurons are thus connected to all post-neurons. The idea is to have the memristors encode weights between the pre- and post-neurons, performing some transformation on the input to the pre-neurons which is read out by the post-neurons.

Figure 12. Example cross-bar circuit, as proposed in Jo et al. (2010). A. The proposed cross-bar circuit. Every pre- and post-neuron has its own electrode, and every electrode is connected to every other electrode, creating an all-to-all connectivity scheme between the pre- and post-neurons which maximizes connection density. B. A single synapse illustrated. Two electrodes are connected by a memristive connection, in this case one based on an Ag-rich and an Ag-poor region (see the Memristance section).

Jo et al. (2009) actually built a simple memory crossbar circuit, capable of storing information by changing the connection weights between electrodes; its letter encoding capabilities are illustrated in Figure 13. This particular circuit, however, is an explicit memory encoding and read-out example. Another shortcoming of the circuit is that it is based on on/off switching, while one of the main theoretical advantages of memristors is analog storage (Strukov et al., 2008). Nevertheless, for an initial circuit using a new technology these are hopeful results.



Figure 13. A simple crossbar realization with memory storage and retrieval capabilities. A. The word "crossbar" stored in and retrieved from the circuit. B. SEM image of the simple 16x16 circuit. Reproduced from Jo et al. (2009).

One group realized an early exploration of the more plastic advantages of memristor circuits in the form of a self-programming circuit (Borghetti et al., 2009). This circuit was capable of AND/OR logic operations after being self-programmed. An example run can be seen in Figure 14: self-programmed AND logic, where only overlapping inputs elicit a postsynaptic response.


Figure 14. AND/OR logic illustrated for a crossbar circuit after self-programming. The red lines are input voltages, the blue curve the measured output voltage. A. Non-overlapping inputs give very small output. B. Overlapping inputs give large output, indicating an AND operation. Reproduced from Borghetti et al. (2009).

Further theoretical advances have been achieved by Howard et al., who developed a memristor based circuit capable of learning using evolutionary rules (Howard, Gale, Bull, de Lacy Costello, & Adamatzky, 2011). They showed that the memristive properties of the network increase the efficiency of the learning compared to conventional evolving circuitry, and they compared several types of theoretical memristors and showed how each impacted performance.




More complex tasks

So far I have mostly discussed circuits tested on very simple tasks; they mostly served as proofs of concept. Several circuits have in fact been proposed with more useful tasks in mind, two of which I will highlight next.


The first is maze solving. Mazes, and more generally graph problems, can be hard to solve computationally: shortest-path algorithms exist, but for large mazes they can be quite slow. Using memristors, Di Ventra and Pershin proposed a circuit that drastically improves on this (Di Ventra & Pershin, 2011). The proposed circuit is shown in Figure 15. Every node is connected by a memristor and a switch: if that particular path is open, the switch is turned on, and vice versa.


Figure 15. Basic maze solving circuit. The nodes (path crossings) of the maze are mapped to a circuit of memristors in series with switches. If a path is closed, the switch is turned off, and vice versa for open paths. Reproduced from Di Ventra & Pershin (2011).

The circuit is initialized by applying a voltage over the start and end points of the maze. This results in a current flow through the "open" paths, changing the resistance values everywhere along the path. These changed resistances can be read out afterwards to find the possible paths through the maze. The resulting path solving capabilities are shown in Figure 16, including the time needed to solve the maze. The mechanism works much faster than any existing method, due to massive parallelism: all units are simultaneously involved in the computation. This circuit, although very specific to this task, is a good example of how circuit theory and brain inspired architecture can work together to produce new, well working circuitry.


Figure 16. Solutions to a maze with multiple solutions using the proposed circuit, encoded in the resistance of the memristors (Figure 15). The color of each dot indicates the resistance of that memristor; a major resistance difference can be seen at the path split, corresponding to paths of different length. The solution is found after only one iteration of the network, in 0.047 seconds, vastly outperforming other methods of solving this type of problem, thanks to the simultaneous cooperation of the small units. Reproduced from Di Ventra & Pershin (2011).
.

V1 imitation

Finally, I will consider a brain-structure-informed self-learning circuit reproducing feature recognizing properties of the V1 area of the brain's visual cortex, proposed in Zamarreño-Ramos et al. (2011). Neurons in this area have two properties that need to be reproduced: they typically have specific receptive fields, and they portray orientation selectivity, meaning that specific kinds of edges are detected within the receptive fields (Hubel & Wiesel, 1959). Zamarreño-Ramos et al. based their network on known V1 architecture using receptive fields. They started the network with random memristance values and trained it with realistic spiking data, corresponding to signals coming from the retina. The resulting images produced with this network can be seen in Figure 17, and the evolution of the weights in the receptive fields in Figure 18, which shows how orientation selectivity arises during training. This not only shows that memristor networks work well as computational circuits, but also reconfirms that memristance-like structures result in brain-like learning automatically when using STDP. A similar circuit was proposed in Snider (2008).



Figure 17. A. The effect of a rotating dot and a physical scene on the V1-simulating self-learning network. The 3D graphs plot edge detections in space and time; blue dots represent dark-to-light changes, red dots light-to-dark changes. B. Network reproduction of a natural scene: events collected during a 20 ms video of two people walking. White and black dots correspond to the blue and red dots in A respectively. For details see the image source (Zamarreño-Ramos et al., 2011).


Figure 18. Receptive field training resulting in orientation selectivity. Reproduced from Zamarreño-Ramos et al. (2011).


6. Discussion

In this paper I reviewed the current knowledge on synaptic plasticity in the brain and on memristive theory and realization. Next I showed how memristors can be used as plastic synapses that portray Spike Timing Dependent Plasticity (STDP), and finally I showed how various authors have proposed or already realized circuitry based on this, able to perform basic arithmetic operations, associative learning, simple logic operations, maze solving and V1-like edge detection. These are all very promising results, but there are still various points of discussion.


STDP

The most important results for using memristors as plastic synapses are without a doubt the STDP results. Networks consisting of spiking neuron-like structures connected by memristors are almost automatically capable of timing and coincidence detection, sequence learning, path learning and navigation, direction selectivity in visual responses and the emergence of orientation selectivity, many of which have already been demonstrated in memristive circuits. A big weakness of the presented STDP theory is its dependence on a specific learning rule of the memristor: an exponential learning rule (see Figure 5). This is a very idealized function, and it is not clear whether this shape can be represented well by a physical device. It is likely that when implementing STDP in actual circuits the results will not be as clean as presented in the theoretical papers, and the STDP function might be different. Another point of concern is the fact that even with spiking structures, memristors don't portray pure STDP but multiplicative STDP, a less well studied form. The effects of this definitely need to be studied more.


Different spike shapes

One weakness of wanting to implement the specific STDP learning function presented earlier is the specific spike shapes necessary: if you change the spike shape, the learning function also changes. Some preliminary studies of this have already been done, and are shown in Figure 19 (Zamarreño-Ramos et al., 2011). This fact could actually be used as an advantage, as the different STDP functions allow for a higher variety of learning and thus of applications. In fact, many of the learning functions shown in Figure 19 bear resemblance to learning functions which are also found in some brain areas, some examples of which can be found in Figure 20. Perhaps in the experimental cases there is also a relationship between spike shape and learning function, which would offer strong support for a memristive theory of plasticity in the brain. Even if there isn't, at least their functional role can be replicated in circuitry by choosing the spike shapes well.




Figure 19. Influence of different spike shapes on learning functions. Reproduced from Zamarreño-Ramos et al. (2011).


Figure 20. Different experimentally found STDP functions. Reproduced from Abbott & Nelson (2000).


Firing rate based learning

A big gap in the current explorations of applying synaptic plasticity to memristive structures is the reliance on spiking. This puts a circuit designer at a disadvantage, as circuits mimicking spiking behavior need to be implemented, which requires extra space and energy. As discussed earlier in this review, there is another class of functional learning theory, which is based on firing rates. This type of learning has no need for spiking, and still portrays many properties of learning observed in the brain. No memristor studies, to my knowledge, have explored the possibilities of applying that theory to memristor based synapses. This is surprising, as even the basic memristor function is very similar to firing rate based learning. To reiterate, the basic firing rate learning equations are:

$$\frac{dv}{dt} = F(v, u, w), \qquad \frac{dw}{dt} = G(w, u, v)$$
where $v$ and $u$ correspond to the post-synaptic and pre-synaptic firing rates, and $w$ to the connection weight. Very similarly, a basic memristor is described by:

$$v = M(w)\, i, \qquad \frac{dw}{dt} = f(w, i)$$
where in this case $i$ and $v$ correspond to the current through and the voltage over the memristor, and $w$ to some memristor property governing the memristor's resistance. It is clear that these are similar function classes, with the current and voltage being analogous to the pre- and post-synaptic firing rates. If we can build a circuit in which the memristor functions imitate known firing rate functions, this circuit can be used to employ learning as used in firing rate models. Below I will show that the simplest firing rate learning functions and memristor functions are already extremely similar. To reiterate, the simplest firing rate based functions are:

$$v = \mathbf{w} \cdot \mathbf{u}, \qquad \frac{d\mathbf{w}}{dt} = \eta\, v\, \mathbf{u}$$
In this setup the post-synaptic firing rate depends on the pre-synaptic firing rates, and the connection weights grow stronger dependent on both the pre- and post-synaptic firing rates. The simplest theoretical memristor functions, meanwhile, are:

$$v = M(w)\, i, \qquad \frac{dw}{dt} = \eta\, i$$
where the resistance as a function of $w$ is given by:

$$M(w) = R_{on} \frac{w}{D} + R_{off} \left(1 - \frac{w}{D}\right),$$

the time derivative of which is:

$$\frac{dM}{dt} = \frac{R_{on} - R_{off}}{D} \frac{dw}{dt} = \frac{(R_{on} - R_{off})\, \eta}{D}\, i$$
If $v$ and $i$ are analogous to the post- and pre-synaptic firing rates respectively, $M$ would correspond to the connection weight. So for a simple memristor with a specifiable input current, the voltage depends on the amount of input current, while the "connection weight" $M$ changes dependent on the input current alone. While this is not equivalent to the firing rate learning model, it is already very close, and probably relatively simple circuits can be built to mimic the firing rate function more closely, or even more complicated learning functions like the BCM functions (see the Synaptic Plasticity section of this review); future researchers would do well to explore these possibilities. Perhaps by incorporating the circuits designed to do arithmetic operations, something like BCM learning can be achieved (Merrikh-Bayat & Shouraki, 2011a).
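To make the comparison explicit, the sketch below integrates the two simplest update rules side by side (my own illustration; every constant is an arbitrary assumption):

```python
import numpy as np

# Rate-based Hebbian synapse (dw = eta*v*u) next to a linear-drift
# memristor (dw = eta*i). Constants are illustrative assumptions.
eta, dt, steps = 0.05, 0.01, 500
u = 0.8                                  # pre-synaptic rate / input current

w_rate = 0.1                             # rate-model connection weight
w_mem, R_on, R_off = 0.1, 0.1, 1.0       # memristor state and resistances

for _ in range(steps):
    v_rate = w_rate * u                  # post rate: weight times input
    w_rate += dt * eta * v_rate * u      # Hebbian: update needs v as well

    M = R_on * w_mem + R_off * (1 - w_mem)  # resistance from state w
    v_mem = M * u                           # voltage plays the role of v
    w_mem = min(w_mem + dt * eta * u, 1.0)  # drift depends on current only

print(w_rate, w_mem)  # similar form, but the memristor update lacks the
                      # v-dependence of the Hebbian rule
```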



Other types of plasticity

This paper focused mostly on Hebbian, and more specifically STDP type, learning. However, these are not the only kinds of learning theories, nor are they the only mechanisms observed in the brain. In fact, several proposed network models of learning implement various types of learning to work correctly (e.g. Lazar, Pipa, & Triesch, 2009; Tetzlaff, Kolodziejski, Timme, & Wörgötter, 2011). I will briefly discuss the most important ones and their impact on memristor based synaptic plasticity.

The STDP learning we discussed in this paper was pair-based: it described how pairs of spikes influence the synaptic strength between two neurons. However, experimental evidence (Froemke & Dan, 2002) suggests that triplets of spikes also have a separate influence on plasticity, which is not explained by the pair-based STDP learning rule. A computational model has since shown how a triplet-based STDP learning rule could explain these experiments (Pfister & Gerstner, 2006). In fact, a memristive model for triplet STDP learning has already been proposed (Cai, Tetzlaff, & Ellinger, 2011).



Although STDP is a very powerful learning strategy, it does not guarantee stable circuits in a realistic setting (Watt & Desai, 2010). Complementary learning methods have been suggested to constrain firing rates and synaptic weights in a network, synaptic scaling and intrinsic plasticity being the most well known. Synaptic scaling basically puts a limit on the sum of all weights. Intrinsic plasticity, meanwhile, changes a single cell's physiology, and thus the way it fires, based on long term global network activity. Implementing these kinds of processes in parallel with STDP might turn out to be necessary to build viable and effective circuits. An example where synaptic scaling and intrinsic plasticity are already applied is the BCM rule, a firing rate based model. If a memristive BCM model could be designed and combined with an STDP rule, synaptic scaling and intrinsic plasticity could potentially be applied in circuitry.






Plasticity

Much of the research presented here tried to apply knowledge obtained from the brain to circuitry. In particular, theories about the role of synaptic plasticity were applied to learning networks. A major potential problem is that there is actually very little understanding of the link between synaptic plasticity and memory or computation in the brain. Some basic mechanisms of functional learning are known, and some theoretical advantages of these mechanisms have been studied, but on a network level we know very little. Is it actually justified to start building circuits based on all this? First of all, we know enough of the low-level mechanisms to start applying them to actual circuits. Secondly, trying to implement synaptic plasticity in circuits goes both ways: it not only leads to better and new circuit architectures (e.g. for extremely quick maze solving), but also to a potentially better understanding of the very mechanisms we are trying to mimic.


Integrated circuitry

One of the many cited advantages of memristor technology is that it is completely compatible with existing circuitry: it can be used to make networks with plastic synapses connected to conventional circuits (e.g. Snider et al., 2011). However, the claim that this technology could lead to structures with brain-level complexity and computing power invites discussion. Although not much is known about the exact computation mechanisms in the brain, one could argue that its strength is the fact that it consists only of massively parallel connected plastic units, without a specific "interpreter" or similar structure. When trying to mimic brain-like processes, it might not be easy to actually combine these structures with existing circuitry. How do you read out thousands to millions of independent processes, and make sense of them while passing them through a single processor, without running into the same bottleneck as before? For example, in the maze solving circuit, the circuit solves the maze very quickly to a human's eye, but how would a conventional computer read out and use the solution? Again, large scale computations are not well understood in the brain, and by building and testing memristor circuits we could actually begin to study some of these questions.


Ideal modeling problems

A major problem with the current state of the memristor field is that most major results have only been achieved in simulated networks, and even then often rely on very simplified memristors. Real life circuits will be noisier, and the memristors will not be as ideal as simulated. It is not yet clear how well the theoretical results will carry over to actual circuitry. It is possible that actual circuitry might perform better than the simulated circuits, or at least close to a biology-like system, especially if something like evolution-based training is used. For example, an increasing body of literature suggests that the noisiness of the brain could actually be one of its major strengths, for example by encoding the reliability of sensory evidence (Knill & Pouget, 2004). For memristors to really prove their worth as plastic synapses, more experimental results are absolutely vital.





Energy use of circuits

Related to the absence of experimental results is the energy use necessary for the proposed circuitry. To match the complexity of the brain, about 10^3-10^4 synapses per neuron would be needed. With current day technology it is supposedly possible to maintain that ratio and have 10^6 simulated neurons per cm^2, which is quite substantial. Energy-wise, however, there is still a major problem, following the calculations from Zamarreño-Ramos et al. (2011). Current memristor resistance values range from the kOhm to the MOhm scale. Zamarreño-Ramos et al. assumed 1 MOhm memristors and neurons providing enough current to maintain a 1 V potential over the memristors. If the neurons fired spikes at a 10 Hz average, the memristors alone would dissipate 2 kW/cm^2. This amount of power dissipation is unrealistic and would melt any real circuitry. The only way to bring down these power requirements is by increasing the maximum resistance values about 100-fold. Before circuits of brain-like complexity can be built, this milestone will have to be reached. Progress is already being made here: the most recent memristor-like elements are operable with currents as low as 2 uA, as opposed to the 10 mA needed for the theoretical memristors assumed above (Mehonic et al., 2012).
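The order of magnitude is easy to verify. The arithmetic below reproduces the cited estimate; the 20% spike duty cycle is my own assumption, chosen so that the numbers come out at the stated 2 kW/cm^2:

```python
# Back-of-envelope power estimate for a brain-scale memristor circuit.
neurons_per_cm2 = 1e6          # simulated neurons per cm^2 (from the text)
synapses_per_neuron = 1e4      # synapses per neuron (from the text)
v, r = 1.0, 1e6                # 1 V across a 1 MOhm memristor (from the text)
duty = 0.2                     # assumed fraction of time a synapse conducts

p_per_synapse = v**2 / r       # 1 uW while conducting
p_total = neurons_per_cm2 * synapses_per_neuron * p_per_synapse * duty
print(p_total / 1e3, "kW/cm^2")  # -> 2.0 kW/cm^2
```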


Other small devices with similar properties

The memristor is not the only circuit element capable of plasticity. There are also capacitor and inductor alternatives: memcapacitors and meminductors (Di Ventra, Pershin, & Chua, 2009). These allow for even more complicated circuitry portraying plasticity related memory and computations, and future work should incorporate these circuit elements in parallel or in series with the memristor. With these elements it might be especially easy to mimic firing rate based plasticity rules. Moving away from memristance based circuitry, nano-scale transistor devices have also been shown to portray plasticity and spiking-like mechanisms (e.g. Alibart et al., 2010). Understanding how these devices relate to memristors and synaptic plasticity could lead to a better understanding of both, and potentially to combined circuitry.


Final remarks

Memristors have been realized only very recently, and considering the youth of the field extraordinary advances have already been made. Memristors are more than a promising technology: they could potentially revolutionize neuromorphic engineering and pioneer the use of plasticity in actual circuits. The focus is currently mostly on spiking structures connected by memristors. These networks can be very powerful, since STDP is almost automatically portrayed. However, there are various problems with the current approach, as described above. I would suggest three central future research directions.

The similarity of the STDP rules seen in memristors and between actual neurons is extremely interesting, and possibly points towards a memristive explanation of STDP in the brain (Linares-Barranco & Serrano-Gotarredona, 2009a). In the memristive model of STDP the spike shapes are extremely important for the learning mechanism, but no work has been done on relating this to neuroscience. Future work in this direction could potentially increase our understanding of the underlying neuroscience as well.




Although the current focus is on spiking structures, this high degree of biological realism is not necessary. Firing rate based models exist which can portray complicated learning mechanisms, and at first glance these could possibly be reproduced in memristor based circuits. By developing such circuits the need for spiking structures could potentially be removed, or learning rules more complicated than STDP could be implemented by combining spiking and firing rate based learning rules.


Finally, despite the claims that memristor based circuits could potentially rival the complexity of the brain, energy dissipation with current technology does not allow for it. Before circuits operating on a scale similar to the brain can be developed, this bottleneck needs to be addressed.



32

7.
Bibliography

Abbott, L. F., & Blum, K. I. (1996). Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex, 6(3), 406-416. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8670667

Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: taming the beast. Nature Neuroscience, 3(Suppl), 1178-1183. doi:10.1038/81453

Abbott, L. F., & Regehr, W. G. (2004). Synaptic computation. Nature, 431(7010), 796-803. doi:10.1038/nature03010

Alibart, F., Pleutin, S., Guérin, D., Novembre, C., Lenfant, S., Lmimouni, K., Gamrat, C., et al. (2010). An organic nanoparticle transistor behaving as a biological spiking synapse. Advanced Functional Materials, 20(2), 330-337. doi:10.1002/adfm.200901335

Backus, J. (1978). Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Communications of the ACM, 21(8), 613-641. doi:10.1145/359576.359579

Bi, G., & Poo, M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience, 18(24), 10464-10472. Retrieved from http://www.jneurosci.org/content/18/24/10464.short

Bi, G., & Poo, M. (2001). Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience, 24, 139-166. Retrieved from http://www.annualreviews.org/doi/pdf/10.1146/annurev.neuro.24.1.139

Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience, 2(1), 32-48. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/7054394

Blais, B. S., & Cooper, L. (2008). BCM theory. Scholarpedia. Retrieved from http://www.scholarpedia.org/article/BCM_theory#BCM_and_scaling

Blum, K. I., & Abbott, L. F. (1996). A model of spatial map formation in the hippocampus of the rat. Neural Computation, 8(1), 85-93. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8564805

Borghetti, J., Li, Z., Straznicky, J., Li, X., Ohlberg, D. A. A., Wu, W., Stewart, D. R., et al. (2009). A hybrid nanomemristor/transistor logic circuit capable of self-programming. Proceedings of the National Academy of Sciences of the United States of America, 106(6), 1699-1703. doi:10.1073/pnas.0806642106



Cai, W., Tetzlaff, R., & Ellinger, F. (2011). A memristive model compatible with triplet rule for spike-timing-dependent plasticity. arXiv preprint arXiv:1108.4299, 1-6. Retrieved from http://arxiv.org/abs/1108.4299

Chua, L. (1971). Memristor - the missing circuit element. IEEE Transactions on Circuit Theory, 18(5), 507-519. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1083337

Chua, L. O., & Kang, S. (1976). Memristive devices and systems. Proceedings of the IEEE, 64(2), 209-223. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1454361

Dayan, P., & Abbott, L. F. (2003). Theoretical Neuroscience (1st ed.). MIT Press.

Di Ventra, M., Pershin, Y. V., & Chua, L. O. (2009). Circuit elements with memory: memristors, memcapacitors, and meminductors. Proceedings of the IEEE, 97(10), 1717-1724. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5247127

Froemke, R. C., & Dan, Y. (2002). Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416(6879), 433-438. doi:10.1038/416433a

Gerstner, W., Ritz, R., & van Hemmen, J. L. (1993). Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns. Biological Cybernetics, 69(5-6), 503-515. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/7903867

Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley & Sons.

Howard, G., Gale, E., Bull, L., de Lacy Costello, B., & Adamatzky, A. (2011). Towards evolving spiking networks with memristive synapses. 2011 IEEE Symposium on Artificial Life (ALIFE), 14-21. doi:10.1109/ALIFE.2011.5954655

Hubel, D., & Wiesel, T. (1959). Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology, 148, 574-591. Retrieved from http://jp.physoc.org/content/148/3/574.full.pdf

Jo, S. H., Chang, T., Ebong, I., Bhadviya, B. B., Mazumder, P., & Lu, W. (2010). Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters, 10(4), 1297-1301. doi:10.1021/nl904092h

Jo, S. H., Kim, K.-H., & Lu, W. (2009). High-density crossbar arrays based on a Si memristive system. Nano Letters, 9(2), 870-874. doi:10.1021/nl8037689

Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712-719. doi:10.1016/j.tins.2004.10.007

Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network. Frontiers in Computational Neuroscience, 3, 23. doi:10.3389/neuro.10.023.2009



Linares-Barranco, B., & Serrano-Gotarredona, T. (2009a). Exploiting memristance in adaptive asynchronous spiking neuromorphic nanotechnology systems. 9th IEEE Conference on Nanotechnology (IEEE-NANO 2009), 601-604.

Linares-Barranco, B., & Serrano-Gotarredona, T. (2009b). Memristance can explain spike-time-dependent-plasticity in neural synapses. Nature Precedings, 1-5. Retrieved from http://ini.ethz.ch/capo/raw-attachment/wiki/2010/memris10/npre20093010-1.pdf

Markram, H., Gerstner, W., & Sjöström, P. J. (2011). A history of spike-timing-dependent plasticity. Frontiers in Synaptic Neuroscience, 3, 4. doi:10.3389/fnsyn.2011.00004

Mehonic, A., Cueff, S., Wojdak, M., Hudziak, S., Jambois, O., Labbe, C., Garrido, B., et al. (2012). Resistive switching in silicon suboxide films. Journal of Applied Physics, 111(7), 074507.

Mehta, M. R., Quirk, M. C., & Wilson, M. A. (2000). Experience-dependent asymmetric shape of hippocampal receptive fields. Neuron, 25(3), 707-715. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10774737

Merrikh-Bayat, F., & Shouraki, S. B. (2011a). Memristor-based circuits for performing basic arithmetic operations. Procedia Computer Science, 3, 128-132. doi:10.1016/j.procs.2010.12.022

Merrikh-Bayat, F., & Shouraki, S. B. (2011b). Memristor-based circuits for performing basic arithmetic operations. Procedia Computer Science, 3, 128-132. doi:10.1016/j.procs.2010.12.022

Minai, A., & Levy, W. (1993). Sequence learning in a single trial. INNS World Congress on Neural Networks. Retrieved from http://secs.ceas.uc.edu/~aminai/papers/minai_wcnn93.pdf

Pershin, Y. V., La Fontaine, S., & Di Ventra, M. (2009). Memristive model of amoeba learning. Physical Review E, 80(2), 021926. Retrieved from http://pre.aps.org/abstract/PRE/v80/i2/e021926

Pershin, Y. V., & Di Ventra, M. (2010). Experimental demonstration of associative memory with memristive neural networks. Neural Networks, 23(7), 881-886. doi:10.1016/j.neunet.2010.05.001

Pfister, J.-P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of Neuroscience, 26(38), 9673-9682. doi:10.1523/JNEUROSCI.1425-06.2006

Poon, C.-S., & Zhou, K. (2011). Neuromorphic silicon neurons and large-scale neural networks: challenges and opportunities. Frontiers in Neuroscience, 5, 108. doi:10.3389/fnins.2011.00108



Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A., & White, L. E. (2012). Neuroscience (5th ed.). Sinauer.

Rao, R., & Sejnowski, T. (2000). Predictive sequence learning in recurrent neocortical circuits. Advances in Neural Information Processing Systems, 12, 164-170.

Roberts, P. D. (1999). Computational consequences of temporally asymmetric learning rules: I. Differential Hebbian learning. Journal of Computational Neuroscience, 7(3), 235-246. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10596835

Snider, G., Amerson, R., Carter, D., Abdalla, H., Qureshi, M. S., Leveille, A., Versace, M., et al. (2011). From synapses to circuitry: using memristive memory to explore the electronic brain. Computer, 44(2), 21-28. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5713299

Snider, G. S. (2008). Spike-timing-dependent learning in memristive nanodevices. IEEE International Symposium on Nanoscale Architectures (NANOARCH 2008), 85-92. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4585796

Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9), 919-926. doi:10.1038/78829

Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor found. Nature, 453(7191), 80-83. doi:10.1038/nature06932

Tetzlaff, C., Kolodziejski, C., Timme, M., & Wörgötter, F. (2011). Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Frontiers in Computational Neuroscience, 5, 47. doi:10.3389/fncom.2011.00047

Di Ventra, M., & Pershin, Y. V. (2011). Biologically-inspired electronics with memory circuit elements. arXiv preprint arXiv:1112.4987. Retrieved from http://arxiv.org/abs/1112.4987

Watt, A. J., & Desai, N. S. (2010). Homeostatic plasticity and STDP: keeping a neuron's cool in a fluctuating world. Frontiers in Synaptic Neuroscience, 2, 5. doi:10.3389/fnsyn.2010.00005

Zamarreño-Ramos, C., Camuñas-Mesa, L. A., Pérez-Carrasco, J. A., Masquelier, T., Serrano-Gotarredona, T., & Linares-Barranco, B. (2011). On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex. Frontiers in Neuroscience, 5, 26. doi:10.3389/fnins.2011.00026

van Rossum, M. C., Bi, G. Q., & Turrigiano, G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. The Journal of Neuroscience, 20(23), 8812-8821. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/16711840