Plastic synapses in memristive neuromorphic circuits
1. Life Sciences Graduate School, University of Utrecht, Utrecht, Netherlands
Amorijstraat 6 6815GJ Arnhem Netherlands
The development of artificial intelligence has been disappointing in many respects, and has been severely limited by the basic architecture of computers. The new field of neuromorphic engineering aims to tackle this problem by basing circuit design on brain architecture. Two features of the brain in particular are targeted for implementation: massive parallelism and plasticity. Synapse implementations, however, have proven difficult, due to a lack of plastic circuit elements. This leads to the need for overly complex circuits to mimic any kind of plasticity. Recent developments in nanotechnology provide us with an exciting new opportunity: the memristor. The memristor is basically a resistor whose resistance depends on the amount of current that has passed through it: effectively it is a plastic resistor. It is the first element of its kind and could potentially revolutionize the field of neuromorphic engineering. This paper will study the viability of the memristor as a plastic synapse by reviewing the recent developments in memristive technologies, separately and in combination with known theories of plasticity in the brain. Memristors turn out to be very powerful for mimicking synaptic plasticity, but current research has focused too much on spiking-based learning mechanisms, and not enough experimental work has been done. It also seems the memristor learning rules could potentially improve our understanding of the underlying neuroscience, but little work has been done on this. Finally, despite promises of memristor-based circuitry being able to match the complexity and scale of the brain, current memristors would use too much energy to do so. Future research should focus on these shortcomings.

Keywords: neuromorphic engineering, memristance, synapses, plasticity
Table of Contents
1. Introduction
2. Synaptic Plasticity
3. Memristance
4. Synaptic Plasticity and Memristance
5. Applying memristive synaptic plasticity
6. Discussion
1. Introduction

During the ascent of silicon computers wild predictions were made: artificial intelligence would soon surpass the capabilities of our own brain, presenting superior intelligence. Even now, several decades later, this has not yet been realized. Computers built on the same basic architecture all have fundamental restrictions. Albeit powerful at sequential computing, and in that aspect certainly surpassing humans, the von Neumann architecture was always limited by having to use a central processor. This architecture is substantially different from what we understand of the architecture of the brain: it is non-parallel and has no inherent plasticity. Therefore much effort has been going into developing a new field, neuromorphic engineering: circuitry with architecture inspired by and mimicking that of the brain.

Mimicking the brain is a daunting task. It is, after all, arguably the most complex system known, while also being extremely energy efficient. Its storage and computational capacities are simply astounding. The brain contains billions of neurons working in parallel, and the number of plastic synapses connecting them is larger still. It is a daunting task to build any circuit system that works similarly at the same scale and efficiency. This is the challenge the field of neuromorphic engineering tries to tackle.
Neuromorphic engineering is interesting for two important reasons. First, by using knowledge about brain architecture, it is possible to develop genuinely new kinds of computers. Second, it might actually serve as a tool to gain more understanding of the brain architecture it tries to mimic. By using an engineering approach to build structures similar to those of the brain, we might learn more about its restrictions and advantages in ways impossible to study with classic neuroscience. One might argue that this is already accomplished by analog modeling of the brain, in which brain structures are modeled on existing computer architectures. However, actually building the circuits has at least two advantages. First, models will never be able to replicate all the complexities of the real world, and all the imperfections of physical circuitry might actually be important (after all, a given biological neuron is also not a perfect cell, but the brain still manages). Second, implementing models using actual circuits is more energy efficient than analog modeling (Poon & Zhou, 2011; Snider et al., 2011).
One of the defining features of the nervous system is the plasticity of the synapses, which we've only recently begun to truly understand. It not only underlies long- and short-term memory (Abbott & Nelson, 2000) but is also extremely important for computation (Abbott & Regehr, 2004). The functional and molecular underpinnings are only beginning to be understood, but we need only concern ourselves with the former. When trying to make circuitry mimicking brain function, the goal is not to reproduce an exact neural synapse with all its molecular complexity, but to reproduce the functional workings. The most well known form of synaptic plasticity is Hebbian plasticity, often summarized as "cells that fire together, wire together", a phrase attributed to Carla Shatz. The formulation of Hebbian plasticity has provided us with many learning rules, which can roughly be divided into spike-timing-based and firing-rate-based learning. Neuromorphic engineering aims to apply this knowledge to actual circuits.

A problem in reproducing the functional plasticity rules in circuit-based synapses has been that the basic elements used in electronic circuits are not plastic (or at least not controllably so). The only basic element capable of inherent plasticity was the memristor, which for decades existed only in theory. However, this element has been realized in practice in recent years by several labs (Jo et al., 2010; Strukov, Snider, Stewart, & Williams, 2008). The memristor is basically a resistor whose resistance depends on how much current has passed through the element in the past. As such it has memory and is plastic. It is an obvious question whether or not this element can be used to mimic a plastic synapse on a circuit level, and how this can be done.
The science of using memristors as synapses is an emerging and exciting field, and in this paper I will be reviewing the advances and studying whether the memristor really is as promising as it sounds. First I will summarize our current understanding of synaptic plasticity, by explaining two of the best known and most successful learning rules: one which changes the connection strength between two neurons based on their relative firing rates, known as the BCM model (Bienenstock, Cooper & Munro, 1982), and one which depends on the relative timing of fired spikes, known as the Spike Timing Dependent Plasticity (STDP) rule (Gerstner, Ritz, & van Hemmen, 1993). Next I will discuss the basic memristor workings: how the memristor was theorized and how it was finally realized (Jo et al., 2010; Strukov et al., 2008). Then, having discussed some of the basics of both synaptic plasticity and memristance, I will show how the two are related.
Soon after the realization of the memristor, several groups began researching the possibilities of using memristors as synapses. Using memristors in surprisingly simple circuits automatically leads to associative memory (Pershin & Di Ventra). It was quickly realized that memristors can be implemented en masse in crossbar structures (Borghetti et al., 2009; Jo, Kim, & Lu, 2009). Most importantly, it was found that if you assume two spiking neurons with specific kinds of spike shapes, connected by a memristor, various kinds of STDP automatically follow (Linares-Barranco & Serrano-Gotarredona). Having discussed the basic theory of using memristors as plastic synapses, I will summarize various more complex and applied applications such as maze solving (Di Ventra & Pershin, 2011) and modeling part of the visual cortex (Zamarreño-Ramos et al., 2011). I will finally discuss some of the current shortcomings: the extreme focus on using neuron-like spiking elements, the relative lack of realized circuits, the problems with trying to mimic a structure we don't fully understand yet, the energy use of currently proposed circuits, and the lack of learning rules other than STDP.

Nevertheless, considering the relative youth of the field, incredible progress has already been made, and memristors are definitely more than just a promising technology: there is real evidence that they could act as plastic synapses and could potentially transform neuromorphic engineering, changing both the way we understand the role of plasticity in the brain and how we could apply this knowledge to actual circuits.
2. Synaptic Plasticity

Here I will briefly discuss the current state of knowledge about plasticity mechanisms. I will focus on the functional plasticity rules between pairs of neurons rather than the molecular mechanisms, as we are not trying to build actual biological synapses. On the single-synapse level we can divide the learning rules in two classes: firing-rate-based and spike-based. For a more extensive recap of plasticity see Abbott & Nelson (2000). Although the theory of synaptic plasticity has become more and more refined, the basics remain the same as postulated by Hebb:

Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

This was put more simply and intuitively by Carla Shatz: "Cells that fire together, wire together" (Markram, Gerstner & Sjöström).
Figure 1: Example of two connected cells. The pre-synaptic cell sends an axon to the post-synaptic cell and connects to one or more of its dendrites. This connection is a synapse, and is thought to be plastic according to a Hebbian learning rule.

This principle is called Hebbian learning, and is at the root of most plasticity theories: some kind of plasticity of the connection between two cells depends on the activity of both cells.
Before we get into details a few definitions need to be given. For every connected cell pair there is a pre-synaptic cell and a post-synaptic cell. The pre-synaptic cell sends an axon to the post-synaptic cell's dendrites, and is connected by a synapse (or several synapses). The strength of a particular synapse we will call the "weight" of the synapse. How this weight changes is the kind of plasticity studied in this paper. See Figure 1 for an illustration.
Several molecular mechanisms have been suggested, such as changes in the numbers of messenger molecules in a single synapse or increases in the number of synapses. However, in this report we are concerned with replicating synaptic plasticity in computer chips, and as such will primarily deal with the functional rules of learning rather than with the underlying mechanisms. For a more in-depth explanation of the neuroscience and its molecular underpinnings, see an up-to-date neuroscience textbook (e.g. Purves et al., 2012).
There are of course a large number of possible learning rules that we can define, so we should first ask the question: what requirements should these rules meet so that they lead to a network with the capability of memory formation and/or computation? It is hard to define exactly what these are, but in this review we will consider the following. In a network of connected neurons, two prime features are thought to be necessary (Dayan & Abbott, 2003). First, a Hebbian learning rule: if two connected cells activate together, their connection should grow stronger. Second, there has to be some kind of competition between connections: if several pre-synaptic cells are connected to a post-synaptic cell, the activity of any one pre-synaptic cell should influence the weights of the others. Now, what learning rules have been proposed, and how do they meet these requirements?
I will first consider firing-rate-based learning. The assumption here is that the exact spike times don't matter so much, but rather the sustained activity over time. A big advantage is that there is no need to model a spiking mechanism; only the firing rate is needed. In most firing-rate models the post-synaptic cell's activity is some function of the pre-synaptic activity and the weights, where u is a vector of pre-synaptic firing rates, v is the post-synaptic firing rate and w is a vector with all synaptic weights. In this paper we will not concern ourselves too much with this function, and assume that the post-synaptic firing rate is directly dependent on the pre-synaptic firing rates as follows:

v = w · u = Σ_i w_i u_i,

the sum of all pre-synaptic firing rates multiplied by their weights. A learning rule then describes how the synaptic weights change over time, based on the current weights and the activity of the pre- and post-synaptic cells. This is the function which could inform how memristors could act as plastic synapses. To implement Hebbian learning, the simplest choice one can make is:

τ_w dw/dt = v u.

This simple rule satisfies Hebbian learning, as the connection between two cells grows stronger the more the two cells fire together. It is unstable, however, as weights grow stronger independent of the other connection strengths. In fact, the higher the post-synaptic activity, the stronger all the weights get.
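The instability of the plain Hebbian rule is easy to see numerically. Below is a minimal sketch (simple Euler integration; all constants invented for illustration) of τ_w dw/dt = v u with v = w · u: the post-synaptic rate feeds back into the weights, so both grow without bound.

```python
# Plain Hebbian rule tau_w * dw/dt = v * u with v = w . u (Euler integration).
# Illustrative constants only; demonstrates the runaway growth described above.

def simulate_hebb(u, w, tau_w=100.0, dt=1.0, steps=500):
    """Integrate the plain Hebbian rule; returns final weights and v history."""
    w = list(w)
    v_history = []
    for _ in range(steps):
        v = sum(wi * ui for wi, ui in zip(w, u))   # post-synaptic firing rate
        v_history.append(v)
        w = [wi + dt / tau_w * v * ui for wi, ui in zip(w, u)]  # Hebbian update
    return w, v_history

weights, v_history = simulate_hebb(u=[1.0, 0.5], w=[0.1, 0.1])
# v_history grows monotonically: nothing in the rule limits the weights
```

Because dv/dt is proportional to v itself, the post-synaptic rate grows exponentially, which is exactly the instability the BCM model addresses next.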
Several adjustments have been made to this simple model, the details of which I will skip, but they have ultimately led to the BCM model (Bienenstock et al., 1982); for a detailed treatise see Dayan & Abbott (2003). First of all, to prevent excessive growth of the weights, a threshold mechanism is added in the form of the term (v - θ):

τ_w dw/dt = v (v - θ) u.

This decreases the rate of change of the weights as the activity comes closer to some threshold θ: for activity larger than θ synapses are strengthened, and for activity smaller than θ they are weakened. The threshold itself also evolves, according to:

τ_θ dθ/dt = v² - θ,

where τ_θ is some time constant. As the post-synaptic activity grows, this also results in the threshold shifting. As long as the threshold grows faster than v, the system remains stable (i.e. the weight values remain bounded). The sliding threshold also results in competition between synapses. When some pre-synaptic cells contribute most to the post-synaptic firing rate, the threshold slides accordingly. When the less contributing pre-synaptic cells are active, however, the post-synaptic firing rate is lower, and the heightened threshold then results in their weights being decreased. BCM learning has been shown to lead to several features of learning observed in the brain, such as orientation selectivity (Blais & Cooper, 2008).
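The stabilizing effect of the sliding threshold can be checked with the same kind of sketch (again with invented constants; τ_θ is chosen smaller than τ_w so the threshold moves faster than the weights, as the stability condition requires):

```python
# BCM rule with sliding threshold (Euler integration, illustrative constants):
#   tau_w     * dw/dt     = v * (v - theta) * u
#   tau_theta * dtheta/dt = v**2 - theta

def simulate_bcm(u, w, theta=0.0, tau_w=100.0, tau_theta=10.0, dt=0.1, steps=5000):
    w = list(w)
    for _ in range(steps):
        v = sum(wi * ui for wi, ui in zip(w, u))            # post-synaptic rate
        w = [wi + dt / tau_w * v * (v - theta) * ui for wi, ui in zip(w, u)]
        theta += dt / tau_theta * (v * v - theta)           # threshold chases v^2
    return w, theta

w_final, theta_final = simulate_bcm(u=[1.0, 0.5], w=[0.5, 0.5])
# the post-synaptic rate settles near the fixed point v = theta = v**2, i.e. v = 1
```

At the fixed point v = θ and θ = v² imply v = 1, and because the threshold tracks activity faster than the weights change, the run converges there instead of diverging.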
Firing-rate models, albeit very powerful, miss a crucial feature of actual neural systems: a spiking mechanism. In the last decade, experimental work (Bi & Poo, 1998, 2001) and theoretical work (Gerstner et al., 1993) have suggested a strong dependence of synaptic plasticity on the relative spike timing of the pre- and post-synaptic cells. The resulting learning rule is called Spike Timing Dependent Plasticity (STDP). In this model, each spike pair fired at the two cells changes the synaptic weight between them. The timing of the spikes is what specifies how the weight changes exactly. The shape of this dependence has been measured in different places in the brain; the most studied and well known form is shown in Figure 2. In this particular STDP learning function, small timing differences result in large changes in plasticity, and vice versa for large timing differences. Furthermore, the order of the spikes is also extremely important. If the pre-synaptic cell fires first, a weight increase is the result. However, if the post-synaptic cell fires first, the weight is decreased. This reflects the fact that spikes fired at the post-synaptic cell first could never have been caused by activity at the pre-synaptic cell. This particular learning rule is often approximated as:

Δw = A+ e^(-Δt/τ+)   for Δt > 0,
Δw = -A- e^(Δt/τ-)   for Δt < 0,

where Δt = t_post - t_pre describes the timing difference of a given spike pair and τ+, τ- are the time constants. These functions describe the change in weight due to a positive timing difference and a negative timing difference respectively, and are plotted in Figure 2.
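The two exponential branches above translate directly into code. A minimal sketch (the amplitudes and time constants are illustrative values, not fitted to any dataset):

```python
import math

# Exponential STDP window:
#   dw = +A_plus  * exp(-dt / tau_plus)  if dt > 0  (pre before post: potentiation)
#   dw = -A_minus * exp(+dt / tau_minus) if dt < 0  (post before pre: depression)
def stdp_dw(dt_ms, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0  # coincident spikes: no change in this idealized window
```

Small positive timing differences give the largest potentiation, and the sign flips with the spike order, reproducing the shape of Figure 2.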
Figure 2: Experimental and theoretical STDP curves. On the x-axis the timing difference between a pre- and a post-synaptic spike is shown. On the y-axis the resulting change in synaptic weight is shown. The dots are experimental data points, the lines are fitted exponential curves. Reproduced from Zamarreño-Ramos et al. (2011).
The powerful thing about STDP is that this simple mechanism automatically results in both competition and stable Hebbian learning (Song, Miller, & Abbott, 2000; van Rossum, Bi, & Turrigiano, 2000), as well as various specific types of learning and computation in networks, such as timing/coincidence detection (Gerstner et al., 1993), sequence learning (Abbott & Blum, 1996; Minai & Levy, 1993; Roberts, 1999), path learning/navigation (Blum & Abbott, 1996; Mehta, Quirk, & Wilson, 2000) and direction selectivity in visual responses (Mehta et al., 2000; Rao & Sejnowski, 2000). If STDP could be replicated using memristors, all these things could theoretically also quite easily be achieved by circuits using memristors.
3. Memristance

This section will briefly recap how memristors were theorized and how they have been realized. I will briefly go into the theoretical basis, show what the basic learning rules for memristors are, and how they have been realized in practice.

Conventional circuits contain three passive basic elements: resistors, capacitors and inductors. These elements have fixed values, meaning they're non-plastic. However, Leon Chua theorized an extra element with plasticity in 1971: the memristor. The memristor acts like a resistor, relating the voltage over the element and the current through it as

V = M(x) I.

It thus acts the same as a resistance, except that the memristance M depends on a parameter x, which in Chua's derivations was either the charge or the flux. In what follows, x depends on the complete history of current passing through the element, which makes the memristor effectively a resistor with memory. Chua later showed that memristors are part of a broader class of memristive systems (Chua & Kang, 1976), described by

V = M(x) I,    dx/dt = f(x, I),

where x can be any controllable property and f is some function. The function f can be called the equivalent learning rule of the memristor, analogous to the learning rules in the synapse models discussed earlier.
Figure 3: The basic mechanism for the realized memristors. When current flows through the device, the boundary between the regions of different resistivities shifts, changing the overall resistance. Reproduced from Zamarreño-Ramos et al. (2011).
The memristor has only been realized as an actual circuit element during the last few years. One of the reasons it took this long is that it only works at the nanoscale. The technique involves building an element with two regions of different resistivity; the boundary between the regions will shift due to applied voltages or currents, resulting in a net change of resistance (see Figure 3). In these systems the memristance is described by:

M(w) = R_on (w/D) + R_off (1 - w/D),

where w is the position of the boundary and D the device length. The learning rule of w is given, to a linear approximation, by:

dw/dt = μ (R_on/D) I,

where μ is some constant (the ion mobility).
How the memristor works is illustrated in Figure 4. Here a sinusoidal voltage is applied (blue), resulting in a current passing through the memristor (green). The initial applied voltage results in a low current but also a shift in w. The shift in w changes the memristance, and during the next voltage period the current is larger. This is illustrated by the labels 1, 2 and 3 in the top and bottom graphs. And vice versa for negative voltages, as illustrated by labels 4, 5 and 6.
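The linear drift model above is compact enough to simulate directly. A minimal sketch (the device constants R_on, R_off, D and μ are invented, not taken from a real device) showing why a sustained positive voltage makes each later period pass more current:

```python
# Linear ion-drift memristor model:
#   M(w) = R_on * (w/D) + R_off * (1 - w/D),   dw/dt = mu * (R_on/D) * I
# Constants are illustrative only.
R_ON, R_OFF, D, MU = 100.0, 16000.0, 1e-8, 1e-14

def memristance(w):
    x = w / D                              # normalized boundary position in [0, 1]
    return R_ON * x + R_OFF * (1.0 - x)

def step(w, v, dt=1e-4):
    """Advance the boundary w one time step under applied voltage v."""
    i = v / memristance(w)                 # Ohm's law with the current memristance
    w = min(max(w + MU * (R_ON / D) * i * dt, 0.0), D)  # drift, clipped to device
    return w, i

# A constant positive bias drives w up and the memristance down, which is why
# successive positive voltage periods in Figure 4 pass progressively more current.
w = 0.1 * D
m_start = memristance(w)
for _ in range(1000):
    w, i = step(w, 1.0)
m_end = memristance(w)
```

Driving the same element with a sinusoid instead of a constant bias traces out the pinched hysteresis loops of Figure 4, since w drifts up during positive half-periods and back down during negative ones.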
Figure 4: Top: applied sinusoidal voltage (blue) and resulting current (green) plotted over time. Middle: plastic parameter w plotted over time; D is a scaling factor dependent on the device length. Bottom: voltage vs. current plot with hysteresis loops. The numbers in the top plot correspond to specific loops in the bottom plot. As positive voltage is applied, w shifts, changing the memristance and resulting in later applied voltages having a larger effect. Negative voltages next have the opposite effect. Reproduced from Strukov et al. (2008).
Two labs have actually realized an element like this: one using a region with oxygen vacancies which move due to an applied electric field (Strukov et al., 2008), and one using Ag-rich versus Ag-poor regions which shift in much the same way (Jo et al., 2010). These elements behave as the ideal memristor in a linear region of operation, and as memristive systems outside of it. Furthermore, it was found that a certain threshold voltage was needed before any change in memristance occurred.
A theoretical simplification of the whole memristor system implementing the threshold was proposed by Linares-Barranco & Serrano-Gotarredona. In this model there is a threshold region where nothing changes, while w changes exponentially outside of it:

dw/dt = I0 sign(v) (e^(|v|/v0) - e^(v_th/v0))   for |v| > v_th,
dw/dt = 0                                        otherwise,

where v_th describes the functioning threshold and I0 and v0 are parameters determining the shape. This learning rule is illustrated in Figure 5.
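Written out as code, the threshold learning rule is a single piecewise function (parameter values below are illustrative, not taken from the cited model):

```python
import math

# Threshold learning rule sketched from the text:
#   dw/dt = I0 * sign(v) * (exp(|v|/v0) - exp(v_th/v0))  if |v| > v_th, else 0
def dw_dt(v, i0=1e-9, v0=0.2, v_th=1.0):
    if abs(v) <= v_th:
        return 0.0                         # inside the dead zone: no plasticity
    sign = 1.0 if v > 0 else -1.0
    return i0 * sign * (math.exp(abs(v) / v0) - math.exp(v_th / v0))
```

Sub-threshold voltages leave the device untouched, which is what later makes it possible to pair spikes so that only the right coincidences cross the threshold.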
Figure 5: Learning rule for the proposed threshold memristors. Reproduced from Zamarreño-Ramos et al. (2011).
The memristor offers many new advantages. Among other things, it allows for analog data storage, rather than just 0's and 1's. Furthermore, it can easily be implemented in existing circuits based on the basic elements. But how relevant is it for neuromorphic engineering, especially as a synaptic device? It has inherent plasticity, with clear learning rules based on the current that has passed through the device, so connecting two nodes in a network with a memristor results in a plastic connection. But how do memristors compare to plastic synapses found in the brain? Would the plasticity mechanism meet the Hebbian learning and competition requirements?
4. Synaptic Plasticity and Memristance

In this section I will explore how we can combine synaptic plasticity and memristance on the lowest level. That is, given two connected nodes, how can we relate the two? Research into this has focused on reproducing the STDP mechanism, which I will recap here. Soon after the development of actual memristors, several labs started looking into using them in neuromorphic systems, focusing on spike-based systems. The most central result came from Linares-Barranco and Serrano-Gotarredona. They showed that if you connect two spiking neurons with a memristor and assume a few basic things, STDP plasticity automatically follows. The basic system is illustrated in Figure 6.
Figure 6: Two spiking neurons connected by a memristor. Vpre refers to the pre-synaptic voltage, and Vpost to the post-synaptic voltage.

For the purpose of the proof no exact spiking mechanism is needed, only a specific spike shape. If a spike happens at either node, the voltage at that node follows the spike shape. The spike shape is assumed to have a sharp initial peak of height A+ with a short time scale τ+, followed by a slow recovery of depth A- with decay time scale τ- back to equilibrium (Linares-Barranco & Serrano-Gotarredona). The shape is illustrated in Figure 7.

Figure 7: Basic spike shape assumed for the STDP memristor analysis. Reproduced from Linares-Barranco & Serrano-Gotarredona.
Using the learning rule of Figure 5 for their theoretical memristor, it is possible to find the voltage over the memristor for different combinations of spike times, from which the memristance (or "synaptic weight") changes for these combinations can be calculated. The voltage over the memristor in Figure 6 is given by

v_mem = V_post - V_pre,

where V_post and V_pre are the post- and pre-synaptic voltages. Using the learning rule illustrated in Figure 5, it is then possible to find the weight change due to a spike pair as a function of Δt, where Δt is the spike timing difference. By using a range of different spike time combinations, the full function can be found. The process is illustrated in Figure 8. The red colored area shows which voltages are above threshold and contribute to dw/dt, and thus to the change in w. For positive Δt, w increases; for negative Δt, w decreases. For spikes too far apart, or for spikes at exactly the same time, nothing changes. Since the learning rule is an exponential function, the increase in w will be bigger the smaller Δt is (unless it's 0).
Figure 8: Finding the weight changes for different spike pairs, illustrated. Reproduced from Linares-Barranco & Serrano-Gotarredona.
In Figure 9 we can see the resulting function. The curve is equivalent to the original STDP curve, showing that the memristor learning rule, given two connected spiking mechanisms, automatically leads to an STDP-like learning rule. It should be noted that the parameter w is not actually what is used as the synaptic weight in the above model; the memristance is. The latter is a function of w. So rather than using w for the learning function, one related to the memristance should be used. This results in multiplicative STDP (Zamarreño-Ramos et al., 2011), a different kind of learning where the change in synaptic strength depends on the current synaptic strength as well.

Figure 9: Original STDP function (left) and memristance-derived STDP function (right).

Since STDP-like learning automatically follows from the described setup, one could thus expect many of the learning and computational advantages STDP offers in a network of memristively connected spiking nodes, including competition between synapses and Hebbian learning.
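The derivation can be reproduced numerically. The sketch below pairs a pre and a post spike, computes v_mem = V_post - V_pre at each time step, and accumulates the threshold learning rule; all waveform and device parameters are invented, and the spike shape is a crude stand-in for the published one (a flat peak followed by an exponential tail), so only the qualitative behavior is meaningful:

```python
import math

A_P, A_M, TAU_M, WIDTH = 1.0, 0.5, 20.0, 0.5   # spike: brief peak, slow negative tail
V_TH, I0, V0 = 1.0, 1.0, 0.5                   # threshold-rule parameters

def spike(t):
    """Assumed spike shape: brief positive peak at t=0, then a slow negative tail."""
    if t < 0:
        return 0.0
    if t < WIDTH:
        return A_P
    return -A_M * math.exp(-(t - WIDTH) / TAU_M)

def dw_dt(v):
    """Threshold learning rule: no change inside [-V_TH, V_TH]."""
    if abs(v) <= V_TH:
        return 0.0
    s = 1.0 if v > 0 else -1.0
    return I0 * s * (math.exp(abs(v) / V0) - math.exp(V_TH / V0))

def total_dw(dt_spikes, t_step=0.01, horizon=100.0):
    """Integrate dw for a pre spike at t=0 and a post spike at t=dt_spikes."""
    dw, t = 0.0, -horizon
    while t < horizon:
        v_mem = spike(t - dt_spikes) - spike(t)   # V_post - V_pre
        dw += dw_dt(v_mem) * t_step
        t += t_step
    return dw
```

A single spike alone only reaches the threshold exactly, so nothing changes; only when the sharp peak of one spike rides on the opposite-sign tail of the other does v_mem cross the threshold, and the sign of the crossing tracks the spike order: an STDP-like curve falls out of the device physics.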
5. Applying memristive synaptic plasticity

Until now we've mostly discussed the theoretical side of synaptic plasticity and memristance, and how they're related. In this section we will go more into applying the theory discussed so far, including theoretical ideas of how to build large circuits for specific tasks. I will start with the demonstration of simple operations and learning in simple circuits, followed by more large-scale network learning, and finally two proposed circuits for actual tasks: maze solving and a visual cortex inspired network.
The possible computational power of circuits using memristors was demonstrated by Bayat & Shouraki. They showed how simple circuits are capable of basic arithmetic operations using just a few elements. In conventional circuits the voltages and currents are used for the calculation, but they instead proposed to use the memristance itself, resulting in simpler and faster operations using less chip area, in particular for multiplication and division. Their proposed circuits are shown in Figure 10. While not directly a neuromorphic application, the simplicity of these circuits could allow them to be combined with neuromorphic circuits, and they are worth mentioning.
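To give an intuition for why storing an operand in the memristance makes multiplication and division natural, here is an idealized sketch based only on Ohm's law (this is not the Bayat & Shouraki circuit itself, and the scale constant is invented):

```python
# Idealized sketch: operands encoded as memristance, read out via Ohm's law.
SCALE = 1.0  # ohms per unit; invented conversion factor

def multiply(a, b):
    m = a * SCALE          # program the memristance to encode operand a
    i = b                  # drive operand b through the element as a current
    return m * i / SCALE   # read the voltage: V = M * I  ->  a * b

def divide(a, b):
    m = b * SCALE          # encode the divisor as memristance
    v = a                  # apply operand a as a voltage
    return v / m * SCALE   # read the current: I = V / M  ->  a / b

# multiply(3.0, 4.0) -> 12.0, divide(8.0, 2.0) -> 4.0
```

The arithmetic is done by the element's physics rather than by manipulating voltages and currents through many fixed components, which is the source of the claimed savings in chip area.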
Figure 10: Simple memristor-based circuit example capable of basic arithmetic operations. Reproduced from Bayat & Shouraki.
Memristor circuits have successfully been used to model many forms of learning, including features of amoeba learning (Pershin, Fontaine, & Di Ventra, 2009). Surprisingly small circuits of spiking neurons can perform associative memory, as shown by Pershin & Di Ventra. Their proposed circuit is shown in Figure 11A. The circuit was actually built, but with memristors constructed from regular components and digital controllers. In this setup, circuits with a spiking mechanism representing neurons (N1 and N2) receive inputs associated with different kinds of stimuli (in this case "sight of food" and "sound"). These neurons are connected to output neuron N3 by memristors S1 and S2. S1 has a low memristance initially and S2 a high one, resulting in only the "sight of food" signal being passed on.
Figure 11: A: Simple associative memory circuit. N1, N2 and N3 are spiking neuron circuits, connected by memristors S1 and S2. B: Associative memory demonstrated. At first the output neuron only fires if there is a "sight of food" signal, not if there is a "sound" signal. However, after receiving both the "sight of food" signal and the "sound" signal at the same time, the system has learned to associate the two, and the output neuron fires both when there is a "sound" signal and when there is a "sight of food" signal. Reproduced from Pershin & Di Ventra.
The learning mechanism is similar to that explained in Section 4. When N2 fires, no current flows between N2 and N3 due to the high memristance of S2. If N3 fires spikes at the same time, however, the memristance is lowered, resulting in N2 also becoming more strongly connected to N3. Initially N3 only fires if N1 is firing, resulting in the connection only strengthening if N1 and N2 fire at the same time. As such, the circuit portrays associative learning. How well the circuit works is illustrated in Figure 11B. Here the two input signals are shown (green and red curves) along with the output spikes (black). Before a learning period during which N1 and N2 spike at the same time, only N1 spikes result in N3 spiking; N2 spikes have no effect. After such a learning period, spikes at either N1 or N2 result in N3 spiking.
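A toy version of this mechanism fits in a few lines. The sketch below (threshold "neurons" and scalar "memristances"; all values and the update step are invented, not taken from the built circuit) shows the before/after behavior described above:

```python
# Toy associative-memory circuit: N1 ("sight of food") and N2 ("sound")
# feed output neuron N3 through memristors S1 (low) and S2 (high).
HIGH, LOW = 10.0, 1.0      # memristance values: HIGH blocks, LOW passes

class Circuit:
    def __init__(self):
        self.s1, self.s2 = LOW, HIGH   # only "sight of food" passes initially

    def step(self, n1_fires, n2_fires):
        # current into N3 is the sum through both memristors (conductance = 1/M)
        i3 = n1_fires / self.s1 + n2_fires / self.s2
        n3_fires = i3 > 0.5            # N3 spikes above a fixed current threshold
        # Hebbian update: coincident N2 and N3 spikes lower S2's memristance
        if n3_fires and n2_fires:
            self.s2 = max(LOW, self.s2 - 3.0)
        return n3_fires

c = Circuit()
before = c.step(0, 1)                       # "sound" alone: no output
for _ in range(3):
    c.step(1, 1)                            # training: both inputs together
after = c.step(0, 1)                        # "sound" alone now triggers N3
```

During training N3 fires (driven by N1), so the coincident N2/N3 activity strengthens S2 until the "sound" input alone is enough, exactly the association shown in Figure 11B.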
Learning on the level of just a few neurons is conceptually and computationally simple, but does this principle easily translate to larger networks? Recall that we originally set out to find out whether memristors can be used to reproduce brain architecture, which involves enormous numbers of synapses per square centimeter. Can we build large networks of neuron-like structures connected by memristors that are still tractable and useful? Several networks have been proposed theoretically, and they are all based on the crossbar structure (Jo et al., 2010, 2009; Zamarreño-Ramos et al., 2011), which is illustrated in Figure 12. The proposed structure consists of pre- and post-synaptic neurons, each with their own electrode. These electrodes are arranged in a crossbar structure, as illustrated in Figure 12A, where every pair of crossing electrodes is connected by a memristor synapse, as illustrated in Figure 12B. All pre-neurons are thus connected to all post-neurons. The idea is to have the memristors encode weights between the pre- and post-neurons, performing some transformation on input given to the pre-neurons which is read out by the post-neurons.
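Functionally, a crossbar computes a matrix-vector product: each post-electrode sums the currents arriving through its column of memristors (Ohm's law per device, Kirchhoff's current law per column). A minimal sketch with invented conductance values:

```python
# Crossbar readout: pre-neuron activity enters on the rows, memristor
# conductances form the weight matrix, post-neurons read the column sums.
def crossbar_output(pre_rates, conductances):
    """conductances[i][j]: memristor between pre-neuron i and post-neuron j."""
    n_post = len(conductances[0])
    return [sum(pre_rates[i] * conductances[i][j] for i in range(len(pre_rates)))
            for j in range(n_post)]

# 2 pre-neurons fully connected to 3 post-neurons:
g = [[1.0, 0.0, 0.5],
     [0.0, 2.0, 0.5]]
out = crossbar_output([1.0, 2.0], g)   # -> [1.0, 4.0, 1.5]
```

This is exactly the firing-rate readout v = w · u from Section 2, with one physical device per weight, which is what makes the structure attractive at scale.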
Figure 12: Crossbar circuit illustrated, as proposed in Jo et al. A: The proposed crossbar circuit. Every neuron has its own electrode, and every pre-electrode is connected to every post-electrode, creating an all-to-all connection scheme between the pre- and post-neurons which maximizes connection density. B: A single synapse illustrated. Two electrodes are connected by a memristive connection, in this case one based on an Ag-rich and an Ag-poor region (see the Memristance section).
Jo et al. (2009) actually built a simple memory crossbar circuit, capable of storing information by changing the connection weights between electrodes; its letter-encoding capabilities are illustrated in Figure 13. This particular circuit, however, is an explicit memory storage and read-out example. Another limitation is that it is based on on/off switching, while one of the main theoretical advantages of memristors is analog storage (Strukov et al., 2008). Nevertheless, for an initial circuit using a new technology these are hopeful results.
Figure 13: A simple crossbar realization, with memory storage and retrieval capabilities. A: The word "crossbar" stored and retrieved. B: SEM image of the simple 16x16 circuit. Reproduced from Jo et al. (2009).
One group realized an early exploration of the more plastic capabilities of memristor circuits in the form of a self-programming circuit (Borghetti et al., 2009). This circuit was capable of AND/OR logic operations after being self-programmed. An example run can be seen in Figure 14: self-programmed AND logic, where only overlapping inputs elicit a large output.

Figure 14: AND logic illustrated for a crossbar circuit after self-programming. The red lines are input voltages, the blue curve the measured output voltage. Non-overlapping inputs show very small output; overlapping inputs show large output, indicating an AND operation. Reproduced from Borghetti et al. (2009).
Even further theoretical advances have been achieved by Howard et al. They developed a memristor-based circuit capable of learning using evolutionary rules (Howard, Gale, Bull, de Lacy Costello, & Adamatzky, 2011), and showed that the memristive properties of the network improve learning compared to conventional circuitry. They compared several types of theoretical memristors and showed how they impacted performance.
More complex tasks

So far I've mostly discussed circuits tested on very simple tasks; they mostly served as proofs of concept. Several circuits have in fact been proposed with more useful tasks in mind, two of which I will highlight here.
The first is maze solving. Mazes, and more generally graph problems, can be hard to solve; shortest-path algorithms exist, but for large mazes they can be quite slow. Using memristors, Di Ventra and Pershin proposed a circuit that drastically improves on this (Di Ventra & Pershin, 2011). The proposed circuit is shown in Figure 15. Every pair of neighboring nodes is connected by a memristor and a switch. If that particular path is open, the switch is turned on, and vice versa.

Figure 15: Basic maze-solving circuit. The nodes (path crossings) of the maze are mapped to a circuit of memristors in series with switches. If a path is closed, the switch is turned off, and vice versa for open paths. Reproduced from Di Ventra & Pershin (2011).
The circuit is initialized by applying a voltage over the start and end points of the maze. This results in a current flow through the "open" paths, changing the resistance values everywhere along the path. These changed resistances can be read out afterwards to find the possible paths through the maze. The resulting path-solving capabilities are shown in Figure 16, including the time needed to solve the maze. The mechanism works much faster than any existing mechanism, due to all units being simultaneously involved in the computation. The circuit, although very specific to this task, is a good example of how circuit theory and brain architecture can work together to result in new and powerful applications.
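The core idea, that current only flows along paths actually connecting entrance to exit, can be sketched as a resistor network (a toy abstraction of the circuit, not its published implementation; graph, conductances and solver are all invented for illustration). Holding the entrance and exit at fixed voltages and relaxing the interior node voltages plays the role of letting all elements compute simultaneously:

```python
# Toy maze as a resistor network: open corridors are unit conductances.
# Nodes 0-1-2-3 form the solution path; node 4 is a dead end off node 1.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (1, 4): 1.0}

def neighbors(n):
    for (a, b), g in edges.items():
        if g > 0 and n in (a, b):
            yield (b if a == n else a), g

def solve(entrance=0, exit_=3, n_nodes=5, sweeps=2000):
    v = [0.0] * n_nodes
    v[entrance] = 1.0                      # apply the voltage over start and end
    for _ in range(sweeps):                # Gauss-Seidel relaxation
        for n in range(n_nodes):
            if n in (entrance, exit_):
                continue
            nbrs = list(neighbors(n))
            if nbrs:                       # Kirchhoff: node voltage becomes the
                v[n] = (sum(vm * g for (m, g), vm in   # conductance-weighted
                            zip(nbrs, (v[m] for m, _ in nbrs))) /  # mean
                        sum(g for _, g in nbrs))
    return v

v = solve()
current_path = abs(v[0] - v[1])   # voltage drop -> current along the solution path
current_dead = abs(v[1] - v[4])   # no drop, so no current, into the dead end
```

After relaxation there is a voltage drop (and hence current, and hence memristance change) only across edges on entrance-to-exit paths, while dead ends equilibrate with their neighbor and carry nothing, which is what the read-out in Figure 16 exploits.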
Figure 16: Solutions to a maze with multiple solutions using the proposed circuit, encoded in the resistance of the memristors. The color of each dot indicates the resistance of that memristor; a major resistance difference can be seen at the split, corresponding to the different path lengths. This solution can be found after only one iteration of the network, in 0.047 seconds, vastly outperforming any other method of solving this type of problem, thanks to the simultaneous cooperation of all elements. Reproduced from Di Ventra & Pershin (2011).
Finally, I will consider a circuit modeling a brain structure: one recognizing properties of the V1 area in the brain's visual cortex, proposed in Zamarreño-Ramos et al. (2011). Neurons in this area have two properties that need to be reproduced: they typically have small receptive fields, and they portray orientation selectivity, meaning specific kinds of edges are detected within the receptive fields (Hubel & Wiesel, 1959). Zamarreño-Ramos et al. based their network on a crossbar structure using receptive fields. They started the network with random memristance values and trained it with realistic spiking data, corresponding to signals coming from the retina. The resulting images produced with this network can be seen in Figure 17. The evolution of the weights in the receptive fields can be found in Figure 18, showing how orientation selectivity arises during training. This not only shows that memristor networks work well as computational circuits, but also reconfirms that V1-like structures result in orientation-selective learning automatically when using STDP. A similar circuit has also been proposed elsewhere.
Illustrating the effect of a rotating dot and a physical scene on the V1 network. (A) Graph plots of edge detections in space and time; blue dots represent dark-to-light changes, red dots represent light-to-dark changes. (B) Network reproduction of a natural scene, with events collected during a 20 ms video of two people walking; the white and black dots correspond to the blue and red dots in A, respectively. For details see the image source. Reproduced from (Ramos et al., 2011).

Receptive field training resulting in orientation selectivity. Reproduced from (Ramos et al., 2011).
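The core of this result, random memristive weights becoming selective under STDP, can be illustrated with a deliberately small sketch. This is my own toy, not the Ramos et al. network; the 4x4 field, amplitudes, and time constant are all assumed. A single postsynaptic neuron repeatedly sees a noisy vertical bar, and multiplicative pair-based STDP pushes the bar's synapses up and the rest down:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.uniform(0.3, 0.7, size=(4, 4))   # normalized memristances in [0, 1]
A_PLUS, A_MINUS, TAU = 0.05, 0.04, 10.0  # STDP amplitudes, time constant (ms)

def stdp(dt, w):
    """Multiplicative STDP: the update scales with the distance to the bound."""
    pot = A_PLUS * np.exp(-np.abs(dt) / TAU) * (1.0 - w)   # pre before post
    dep = -A_MINUS * np.exp(-np.abs(dt) / TAU) * w         # post before pre
    return np.where(dt > 0, pot, dep)

for _ in range(300):
    pre_t = rng.uniform(20.0, 40.0, size=(4, 4))  # background inputs fire late
    pre_t[:, 1] = rng.uniform(0.0, 5.0, size=4)   # vertical-bar inputs fire early
    post_t = 10.0                                 # postsynaptic spike time (ms)
    W += stdp(post_t - pre_t, W)                  # dt > 0 only for bar inputs

print(np.round(W, 2))  # column 1 saturates near 1, the rest decay toward 0
```

Each entry of W plays the role of one memristor in a crossbar receptive field; in the full network the same dynamics, driven by retinal edge statistics instead of a fixed bar, produce the oriented receptive fields shown in the figure.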
In this paper I reviewed the current knowledge on synaptic plasticity in the brain and on memristive theory and realization. Next I showed how memristors can be used as plastic synapses that portray Spike Timing Dependent Plasticity (STDP), and finally I showed how various authors have proposed or already realized circuits based on this, able to perform basic arithmetic operations, associative learning, simple logic operations, maze solving, and V1-like visual processing. These are all very promising results, but there are still various points of concern.

The most important result for using memristors as plastic synapses is without a doubt STDP: networks consisting of spiking neurons connected by memristors are capable of timing and coincidence detection, sequence learning, path learning and navigation, direction selectivity in visual responses, and the emergence of orientation selectivity, many of which have already been demonstrated in memristive circuits.
A big weakness of the presented STDP theory is its dependence on a specific learning rule of the memristor: an exponential learning rule (see Figure 5). This is a very idealized function, and it is not clear if this shape can be reproduced by a physical device. It is likely that when implementing STDP in actual circuits, results will not be as clean as presented in the theoretical papers, and the STDP function might deviate considerably from the ideal form. Another point of concern is the fact that even with spiking structures, memristors don’t portray pure STDP, but multiplicative STDP, a less well-studied form. The effects of this definitely need to be studied more.
Different spike shapes
One weakness of implementing the learning function presented earlier is its dependence on specific spike shapes: if you change the spike shape, the learning function also changes. Some preliminary studies of this have already been done, and are shown in Figure 18 (Ramos et al., 2011). This fact could actually be used as an advantage, as the different learning functions allow for a higher variety of learning and thus of applications. In fact, some of the learning functions shown in Figure 19 bear resemblance to learning functions which are also found in some brain areas, examples of which can be found in Figure 20. Perhaps in the experimental cases there is also a relationship between spike shape and learning function, which would offer strong support for a memristive theory of plasticity in the brain. Even if there isn’t, their functional role can at least be replicated in circuitry by choosing the spike shapes well.
Influence of different spike shapes on learning functions. Reproduced from (Ramos et al., 2011).

Different experimentally found STDP functions. Reproduced from (Abbott & Nelson, 2000).
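The dependence of the learning function on spike shape follows directly from how memristive STDP works: the voltage across the device is the difference of the pre- and postsynaptic waveforms, and the weight only changes where that difference exceeds a threshold. The sketch below uses a hypothetical waveform and threshold, chosen only for illustration; it computes the resulting learning window for one spike shape, and swapping in a different `spike()` immediately yields a different window:

```python
import numpy as np

T = np.linspace(-50.0, 150.0, 4001)  # time axis (ms)
DT_STEP = T[1] - T[0]
V_TH, ETA = 0.5, 0.01                # threshold and update rate, assumed

def spike(t):
    """Assumed spike shape: sharp positive peak, then a slow negative tail."""
    s = np.zeros_like(t)
    m = t >= 0
    s[m] = 1.0 * np.exp(-t[m] / 2.0) - 0.6 * np.exp(-t[m] / 10.0)
    return s

def weight_change(delta_t):
    """Integrate the over-threshold part of v_post(t) - v_pre(t)."""
    v = spike(T - delta_t) - spike(T)        # post spike lags pre by delta_t
    over = np.sign(v) * np.clip(np.abs(v) - V_TH, 0.0, None)
    return ETA * over.sum() * DT_STEP

# Causal pairings (delta_t > 0) potentiate, anti-causal ones depress, and a
# lone spike never crosses the threshold, so distant pairings do nothing.
window = {dt: weight_change(dt) for dt in (-20, -6, 0, 6, 20, 40)}
print(window)
```

Because the window is just an integral over the waveform, reshaping the tail or the peak reshapes the learning function, which is the effect shown in the figures above.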
Firing rate based learning
A big gap in the current explorations of applying synaptic plasticity to memristive structures is the reliance on spiking. This puts a circuit designer at a disadvantage, as circuits mimicking spiking behavior need to be implemented, which requires extra space and energy. As discussed earlier in this review, there is another class of functional learning theory, which is based on firing rates. This kind of learning has no need for spiking, and still portrays many properties of learning observed in the brain. No memristor studies, to my knowledge, have explored the possibilities of applying that theory to memristor circuits. This is surprising, as even the basic memristor function is very similar to firing rate based learning. To reiterate the basic firing rate learning equation:

    dw/dt = v * u

where v and u correspond to the post- and presynaptic firing rates, and w to the connection weight. Very similarly, a basic memristor is described by:

    V = R(w) * I,    dw/dt = f(I)

where in this case I and V correspond to the current through and the voltage over the memristor, and w to some memristor property governing the memristor’s resistance. It is clear that these are the same type of equations, with the current and voltage being analogous to the firing rates. If we can build a circuit in which the memristor functions imitate known firing rate functions, this circuit can be used to employ learning as used in firing rate models. Below I will show that the simplest firing rate learning functions and memristor functions are already extremely similar.
To reiterate the simplest firing rate model:

    v = w * u,    dw/dt = v * u

In this setup the postsynaptic firing rate depends on the presynaptic firing rate, and the connection weight gets stronger dependent on both the pre- and postsynaptic activity. The simplest theoretical memristor functions, meanwhile, are:

    V = R(w) * I,    dw/dt = I

where the resistance as a function of w is given by:

    R(w) = R_on * w + R_off * (1 - w)

Here V and I are analogous to the post- and presynaptic firing rates respectively, and w corresponds to the connection weight. So for a simple memristor with a specifiable input current, the voltage depends on the amount of input current, while the “connection weight” increases dependent on the input current. While this is not equivalent to the firing rate learning model, it is already very close, and relatively simple circuits can probably be built to mimic the firing rate function more closely, or even more complicated learning functions like the BCM functions (see the synaptic plasticity section of this review). Future researchers would do well to explore these possibilities.
Perhaps by combining this with the circuits designed to do basic arithmetic (Bayat & Shouraki, 2011a, 2011b), something like BCM learning can be achieved.
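The closeness of the two rules is easy to see numerically. The sketch below is illustrative: all constants are assumed, and `dw/dt = beta * I` stands in for whatever the physical device implements. It integrates both equations with the same presynaptic drive; both weights grow with activity, but the Hebbian weight grows with pre times post while the memristive one grows with the input alone, which is exactly the gap extra circuitry would have to close:

```python
import numpy as np

DT, ETA, BETA = 0.01, 0.1, 0.05        # step size and learning rates, assumed
R_ON, R_OFF = 1.0, 10.0                # resistance bounds (arbitrary units)

def r_of_w(w):
    """Resistance interpolates between R_off (w = 0) and R_on (w = 1)."""
    return R_OFF + (R_ON - R_OFF) * w

w_rate = w_mem = 0.1
for step in range(500):
    u = 1.0 + 0.5 * np.sin(0.02 * step)      # shared presynaptic drive
    # Firing-rate Hebbian rule: v = w*u, dw/dt = eta * v * u.
    v = w_rate * u
    w_rate = min(1.0, w_rate + DT * ETA * v * u)
    # Simple memristor: V = R(w)*I, dw/dt = beta * I.
    I = u
    V = r_of_w(w_mem) * I                    # the "postsynaptic" variable
    w_mem = min(1.0, w_mem + DT * BETA * I)

print(f"w_rate = {w_rate:.3f}, w_mem = {w_mem:.3f}")  # both grow with activity
```

The missing ingredient is feeding the memristor's own output voltage back into its drive, so that the update depends on the product of input and output, as in the Hebbian rule.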
Other types of plasticity
This paper focused mostly on Hebbian, and more specifically STDP-type, learning. However, these are not the only kinds of learning theories, nor are they the only mechanisms observed in the brain. In fact, several proposed network models of learning implement various types of learning to work correctly (Lazar, Pipa, & Triesch, 2009; Tetzlaff, Kolodziejski, Timme, & Wörgötter, 2011). I will briefly discuss the most important ones and their impact on memristor based circuits.
The STDP learning we discussed in this paper was pair based: it described how pairs of spikes influence the synaptic strength between two neurons. However, experimental evidence (Froemke & Dan, 2002) suggests that triplets of spikes also have a separate influence on the synaptic strength, one that is not explained by the pair based STDP learning rule. A computational model showed how a triplet based STDP learning rule could explain these experiments (Pfister & Gerstner, 2006). In fact, a memristive model for triplet STDP learning has already been proposed (Cai, Tetzlaff, & Ellinger, 2011).
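For concreteness, the triplet rule itself is small enough to state in code. The sketch below follows the structure of the Pfister & Gerstner (2006) detector model (two pre traces, two post traces), but the amplitudes and the test spike trains are illustrative, not the paper's fitted values:

```python
TAU_P, TAU_M, TAU_X, TAU_Y = 16.8, 33.7, 101.0, 125.0  # trace time constants (ms)
A2P, A2M, A3P, A3M = 5e-3, 7e-3, 6e-3, 2e-4            # illustrative amplitudes

def run(pre_spikes, post_spikes, w=0.5, a3p=A3P, dt=0.1, t_end=500.0):
    """Integrate the triplet STDP rule for given pre/post spike times (ms)."""
    r1 = r2 = o1 = o2 = 0.0
    pre = {int(round(t / dt)) for t in pre_spikes}
    post = {int(round(t / dt)) for t in post_spikes}
    for k in range(int(t_end / dt)):
        r1 -= dt * r1 / TAU_P; r2 -= dt * r2 / TAU_X   # decay pre traces
        o1 -= dt * o1 / TAU_M; o2 -= dt * o2 / TAU_Y   # decay post traces
        if k in pre:
            w -= o1 * (A2M + A3M * r2)   # depression: pair + pre-triplet term
            r1 += 1.0; r2 += 1.0
        if k in post:
            w += r1 * (A2P + a3p * o2)   # potentiation: pair + post-triplet term
            o1 += 1.0; o2 += 1.0
    return w

# A post-pre-post sequence potentiates more with the triplet term than without:
w_triplet = run(pre_spikes=[100.0], post_spikes=[90.0, 110.0])
w_pairs_only = run(pre_spikes=[100.0], post_spikes=[90.0, 110.0], a3p=0.0)
print(w_triplet > w_pairs_only)  # -> True
```

A memristive implementation has to reproduce exactly this extra trace-dependence, which is what the Cai, Tetzlaff & Ellinger model addresses.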
Although STDP is a very powerful learning strategy, it does not guarantee stable circuits in a network (Watt & Desai, 2010). Complementary learning methods have been suggested to constrain firing rates and synaptic weights in a network, synaptic scaling and intrinsic plasticity being the most prominent. Synaptic scaling basically puts a limit on the sum of all weights. Intrinsic plasticity, meanwhile, changes a single cell’s physiology, and thus the way it fires, based on long-term global network activity. Implementing these kinds of processes in parallel with STDP might turn out to be necessary to build viable and effective circuits. An example where synaptic scaling and intrinsic plasticity are already applied is the BCM rule, a firing rate based model. If a memristive BCM model could be designed and combined with an STDP rule, synaptic scaling and intrinsic plasticity could potentially be applied in circuitry.
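What synaptic scaling buys can be shown in a few lines. In the sketch below (rates, learning constants, and the target sum are all assumed), pure Hebbian growth lets the summed weight run away, while a slow multiplicative rescaling toward a target total keeps it bounded without changing which synapses are relatively strong:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(scaling, steps=300, n=20, eta=0.001, target=5.0, tau=50.0):
    w = rng.uniform(0.1, 0.9, n)
    for _ in range(steps):
        u = rng.random(n)              # presynaptic rates
        v = w @ u                      # postsynaptic rate
        w += eta * v * u               # Hebbian growth: positive feedback only
        if scaling:                    # multiplicative scaling toward target sum
            w *= 1.0 + (target - w.sum()) / (w.sum() * tau)
    return w

w_free = simulate(scaling=False)
w_scaled = simulate(scaling=True)
print(f"sum without scaling: {w_free.sum():.1f}")   # keeps growing
print(f"sum with scaling:    {w_scaled.sum():.1f}")  # settles near the target
```

Because the rescaling is multiplicative, the relative ordering of the weights learned by the Hebbian term is preserved, which is the property that makes scaling a good companion to STDP.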
Much of the research presented here tried to apply knowledge obtained from the brain to circuitry. In particular, theories about the role of synaptic plasticity were applied to learning networks. A major concern is that there is actually very little understanding of the link between synaptic plasticity and memory or computation in the brain. Some basic mechanisms of functional learning are known, and some theoretical advantages of these mechanisms have been studied, but on a network level we know very little. Is it actually justified to start building circuits based on all this? First of all, we know enough of the low-level mechanisms to start applying them to actual circuits. Secondly, trying to implement synaptic plasticity in circuits goes both ways: it doesn’t only lead to new applications and new circuit architectures (for example, for extremely quick maze solving), but also to a potentially better understanding of the very mechanisms we are trying to mimic.
One of the many cited advantages of memristor technology is that it is completely compatible with existing circuitry. It can be used to make networks with plastic synapses connected to conventional circuits (Greg Snider et al., 2011). However, the claim that this technology could lead to structures with brain-level complexity and computing power leaves a possible point of discussion. Although not much is known about exact computation mechanisms in the brain, one could argue that its strength is the fact that it consists only of massively parallel connected plastic units, without a specific “interpreter” or similar structure. When trying to mimic brain-like processes, it might not be easy to actually combine these structures with existing circuitry. How do you read out thousands to millions of independent processes, and make sense of them while passing them through a single processor, without running into the same problem as before? For example, the maze solving circuit solves the maze very quickly to a human’s eye, but how would a conventional computer read out and use the solution? Again, large-scale computations are not well understood in the brain, and by building and testing memristor circuits we could actually begin to study some of these questions.
Ideal modeling problems
A major problem with the current state of the memristor field is that most major results have only been achieved in simulated networks, and even then they often rely on very simplified memristors. Real-life circuits will be noisy, and the memristors will not be as ideal as simulated. It is not clear yet how well the theoretical results will carry over to actual circuitry. It is possible that actual circuitry might perform better than the simulated circuits, or at least closer to a biological system, if noisy training is used. For example, an increasing body of literature suggests that the noisiness of the brain could actually be one of its major strengths, for example by encoding the reliability of sensory information (Knill & Pouget, 2004). For memristors to really prove their worth as plastic synapses, more experimental results are absolutely vital.
Energy use of proposed circuits
Related to the absence of experimental results is the energy use necessary for the proposed circuitry. If one would want to match the complexity of the brain, on the order of 10^11 neurons would be needed. With current-day technology it is supposedly possible to maintain the brain’s synapse-to-neuron ratio at that scale, which is quite substantial. However, energy-wise there is still a major problem, following the calculation in Ramos et al. (2011). Current memristor resistance values range from the kOhm to the MOhm scale, and Ramos et al. assume neurons providing enough current to maintain a potential of about a volt over the memristors. If the neurons fire spikes at a 10 Hz average, the resulting power dissipation is unrealistic and would melt any real circuitry. The only way to bring down these power requirements is by increasing the maximum resistance values by about a 100-fold. Before circuits of brain-like complexity can be built, this milestone will have to be reached. Progress is already being made here: the most recent memristor-like elements are operable with currents as low as 2 μA, as opposed to the 10 mA needed for the theoretical memristors (Mehonic et al., 2012).
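The scale of the problem is easy to reproduce with a back-of-the-envelope estimate in the spirit of the Ramos et al. (2011) calculation. All numbers below are my own illustrative assumptions (order-of-magnitude brain figures, a 1 V drive, a 1 ms spike at 10 Hz), not the paper's exact values:

```python
N_NEURONS = 1e11        # neurons in a human brain, order of magnitude
SYN_PER_NEURON = 1e4    # synapses per neuron, order of magnitude
R_MEM = 1e6             # memristance (ohms): the favorable MOhm end of the range
V_SPIKE = 1.0           # assumed potential over a memristor during a spike (V)
RATE_HZ = 10.0          # average firing rate
SPIKE_S = 1e-3          # assumed spike duration (s)

duty = RATE_HZ * SPIKE_S                     # fraction of time a synapse is driven
p_synapse = (V_SPIKE ** 2 / R_MEM) * duty    # average dissipation per memristor (W)
p_total = p_synapse * N_NEURONS * SYN_PER_NEURON

print(f"{p_total:.0e} W")  # -> 1e+07 W: megawatts, versus ~20 W for a real brain
# Raising R_MEM a hundredfold divides this figure directly by 100, which is
# why higher-resistance devices are the milestone identified in the text.
```

Even under these favorable assumptions the dissipation sits around ten megawatts, which makes the hundredfold resistance increase a hard requirement rather than an optimization.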
Other small devices
The memristor is not the only circuit element capable of plasticity. There are also capacitor and inductor alternatives: memcapacitors and meminductors (Di Ventra, Pershin, & Chua, 2009). These allow for even more complicated circuitry portraying plasticity-related memory and computation. Future work should incorporate these circuit elements in parallel or in series with memristors; with these elements it might be especially easy to mimic firing rate based plasticity rules. Moving away from memristance-based circuitry, nano-scale transistor devices have also been shown to portray plasticity and spiking-like mechanisms (e.g. Alibart et al., 2010). Studying how these devices relate to memristors and synaptic plasticity could lead to a better understanding of both.
Memristors have been realized only very recently, and considering the youth of the field, extraordinary advances have already been made. Memristors are more than a promising technology: they could potentially revolutionize neuromorphic engineering and pioneer the use of plasticity in actual circuits. The focus is currently mostly on using spiking networks connected by memristors. These networks can be very powerful, since STDP arises almost naturally in them. However, there are problems with the current approach, as described above. I would therefore suggest the following future research directions.
The similarity between the STDP rules seen in memristors and those between actual neurons is extremely interesting, and possibly points towards a memristive explanation of STDP in the brain (Linares-Barranco & Serrano-Gotarredona, 2009b). In the memristive model of STDP, spike shape is important for the learning mechanism, but no work has been done on relating spike shapes to the learning functions observed experimentally. Future work in this direction could potentially increase our understanding of the underlying neuroscience as well.
Although the current focus is on spiking structures, this high degree of biological realism might not be necessary. Firing rate based models exist which can portray complicated learning mechanisms, and at first glance these could possibly be reproduced in memristor based circuits. By developing such circuits, the need for spiking structures could potentially be removed, or learning rules more complicated than STDP could be implemented by combining spiking and firing rate based learning rules.
Finally, despite the claims that memristor based circuits could potentially rival the complexity of the brain, energy dissipation with current technology does not allow this. Before networks of a scale similar to the brain can be developed, this bottleneck needs to be solved.
References

Abbott, L. F., & Blum, K. I. (1996). Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8670667

Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: taming the beast. Nature Neuroscience.

Abbott, L. F., & Regehr, W. G. (2004). Synaptic computation. Nature.

Alibart, F., Pleutin, S., Guérin, D., Novembre, C., Lenfant, S., Lmimouni, K., Gamrat, C., et al. (2010). An organic nanoparticle transistor behaving as a biological spiking synapse. Advanced Functional Materials.

Backus, J. (1978). Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Communications of the ACM.

Bi, G., & Poo, M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience, 18(24), 10464–10472. Retrieved from http://www.jneurosci.org/content/18/24/10464.short

Bi, G., & Poo, M. (2001). Synaptic modification by correlated activity: Hebb’s postulate revisited. Annual Review of Neuroscience, 24, 139–166.

Bienenstock, E. L., Cooper, L. N., & Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. The Journal of Neuroscience.

Blais, B. S., & Cooper, L. (2008). BCM theory. Scholarpedia.

Blum, K. I., & Abbott, L. F. (1996). A model of spatial map formation in the hippocampus of the rat. Neural Computation, 85–93.

Borghetti, J., Li, Z., Straznicky, J., Li, X., Ohlberg, D. A. A., Wu, W., Stewart, D. R., et al. (2009). A hybrid nanomemristor/transistor logic circuit capable of self-programming. Proceedings of the National Academy of Sciences of the United States of America.

Cai, W., Tetzlaff, R., & Ellinger, F. (2011). A memristive model compatible with triplet STDP rule. arXiv preprint arXiv:1108.4299.

Chua, L. (1971). Memristor—the missing circuit element. IEEE Transactions on Circuit Theory, (5).

Chua, L. O., & Kang, S. (1976). Memristive devices and systems. Proceedings of the IEEE. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1454361

Dayan, P., & Abbott, L. F. (2003). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (1st ed.). MIT Press.

Di Ventra, M., Pershin, Y., & Chua, L. (2009). Circuit elements with memory: memristors, memcapacitors, and meminductors. Proceedings of the IEEE, (1), 1–6.

Froemke, R. C., & Dan, Y. (2002). Spike-timing-dependent synaptic modification induced by natural spike trains. Nature.

Gerstner, W., Ritz, R., & van Hemmen, J. L. (1993). Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns. Biological Cybernetics.

Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley & Sons.

Howard, G., Gale, E., Bull, L., de Lacy Costello, B., & Adamatzky, A. (2011). Towards evolving spiking networks with memristive synapses. 2011 IEEE Symposium on Artificial Life. IEEE. doi:10.1109/ALIFE.2011.5954655

Hubel, D., & Wiesel, T. (1959). Receptive fields of single neurones in the cat’s striate cortex. The Journal of Physiology, 148, 574–591.

Jo, S. H., Chang, T., Ebong, I., Bhadviya, B. B., Mazumder, P., & Lu, W. (2010). Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters.

Jo, S. H., Kim, K.-H., & Lu, W. (2009). High-density crossbar arrays based on a Si memristive system. Nano Letters.

Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences.

Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network. Frontiers in Computational Neuroscience, (October), 23. doi:10.3389/neuro.10.023.2009

Linares-Barranco, B., & Serrano-Gotarredona, T. (2009a). Exploiting memristance in adaptive asynchronous spiking neuromorphic nanotechnology systems.

Linares-Barranco, B., & Serrano-Gotarredona, T. (2009b). Memristance can explain spike-time-dependent plasticity in neural synapses. Nature Precedings.

Markram, H., Gerstner, W., & Sjöström, P. J. (2011). A history of spike-timing-dependent plasticity. Frontiers in Synaptic Neuroscience.

Mehonic, A., Cueff, S., Wojdak, M., Hudziak, S., Jambois, O., Labbe, C., Garrido, B., et al. (2012). Resistive switching in silicon suboxide films. Journal of Applied Physics.

Mehta, M. R., Quirk, M. C., & Wilson, M. A. (2000). Experience-dependent asymmetric shape of hippocampal receptive fields. Neuron, 25, 707–715.

Bayat, F., & Shouraki, S. B. (2011a). Memristor-based circuits for performing basic arithmetic operations. Procedia Computer Science.

Bayat, F., & Shouraki, S. B. (2011b). Memristor-based circuits for performing basic logic operations. Procedia Computer Science.

Minai, A., & Levy, W. (1993). Sequence learning in a single trial. INNS World Congress on Neural Networks. Retrieved from http://secs.ceas.uc.edu/~aminai/papers/minai_wcnn93.pdf

Pershin, Y., Fontaine, S. L., & Di Ventra, M. (2009). Memristive model of amoeba learning. Physical Review E, 80(2), 021926. Retrieved from http://pre.aps.org/abstract/PRE/v80/i2/e021926

Pershin, Y. V., & Di Ventra, M. (2010). Experimental demonstration of associative memory with memristive neural networks. Neural Networks. Elsevier. doi:10.1016/j.neunet.2010.05.001

Pfister, J.-P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of Neuroscience.

Poon, C.-S., & Zhou, K. (2011). Neuromorphic silicon neurons and large-scale neural networks: challenges and opportunities. Frontiers in Neuroscience.

Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., Lamantia, A., & White, L. E. (2012). Neuroscience (5th ed.). Sinauer.

Rao, R., & Sejnowski, T. (2000). Predictive sequence learning in recurrent neocortical circuits. Advances in Neural Information Processing Systems, 164–170.

Roberts, P. D. (1999). Computational consequences of temporally asymmetric learning rules: I. Differential Hebbian learning. Journal of Computational Neuroscience. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10596835

Snider, Greg, Amerson, R., Carter, D., Abdalla, H., Qureshi, M. S., Leveille, A., Versace, M., et al. (2011). From synapses to circuitry: using memristive memory to explore the electronic brain. IEEE Computer Society, (February), 21–28.

Snider, G. S. (2008). Spike-timing-dependent learning in memristive nanodevices. IEEE International Symposium on Nanoscale Architectures (NANOARCH 2008), 85–92.

Song, S., Miller, K. D., & Abbott, L. F. (2000). Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience.

Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor found. Nature.

Tetzlaff, C., Kolodziejski, C., Timme, M., & Wörgötter, F. (2011). Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Frontiers in Computational Neuroscience, (November), 47. doi:10.3389/fncom.2011.00047

Ventra, M. D., & Pershin, Y. V. (2011). Biologically-inspired electronics with memory circuit elements. arXiv preprint arXiv:1112.4987. Retrieved from http://arxiv.org/abs/1112.4987

Watt, A. J., & Desai, N. S. (2010). Homeostatic plasticity and STDP: keeping a neuron’s cool in a fluctuating world. Frontiers in Synaptic Neuroscience.

Ramos, C., Camuñas-Mesa, L. A., Pérez-Carrasco, J. A., Masquelier, T., Serrano-Gotarredona, T., & Linares-Barranco, B. (2011). On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex. Frontiers in Neuroscience, (March), 26. doi:10.3389/fnins.2011.00026

van Rossum, M. C., Bi, G. Q., & Turrigiano, G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. The Journal of Neuroscience.