Modeling Cross-Episodic Migration of Memory Using Neural Networks


Oct 19, 2013



by Adam Britt


Episodic Memory

Memory of a specific event; a combination of experiences organized into a schema.

Episodic Migration of Memory

A memory error in which details from different schemas are confused.

Neural Network

A biologically inspired model of computation that can be trained to respond to given input, can store memory in a way similar to how we think our brain does, or both.

Sample Neural Network Learning

A neural network with no learning. It starts with randomized weights on its edges.

The same network after learning two different patterns.


This project was designed to use neural networks to mimic the results of a study done by one of St. Lawrence's psychology professors, Dr. Sharon Hannigan. The study focused very specifically on episodic memory, discerning between two types of episodic memory errors that all humans make. This is something that has not been done before. It is important because, once an accurate model has been made, similar studies can be run on a computer, which is cheaper and faster than performing another empirical study. If data from the model indicates new knowledge, a new empirical study can then be designed using the parameters found from the model, furthering our knowledge.


Since something of this specific nature had not been done before, there were some difficulties in deciding exactly how to implement it. There are several different types of neural networks, and deciding which would be best to use took trial and error. Once an acceptable neural network structure was created, interpreting the results became the most difficult task. Without any precedent, it was difficult to design an interpretation algorithm that could accurately explain the data in terms of the desired results. The question that kept coming up was, "These results look good, but what do they mean?" The problem was that all of our testing criteria were arbitrary, and it was through trial and error that we came up with better ways to explain the data.


Really Big Neural Networks

The output interpretation algorithm I came up with did not yield the same results as those seen in Dr. Hannigan's study, although it showed enough similarities to give hope that, given the right interpretation algorithm, neural networks can accurately model episodic memory migration.

Data Used

The data was taken from different restaurants and grocery stores around the area, with each restaurant or grocery store representing an episode. Each episode consisted of 27 different attributes that could be either true or false. Some of the attributes were specific only to grocery stores (does it have a produce section?), some only to restaurants (does it have a bar?), and some overlapped (is it large or small?). This data did not change throughout the testing phase of the program.
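An episode of this kind could be encoded as a fixed-length vector of 0s and 1s, one slot per attribute. The sketch below illustrates the idea; the attribute names are illustrative placeholders, not the actual list used in the study.

```python
# Illustrative attribute names only -- the study's real 27-attribute list
# is not reproduced here.
GROCERY_ONLY = ["has_produce_section"]
RESTAURANT_ONLY = ["has_bar"]
SHARED = ["is_large"]

VOCABULARY = GROCERY_ONLY + RESTAURANT_ONLY + SHARED

def encode_episode(true_attributes, vocabulary):
    """Turn the set of attributes that are true for this episode
    into a fixed-length 0/1 vector aligned with the vocabulary."""
    return [1 if name in true_attributes else 0 for name in vocabulary]

# A large grocery store with a produce section but no bar:
episode = encode_episode({"has_produce_section", "is_large"}, VOCABULARY)
```

Because the vector layout never changes, every episode can be fed to the same network inputs, matching the fixed data described above.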

How does a Neural Network work?

One can easily calculate the values of a neural network. The first layer is the input, which is given. The weights on each edge are randomized, so those are also given. On the first pass through the network, start at the input layer, pick a node, and follow an edge to a node in the middle layer. The partial value of the node in the middle layer is the value of the node feeding into it times the weight of the edge; the whole value is the sum over all edges coming into that node. Do this for all nodes in the middle layer, then do the same thing for the output layer.
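The pass described above can be sketched directly: each node's value is the weighted sum of the values feeding into it. This is a minimal illustration with made-up layer sizes and random starting weights, as described; real networks usually also apply an activation function to each sum.

```python
import random

def layer_output(inputs, incoming_weights):
    """For each node in the next layer, sum (source value * edge weight)
    over all incoming edges."""
    return [sum(x * w for x, w in zip(inputs, weights))
            for weights in incoming_weights]

# Layer sizes here are arbitrary, for illustration only.
n_in, n_hidden, n_out = 3, 4, 2

# Randomized starting weights: one list of incoming-edge weights per node.
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

inputs = [1.0, 0.0, 1.0]
hidden = layer_output(inputs, w_hidden)   # middle-layer values
outputs = layer_output(hidden, w_out)     # output-layer values
```

The same `layer_output` step is reused for every layer, mirroring the "do the same thing for the output layer" instruction above.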

Once you have done this for all the output nodes, you take the actual results and compare them to the desired results. If the results meet a predetermined threshold, you accept the network; otherwise, you adjust the edge weights and repeat.
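The acceptance check might be expressed as below; the sum-of-squared-differences error measure and the threshold value are assumptions for illustration, not details taken from the project.

```python
def total_error(actual, desired):
    """Sum of squared differences between actual and desired outputs."""
    return sum((a - d) ** 2 for a, d in zip(actual, desired))

THRESHOLD = 0.01  # assumed acceptance criterion, not the project's value

def accept(actual, desired, threshold=THRESHOLD):
    """Accept the network once its outputs are close enough to the targets."""
    return total_error(actual, desired) <= threshold
```

If `accept` returns `False`, a training rule adjusts the weights and the forward pass runs again, repeating until the threshold is met.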

Retrieval changes nothing in the neural network; it just gives the saved values of the nodes for any given input.

Mentor: Dr. Ed Harcourt

Cognition Advisor: Dr. Sharon Hannigan
