
Tomasz Frątczak

Emotional Arousal-Based Visual Cues as Digital Memory Aids

M.Sci. Computer Science Innovations

25 March 2011












"I certify that the material contained in this dissertation is my own work and does not contain unreferenced or unacknowledged material. I also warrant that the above statement applies to the implementation of the project and all associated documentation. Regarding the electronically submitted version of this submitted work, I consent to this being stored electronically and copied for assessment purposes, including the Department's use of plagiarism detection systems in order to check the integrity of assessed work. I agree to my dissertation being placed in the public domain, with my name explicitly included as the author of the work."


Date:

Signed:








Emotional Arousal-Based Visual Cues as Digital Memory Aids

Tomasz Frątczak


School of Computing and Communications, Lancaster University

t.fratczak@lancs.ac.uk


Abstract


This document describes a novel method of providing high-quality visual memory cues for people with episodic memory problems such as dementia or anterograde amnesia. For the purpose of this study a software solution was created to analyse and integrate the data gathered from SenseCam cameras and SenseWear sensors. Two field experiments were performed. The first was performed with two participants over a duration of 7 days, and was aimed at finding the relationships between the readings from the SenseWear sensor and emotional episodes of the subjects, in order to create an appropriate algorithm to identify these events in the sensor data. The second user study was performed by 4 participants for one 6-hour session each, in order to provide insight into the level of functionality of the solution.


1. Introduction


Alzheimer's disease is a very serious and treatable, but not yet curable, condition. It affects 10% of the population aged 65 and above and is a primary source of dementia in as much as 35% of people aged over 85 [3]. As it becomes an increasing problem, it is necessary to provide solutions which would at least ease the symptoms. Alongside the medical research, many psychologists and computer scientists attempt to improve the living standards of those affected, possibly slowing the progression and severity of the symptoms.

This particular study is inspired by research done by a team of scientists at Cambridge University [4]. That study proposes a solution improving the quality of memory recall in Alzheimer's patients by creating photo diaries as a form of memory prosthesis. It uses a custom-designed wearable camera equipped with light and noise sensors for improved automation of the camera shutter, on top of time-interval-based and manual shooting modes. However, even with those measures the device worn for a whole day can produce a vast number of pictures, reaching over 600 at times. This is an overwhelming number, even for patients who have carers to support them and help them review and interpret the pictures. Still, the visual stimulus is the most significant type in terms of episodic memory creation, with more than 80% of memories being purely constructed of images [2], so an improvement to the initial SenseCam solution would be most welcome.

The solution to having too many pictures to review is simply to delete a number of them. The difficult part, however, is how to identify the relevance of those pictures and the quality of the visual stimuli they represent. A clear criterion must be applied in order to improve the performance of this life-logging technique, or else there might not be much use in limiting the number of pictures.

Recent research in the field of psychology involving episodic memory suggests a solution to this problem. Episodic memory can be manipulated and, most importantly, enhanced [1]. Human emotions are associated with activity in an area of the brain called the amygdala. It is documented that activation of the amygdala is associated with an increase in performance of episodic memory [8]. Since emotions improve episodic memory performance, the level of emotional arousal would work well as a criterion for evaluating the quality of memory cues, such as pictures taken by the SenseCam. Pictures which are selected because of a high level of emotional arousal at the time of capture should in theory not only be of higher quality in terms of what they present (i.e. the context should be richer, possibly capturing the objects which caused the emotional arousal), but, because of the amygdala activity, the memories associated with these pictures should also be easier to recall.

In day-to-day life there are trivial things that happen every day: preparing food, washing dishes, doing shopping etc. As a result of wearing the SenseCam a person would normally get a great number of pictures related to those routine activities, which have a relatively low significance; this makes the photo diary harder to review, as it takes more time to go through all of the pictures. Filtering the pictures by the level of emotional arousal associated with them would be a good way to highlight which pictures really matter and have a higher chance of triggering memory recall in the user. Emotions should be triggered by events which are less ordinary, and emotions like stress or excitement are definite indicators of events worth remembering.

Amygdala activation, and in turn emotional arousal, can be measured in the form of galvanic skin response [1]. It is fairly easy to measure a person's GSR using even off-the-shelf biosensors. For the purpose of this study a biosensor by BodyMedia called SenseWear was used to generate the users' GSR profiles [11].

Based on the above information, this research is aimed at validating the following hypothesis: visual memory cues created while a person is in a state of emotional arousal - as measured by galvanic skin response - are considered better quality and allow for more successful recall of the events associated with them.

This study was divided into two major stages. The first stage consisted of testing the capabilities of the SenseWear and trying to find a way to synchronize the data obtained from it with the pictures taken by the SenseCam. This resulted in an experiment performed on two subjects which served as calibration (refer to Section 3) and helped to find the correct algorithm (Section 3.1) which would mark the appropriate pictures as emotionally stamped. The second stage was the final study (Section 5), to test whether the algorithm correctly maps the adequate GSR readings to the pictures taken, and how significant the relationship is between the changes in the GSR readings and the recall performance of the subjects.

Fig. 1. Commercial version of the SenseCam camera by Vicon Motion Systems Ltd.


2. Related Work



The most important related study is "SenseCam: A Retrospective Memory Aid" [4]. The encouraging results of that project were a motivation to conduct this study, and the commercial version of their product - the SenseCam - was used in both of the studies described by this paper.

The SenseCam (Fig. 1) is a ubiquitous wearable lifelogging device. It allows for the creation of a photo diary by taking a series of still pictures. The wide-angle camera doesn't have a viewfinder, and the pictures are taken automatically every 30 seconds, or when triggered by one of the integrated sensors - such as the light, infrared or sound sensor - which indicate a change in the environment. The camera does have a shutter button, in case the user wants to make sure a picture of an important event is taken.

The researchers working on the SenseCam engaged their device in a 12-month-long clinical study testing the performance of their solution as a digital memory aid. The experiment was conducted with a 63-year-old female - Mrs. B, a patient with brain damage in her hippocampus, the part of the brain critical for episodic memory [7]. The study was performed in the user's natural environment, where Mrs. B was asked to use the SenseCam to record any significant events that she would want to remember. The subject's husband would assist her in this experiment. On each day following a recorded event he would ask Mrs. B what she remembered from the previous day, then go through the pictures from the SenseCam with her, and repeat the procedure every second day for a two-week period.

The experiment procedure is somewhat informal and the metrics are very basic, taking into account only whether Mrs. B remembered the events or not, and also relying on Mr. B's ability to record the data. It is also unclear how the level of detail remembered by the subject in regard to each event was recorded.

The scientists, however, claim that the subject's recall of the events recorded with the SenseCam had tripled after the period of two weeks, and in the final test, which took place up to 11 months after the events, she scored 76% average recall across all of the events.

The study, however, does not touch on the performance of picture reviewing and the inconvenience of this task, which for a person with retrospective memory defects would be a daily ritual. If the SenseCam were used on a daily basis, the number of pictures generated would be simply enormous - 6 hours of usage produces around 1200 pictures.

Undoubtedly the SenseCam has benefits over other solutions, such as its relatively high performance, but the next step, which was not taken into account by the research team at Cambridge University [4], is the actual quality of the visual memory cues which the SenseCam provides. A document describing this problem by Matthew L. Lee and Anind K. Dey [6] gives some insight into the more practical side of using life-logging technology for supporting people with episodic memory impairment. It clearly points out that caregivers' biggest problem with digital solutions such as the SenseCam is the high data volumes they produce, and sorting through over a thousand pictures on a daily basis is not an easy task. The document stresses the importance of good-quality memory cues for successful recall, especially in Alzheimer's patients. Complex cognitive processes such as trying to remember past events have also been shown to slow down the progression of dementia caused by Alzheimer's disease [10]. Good visual memory cues must be representative of events and experiences that are personally significant to the viewer [6]. That is what this research is trying to achieve.

A study by E.A. Phelps [8] discusses the interactions between the brain structures responsible for reacting to emotional situations and for the encoding and storage of episodic memories - in order: the amygdala and the hippocampus. It suggests that the amygdala can stimulate encoding of data stored in the hippocampus, making selected memories richer by adding emotional significance values to them if emotions occur during the encoding period. Such emotionally enhanced memories are meant to be more vivid and easier to retrieve, improving the memorability of the related events [8].



3. Study One


The first study presented in this paper is aimed at calibrating the SenseWear sensor data, trying to find a measurable relation between the GSR readings and the timing and intensity of emotional events described by the user. Such a correlation will enable the creation of an appropriate algorithm able to determine whether visual cues are emotionally stamped or otherwise.

The biosensor used for this study is, as mentioned in the introduction, SenseWear by BodyMedia [11] (Fig. 2). The sensor was created as a tool for clinical research, allowing continuous data to be collected about users during their normal day-to-day activities. The reason for choosing SenseWear as a tool for this experiment is its capability to record galvanic skin response with reasonably high precision and high frequency. The device also records heat flux, 3-axis accelerometer data and skin temperature, which are not utilised during this study; for possibilities of their applications please refer to the discussion section. SenseWear takes measurements every 60 seconds, with occasional readings covering periods of up to about 3 minutes. This frequency of measurements is appropriately dense, as the SenseCam takes two pictures per minute on average, which gives a good mapping between the data from both devices.

Fig. 2. SenseWear biosensor used in the study.

Fig. 3. An example graph showing the mapping of user-described events with the emotion scale (at the bottom of each vertical line: p - positive, n - negative) and the GSR data logged by the SenseWear biosensor. The vertical axis represents the GSR values saved by the SenseWear, and the horizontal axis represents each consecutive reading. The start-of-session value is marked at the 0 point of the x axis.

3.1 Setup of Study One


The data was collected from 2 male subjects, aged 18 and 20 years, with no memory impairment. Recruiting female subjects for the experiment proved very hard, as recording emotions raised privacy issues. The subjects were asked to wear the SenseWear sensor for a duration of one week. The sessions the users wore the biosensors for lasted 1 to 8 hours.


While the SenseWear sensor was worn, both subjects were asked to keep a diary. Subjects were provided with diary templates to help create a consistent data set. They were required to fill in the following fields: time of event, event name or description, valence, intensity and description of the emotion. All fields were mandatory except the last one, as describing emotions verbally is not an activity people do on a daily basis, and forcing the users to do so would most likely produce inconsistent answers.

The study was designed to be naturalistic, conducted in the subjects' natural environment. This has major advantages over a lab-based study, as it enhances the realism of the events the subjects encounter. It serves better in defining and overcoming the challenges a possible product based on this study would come across.


All the time-measuring devices (wrist watches, mobile phones) the subjects referred to during the experiment were synced with the SenseWear's internal clock manually before the experiment, with an accuracy of up to one minute. The subjects were asked to note times to the nearest minute. After one week, 12 sessions had been gathered from each of the subjects.


3.2 Results of Study One


Data from the diaries analysed with the software solution (described in Section 4) showed a rough mapping between the emotions described by the subjects and increased GSR readings given by the sensor. The subjects could use a scale of 1 to 10 to describe the strength of the emotions they perceived (described on the user input forms as valence), however they very rarely scored them any higher than 4. This might be because they were asked to wear the sensor in a non-controlled environment, and were thus performing their normal, or even routine, day-to-day activities - which adds realism to the experiment, as this is what an end user of a possible product based on this technology would use it for. Around 40% of the emotional situations described by the users occurred while the GSR values were rising to form a peak, 69% of them lie within 1 minute away, and only 31% seemed to have up to a 5-minute delay or were missed entirely (Fig. 4). There are obvious variations and minor inconsistencies, but some of these problems can be attributed to the inaccuracy of the time stamps noted by the users (i.e. the user writes down the time of an event some time after it happens, not knowing the exact time and introducing some errors to the data).

The data also presents a large number of high peaks which are not described by any user-noted events. This is due to the fact that emotional arousal is not the only factor contributing to changes in GSR values; there are also other biological and external sources of data peaks. Galvanic skin response is directly related to the skin's pore size and sweat gland activity [9].

Fig. 4. This diagram shows emotional events noted by both subjects, mapped on a timeline and matched with the nearest peak in GSR data which could be associated with emotional arousal.

Name: 00012702.JPG
Date: 19 Mar 2011 15:03:30 GMT
GSR: 166
Edge: 6.0 distance: 0
Note: boring

Fig. 5. Example picture taken from one of the experiments, with all the values created by the software in the data integration process.

There are 3 main sources of peaks unrelated to emotions which can occur and introduce noise and error to the data:

- Physical activity and exercise induce changes in skin pore sizes and the production of sweat for the purpose of cooling the body down, which largely increases the GSR readings. This was visible in the diary documents, with good examples being running and climbing hills.

- Sneezing, coughing and similar bodily actions.

- The GSR values keep steadily rising over the time of wearing, which cannot be associated with any initial or basic value to produce a baseline. This might be due to sweat/steam accumulating on the device's touch points in contact with skin, which would increase the conductivity [9].


In summation, the data collected through the calibration process provides evidence that an algorithm can be applied which would be able to detect emotional arousal through analysis of just the GSR readings from the SenseWear biosensor. It has to be noted that describing one's emotional state in words and on a magnitude scale can be difficult, as it is not something that people do on a daily basis. This may, and very likely did, lead to minor inconsistencies in the data; however, the mapping with the GSR readings is relatively good. The analysis shows that 69% of situations marked by users fit within 1 minute before or after a GSR peak, and only just above 13% of the time would the situations be unrelated to any peak in the GSR readings, which is a solid result.

The other concern is whether the increased GSR caused by physical activity will introduce a large amount of error. For the purposes of this study, the above-mentioned issue is not going to be resolved, and the proposed system will be tested in its existing configuration. For further possible solutions and ways to augment and improve it by applying more complex techniques, refer to the discussion section of this document.


4. Software Design


The design of the software solution is split into two stages: creating a technique for integration of data from multiple sources (refer to the data integration section), and designing a simple graphical user interface (refer to the user interface section). Strong emphasis is put on the efficiency of the algorithms and the data integration part. The performance of data manipulation is crucial for this study, as the algorithms are responsible for calculating the emotional arousal levels and identifying the associated visual memory cues, necessary for the second study.

The software is written in the Java programming language, as its object-oriented nature and flexibility allow for easy data manipulation and the production of simple yet efficient algorithms for data analysis. Java is highly portable and allows for deployment on multiple platforms. It is also accessible, allowing the code - once published - to be used to the advantage of other research groups interested in making progress in this area.


4.1 Data Integration


Data integration is the process in which tools for manipulating and analysing data from multiple sources - the SenseCam, the SenseWear and user input - are combined and the data converted into useful information.

The software created to support the data analysis was designed with an experimenter in mind, and was aimed at providing efficient ways of analysing the raw data from both the SenseCam and SenseWear sensors. The SenseWear sensor provides data in the form of an XML file, so an XML parser was written to decompose the data file and retrieve the GSR readings. The dates and values of each GSR reading from the XML file are converted into vectors and stored in an object instance of the Java program. The same is done with the pictures from the SenseCam, where the file creation dates and physical paths of all the pictures from the last session are gathered by the program and stored in the form of a vector in another object, along with the file paths for each of the photos.

Fig. 6. The main view of the user interface, allowing the user to view and manipulate the data created in the data integration process.
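The parsing step can be sketched with the standard Java DOM API. The actual SenseWear schema is not given in this paper, so the element name `reading` and the attributes `time` and `gsr` below are assumptions for illustration only:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.util.Vector;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class GsrParser {
    // Parses <reading time="..." gsr="..."/> elements (assumed names) into a
    // vector of {timestamp, value} pairs, mirroring the vectors-of-dates-and-values
    // design described above.
    public static Vector<long[]> parse(String xml) throws Exception {
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList nodes = doc.getElementsByTagName("reading");
        Vector<long[]> readings = new Vector<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            readings.add(new long[] {
                Long.parseLong(e.getAttribute("time")), // epoch seconds (assumed)
                Long.parseLong(e.getAttribute("gsr"))   // raw GSR value (assumed)
            });
        }
        return readings;
    }

    public static void main(String[] args) throws Exception {
        String sample = "<session><reading time=\"0\" gsr=\"120\"/>"
                      + "<reading time=\"60\" gsr=\"166\"/></session>";
        Vector<long[]> r = parse(sample);
        System.out.println(r.size() + " " + r.get(1)[1]); // prints: 2 166
    }
}
```

The real program would read the file from disk rather than a string, but the traversal logic is the same.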

Both the SenseCam and the SenseWear allow for time syncing with a personal computer, so once this is done it is possible to generate a list of pictures with the appropriate GSR readings attached. A simple algorithm looks at the creation date of each image taken by the SenseCam and searches through the vector of GSR readings from the SenseWear, looking at the dates the readings were taken. The algorithm assigns a GSR reading to a picture if the date the picture was taken falls between that reading and the next consecutive one. A similar procedure is applied for the assignment of emotional arousal values, however analysis of the GSR data is performed beforehand.
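The matching step above can be sketched as follows (a minimal illustration; the class and method names are not from the paper's code):

```java
import java.util.Vector;

public class GsrMatcher {
    // Returns the index of the GSR reading whose interval contains the picture's
    // creation time: the latest reading taken at or before the picture, i.e. the
    // picture falls between that reading and the next consecutive one.
    // Timestamps are epoch seconds; readingTimes must be sorted ascending.
    public static int indexFor(long pictureTime, Vector<Long> readingTimes) {
        int match = -1;
        for (int i = 0; i < readingTimes.size(); i++) {
            if (readingTimes.get(i) <= pictureTime) {
                match = i;   // latest reading not after the picture
            } else {
                break;       // readings are sorted; no later match possible
            }
        }
        return match;        // -1 if the picture predates all readings
    }

    public static void main(String[] args) {
        Vector<Long> times = new Vector<>();
        times.add(0L); times.add(60L); times.add(120L);
        // A picture taken at t=90 falls in the interval [60, 120).
        System.out.println(indexFor(90L, times)); // prints: 1
    }
}
```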

The algorithm used by the program to identify peaks in the GSR data and give scores to the pictures taken is simple, doing a constant amount of work per reading. It finds chains of consecutive readings where each has an increased value compared to its predecessor, and creates a peak object describing all the features of the peak (start and end times, height, differences between readings and the score). The score is calculated by dividing the overall relative height of the peak by the number of readings it is constructed from:

x = (y_n - y_1) / n

Where:

x - valence of the peak indicating emotional arousal

y_1 ... y_n - GSR readings stored in the vector of readings belonging to the currently analysed peak

n - number of GSR readings included in the peak


This allows us to give high scores to peaks where the GSR values increase rapidly over a short period of time, which is likely to be associated with an emotion experienced by the subject, and to give low scores to slow and steadily rising peaks, addressing the problem of the continuous increase of the GSR value measured by the sensor over time. An example of a given score is in Fig. 5, where the "edge" value is the output of the program based on the proposed algorithm (6.0 is the valence of arousal based on GSR data; distance is the delay in minutes to the closest data peak indicating the given arousal level). An argument can be made that some relatively high peaks can be hidden in runs of readings rising by minimal values. The algorithm is designed to be flexible - even if this scenario occurs, the peaks which are "flat" (only rise by a minimal amount at each reading) simply get scores of x = 1.0, and if a peak is long and flat but contains a hidden sharp edge, the x will score above 1.0 and will be assigned to a high number of pictures in a row (referring back to the number of pictures per GSR reading), which would be easily visible in the GUI and can be looked into by referring to the graph provided by the graph creator class.
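The scoring formula can be sketched in Java as follows. This is only an illustration of the formula: the paper does not show how peak boundaries are detected or what the real peak class looks like, so those details are assumed:

```java
import java.util.Vector;

public class PeakScorer {
    // Scores a monotonically increasing chain of GSR readings: the peak's
    // relative height (last reading minus first) divided by the number of
    // readings in it. A steep short rise scores high; a long flat drift
    // scores low, discounting the sensor's slow upward trend over time.
    public static double score(Vector<Double> peak) {
        int n = peak.size();
        double height = peak.get(n - 1) - peak.get(0);
        return height / n;
    }

    public static void main(String[] args) {
        Vector<Double> sharp = new Vector<>();
        for (double v : new double[] {100, 140, 190}) sharp.add(v);
        Vector<Double> flat = new Vector<>();
        for (double v : new double[] {100, 101, 102, 103, 104, 105}) flat.add(v);
        // The sharp three-reading rise scores far higher than the slow drift.
        System.out.println(score(sharp) + " " + score(flat)); // prints: 30.0 0.8333...
    }
}
```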

Using a simple algorithm also has advantages - data analysis is fast and efficient. The algorithm could easily be run on a mobile device and used to calculate the significance of possible emotional arousal of the user on the fly, which is important if this solution were ever applied on a platform with such requirements. The choice of Java as the programming language also helps portability in terms of exporting the code to mobile devices, the primary example being the Google Android platform. For more details please refer to the discussion section.


4.2 User Interface


The software was designed to be used primarily by researchers, but also features a simple photo preview feature, which can be used to show pictures to a subject during experiments. It was created as a research tool to provide a means of testing the hypothesis set in this study; for possible user applications of the software solution refer to the discussion section. It is written using the Java Swing toolkit, which supports the creation of user-friendly GUIs.

Views:

- Initial View

The first view visible to the user. It is plain and simple. The program requires 2 inputs - the XML file holding the biosensor data, and a folder where all the pictures from a given SenseCam usage session are kept. The user can then go to the data view, which is designed solely for the researcher's usage, or straight to the slide view, which essentially allows users to view the pictures as a slide show.




Fig. 7. This diagram shows how the different views are interconnected and where the data integration procedure is applied.

- Session View

Session view is a special-case view, which is displayed instead of the data view if the XML file chosen in the initial view contains more than one session. It simply holds a list window so the user can choose a session of interest for the experiment, and proceed to the data view using the selected set of data.


- Data View

This is the most important GUI element from the perspective of a researcher. It displays information created in the process of data integration, and allows the researcher to find out which pictures are emotionally stamped, as well as to manipulate the data. The program also presents the data visually in the form of a graph. This is done using the Google Chart API [12]. A graph creator class is constructed, taking a vector of GSR readings as an input. The class creates an array of values corresponding to the GSR readings, formatted in accordance with the Google Chart API (in this case as percentages of the highest value of the set), and returns them in the form of a hyperlink, which allows a simple graph to be requested from a Google server if the computer the program runs on is connected to the Internet, and then downloaded onto the host machine being used by the user. All the pictures are listed along with their GSR values and calculated emotional arousal level scores. The data view gives access to the picture export view for any chosen picture, and allows the user to export all the data the program has processed so far to a CSV file for further mathematical analysis.
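The graph-creator step can be sketched as below. The URL follows the Google Image Charts convention (cht for chart type, chs for size, chd=t: for text-encoded data), which has since been retired by Google; the exact parameters the paper's program used are not shown, so treat this as an assumption:

```java
import java.util.Locale;
import java.util.Vector;

public class ChartLink {
    // Builds a line-chart request URL from GSR readings, scaled to percentages
    // of the set's maximum as described above. Base URL and parameter names
    // follow the (retired) Google Image Charts API and are assumptions here.
    public static String url(Vector<Double> readings) {
        double max = 0;
        for (double r : readings) max = Math.max(max, r);
        StringBuilder data = new StringBuilder();
        for (int i = 0; i < readings.size(); i++) {
            if (i > 0) data.append(',');
            data.append(String.format(Locale.US, "%.1f", 100.0 * readings.get(i) / max));
        }
        return "http://chart.apis.google.com/chart?cht=lc&chs=400x200&chd=t:" + data;
    }

    public static void main(String[] args) {
        Vector<Double> r = new Vector<>();
        r.add(83.0); r.add(166.0);
        System.out.println(url(r));
        // prints: http://chart.apis.google.com/chart?cht=lc&chs=400x200&chd=t:50.0,100.0
    }
}
```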


- Picture Export View

In this view, specification text files corresponding to each picture can be exported, to allow easy evaluation during the experimental procedures.


- Slide View

This view is created to perform the tests on the subjects using the picture sets created by the experimenter in the data view. It shows the pictures to the subject in the form of a slideshow, where the speed at which the pictures change can be manipulated, or the user can simply take manual control over the picture changing.


5. Study Two


The purpose of this experiment is to measure the memory recall performance of test subjects based on visual cues in the form of pictures from the SenseCam, in relation to the data gathered from the SenseWear biosensor. The gathered data is also presented in terms of the users' awareness of their own emotions and processed to check for correlations with the emotional arousal level.


5.1 Setup of Study Two


The subjects were young adults, aged 19 to 21: 2 males, 2 females. Each of the subjects had to perform an identical, two-staged task. The first stage involved wearing the SenseCam and SenseWear devices for a single six-hour session. The experiment was conducted in the subjects' natural environment, during their normal daily activities. This naturalistic approach enables emulation of the situations the solution would be applied in if used by people with episodic memory impairments logging their day-to-day life. The advantage of this technique is substantial: creating artificial experiment scenarios in order to induce emotions in subjects may well provide insight into the quality of emotion-enhanced visual memory cues, but would not be representative and would not allow the solution's usefulness for real applications to be determined.

The second stage of the procedure is a short, 15-minute interview, taking place on the same day but a few hours after the data gathering session. A list of ten pictures is prepared for each subject to view in the following setup: 4 low or non-emotionally stamped, 4 highly emotionally stamped, and lastly 2 with relatively low emotional value, where the increased GSR values could be associated either with emotions or with general noise and variations in the data. Each subject is


     h1                  h2                  h3                  h4                  sum   average
     scores      total   scores      total   scores      total   scores      total
S1   1,1,0,0,0   2       0,0,1,0,1   2       0,0,1,0,1   2       0,0,1,0,0   1       7     1.75
S2   0,1,1,0,1   3       0,1,1,0,0   2       0,1,1,0,1   3       1,0,1,0,0   2       10    2.5
S3   0,0,1,1,1   3       1,0,1,0,1   3       1,0,1,0,1   3       1,0,1,1,1   4       13    3.25
S4   0,1,1,0,0   3       0,0,1,0,0   1       0,1,1,0,1   3       1,1,1,0,1   4       11    2.75

     l1                  l2                  l3                  l4                  sum   average
     scores      total   scores      total   scores      total   scores      total
S1   0,1,1,0,0   2       0,1,0,0,0   1       0,0,0,0,0   0       0,0,1,0,0   1       4     1
S2   0,1,1,0,0   2       0,1,1,0,0   2       0,0,1,0,0   1       1,1,1,0,1   4       9     2.25
S3   1,1,1,0,0   3       1,1,1,0,0   3       1,1,1,0,0   3       1,0,1,0,0   2       11    2.75
S4   0,1,1,0,1   3       0,0,1,0,1   2       1,1,1,0,0   3       0,0,1,0,0   1       9     2.25



Fig. 8. The table shows metrics of the depth of detail the subjects described each picture with; cases h1 to h4 correspond to high arousal, l1 to l4 to low arousal. The binary scores are interpreted as follows: (place, time, action, thoughts, emotions). A score of 1 is given if the subject's description of the past events based on the memory cue includes a certain topic; i.e. if a person only mentions when the picture was taken but does not recall any other kind of detail, the score would be: 0,1,0,0,0.


asked to recall details of the situations presented in the photographs shown, to the best of their ability. The quality of their answers is taken as an indicator of the quality of the memory cues.

The interviews were recorded using a sound recording device, and later turned into transcripts for more precise analysis. Short notes were made describing the interviewer's personal opinion of the quality of the answers given by the interviewee. The notes contain the absolute GSR value and the emotional arousal score calculated by the algorithm applied by the program, in order to perform an analysis of the data gathered through the interviews.

A simple metric was created to quantify the interview recordings in terms of how detailed the subjects' statements were. For each picture they talked about, one point was assigned for each of the following topics making an appearance: place, time, activity, thoughts and feelings. This allows the researchers to assess in an easy way how many different types of details the subject is able to recall, and whether the subject is actually willing to talk about them, i.e. whether the subject thinks that this information is important. So the minimum score would be zero points, if the person did not recognise what he or she was doing, where the event the picture represented was happening and so on. A maximum of five points would in turn be awarded if the subject mentioned where and when the photograph was taken, during what activity, what he or she was thinking and whether any emotions were present.


The data is formatted to perform a paired-sample t-test in the following setup:

Independent Variables:

- Emotional arousal level associated with the picture - two levels: high and low.

Dependent Variables:

- Level of detail mentioned by the subject.

Hypotheses:

- H0: there is no significant difference in the amount of detail the subjects are able to recall between the two different levels of emotional arousal.

- HA1: there is a significant increase in the amount of detail the subjects are able to describe, favouring the high level of emotional arousal.

- HA2: there is a significant decrease in the amount of detail the subjects are able to describe, favouring the high level of emotional arousal.

The tests are run as a within-subjects design, with a single group of 4 subjects.


A separate data set was created based on the user interviews to produce statistics describing the correlation between the results given by the emotional arousal detection algorithm and the indicators in the subjects' statements that the events shown to them were actually associated with emotions. The transcripts were analysed to discover whether people mentioned emotions in their answers for each visual memory cue, and if so, what emotions were described. As a metric, the LEAS scale [5] was used to define the significance of the user's awareness of their own emotional response. LEAS (Levels of Emotional Awareness Scale) was proposed by Richard D. Lane et al. (1990) [5] for the quantitative description of one's level of understanding of one's own and others' emotions. The LEAS is adequate for this study, as the experiment's goal is to check when the subjects were aware of their emotions and whether this relates to an increase in the quality of recall. The scale again runs from 0 to 5, where zero means no emotions are present, one indicates weak emotional cues, and four and five are reserved for cases of multiple emotions appearing together in combinations that elevate their significance.



      h1   h2   h3   h4   sum   average
s1     0    3    3    0     6       1.5
s2     3    0    3    0     6       1.5
s3     3    2    3    3    11      2.75
s4     0    0    3    3     6       1.5

      l1   l2   l3   l4   sum   average
s1     0    0    0    0     0         0
s2     0    0    0    3     3      0.75
s3     0    0    0    0     0         0
s4     3    3    0    0     6       1.5



Fig. 9. The table shows LEAS scores for each picture for the 4 subjects; cases h1 to h4 correspond to high arousal, l1 to l4 to low arousal. For details on interpreting the table please refer to Section 6.1.



Fig. 10. The amount of detail the subjects were able to recall on average for emotionally stamped versus non-emotionally stamped pictures.


For this data set the data is again formatted to perform a paired sample t-test in the following setup:

Independent Variables:

- Emotional arousal level associated with the picture (two levels: high and low).

Dependent Variables:

- Level of emotional awareness described on the LEAS scale.

Hypotheses:

- H0: there is no significant difference in LEAS scores between pictures of the two emotional arousal levels.

- HA1: there is a significant increase in LEAS scores for pictures related to high emotional arousal versus low emotional arousal.

- HA2: there is a significant decrease in LEAS scores for pictures related to high emotional arousal versus low emotional arousal.

The tests are run as a within-subject design, with a single group of 4 subjects.


5.2 Results of Study Two


The first set of results shows how well the algorithm is able to identify the emotionally stamped pictures. The LEAS ratings [5] allowed for a direct comparison of the two picture sets (the emotionally stamped and the non-emotionally stamped) with respect to the intensity of the emotions.

The data shows that over 76% of the emotion-related descriptions used by the subjects were related to the memory cues with emotional stamps, and only around 23% were associated with pictures described by the algorithm as holding no emotional arousal. Also, as presented on the graph (Fig. 11), users in general showed higher emotional awareness while recalling events from memory cues that the algorithm defined as emotionally meaningful; however, the gain is not always consistent. This could be due to the range of activities the subjects were performing during the test (e.g. subject four was bored for almost the whole duration of the test while doing homework; for details refer to the appendix), however the actual reason is not known.

A paired sample t-test was performed to compare the level of emotional awareness of the users for recall supported by visual cues related to high emotional arousal and visual cues related to low emotional arousal. There appears to be no significant difference in scores between high emotional arousal (M=1.8125, SD=0.62500) and low emotional arousal (M=0.5625, SD=0.71807); t(3)=2.132, p=0.123. The results of the t-test are in favour of the null hypothesis, which means that the subjects were not more aware of their emotions if their emotional arousal was higher at the time of creating the visual cues with the SenseCam. As the results of the experiment described below validate the usefulness of the algorithm defining emotional arousal based on GSR, it can be stated that the algorithm is able to recognise emotionally arousing events even if the users are not fully aware of them.
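The reported statistics can be reproduced from the per-subject LEAS averages listed in Fig. 9. The sketch below recomputes the paired t statistic from first principles; the closed-form p-value helper is valid only for the df = 3 case arising from this four-subject design.

```python
import math

def paired_t(a, b):
    """Paired-sample t statistic for two equal-length lists of scores."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def p_two_tailed_df3(t):
    """Two-tailed p-value for Student's t with exactly 3 degrees of
    freedom, using the closed-form CDF that exists for df = 3."""
    x = abs(t) / math.sqrt(3)
    cdf = 0.5 + (math.atan(x) + x / (1 + x * x)) / math.pi
    return 2 * (1 - cdf)

# Per-subject LEAS averages from Fig. 9 (high- vs. low-arousal pictures):
high = [1.5, 1.5, 2.75, 1.5]   # subjects s1..s4, pictures h1..h4
low = [0.0, 0.75, 0.0, 1.5]    # subjects s1..s4, pictures l1..l4

t = paired_t(high, low)   # ≈ 2.132, matching the reported t(3)=2.132
p = p_two_tailed_df3(t)   # ≈ 0.123, matching the reported p=0.123
```

The same computation applied to the detail scores underlies the second t-test reported below.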

The final findings on the quality of memory cues are obtained by calculating the significance of the correlation between the two groups (emotionally stamped pictures and non-emotionally stamped pictures) and measuring the statistical advantage, if any, of the emotionally stamped pictures.

Fig. 11. The graph presents the average emotional arousal rating of emotionally stamped versus non-emotionally stamped pictures.

A paired sample t-test was performed to compare the level of detail recalled by the subjects under conditions of high emotional arousal and low emotional arousal. There appears to be a significant difference in scores between high emotional arousal (M=2.5625, SD=0.62500) and low emotional arousal (M=2.0625, SD=0.74562); t(3)=4.899, p=0.016. This supports hypothesis HA1 (the first alternative), suggesting that emotional arousal does in fact improve the quality of memory cues associated with emotionally arousing events.

The graph (Fig. 10) shows that, on average, visual memory cues associated with events which occurred while emotional arousal was present in the subjects allow for recall of more detailed information about those events, with an improvement of above 24%.


6. Conclusions and Further Work


This paper presented a good attempt to provide insight into improving the quality of memory cues using simple biosensors and lifelogging devices, such as the SenseWear and SenseCam. However, it was only tested on healthy, young subjects. This, while validating whether the technology works in practice, does not define its effectiveness if used by people with episodic memory impairments, such as dementia caused by Alzheimer's disease or anterograde amnesia. It is therefore very possible that the research in this field will be continued. Further research could explore the potential of the tools and techniques tested here for the purpose of providing improved visual memory cues for a wide range of sufferers of episodic memory impairment. This study also suffered greatly from the small number of subjects the experiments were performed on. Higher volumes of data would allow for clearer, more precise analysis.

A problem with noise in the data existed during this study; however, the various factors contributing to the creation of noise could be analysed in order to limit their side effects. This would define the usefulness of the accelerometer and body temperature sensors embedded in the SenseWear for measuring the user's physical activity, allowing its effect on the emotional arousal readings to be reduced.

Also, the issue of deploying the tool as an end user product remains unsolved. It is feasible to create a software tool, much like the program provided with the commercial version of the SenseCam made by Vicon Motion Systems Ltd., but with a smart picture set creator which utilises the algorithm presented in this study to cut down the number of pictures presented to the user, thereby easing the time consuming and tedious process of sorting through the pictures described in the introduction of this paper.

The presented solution could also be deployed on a mobile platform, and, if the hardware allowed it, a biosensor with GSR recording capability could serve as a trigger for a wearable camera device, so that pictures are only taken while the user is in a state of emotional arousal.
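A minimal sketch of such a trigger is given below; the threshold and cooldown values are illustrative assumptions, not values derived from the study, and processing a finite list of one-per-second GSR readings stands in for a real device loop.

```python
def trigger_indices(readings, baseline, threshold=50, cooldown=30):
    """Return the sample indices at which a wearable camera would be
    triggered: the GSR reading exceeds a slowly tracked baseline by
    `threshold`, with at least `cooldown` samples between shots."""
    shots, last = [], -cooldown
    for i, gsr in enumerate(readings):
        if gsr - baseline > threshold and i - last >= cooldown:
            shots.append(i)   # on real hardware: capture a picture here
            last = i
        # track the baseline slowly so gradual drift is not read as arousal
        baseline = 0.99 * baseline + 0.01 * gsr
    return shots

# A burst of elevated GSR triggers a single shot at its onset:
trigger_indices([100] * 10 + [200] * 5 + [100] * 50, baseline=100)  # [10]
```

On real hardware the loop would poll the biosensor and call the camera API instead of collecting indices; the delay between an emotional event and its GSR signature, identified in the first study, would also need to be compensated for.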

The study outlined a successful evaluation of a working, novel tool for validating the quality of visual cues and, in turn, improving memory recall based on those cues. It showed that the use of already existing biosensor and life-logging technologies, accessible as off the shelf products, gives positive results and does not require hardware augmentations.

The first study identified the range of delays present between experiencing emotional situations and the signs of them appearing in the GSR readings. It showed that there exists a mapping between the two and that further development of emotion detection based on such a technique is plausible.

The second study successfully validated the algorithm used for identification of emotional arousal in the GSR data, which in turn allowed the emotionally stamped events to be matched with the appropriate memory cues. The memory recall tests based on the information supplied by the software solution showed that the tool this study created does work, and the visual memory cues produced with it gave over 24% better results than average non-emotionally stamped pictures in terms of the amount of detail recalled by the subjects.


7. Acknowledgements


The author would like to acknowledge Dr Corina Sas for all the support and guidance provided both throughout the development of this paper and the project as a whole.


8. References


[1] Adam K. Anderson, Yuki Yamaguchi, Wojtek Grabski and Dominika Lacka (2006), Emotional memories are not all created equal: Evidence for selective memory enhancement, Learn. Mem. 13: 711-718, originally published online November 13, 2006. Available at: http://learnmem.cshlp.org/content/13/6/711.abstract

[2] Brewer, W. F. (1988). Qualitative analysis of the recalls of randomly sampled autobiographical events. In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical Aspects of Memory: Current Research and Issues (Vol. 1, pp. 263-268). Chichester: Wiley, 1988.

[3] St. George-Hyslop, P.H. (2000, December). Piecing together Alzheimer's. Scientific American, 283, 76-83.

[4] Steve Hodges, Lyndsay Williams, Emma Berry, Shahram Izadi, James Srinivasan, Alex Butler, Gavin Smyth, Narinder Kapur and Ken Wood (2006). SenseCam: A Retrospective Memory Aid. P. Dourish and A. Friday (Eds.): Ubicomp 2006, LNCS 4206, pp. 177-193.

[5] Lane, Richard D.; Quinlan, Donald M.; Schwartz, Gary E.; Walker, Pamela A.; et al. (1990), The Levels of Emotional Awareness Scale: A cognitive-developmental measure of emotion. Journal of Personality Assessment, Vol. 55(1-2), Fall 1990, 124-134.

[6] Matthew L. Lee & Anind K. Dey (2007, October), Human-Computer Interaction Institute, Carnegie Mellon University, ASSETS'07, Tempe, Arizona, USA.

[7] Andrew R. Mayes and Neil Roberts (2001), Theories of Episodic Memory. Philosophical Transactions of the Royal Society B: Biological Sciences (2001) 356, 1395-1408.

[8] Elizabeth A. Phelps (2004), Human emotion and memory: interaction of the amygdala and hippocampal complex. Current Opinion in Neurobiology 2004, Vol. 14: 198-202.

[9] Mel Slater, Christoph Guger, Guenter Edlinger, Robert Leeb, Gert Pfurtscheller, Angus Antley, Maia Garau, Andrea Brogni, Doron Friedman (2006), Analysis of Physiological Responses to a Social Situation in an Immersive Virtual Environment. Presence, Vol. 15, No. 5, October 2006, 553-569.

[10] Wilson, R.S., Carlos F., Barnes, L.L., Schneider, J.A., Bienias, J.L., Evans, D.A., and Bennett, D.A. (2002). Participation in Cognitively Stimulating Activities and Risk of Incident Alzheimer Disease. JAMA, 287, 742-748.

[11] BodyMedia, March 2011, http://SenseWear.bodymedia.com/

[12] Google Chart API, March 2011, http://code.google.com/apis/chart/





Appendix A

Subject Interview Transcripts

These are the original transcripts of the subject interviews performed for Study 2, Section 5 of the paper.

Interpretation of the conversation:

I - The Interviewer
S - The Subject

The information in italics gives the parameters of the picture; GSR represents the galvanic skin response level, and the edge is the algorithm's output indicating the level of emotional arousal.


Transcript One


I: Part one of the experiment one, subject EM.

S: Ok.


pic 1

Name: 00010053.JPG
Date: 15 Mar 2011 13:38:00 GMT
GSR: 126
Edge: 5.857143 distance: 0
Note: recognised the time

S: I think that that was either when I came in or at some...

I: Also remember the pictures are in chronological order, so these will be earlier than the others.

S: Yeah, but I've been to this place like, multiple times throughout the thing so...

S: It's either when I came in or afterwards when I went over to this area, I was doing some tests.

S: Actually because that thing's there it's actually when I came in, or quite early on.

S: It's broken?

I: I almost crashed my program...


pic 2

Name: 00010240.JPG
Date: 15 Mar 2011 14:32:58 GMT
GSR: 269
Edge: 1.0 distance: 0
Note: setting up experiment, said less than last time

I: Ok, next picture.

S: That's me setting up part of the test bed, so that was, yeah, that was quite early on in the day.

S: When I was just working basically, setting up the experiment, as, yeah.


pic 3

Name: 00010390.JPG
Date: 15 Mar 2011 15:16:58 GMT
GSR: 277
Edge: 1.0 distance: 0
Note: didn't say much either

I: Ok, next one.

S: That's still setting up the experiment. Well, the test bed. That's towards the end of setting up the test bed.

S: That was probably that, yeah.

I: Ok.


pic 4

Name: 00010450.JPG
Date: 15 Mar 2011 15:35:18 GMT
GSR: 281
Edge: 1.0 distance: 0
Note: ...

I: Next one.

S: Me, picking up a pen.

I: Can you remember more or less what you were doing?

S: I think I was gonna write on that... I think this might be after I discovered that the thing actually worked. Yes, this could very well be that I just, like, discovered the whole thing worked, so.


pic 5

Name: 00010486.JPG
Date: 15 Mar 2011 15:43:40 GMT
GSR: 278
Edge: 2.0 distance: 0
Note: said a bit more

I: Number five.

S: That's me writing... I'm either drawing out something that I'm setting up, or it's me writing a message that I held up to the camera.

S: And also it's really close to the other one.


pic 6

Name: 00010527.JPG
Date: 15 Mar 2011 15:57:48 GMT
GSR: 277
Edge: 1.0 distance: 0
Note: trying to cause a spike and failed

I: Number six.

S: Oh, this is one I was sitting there trying make emotion spike. I made sure I (something) my finger so I could recognise this picture if it came up. Yeah, I was consciously ruining your experiment, hehe.

I: You're gonna want to read them after...

S: Ok, heh.

I: That was six.

S: Yep.


pic 7

Name: 00010604.JPG
Date: 15 Mar 2011 16:21:20 GMT
GSR: 277
Edge: 4.0 distance: 1
Note: bored or spiking

S: Seven, looks like it was either just after or during that, ehm, yeah, so basically at that part I was pretty bored. Or I was just doing that spiking thing.

I: So that was seven.


pic 8

Name: 00010621.JPG
Date: 15 Mar 2011 16:26:36 GMT
GSR: 277
Edge: 3.0 distance: 0
Note: really bored as said

S: Eight, I was pretty damn bored. I was sitting and waiting for Ben to come back, so he could look at some stuff. So I was pretty much just sitting there waiting for this long. (??)


pic 9

Name: 00010827.JPG
Date: 15 Mar 2011 17:22:54 GMT
GSR: 242
Edge: 4.5 distance: 0
Note: said a lot, remembered details

I: Ok.

S: Not really sure what I'm doing there. I was probably just checking the configuration. Of the test bed. In fact that thing was running, so I was just testing that I would be able to... Oh, I was writing the script that I was gonna use to actually get my data, which I'm gonna use to...

I: Can you remember if anyone was there?

S: If anyone was here...?

I: Yes.

S: Well, there's like people on the other side, trying most of this (??), but there was no-one, like, right next to me.

I: That was nine.

S: Yep.


pic 10

Name: 00010843.JPG
Date: 15 Mar 2011 17:27:36 GMT
GSR: 247
Edge: 3.0 distance: 1
Note: a bit of detail

S: Ten?

I: Yeah, ten.

S: That's me getting TCP dumped where I found my laptop. That's one of things that I'm doing to set up my test, so I can gather data.

I: Ok, that's it, thank you.


Transcript Two


I: This is test number two, subject PS.

I: Picture one.


Pic 1

Name: 00011234.JPG
Date: 17 Mar 2011 12:49:28 GMT
GSR: 190
Edge: 7.428571 distance: 0
Note: said a lot, mentioned emotions

S: So here I was eating, eating, that was lunch. Should I say what time it was?

I: If you remember, say it.

S: It was around... twelve, twelve thirty. Afterwards I was also getting ready for a class at one o'clock. I was a bit nervous that I wouldn't manage to eat before leaving.

I: Ok?

S: Ok, should I stop here?

I: No.

I: Next one, just wait a moment.


Pic 2

Name: 00011245.JPG
Date: 17 Mar 2011 12:51:30 GMT
GSR: 193
Edge: 95.0 distance: 0
Note: mentioned emotions, said something's wrong

S: Here, still before leaving I managed to, before leaving I think, or when I came back, I managed to run an update of my antivirus. I was nervous again that something was wrong again. But it was only an antivirus update and that's all.

S: Should I talk now?

I: Wait, I just need to find it.

S: Aha.


Pic 3

Name: 00011357.JPG
Date: 17 Mar 2011 13:25:32 GMT
GSR: 205
Edge: 1.4444444 distance: 0
Note: a lot of detail, not many people around

S: Picture number three. I went to a lecture. It was my first lecture that day, at one o'clock. I was surprised that the rest of my friends, who I always sit with, weren't there. The lecturer wasn't there either and there were rather few people. And it turned out that a completely different guy had come to give the lecture. And I also talked here with other friends who had come. We talked about my new haircut.


Pic 4

Name: 00011458.JPG
Date: 17 Mar 2011 14:04:06 GMT
GSR: 225
Edge: 1.0 distance 2
Note: details but no emotions

S: Here... can I talk now?

I: Picture number four.

S: Picture number four. I was coming back home, after the lecture. Just before that I had checked the post and the documents I needed to renew my, what's it called, railway card hadn't arrived. And I was coming back home to get some sleep.

S: What surveillance, damn... hehe.

I: For the sake of science.

S: And what if I don't know?

I: Then you say you don't know.


Pic 5

Name: 00011735.JPG
Date: 17 Mar 2011 16:13:00 GMT
GSR: 309
Edge: 2.0 distance: 0
Note: not much

S: And here I was watching funny things on the internet, on the site kwejk.pl, right after I came back from the lecture. I was a bit sleepy, so right after that I went to sleep.

I: Which number?

S: Number five.


Pic 6

Name: 00011827.JPG
Date: 17 Mar 2011 16:35:34 GMT
GSR: 349
Edge: 3.4 distance: 0
Note: laughed, but not said much, gave some details, but no emotions

I: Number six.

S: Hahaha! Number six... Hahaha! My flatmate came to my room for a chat, hahaha. I didn't know something like that would come out. She came to my room to talk to me. We talked about the courseworks we have to do, the exams next week, when we're going back home, and what I had argued about with a friend at a party.

I: Number six, was it?

S: Yes, number six, sorry.

S: Do you choose them?

I: Yeah.

S: Aha.


Pic 7

Name: 00011850.JPG
Date: 17 Mar 2011 16:40:18 GMT
GSR: 347
Edge: 61.0 distance 0
Note: no emotions mentioned, brief details

I: Right, number seven.

S: Number seven. Here I was brushing my teeth right after Caroline, my flatmate, had left my room, and I was getting ready for a lecture. I was also supposed to wash the dishes before that lecture, but I didn't manage to. If it's relevant...


Pic 8

Name: 00011899.JPG
Date: 17 Mar 2011 16:52:06 GMT
GSR: 188
Edge: 4.0 distance: 0
Note: some details, mentioned it was important

S: Number eight. Ehm, here I was walking to a lecture with a certain German, that's actually that German walking in front of me, he is my lecturer. It was a tutorial for the exam we will have in a week. So it was a bit, it was a bit important. And that's probably it. It was at five o'clock.


Pic 9

Name: 00011967.JPG
Date: 17 Mar 2011 17:13:00 GMT
GSR: 368
Edge: 1.0 distance: 0
Note: making notes, pissing people off

S: Number nine. I'm at a lecture. One, I have a few people from, from the East on my, on my course and they constantly ask questions, and maybe I was annoyed because they irritate everyone with their questions, and I was making notes.


Pic 10

Name: 00012059.JPG
Date: 17 Mar 2011 17:39:02 GMT
GSR: 375
Edge: 10.0 distance: 1
Note: not said much

I: Ten.

S: At a lecture again. I was making notes. I was writing, copying something from the board, some calculations most likely. That's number ten.

S: Stop here?

I: Yes.


Transcript Three


Transcript of interview, subject CB.

I: This is test number three, subject CB, picture number one.


Pic 1

Name: 00012244.JPG
Date: 19 Mar 2011 13:19:34 GMT
GSR: 255
Edge: 2.0 distance: 1
Note: took picture herself, not much detail

S: In this picture… , should I talk about this picture?

S: Ok, this was when….

I: You don’t have to answer if you don’t remember.

S: This was, this is in the morning, like when I just got up… (?)

S: And Chris asked to take a picture of him…

S: Yeah, I actually took this picture myself, cuz he said to take a picture of him, yeah, that was about it. We were like in my kitchen there, I think Tom, you had left by then…

I: Don’t mention me.

S: O f***.

I: Haha.

S: O s***… haha, that’s about it.


Pic 2

Name: 00012316.JPG
Date: 19 Mar 2011 13:34:14 GMT
GSR: 254
Edge: 1.5 distance: 10
Note: not much detail

I: Ok, picture two.

S: We were just about to make some scrambled eggs here, Vanessa was saying ohh... “will it be ok to drink frozen milk?”, because that milk thing was frozen, that’s about it really, in the kitchen again, that was in the morning. About it, again.


Pic 3

Name: 00012423.JPG
Date: 19 Mar 2011 13:59:22 GMT
GSR: 187
Edge: 1.0 distance: 0
Note: chilling out, some detail

I: Picture three.

S: Oh, I’m doing my nails here, Mike was washing up, we were watching the news and it was about all that Libya crap, heh, so I was trying to understand that, but I was just chilling out there, and that again, that was just after I had eaten in our kitchen. That’s about it again.


Pic 4

Name: 00012528.JPG
Date: 19 Mar 2011 14:23:36 GMT
GSR: 162
Edge: 1.0 distance: 0
Note: no emotions, some detail

S: Here I was just gonna start to do some… I was working on my ling, doing some advertising poster (??) analysis, and then again I was sitting on the sofa watching TV, I was on my own here, so I was very in to my work, again that was like, just after I’d finished doing my nails. That’s about it.

M: Did you do any work?

S: Yes I did do work Mike.

I: That was number four, wasn’t it?

S: Yeah. Four out of ten, yeah?


Pic 5

Name: 00012702.JPG
Date: 19 Mar 2011 15:03:30 GMT
GSR: 166
Edge: 6.0 distance: 0
Note: boring

I: Ok, the next one.

S: Ok. I can’t remember doing that. Don’t know what I’m doing there. I don’t think I was reading a magazine.

I: It’s ok, if you say you don’t know, you don’t know… It’s part of the experiment.

S: Yeah, yeah… It must have been while I was doing work. (??)

S: Oh yes I do! I remember that. Because I remember that (name from magazine cover), cuz she’s getting fat, hehe. Yeah. I just basically looked at a front cover and I thought: boring…, and put it down. And then I started doing work. I was doing this before work to be honest.


Pic 6

Name: 00012782.JPG
Date: 19 Mar 2011 15:22:06 GMT
GSR: 181
Edge: 12.0 distance: 0
Note: emotions appear, some details

I: Ok, next picture.

S: I was just getting ready to go out for a walk in this one. And felt ugly, because I was trying to do my hair and it wasn’t going anywhere.

I: Were you annoyed?

S: No, I just felt fed up, Mike was like, on his phone, like, just being lazy, and just like that so… But we were going out for a walk, so I was quite motivated to do that. Yeah, it’s about it for that one, I was in my room there.


Pic 7

Name: 00012916.JPG
Date: 19 Mar 2011 15:57:10 GMT
GSR: 139
Edge: 1.0 distance: 0
Edge: 7.25 distance: 0
Note: emotions, feelings, talking about herself

I: And the next picture.

S: There, we were walking to asda in the car park, we had just been on quite a long walk, so we were just like ohhh… I was quite happy actually, but I felt a bit self conscious, because I had the camera and, like, like on my thing so I was a bit self conscious about getting there and it’s a bit like… I was a bit, yeah, I was a bit, what’s the word… Where you’re like…

I: Self conscious…?

S: Yeah, you don’t want to go in there, like, I didn’t know whether to take the camera off, like embarrassed, yeah. Ehm, yeah. I was quite tired at that point too… But me and Mike were like, we were just laughing in that last… and we were linking arms because we love each other, heh.


Pic 8

Name: 00012947.JPG
Date: 19 Mar 2011 16:05:06 GMT
GSR: 159
Edge: 2.0 distance: 0
Note: a bit of emotions, some details

I: Ok, next one.

S: Oh, we’re looking at, oh… We were going through a shoe aisle there and I remember Mike saying: “Oh, you’re gonna spend ages here…”. This is in asda. And I was like, (??) I was quite self conscious, I was rushing right, couldn’t even browse. We were just trying some clothes, there were some clothes that I picked up.


Pic 9

Name: 00012955.JPG
Date: 19 Mar 2011 16:07:38 GMT
GSR: 160
Edge: 10.0 distance: 0
Note: details, emotions

I: Right, next one.

S: I was in the changing rooms here. I was just about to try a top I think. Yes, I know, I was trying a top.

I: Did you like that top?

S: I did like it, I was quite excited to try them on. I was very tempted to buy them but I think I would feel crap about myself if I was just gonna buy them for the sake of it, so I thought no, don’t waste my money. And instead Mike bought me a nighty. I decided to put them back. Because I thought I’m just buying them to just make myself feel better.


Pic 10

Name: 00013120.JPG
Date: 19 Mar 2011 16:53:16 GMT
GSR: 148
Edge: 2.0 distance: 1
Note: not much said, talked about herself

I: And now the last one.

S: That was walking back, probably walking back. And we were eating sweets and talking. Quite content, but I just wanted to get back at that point to be honest because I was quite tired. And cold.

I: This is the end of the experiment, thank you.

After the experiment the subject mentioned that the pictures that were emotional made her think about them more, and the memories were more vivid.


Transcript Four


Subject GC transcript.

I: This is test number four, subject GC.

I: Picture number one.


Pic 1

Name: 00013647.JPG
Date: 20 Mar 2011 14:53:10 GMT
GSR: 87
Edge: 1.1111112 distance: 0
Note: little detail

I: In Polish, just say: picture number one…

S: Picture number one, I think it's the beginning of the day. I'm tired and very sleepy, but determined, because I'm writing a report. What else?

I: If there's nothing more, then say there's nothing more.

S: Aha, then nothing more.

S: It will be hard for me to describe every event...

I: I picked ones that wouldn't be similar.

S: Ok.

S: I have the laptop on my lap, which means that by then I had got bored with sitting at the desk. So I was already bored.


Pic 2

Name: 00013754.JPG
Date: 20 Mar 2011 15:24:26 GMT
GSR: 83
Edge: 5.5 distance: 1
Note: even less detail, picture very similar to the previous, little emotion

S: In this picture I'm still writing my financial report. It must have been even later, because I had completely changed position. Towards the window. And by then my legs were hurting.

I: In general the pictures are arranged chronologically.

S: They are?

S: Aha, ok, that's probably why. Ok. I was sleepy. I hadn't yet got so annoyed with writing this report. I probably took a few moments of break on facebook and on pudelek. God, I have no idea. Right.


Pic 3

Name: 00014009.JPG
Date: 20 Mar 2011 16:59:50 GMT
GSR: 70
Edge: 1.0 distance: 1
Note: not said much at all, mentioned trivial things

S: In picture number three... I've had enough of writing the report. I'm probably not a good example. I'm after some lunch. After cereal. I hadn't talked to anyone yet. All in all I'm more relaxed. I don't feel like writing the report, so... Entertainment. Ok.


Pic 4

Name: 00014135.JPG
Date: 20 Mar 2011 17:40:26 GMT
GSR: 92
Edge: 1.75 distance: 2
Note: giving up, sleepy and tired

I: Go on.

S: I'm running out of strength. Total resignation. I'm walking around the room because I totally don't feel like writing this anymore. And most of all I want to sleep. Because it really is terribly boring. I tried to call my parents then. But they didn't pick up the phone. And this is after I finished eating cereal for the second time.


Pic 5

Name: 00014187.JPG
Date: 20 Mar 2011 17:55:00 GMT
GSR: 88
Edge: 3.333333 distance: 0
Note: nothing new, still writing the report, said was not emotional

I: Number five.

S: Five. Surprisingly I'm still in the middle of writing the report. Energy is building up in me to finish it today. Overall I was very tired and very, I mean not annoyed yet; later, towards the end of the day, I was more tired, but it wasn't the end of the day yet, because the curtains are still open, so it wasn't that late.


Pic 6

Name: 00014274.JPG
Date: 20 Mar 2011 18:18:40 GMT
GSR: 90
Edge: 1.0 distance: 0
Note: really bored, called someone because of that, little detail, relaxing

I: Picture six.

S: Oh, here, here I'm chatting with Wiktoria. Whom I called, because I totally didn't feel like doing anything anymore. A moment of entertainment. I was supposed to watch a film; unfortunately I didn't. I was supposed to lend it to her; I didn't. Here, relaxing, I'm talking about writing the report. Yes. I'm relaxing during the awful weather in Lancaster. Ok.

I: That was six, right?

S: That was six.


Pic 7

Name: 00014468.JPG
Date: 20 Mar 2011 19:25:00 GMT
GSR: 89
Edge: 1.6666666 distance: 0
Note: trying not to fall asleep

S: Seven.

S: Oh, here it is too. I'm getting to the moment when I really don't feel like writing the report anymore. But I'm still holding the computer on my lap. So, mmm... I'm trying not to fall asleep. And I'm trying to do something, not just pretend I'm doing something. And by then I was a bit more stressed. I mean yes, I got a bit annoyed.


Pic 8

Name: 00014508.JPG
Date: 20 Mar 2011 19:39:12 GMT
GSR: 76
Edge: 4.0 distance: 0
Note: not much, maybe little emotions

S: Oh, I'm on the phone again. Now I'm talking to my friend Anna, who called me asking how my report writing is going. Because she's writing one too and doesn't know how to write it. I didn't feel like chatting with her. And around the same time I was also talking to my parents, who had called me; I talked with them for about 10 minutes, even more. I was rather pleased, but by then totally tired.


Pic 9

Name: 00014534.JPG
Date: 20 Mar 2011 19:46:38 GMT
GSR: 88
Edge: 10.5 distance: 0
Note: talking to parents, gives it more thought and discussion than the others

I: Number nine.

S: Oh, as you can see, total relaxation. I don't even have the computer on my lap anymore, which means total resignation, I had stopped writing. Success. I'm waiting for... Oh no, at that point I think I was still talking to my parents. Because... After I finished chatting with Anna, the whole time, chronologically, I was still chatting with her. I mean, I was still talking to my parents. But how... This must be a picture from just the moment when... I don't know, I don't remember sitting like that. No, because when I was talking to Anna, you called me right afterwards and I went straight away to open the door for you. Unless it was, like, during the conversation with her. But I don't think so. Maybe it was some moment while I was still talking. I must have been talking to someone else. No, come on, I only talked twice after all. I don't know. When was it... Because it must have been sometime before the conversation with Anna... Or during the conversation with Anna.

S: Ok, next one?


Pic 10

Name: 00014566.JPG
Date: 20 Mar 2011 19:54:44 GMT
GSR: 92
Edge: 7.5 distance: 0
Note: relaxed, happy that something else happened after she finished talking to someone on the phone, a lot of details

I: Number ten.

S: Here, totally at ease, I'm in the kitchen, I've changed territory, I don't have the computer with me. I'm explaining something with some fascination... Ok. I'm calmly drinking tea with Tomek, who is also drinking tea. This was right after I finished with Anna. I mean, I finished chatting with her, and then you called me, and I ended the conversation with her because I said someone was calling me. But this is probably fresh tea, so this is still just the beginning of our conversation. That's probably all. Oh God, such an interesting day that I don't know... Sorry.

I: Ok, this is the end of the experiment, thank you.




Appendix B
All the w
orking documents associated with this study can
be found on the folowing website:


http://www.lancs.ac.uk/ug/fratczak/index.htm


This include all data files, notes and sorce code of the
software
solution, however excludes the pictures taken
using

the SenseCam due to privacy issues concerning the
study participants.