High Performance Associative Neural Networks: Overview and Library
Presented at AI'06, Quebec City, Canada, June 7-9, 2006
Oleksiy K. Dekhtyarenko (1) and Dmitry O. Gorodnichy (2)

(1) Institute of Mathematical Machines and Systems, Dept. of Neurotechnologies,
42 Glushkov Ave., Kiev, 03187, Ukraine. olexii@mail.ru
(2) Institute for Information Technology, National Research Council of Canada,
M-50 Montreal Rd, Ottawa, Ontario, K1A 0R6, Canada. dmitry.gorodnichy@nrc.gc.ca
http://synapse.vit.iit.nrc.ca
2. Associative Neural Network Model
Features:
• Distributed storage of information → fault tolerance
• Parallel way of operation → efficient hardware implementation
• Non-iterative learning rules → fast, deterministic training
Conforms to three main principles of neural processing:
1. Non-linear processing
2. Massively distributed collective decision making
3. Synaptic plasticity, which serves:
   1. to accumulate learning data in time by adjusting synapses
   2. to associate receptor to effector (using the thus computed synaptic values)
The Associative Neural Network (AsNN) is a dynamical nonlinear system capable of processing information via the evolution of its state in a high-dimensional state space.
3. Examples of Practical Applications
• Face recognition from video [*]
• "Electronic Nose" [**]

[*] D. Gorodnichy, "Associative Neural Networks as Means for Low-Resolution Video-Based Recognition", IJCNN'05.
[**] A. Reznik, Y. Shirshov, B. Snopok, D. Nowicki, O. Dekhtyarenko & I. Kruglenko, "Associative Memories for Chemical Sensing", ICONIP'02.
4. Associative Properties: Convergence Process
The network evolves according to the state update rule

    S(t+1) = sign(W · S(t)),

where W is the synaptic weight matrix and V = {V^1, ..., V^m} is the set of memorized patterns, stored as fixed points of the dynamics. We want the network to retrieve data by associative similarity (to restore noisy or incomplete input data): started from a distorted input, the state should converge to the nearest memorized pattern.
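A minimal C++ sketch of this convergence process for a dense network, assuming bipolar (±1) states and synchronous updates; the function names are illustrative, not the library's API:

#include <cstddef>
#include <utility>
#include <vector>

// One synchronous update step: S(t+1) = sign(W * S(t)).
std::vector<int> update(const std::vector<std::vector<double>>& W,
                        const std::vector<int>& S) {
    const std::size_t n = S.size();
    std::vector<int> next(n);
    for (std::size_t j = 0; j < n; ++j) {
        double h = 0.0;                          // postsynaptic potential of neuron j
        for (std::size_t i = 0; i < n; ++i)
            h += W[j][i] * S[i];
        next[j] = (h >= 0.0) ? 1 : -1;           // sign non-linearity
    }
    return next;
}

// Iterate until a fixed point (an attractor) is reached.
std::vector<int> converge(const std::vector<std::vector<double>>& W,
                          std::vector<int> S, int maxIter = 100) {
    for (int t = 0; t < maxIter; ++t) {
        std::vector<int> next = update(W, S);
        if (next == S) break;                    // fixed point: S = sign(W * S)
        S = std::move(next);
    }
    return S;                                    // the retrieved pattern
}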
5. Sparse Associative Neural Network
Advantages over the Fully-Connected Model:
• Less memory needed for s/w simulation
• Quicker convergence during s/w simulation
• Fewer and/or more suitable connections for h/w implementation
• Greater biological plausibility
The output of neuron i can affect neuron j (w_ij ≠ 0) if and only if the pair (i, j) belongs to the architecture, or connectivity template, A:

    w_ij ≠ 0  ⇔  (i, j) ∈ A,   A ⊆ {1, ..., n} × {1, ..., n}

Connection density: d = |A| / n², the fraction of all possible connections that are actually present.
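As a concrete illustration, a sparse template can be stored as per-neuron adjacency lists, which is what makes the memory savings possible; the data layout below is an assumption for illustration, not the library's actual structures:

#include <cstddef>
#include <vector>

// Sparse connectivity template stored as per-neuron adjacency lists.
struct SparseNet {
    std::size_t n;                              // network dimension
    std::vector<std::vector<std::size_t>> in;   // in[j]: presynaptic neurons of j
    std::vector<std::vector<double>> w;         // w[j][k]: weight from in[j][k] to j

    // Connection density d = |A| / n^2.
    double density() const {
        std::size_t edges = 0;
        for (const auto& row : in) edges += row.size();
        return static_cast<double>(edges) / (static_cast<double>(n) * n);
    }
};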
6. Network Architectures
[Figure: example connectivity graphs of the Random, 1D Cellular, and Small-World architectures]

Ratings below: 1 = the worst, 5 = the best.
Architecture        Associative   Memory        Hardware
                    Performance   Consumption   Friendliness
Regular (cellular)  1             5             5
Small-World         2             5             4
Scale-Free          2             5             3
Random              3             5             2
Adaptive            4             5             2
Fully-Connected     5             1             1
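For concreteness, here is a sketch of one common way to obtain a small-world template: start from a regular 1D cellular (ring) template and randomly rewire a fraction p of its connections, in the Watts-Strogatz style. The function name and parameters are illustrative and not taken from the library:

#include <random>
#include <set>
#include <utility>
#include <vector>

// Directed connectivity template: each neuron projects to r targets.
std::set<std::pair<int, int>> smallWorldTemplate(int n, int r, double p,
                                                 std::mt19937& rng) {
    std::set<std::pair<int, int>> A;
    std::bernoulli_distribution rewire(p);
    std::uniform_int_distribution<int> any(0, n - 1);
    for (int i = 0; i < n; ++i) {
        for (int d = 1; d <= r; ++d) {
            int j = (i + d) % n;                 // regular ring neighbour
            if (rewire(rng)) j = any(rng);       // rewire to a random target
            if (j != i) A.insert({i, j});        // skip self-connections
        }
    }
    return A;                                    // the connectivity template A
}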
7
Compare to
…
A fully connected net with n = 24×24 neurons, obtained by tracking and memorizing faces (of 24×24 pixel resolution) from real-life video sequences [Gorodnichy'05].
• Notice the visible inherent synaptic structure!
• This synaptic interdependency is utilized by sparse architectures.
8. Some Learning Algorithms
• Projective
• Hebbian (Perceptron LR)
• Delta Rule
• Pseudo-Inverse: W = V V^+, where V is the n × m matrix whose columns are the memorized patterns and V^+ is its Moore-Penrose pseudo-inverse; for sparse networks, a selection operator restricts each row of W to the connections permitted by the architecture (see the sketch after this list).
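A minimal sketch of the dense Pseudo-Inverse rule using the Eigen library; the sparse, selection-operator variant is omitted, and this is an illustration under those assumptions, not the library's implementation:

#include <Eigen/Dense>

// Dense Pseudo-Inverse learning rule: W = V V^+.
// V is n x m; each column is one memorized bipolar pattern.
Eigen::MatrixXd pseudoInverseWeights(const Eigen::MatrixXd& V) {
    // Moore-Penrose pseudo-inverse via complete orthogonal decomposition.
    Eigen::MatrixXd Vpinv = V.completeOrthogonalDecomposition().pseudoInverse();
    return V * Vpinv;   // W is the projector onto the span of the stored patterns
}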
Performance Evaluation Criteria:
• Error correction capability (associativity strength) — see the sketch after this list
• Capacity
• Training complexity
• Memory requirements
• Execution time: a) in learning and b) in recognition
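The error-correction criterion can be measured, for example, by flipping k random bits of a stored pattern and checking whether the network restores it. A sketch using the converge() helper from the earlier example; this is an illustration, not one of the library's testing functions:

#include <random>
#include <vector>

// Flip k random bits (with replacement) of a stored pattern and test
// whether the network converges back to it exactly.
bool restores(const std::vector<std::vector<double>>& W,
              const std::vector<int>& pattern, int k, std::mt19937& rng) {
    std::vector<int> probe = pattern;
    std::uniform_int_distribution<std::size_t> pick(0, probe.size() - 1);
    for (int f = 0; f < k; ++f)
        probe[pick(rng)] *= -1;               // introduce noise
    return converge(W, probe) == pattern;     // exact retrieval?
}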
9. Comparative Performance Analysis: Networks with Fixed Architectures
[Figure: associative performance and training complexity as a function of the number of stored patterns. Cellular 1D network of dimension 256 with connection radius 12; randomly generated data vectors.]
10. Comparative Performance Analysis: Influence of Architecture
[Figure: associative performance as a function of connection density. Sparse network of dimension 200; randomly generated data vectors; various ways of architecture selection.]

• PI WS – PseudoInverse Weight Select: architecture chosen to maximize informational capacity per synapse
• PI Random – randomly set sparse architecture with the PseudoInverse learning rule
• PI Cell – cellular architecture with the PseudoInverse learning rule
• PI WS Reverse – architecture constructed using the criterion opposite to that of PI WS
11. Associative Neural Network Library
• Publicly available at http://synapse.vit.iit.nrc.ca/memory/pinn/library.html
• Efficient C++ implementation of full and sparse associative networks
• Includes the non-iterative Pseudo-Inverse LR, with the possibility of adding/removing selected vectors to/from memory
• Different learning rules: Projective, Hebbian, Delta Rule, Pseudo-Inverse
• Different architectures: fully-connected, cellular (1D and 2D), random, small-world, adaptive
• Desaturation technique: allows increasing memory capacity up to 100%
• Different update rules: synchronous vs. asynchronous, with detection of cycles
• Different testing functions: absolute and normalized radius of attraction, capacity
• Associative classifiers: convergence-based, modular
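To tie the pieces together, here is an end-to-end illustration of the workflow the library supports (memorize patterns with the Pseudo-Inverse rule, then restore a distorted input). It is built from the sketches above and does not use the library's actual API:

#include <Eigen/Dense>
#include <iostream>
#include <random>

int main() {
    const int n = 64, m = 5;
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);

    // m random bipolar patterns stored as the columns of V.
    Eigen::MatrixXd V(n, m);
    for (int j = 0; j < m; ++j)
        for (int i = 0; i < n; ++i)
            V(i, j) = coin(rng) ? 1.0 : -1.0;

    // Pseudo-Inverse learning: W = V V^+.
    Eigen::MatrixXd W = V * V.completeOrthogonalDecomposition().pseudoInverse();

    // Distort the first pattern with 8 bit flips.
    Eigen::VectorXd S = V.col(0);
    std::uniform_int_distribution<int> pick(0, n - 1);
    for (int f = 0; f < 8; ++f) S(pick(rng)) *= -1.0;

    // Synchronous convergence: S <- sign(W S) until a fixed point.
    for (int t = 0; t < 100; ++t) {
        Eigen::VectorXd next = (W * S).unaryExpr(
            [](double h) { return h >= 0.0 ? 1.0 : -1.0; });
        if ((next - S).norm() == 0.0) break;   // exact fixed point (values are ±1)
        S = next;
    }
    std::cout << "restored: " << ((S - V.col(0)).norm() == 0.0) << "\n";
    return 0;
}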
12. Associative Neural Network Library: Hierarchy of Main Classes

[Figure: diagram of the library's main classes]