Hebbian Learning Rules





Meg Broderick


December 12, 2002


Submitted in Partial Fulfillment of the

Requirements for DCS860E, Artificial Intelligence

Doctor of Professional Studies

Pace University


Hebbian Learning Rules


Learning rules are algorithms which describe the relative weights of connections in a network. Some communications, psychological, and biological associations may be governed by these relationships. In 1949, D. O. Hebb, the Father of Cognitive Psychobiology [Harnad], published a description of a simple set of rules in The Organization of Behavior [Hebb]. Simply put, he explained that the simultaneous excitation of two neurons results in the strengthening of the connection between them [French]. This finding provided new ways to observe, measure and improve the interaction of elements in computer networks and web-based search engines, as well as physiological learning in the brain. While there are many variations on learning rules, including the delta rule and back propagation, this report will focus only on the Hebbian Learning Rules.


I. Neural Learning: The Hebb Legacy

Donald Olding Hebb (1904-1985) was a graduate of Dalhousie University in Nova Scotia and McGill University in Quebec, Canada. A psychologist by training, he performed significant research in the area of neurology and the reaction of the brain to various stimuli. Despite the pervasive behaviorist turn that psychology had taken at that time, he continued to pursue the physiological relationships of neurons and brain functioning [Klein].

Hebb postulated that the “connections between neurons increase in efficacy in proportion to the degree of correlation between pre- and post-synaptic activity” [Klein]. He also suggested that groups of neurons firing together create a cell-assembly which lasts after the firing event. Finally, he postulated that thinking is the sequential activation of cell-assemblies [Klein]. Another way of viewing these hypotheses is that the proximity and activity of neurons can create a type of persistent learning or memory.





II. Types of Learning

The Hebbian learning laws described coincidence learning, in which the proximity of the events affected the relative weights of the learning. Using the example from Artificial Intelligence: Structures and Strategies for Complex Problem Solving [Luger]:

Suppose neurons i and j are connected so that the output of i (o_i) is an input of j, whose output is o_j. The weight adjustment between them, ΔW, takes the sign (+ or -) of c, the constant controlling the learning rate, multiplied by the product O_i * O_j, where O_i and O_j are the signs of the outputs o_i and o_j, respectively:


O_i     O_j     O_i * O_j
 +       +          +
 +       -          -
 -       +          -
 -       -          +

A. Unsupervised Learning


Unsupervised learning describes the “natural” way in which the various neurons learn from each other. No artificial weights are added to move the inputs in one direction or another. Miller and MacKay postulate that these correlation-based synaptic methods are basically unstable: either all synapses grow until each reaches the maximum allowable strength, or all decay to zero strength [MacKay]. For this reason, supervised learning, or forced or modified (hybrid) constraints on learning, are often used in psychobiological training models.


An example of hybrid Hebbian learning shown in Luger models the classic experiment of Pavlovian training of dogs: the dogs hear the bell and see the food. This fires the neurons to train them to respond to the auditory and visual stimuli. Ultimately, the brain “learns” that the sound of the bell implies food and the physical response (salivation) occurs. The formula that demonstrates that relationship is:

ΔW = c * f(X, W) * X

where:
c = the learning constant
f(X, W) = o_i
X = the input to i

Using the example in the text (pages 447-450), the reader learns that by applying the rules to the initial vectors {1, -1, 1} and {-1, 1, -1}, and the weight factors {1, -1, 1} and {0, 0, 0}, after 13 iterations the network continued to respond positively to the stimulus. Then, changing the stimulus, the network responded as it had been trained. More graphically, the dog started to salivate at the bell without the visual stimulus, even when the stimulus was degraded.
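The vectors and the 13-iteration figure above come from Luger's worked example; the snippet below is only a minimal sketch of the unsupervised update ΔW = c * f(X, W) * X, under the assumption that f(X, W) is taken as the sign of the network response W · X. The function name train_hebbian and the learning constant of 0.2 are invented for illustration.

import numpy as np

# A minimal sketch (not Luger's program) of the unsupervised update
# delta_W = c * f(X, W) * X, taking f(X, W) as the sign of the response W . X.
# The starting vectors loosely follow the worked example discussed above.

def train_hebbian(w, x, c=0.2, iterations=13):
    """Repeatedly reinforce the weights in the direction of the input pattern."""
    w = np.array(w, dtype=float)
    x = np.array(x, dtype=float)
    for _ in range(iterations):
        response = np.sign(np.dot(w, x))   # f(X, W): sign of the network response
        w += c * response * x              # delta_W = c * f(X, W) * X
    return w

w = train_hebbian([1.0, -1.0, 1.0], [1.0, -1.0, 1.0])
print(w)                                      # the weights keep growing toward the pattern
print(np.sign(np.dot(w, [1.0, -1.0, 0.0])))   # a degraded stimulus still evokes a +1 response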

B. Supervised Learning


Supervised Hebbian Learning uses the same concepts as described earlier, with a small but powerful change. The weight is adjusted to guide the learning to the desired action or solution. This method starts in the same way as the unsupervised training, but a variable D, the desired output vector, is introduced into the process:

ΔW = c * f(X, W) * X   becomes   ΔW = c * D * X,   and   ΔW_jk = c * d_k * x_j,

then ΔW = c * Y * X, where Y * X is the outer vector product (matrix multiplication). Training the network,

W^(t+1) = W^t + c * Y_t * X_t,

which expands to

W^(t+1) = W^0 + c * (Y_1 * X_1 + Y_2 * X_2 + … + Y_t * X_t),

where W^0 is the initial weight configuration.
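The sum-of-outer-products form above lends itself to a direct sketch. The Python below is a minimal illustration, not a program from the paper or from Luger; the function name, learning constant, and example vectors are assumptions made for demonstration.

import numpy as np

# A minimal sketch (not from the paper) of supervised Hebbian learning as a sum
# of outer products: W = W0 + c * sum_t (Y_t outer X_t). Each stored input X_t
# then tends to reproduce its desired output Y_t.

def train_supervised_hebbian(inputs, targets, c=1.0):
    """Accumulate delta_W = c * Y * X (outer product) over all training pairs."""
    w = np.zeros((len(targets[0]), len(inputs[0])))   # W0: initial weight configuration
    for x, y in zip(inputs, targets):
        w += c * np.outer(y, x)                       # delta_W = c * Y * X
    return w

inputs  = [np.array([1, -1, 1]), np.array([-1, 1, -1])]
targets = [np.array([1, -1]),    np.array([-1, 1])]
W = train_supervised_hebbian(inputs, targets)
print(np.sign(W @ inputs[0]))   # recalls the first desired output: [ 1. -1.]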




Managing the weights provides significant control over the behavior of the processes. While the mathematics is interesting, the application of these algorithms provides real insight into the brilliance of these discoveries.

III. Application of Hebbian Learning Rules


A. Unsupervised Learning and Zip Code Identification


One variation of the use of the Hebbian Learning model is pattern identification, including handwritten digit recognition. In this example, Sven Behnke of the Free University of Berlin generated a neural abstraction pyramid using the iterative techniques. This parallels the pattern recognition done by the visual cortex. Each layer is a 2-dimensional representation of the image. The columns of the pyramid are formed by the overlapping fields from the arrays. Using this technique, the system is able to learn the components that make up each of the digits from 0 to 9 that might appear in a German zip code. After the initial learning has taken place, similar numbers, in a different order, are passed through the processor to test the error rate. In this study, the construction of the weights for the Neural Abstraction Pyramid led to very high recognition of digits in the test case. Behnke suggests that further work can be performed using this technique on more complex images. [Behnke]


B. Neural Networks and Optical Character Recognition

In his paper, P. Patrick van der Smagt applied different neural network techniques to optical character recognition. Using nearest neighbor (Hebbian), feed-forward, Hopfield network and competitive (Kohonen) learning, he examines the characteristics of the images and the quality of the results. While the nearest neighbor technique seemed to perform the best, all methods had the following advantages: automatic training and retraining, graceful degradation, robust performance and good resource use (parallel processing and reduced storage). [van der Smagt]


C. Hebbian Learning and VLSI

In this example, Hafliger and Mahowald describe the effect of spike-based normalizing Hebbian learning on CMOS chips. As in the case of the neurons in the brain acting upon each other, the proximity of the circuits within the VLSI chip affects its behavior and performance and how it can be programmed [Hafliger]. NASA’s Jet Propulsion Laboratory described a similar experience [Assad].

IV. Conclusion

Imitating the Hebbian principles of learning as they apply to the brain, the computer scientist has the opportunity to build models to evaluate patterns or affect the behavior of electronics in a very effective and efficient manner. As recent research indicates, the opportunities for these techniques are still evolving. Modifications to the methodology, as well as the combination of multiple approaches, can lead to much higher recognition rates in the areas of pattern evaluation and computer chip creation.





References

Assad, Christopher and Kewley, David, “Analog VLSI Circuits for Hebbian Learning in Neural Networks,” NPO-20965, August 2001. http://www.nasatech.com/Briefs/Aug01/NPO20965.html

Becker, Suzanna, Lecture 3: Learning and Memory, Fall 2002, McMaster University, Ontario, Canada. http://www.psychology.mcmaster.ca/3BN3/GazCh4PIModels.PDF

Behnke, Sven, “Hebbian Learning and Competition in the Neural Abstraction Pyramid,” Free University of Berlin, Institute of Computer Science, October 10, 1999. http://page.inf.fu-berlin.de/~behnke/papers/ijcnn99/ijcnn99.html

Bollen, J. and Heylighen, F. (2001), “Learning Webs,” November 16, 2001. http://pespmc1.vub.ac.be/LEARNWEB.html, http://pespmc1.vub.ac.be/ADHYPEXP.html

French, Bonnie M., “Learning Rule,” University of Alberta Cognitive Science Dictionary, February 1997. http://www.psych.ualberta.ca/~mike/Pearl_Street/Dictionary/contents/L/learning_rule.html

Harnad, Steven, “D. O. Hebb: Father of Cognitive Psychobiology,” Behavioral and Brain Sciences, 1985. http://cogsci.soton.ac.uk/~harnad/Archive/hebb.html, http://www.princeton.edu/~harnad/hebb.html

Hafliger, Philipp and Mahowald, Misha (1999), “Spike based normalizing Hebbian learning in an analog VLSI artificial neuron,” Learning in Silicon, ed. Gert Cauwenberghs. Also in Kluwer Academics Journal on Analog Integrated Circuits and Signal Processing (18/2): Special Issue on Learning in Silicon. http://www.ifi.uio.no/~halfiger/aicsp99_abstract.html

Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.


Howe, Michael and Miikkulainen, Risto (2000), “Hebbian Learning and Temporary Storage in the Convergence-Zone Model of Episodic Memory,” Neurocomputing 32-33: 817-821. http://www.cs.utexas.edu/users/nn/pages/publications/memory.html

Klein, Raymond, “The Hebb Legacy,” Dalhousie University, Canada. http://web.psych.ualberta.ca/~bbcs99/hebb%20legacy.html, http://www_mitpress.edu/MITECS/work/klein1_r.html

Luger, George F. (2002). Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th Edition. “Hebbian Coincidence Learning,” pp. 446-456. Addison-Wesley, Essex, England.

McCarthy, J. and others, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” August 31, 1955. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

Miller, Kenneth D. and MacKay, David J. C., “The Role of Constraints in Hebbian Learning,” Neural Computation, 1994. http://www.inference.phy.cam.ac.uk/mackay/abstracts/constraints.html


Orr, Genevieve, “Lecture Notes: CS-449 Neural Networks,” Willamette University, Fall 1999. http://www.willamette.edu/~gorr/classes/cs449/intro.html




Raicevic, Peter and Johansson, Christopher, “Biological Learning in Neural Networks,” Fran beteende till cognition, December 2001. (Paper submitted in partial fulfillment of course requirements.)

Van der Smagt, P. Patrick, “A Comparative Study of Neural Network Algorithms Applied to Optical Character Recognition,” ACM, 089791-32-8/90/007/1037, 1990, pp. 1037-1044.

Other Sources:

http://140.113.216.56/course/NN2002_BS/slides/07hebbian.pdf

http://web.bryant.edu/~bblais/slides/job_talk/tsld023.html