Spikes, Decisions, Actions

The dynamical foundations of neuroscience


Valance WANG

Computational Biology and Bioinformatics, ETH Zurich

The last meeting


Higher-dimensional linear dynamical systems


General solution


Asymptotic stability


Oscillation


Delayed feedback


Approximation and simulation

Outline


Chapter 6. Nonlinear dynamics and bifurcations


Two-neuron networks


Negative feedback: a divisive gain control


Positive feedback: a short term memory circuit


Mutual Inhibition: a winner-take-all network


Stability of steady states


Hysteresis and Bifurcation


Chapter 7. Computation by excitatory and inhibitory networks


Visual search by winner-take-all network


Short term memory by Wilson-Cowan cortical dynamics





Chapter 6. Two-neuron networks

[Diagram: three two-neuron motifs, each driven by external input]

Negative feedback

Positive feedback

Mutual inhibition

Two-neuron networks


General form (in the absence of stimulus input):

$\dfrac{dx_1}{dt} = f_1(x_1, x_2)$

$\dfrac{dx_2}{dt} = f_2(x_1, x_2)$


Reading the current state $x_1, x_2$ as input to the update functions $f_1(x_1, x_2)$, $f_2(x_1, x_2)$


Steady states:

$f_1(x_1, x_2) = 0$

$f_2(x_1, x_2) = 0$
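
As a minimal illustration of this general form (not part of the lecture), the sketch below integrates an arbitrary two-neuron system with forward Euler; the specific $f_1, f_2$ used here are placeholder linear dynamics.

```python
# Sketch: integrate a generic two-neuron system dx1/dt = f1(x1, x2),
# dx2/dt = f2(x1, x2) with forward Euler. The f1, f2 below are placeholders.
import numpy as np

def simulate(f1, f2, x0, dt=0.1, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x1, x2 = x
        x = x + dt * np.array([f1(x1, x2), f2(x1, x2)])
    return x

# Placeholder linear dynamics that decay to the steady state f1 = f2 = 0
final = simulate(lambda x1, x2: -x1 + 0.5 * x2,
                 lambda x1, x2: -x2 + 0.5 * x1,
                 [1.0, -2.0])
print(final)   # close to the steady state (0, 0)
```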

Negative feedback: a divisive gain control


In retina,


Light → photoreceptors → bipolar cells → ganglion cells → optic nerves

Amacrine cell



This forms a relay chain of information


To stabilize the representation of information, bipolar cells receive negative feedback from amacrine cells



Negative feedback: a divisive gain control


In retina,


Negative feedback: a divisive gain control







Equations:

$\dfrac{dB}{dt} = \dfrac{1}{\tau_B}\left(-B + \dfrac{L}{1+A}\right)$

$\dfrac{dA}{dt} = \dfrac{1}{\tau_A}\left(-A + 2B\right)$

[Diagram: light L drives bipolar cell B; amacrine cell A feeds back onto B]


Equations:

$\dfrac{dB}{dt} = \dfrac{1}{\tau_B}\left(-B + \dfrac{L}{1+A}\right)$

$\dfrac{dA}{dt} = \dfrac{1}{\tau_A}\left(-A + 2B\right)$


Nullclines:

$-B + \dfrac{L}{1+A} = 0$

$-A + 2B = 0$


Equilibrium point:

$B_{eq} = \dfrac{-1 + \sqrt{1+8L}}{4}$

$A_{eq} = 2B_{eq}$
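
A quick numerical cross-check of this equilibrium (a sketch, not from the slides; it assumes the light level $L = 10$ used in the worked example later on):

```python
# Find the steady state of the gain-control circuit numerically and compare
# with the closed-form expression above. L = 10 is the value used later.
import numpy as np
from scipy.optimize import fsolve

L = 10.0

def nullclines(y):
    B, A = y
    return [-B + L / (1.0 + A),   # dB/dt = 0
            -A + 2.0 * B]         # dA/dt = 0

B_eq = (-1.0 + np.sqrt(1.0 + 8.0 * L)) / 4.0
print(fsolve(nullclines, [1.0, 1.0]))   # -> [2. 4.]
print(B_eq, 2.0 * B_eq)                 # -> 2.0 4.0
```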



Linear stability of steady states


Introduction to the Jacobian:

Given

$\dfrac{d}{dt}\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} f_1(x_1, \dots, x_n) \\ \vdots \\ f_n(x_1, \dots, x_n) \end{pmatrix}$

the Jacobian is

$\dfrac{\partial f(x_1, \dots, x_n)}{\partial (x_1, \dots, x_n)} = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{pmatrix}$


Example: given our update functions

$f_1(B, A) = \dfrac{1}{\tau_B}\left(-B + \dfrac{L}{1+A}\right)$

$f_2(B, A) = \dfrac{1}{\tau_A}\left(-A + 2B\right)$

the Jacobian is

$\begin{pmatrix} \dfrac{\partial f_1}{\partial B} & \dfrac{\partial f_1}{\partial A} \\[4pt] \dfrac{\partial f_2}{\partial B} & \dfrac{\partial f_2}{\partial A} \end{pmatrix} = \begin{pmatrix} -\dfrac{1}{\tau_B} & -\dfrac{L}{\tau_B (1+A)^2} \\[4pt] \dfrac{2}{\tau_A} & -\dfrac{1}{\tau_A} \end{pmatrix}$
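
The same Jacobian can be checked symbolically; a small sketch using sympy (an illustration, not part of the lecture):

```python
# Verify the Jacobian of the gain-control system symbolically.
import sympy as sp

B, A, L, tau_B, tau_A = sp.symbols('B A L tau_B tau_A', positive=True)
f1 = (-B + L / (1 + A)) / tau_B
f2 = (-A + 2 * B) / tau_A

J = sp.Matrix([f1, f2]).jacobian([B, A])
sp.pprint(sp.simplify(J))
# [[-1/tau_B,  -L/(tau_B*(A + 1)**2)],
#  [ 2/tau_A,  -1/tau_A            ]]
```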


Linear stability of steady states

Linear stability of steady states


Proof:

Our equations:

$\dfrac{dB}{dt} = f_1(B, A)$

$\dfrac{dA}{dt} = f_2(B, A)$

Apply a small perturbation to the steady state, $|u|, |v| \ll 1$, and take this point as the initial condition:

$B(0) = B_{eq} + u(0)$

$A(0) = A_{eq} + v(0)$

where $u(t), v(t)$ represent the deviation from the steady state.


Proof (cont.):

Plug in and expand to first order around the steady state:

$\dfrac{dB}{dt} = \dfrac{d}{dt}\big(B_{eq} + u\big) = \dfrac{du}{dt} = f_1(B_{eq}+u,\, A_{eq}+v) \approx f_1(B_{eq}, A_{eq}) + \dfrac{\partial f_1}{\partial B}\Big|_{(B_{eq},A_{eq})} u + \dfrac{\partial f_1}{\partial A}\Big|_{(B_{eq},A_{eq})} v$

$\dfrac{dA}{dt} = \dfrac{d}{dt}\big(A_{eq} + v\big) = \dfrac{dv}{dt} = f_2(B_{eq}+u,\, A_{eq}+v) \approx f_2(B_{eq}, A_{eq}) + \dfrac{\partial f_2}{\partial B}\Big|_{(B_{eq},A_{eq})} u + \dfrac{\partial f_2}{\partial A}\Big|_{(B_{eq},A_{eq})} v$

At the steady state $f_1(B_{eq}, A_{eq}) = f_2(B_{eq}, A_{eq}) = 0$, so only the linear terms remain.


Finally

$\dfrac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \dfrac{\partial f_1}{\partial B} & \dfrac{\partial f_1}{\partial A} \\[4pt] \dfrac{\partial f_2}{\partial B} & \dfrac{\partial f_2}{\partial A} \end{pmatrix}_{(B_{eq},\,A_{eq})} \begin{pmatrix} u \\ v \end{pmatrix}$

Then use the eigenvalues of this Jacobian to determine the asymptotic behavior of the perturbation.
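
In code, the eigenvalue test reads as follows (a sketch; the example matrices are arbitrary):

```python
# Classify a steady state from the eigenvalues of its Jacobian:
# all real parts negative  =>  asymptotically stable.
import numpy as np

def classify(J):
    eig = np.linalg.eigvals(np.asarray(J, dtype=float))
    if np.all(eig.real < 0):
        return "asymptotically stable"
    if np.any(eig.real > 0):
        return "unstable"
    return "marginal (linearization inconclusive)"

print(classify([[-1.0, 2.0], [0.0, -3.0]]))   # -> asymptotically stable
print(classify([[ 0.5, 0.0], [0.0, -1.0]]))   # -> unstable (saddle)
```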

Negative feedback: a divisive gain control


Equations:

$\dfrac{dB}{dt} = \dfrac{1}{10}\left(-B + \dfrac{10}{1+A}\right)$

$\dfrac{dA}{dt} = \dfrac{1}{10}\left(-A + 2B\right)$


Fixed point: $(B_{eq}, A_{eq}) = (2, 4)$


Stability analysis:

Jacobian at $(2, 4)$: $\begin{pmatrix} -\dfrac{1}{10} & -\dfrac{1}{25} \\[4pt] \dfrac{1}{5} & -\dfrac{1}{10} \end{pmatrix}$

Eigenvalues $\lambda = -0.1 \pm 0.089\,i$ ⇒ asymptotically stable


Unique stable fixed point ⇒ our fixed point is a «global attractor»
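
A numerical companion to this slide (a sketch assuming $\tau = 10$, $L = 10$): it reproduces the eigenvalues at (2, 4) and shows trajectories from several starting points converging to the same fixed point, consistent with the global-attractor claim.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, L = 10.0, 10.0

# Jacobian at the fixed point (B, A) = (2, 4)
J = np.array([[-1.0 / tau, -L / (tau * (1.0 + 4.0) ** 2)],
              [ 2.0 / tau, -1.0 / tau]])
print(np.linalg.eigvals(J))   # -> approx. -0.1 +/- 0.089j

def rhs(t, y):
    B, A = y
    return [(-B + L / (1.0 + A)) / tau, (-A + 2.0 * B) / tau]

for y0 in ([0.0, 0.0], [5.0, 1.0], [1.0, 8.0]):
    sol = solve_ivp(rhs, (0.0, 300.0), y0)
    print(y0, "->", np.round(sol.y[:, -1], 3))   # all end near (2, 4)
```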

Two-neuron networks

[Diagram: three two-neuron motifs, each driven by external input]

Negative feedback

Positive feedback

Mutual inhibition

A short-term memory circuit by positive feedback


In monkeys’ prefrontal cortex

A short-term memory circuit by positive feedback


First, let’s analyze the behavior of the system in the absence of external stimulus





Equations:

$\dfrac{dE_1}{dt} = \dfrac{1}{\tau}\left(-E_1 + S(3E_2)\right)$

$\dfrac{dE_2}{dt} = \dfrac{1}{\tau}\left(-E_2 + S(3E_1)\right)$

[Diagram: E1 and E2 excite each other]


A sigmoidal activation function:

$S(P) = \begin{cases} \dfrac{100\,P^2}{120^2 + P^2} & P \ge 0 \\[4pt] 0 & P < 0 \end{cases}$

P: stimulus strength


S: firing rate
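
As a small sketch, this activation function in code (the 120 is the semi-saturation constant on the slide; 100 is the maximum firing rate):

```python
import numpy as np

def S(P, semi_saturation=120.0, max_rate=100.0):
    """Sigmoidal (Naka-Rushton-type) activation: 0 for negative input."""
    P = np.asarray(P, dtype=float)
    out = max_rate * P ** 2 / (semi_saturation ** 2 + P ** 2)
    return np.where(P >= 0.0, out, 0.0)

print(S([0.0, 60.0, 120.0, 240.0]))   # -> [ 0. 20. 50. 80.]
```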



A short-term memory circuit by positive feedback


Equations:

$\dfrac{dE_1}{dt} = \dfrac{1}{\tau}\left(-E_1 + S(3E_2)\right)$

$\dfrac{dE_2}{dt} = \dfrac{1}{\tau}\left(-E_2 + S(3E_1)\right)$


Nullclines:

$E_1 = S(3E_2) = \dfrac{100\,(3E_2)^2}{120^2 + (3E_2)^2}$

$E_2 = S(3E_1) = \dfrac{100\,(3E_1)^2}{120^2 + (3E_1)^2}$


Equilibrium points:

$9E_1^3 - 900E_1^2 + 120^2 E_1 = 0$

$E_{1,eq} = 0,\ 20,\ 80$

$E_{2,eq}$ can be obtained similarly


Equilibrium points:

$(E_{1,eq}, E_{2,eq}) = (0,0),\ (20,20),\ (80,80)$

Stability analysis (with $\tau = 20$):

$(0,0)$: Jacobian $= \begin{pmatrix} -0.05 & 0 \\ 0 & -0.05 \end{pmatrix}$, $\lambda = -0.05, -0.05$ ⇒ asymptotically stable

$(20,20)$: Jacobian $= \begin{pmatrix} -0.05 & 0.08 \\ 0.08 & -0.05 \end{pmatrix}$, $\lambda = +0.03, -0.13$ ⇒ saddle point (unstable)

$(80,80)$: Jacobian $= \begin{pmatrix} -0.05 & 0.02 \\ 0.02 & -0.05 \end{pmatrix}$, $\lambda = -0.07, -0.03$ ⇒ asymptotically stable
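
A sketch that reproduces these numbers, assuming $\tau = 20$ (consistent with the $-0.05$ diagonal entries above):

```python
import numpy as np

tau, sigma = 20.0, 120.0

# Symmetric equilibria solve 9*E^3 - 900*E^2 + 120^2 * E = 0
print(np.sort(np.roots([9.0, -900.0, sigma ** 2, 0.0]).real))   # -> [ 0. 20. 80.]

def dS(P):
    """Derivative of S(P) = 100 P^2 / (sigma^2 + P^2) for P >= 0."""
    return 200.0 * P * sigma ** 2 / (sigma ** 2 + P ** 2) ** 2

for E in (0.0, 20.0, 80.0):
    g = 3.0 * dS(3.0 * E)                     # d/dE of S(3E)
    J = np.array([[-1.0, g], [g, -1.0]]) / tau
    print((E, E), np.round(np.linalg.eigvals(J), 3))
# (0,0) and (80,80): both eigenvalues negative (stable)
# (20,20): one positive eigenvalue (saddle)
```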

 

Hysteresis and Bifurcation


The term ‘hysteresis’ is derived from Greek, meaning ‘to lag behind’.

In the present context, this means that the present state of our neural network is determined not just by the present input, but also by the states and inputs in its history (“path-dependent”).

Hysteresis and Bifurcation


Suppose we apply a brief stimulus K to the neural network.

The steady states of E1 become

$E_1 = \dfrac{100\,(3E_1 + K)^2}{120^2 + (3E_1 + K)^2}$

Demo

[Diagram: E1 and E2 excite each other; K is an external stimulus]
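
A simulation sketch of the hysteresis effect (assumptions: $\tau = 20$, the sigmoid S above, and K delivered to E1 only, which is one plausible reading of the diagram):

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, sigma = 20.0, 120.0
S = lambda P: np.where(P >= 0.0, 100.0 * P ** 2 / (sigma ** 2 + P ** 2), 0.0)

def rhs(t, y, K_amp, K_end):
    E1, E2 = y
    K = K_amp if t < K_end else 0.0          # brief stimulus pulse
    return [(-E1 + S(3.0 * E2 + K)) / tau,
            (-E2 + S(3.0 * E1)) / tau]

# Start at rest (0, 0); apply K = 120 for the first 200 time units, then remove it.
sol = solve_ivp(rhs, (0.0, 800.0), [0.0, 0.0], args=(120.0, 200.0), max_step=1.0)
print(np.round(sol.y[:, -1], 1))   # remains near the high state (~80, 80): memory
```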

Hysteresis and Bifurcation


Due to a change in the parameter value K, a pair of equilibrium points may appear or disappear. This phenomenon is known as a bifurcation.

Two-neuron networks

[Diagram: three two-neuron motifs, each driven by external input]

Negative feedback

Positive feedback

Mutual inhibition

Mutual inhibition: a winner-take-all neural network for decision making

$\dfrac{dE_1}{dt} = \dfrac{1}{\tau}\left(-E_1 + S(K_1 - 3E_2)\right)$

$\dfrac{dE_2}{dt} = \dfrac{1}{\tau}\left(-E_2 + S(K_2 - 3E_1)\right)$

Demo

[Diagram: external inputs K1 and K2 drive E1 and E2, which inhibit each other]
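
A sketch of the winner-take-all decision (assumptions: $\tau = 20$, the same sigmoid S, and example inputs with K1 > K2):

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, sigma = 20.0, 120.0
S = lambda P: np.where(P >= 0.0, 100.0 * P ** 2 / (sigma ** 2 + P ** 2), 0.0)

def wta(t, y, K1, K2):
    E1, E2 = y
    return [(-E1 + S(K1 - 3.0 * E2)) / tau,
            (-E2 + S(K2 - 3.0 * E1)) / tau]

sol = solve_ivp(wta, (0.0, 800.0), [0.0, 0.0], args=(70.0, 55.0), max_step=1.0)
print(np.round(sol.y[:, -1], 1))   # E1 (stronger input) wins; E2 is suppressed
```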

Chapter 6. Two-neuron networks

[Diagram: three two-neuron motifs, each driven by external input]

Negative feedback

Positive feedback

Mutual inhibition

Chapter 7. Multiple-neuron networks


Visual search by a winner-take-all network


Wilson-Cowan cortical dynamics


Visual search by winner-take-all network


Visual search


Visual search by winner-take-all network


An N+1 neuron network; each neuron receives perceptual input

T for target, D for distractor


$\tau \dfrac{dE_T}{dt} = -E_T + S\big(T - 3N E_D\big)$

$\tau \dfrac{dE_D}{dt} = -E_D + S\big(D - 3(N-1)E_D - 3E_T\big)$

[Diagram: one target neuron E_T (input T) and N distractor neurons E_D (input D), all mutually inhibitory]
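
A simulation sketch of the search model, using the symmetric reduction (one variable for all N identical distractors). $\tau = 20$, the sigmoid S, and N = 8 are assumed example values; T and D follow the easier case below (target 80, distractors 79).

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, sigma, N = 20.0, 120.0, 8          # N distractors (example value)
S = lambda P: np.where(P >= 0.0, 100.0 * P ** 2 / (sigma ** 2 + P ** 2), 0.0)

def search(t, y, T, D):
    E_T, E_D = y                        # E_D: common rate of all distractor neurons
    return [(-E_T + S(T - 3.0 * N * E_D)) / tau,
            (-E_D + S(D - 3.0 * (N - 1) * E_D - 3.0 * E_T)) / tau]

sol = solve_ivp(search, (0.0, 5000.0), [0.0, 0.0], args=(80.0, 79.0), max_step=1.0)
print("E_T, E_D:", np.round(sol.y[:, -1], 2))
# The target neuron should come out dominant; with D = 79.8 the distractors are
# more similar and the competition takes considerably longer.
```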


Stimulus to target neuron: 80, to distractor neurons: 79.8


Stimulus to target neuron: 80, to distractor neurons: 79



Further, this model can be extended to higher-level cognitive decisions. It is a common experience that decisions are more difficult to make, and take longer, when the number of appealing alternatives increases.

Once a decision is definitely made, however, humans are reluctant to change it (hysteresis in cognitive processing!).

Wilson-Cowan model (1973)


Cortical neurons may be divided into two classes:

excitatory (E), usually pyramidal neurons

and inhibitory (I), usually interneurons


All forms of interaction occur between these classes:

E → E, E → I, I → E, I → I


Recurrent excitatory connections are local, while inhibitory connections are long-range







A one-dimensional spatial-temporal model

$\tau \dfrac{\partial E(x,t)}{\partial t} = -E(x,t) + S_E\!\left(\sum_{x'} w_{EE}(x-x')\,E(x',t) - \sum_{x'} w_{IE}(x-x')\,I(x',t) + P(x)\right)$

$\tau \dfrac{\partial I(x,t)}{\partial t} = -I(x,t) + S_I\!\left(\sum_{x'} w_{EI}(x-x')\,E(x',t) - \sum_{x'} w_{II}(x-x')\,I(x',t) + Q(x)\right)$


E(x,t), I(x,t) := mean firing rates of excitatory and inhibitory neurons

x := position

P, Q := external inputs

w_EE, w_IE, w_EI, w_II := weights of interactions








Spatial exponential decay is determined by, e.g.,

$w_{EE}(x - x') = b_{EE}\,\exp\!\left(-\dfrac{|x - x'|}{\sigma_{EE}}\right)$


x := position of input

x’ := position away from the input


Sigmoidal activation function

$S(P) = \dfrac{100\,P^2}{\theta^2 + P^2}$

P := stimulus input


Sigmoidal curve with respect to P
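
A structural sketch of a discretized 1-D Wilson-Cowan simulation. All parameter values here (grid spacing, weights b, space constants σ, time constants) are illustrative assumptions rather than the lecture's numbers; whether a localized bump of activity persists after the stimulus (the short-term-memory effect shown next) depends on those choices.

```python
import numpy as np

nx, dx = 200, 10.0                     # 200 positions, 10 um apart (assumed)
x = np.arange(nx) * dx
tau_E = tau_I = 20.0                   # time constants (assumed)
theta = 120.0                          # semi-saturation of the sigmoid

def S(P):
    return np.where(P > 0.0, 100.0 * P ** 2 / (theta ** 2 + P ** 2), 0.0)

def kernel(b, sig):
    """Exponential spatial decay w(x - x') = b * exp(-|x - x'| / sigma)."""
    d = np.abs(x[:, None] - x[None, :])
    return b * np.exp(-d / sig) * dx

W_EE = kernel(1.0, 50.0)               # local recurrent excitation (assumed)
W_IE = kernel(0.9, 200.0)              # long-range inhibition onto E (assumed)
W_EI = kernel(1.2, 50.0)
W_II = kernel(0.3, 200.0)

E = np.zeros(nx); I = np.zeros(nx); Q = np.zeros(nx)
dt = 1.0
for step in range(2000):
    P = np.zeros(nx)
    if step * dt < 10.0:               # brief, spatially localized stimulus
        P[np.abs(x - x.mean()) < 50.0] = 50.0
    drive_E = W_EE @ E - W_IE @ I + P
    drive_I = W_EI @ E - W_II @ I + Q
    E = E + dt * (-E + S(drive_E)) / tau_E
    I = I + dt * (-I + S(drive_I)) / tau_I

print("peak excitatory rate at the end:", round(float(E.max()), 1))
```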


Example: short term memory in prefrontal cortex


A brief stimulus = 10 ms, 100 µm







A brief stimulus = 10 ms, 1000 µm

Wilson-Cowan model


Examples: short term memory, constant stimulus









Summary of Chapter 7


Winner-take-all network


Visual search is slowed as the number of irrelevant but similar objects (distractors) increases



Wilson-Cowan model


A one-dimensional spatial-temporal dynamical system


Applications:


Short term memory in prefrontal cortex