Spikes, decisions, and actions: dynamical foundations of neuroscience
HUGH R. WILSON
Fig. 6.7 Response of (6.14) to a brief, 200 ms stimulus coinciding with the narrow peak on the upper left. Recurrent excitation maintains activity of both E(t) cells at a high level, but activity slowly decays as neural adaptation A(t) builds up. After 5000 ms a sudden loss of neural activity occurs at a bifurcation.
current value of the adaptation variables A1 and A2. These adaptation variables change
200 times more slowly than the spike rates, so they can be viewed as parameters that
slowly change the structure of the equilibrium points of the system. As illustrated in Fig.
6.7, the response of the system follows the slowly changing equilibrium points for more
than 5000ms. Then a catastrophe occurs: one asymptotically stable equilibrium joins with
the unstable saddle point and vanishes, so the neural response rapidly drops to zero.
Reference to Fig. 6.5 shows that the adaptation variable functions like a slowly varying
input driving the system, once excited, back through the bifurcation point at A.
The mathematical notion of a catastrophe or bifurcation also underlies the geology of
plate tectonics and earthquakes. As pressure builds up on tectonic plates, they compress
only slightly for a long time, so the distance between points on opposite sides of a fault line
changes little. At some point, however, the pressure becomes great enough to overcome
frictional forces, and the plates rapidly slip to a new equilibrium position, thus producing
an earthquake, which can be a true catastrophe in the vernacular sense! The mathematical
concepts analogous to those in this neural short-term memory example underlie geophysical catastrophes as well.
6.7 Competition and neural decisions
So far we have analyzed two nonlinear neural networks: one for divisive gain control and
one for short-term memory. The former involved a negative feedback loop, while the
latter incorporated mutual excitation. A further possible interaction between two neurons is mutual inhibition, which will be examined here. As we shall see, the state space of
two mutually inhibitory neurons is similar to that of the memory network in having two
asymptotically stable steady states separated by an unstable saddle point. However, each
steady state in this case is defined by activity in one neuron and complete inhibition of the
other, so this network makes one of two mutually exclusive decisions based on the relative
strengths of inputs to the two neurons.
Consider the following equations:
dE1/dt = (1/τ)(-E1 + S(K1 - 3E2))

dE2/dt = (1/τ)(-E2 + S(K2 - 3E1))

S(x) = 100x²/(120² + x²) for x ≥ 0;  S(x) = 0 for x < 0        (6.18)
K1 and K2 here are the stimuli to the two neurons in the network, and S(x) is again the Naka-Rushton function from (2.11). Assume τ = 20 ms. Each neuron inhibits the other subtractively with a synaptic strength of -3. Explore the responses of this network by running WTA2.m using various combinations of excitatory inputs K1 and K2. Above a minimum level of excitation (about 50) and assuming initial conditions with all variables zero, the system always switches to an equilibrium point at which the more strongly stimulated neuron is active and the other neuron has been shut off by inhibition. This is the simplest example of a winner-take-all (WTA) network, so named because the neuron receiving the strongest stimulus wins the inhibitory competition and in turn suppresses all of its competitors.
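The WTA2.m script itself is not reproduced here, but the winner-take-all behavior can be sketched in a few lines of Python (a stand-in for the MatLab script, not the author's code), integrating (6.18) by forward Euler from zero initial conditions; the input values and step size are illustrative choices:

```python
def S(x):
    """Naka-Rushton function from (6.18): M = 100, sigma = 120, N = 2."""
    return 100.0 * x**2 / (120.0**2 + x**2) if x > 0 else 0.0

def wta(K1, K2, tau=20.0, dt=0.1, t_end=1000.0):
    """Forward-Euler integration of (6.18), starting from E1 = E2 = 0."""
    E1 = E2 = 0.0
    for _ in range(int(t_end / dt)):
        dE1 = (-E1 + S(K1 - 3.0 * E2)) / tau
        dE2 = (-E2 + S(K2 - 3.0 * E1)) / tau
        E1 += dt * dE1
        E2 += dt * dE2
    return E1, E2

# The more strongly stimulated neuron wins and shuts the other off.
E1, E2 = wta(120.0, 110.0)
print(round(E1, 1), round(E2, 1))  # -> 50.0 0.0
```

Swapping the inputs (110, 120) reverses the outcome, and dropping both inputs below the minimum level of excitation leaves both neurons at low activity.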
Let us analyze (6.18) in the case K1 = K2 = 120. Due to the competitive inhibition, one steady state is E1 = S(K) = 50 and E2 = S(K - 3 × 50) = S(-30) = 0. Similarly, the reader can easily verify that E1 = 0, E2 = 50 is also an equilibrium point. If you run WTA2.m, you will see that the isoclines intersect at a third equilibrium point in addition to the two above. From symmetry considerations you might expect this to occur where E1 = E2, and this is correct. If one sets E1 = E2 in either of the isocline equations in (6.18), the MatLab roots function gives the solution E1 = E2 = 20.
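The same symmetric root can be checked without MatLab's roots function. On the line E1 = E2 = E, both equilibrium conditions in (6.18) reduce to f(E) = S(120 - 3E) - E = 0, and since f is monotone decreasing on [0, 40] a simple bisection (sketched here in Python, for illustration) suffices:

```python
def S(x):
    """Naka-Rushton function from (6.18): M = 100, sigma = 120."""
    return 100.0 * x**2 / (120.0**2 + x**2) if x > 0 else 0.0

# On the symmetric line E1 = E2 = E with K1 = K2 = 120, both isocline
# equations in (6.18) reduce to f(E) = S(120 - 3E) - E = 0.
def f(E):
    return S(120.0 - 3.0 * E) - E

# f is monotone decreasing on [0, 40], so bisection converges to the root.
lo, hi = 0.0, 40.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

print(round(0.5 * (lo + hi), 4))  # -> 20.0
```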
As in previous examples, the stability of each steady state must next be determined. As the results for (50, 0) and (0, 50) will be the same, let us just examine the Jacobian matrix at the former state:

    [ -1/τ   -5/(4τ) ]
    [   0     -1/τ   ]        (6.19)

The eigenvalues here are obviously both identical: λ = -1/τ, so (50, 0) and (0, 50) are both asymptotically stable nodes that are critically damped. At (20, 20) we can use (6.10) to evaluate the Jacobian, with the result:

    [ -1/τ      -8/(5τ) ]
    [ -8/(5τ)   -1/τ    ]        (6.20)
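The saddle character of (20, 20) follows directly from (6.20): the eigenvalues of a symmetric 2 × 2 matrix [[a, b], [b, a]] are a + b and a - b. A short numerical check (Python, for illustration) with τ = 20 ms:

```python
tau = 20.0  # ms, from (6.18)

# dS/dx for S(x) = 100 x^2 / (120^2 + x^2), x > 0
def dS(x):
    return 2.0 * 100.0 * 120.0**2 * x / (120.0**2 + x**2) ** 2 if x > 0 else 0.0

# Jacobian entries at (20, 20); the argument of S there is 120 - 3*20 = 60.
a = -1.0 / tau             # diagonal entries
b = -3.0 * dS(60.0) / tau  # off-diagonal entries, equal to -8/(5*tau)

# Eigenvalues of [[a, b], [b, a]] are a + b and a - b.
lam1, lam2 = a + b, a - b
print(round(lam1, 3), round(lam2, 3))  # -> -0.13 0.03
```

One eigenvalue is positive and one negative, so (20, 20) is a saddle point separating the two winner-take-all attractors.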
Fig. 8.4 Schematic of an annulus (gray region) satisfying Theorem 10 but containing three limit cycles. Two
are asymptotically stable (solid curves), but the intervening one (dashed curve) must be unstable. A is an
unstable node or spiral point. Representative trajectory directions are shown by the arrows.
boundaries must all approach limit cycles (not necessarily the same one). If there is more
than one limit cycle, asymptotically stable limit cycles must alternate with unstable limit
cycles. You can convince yourself of this by imagining what would happen to trajectories
originating between two nested, asymptotically stable limit cycles: they would have to be
separated by an unstable limit cycle, which is illustrated schematically in Fig. 8.4.
Although the alternation of asymptotically stable and unstable limit cycles may seem an unlikely occurrence, such cycles are actually predicted by the Hodgkin-Huxley equations, and their existence has been experimentally verified! Armed with Theorems 9
and 10, we are now ready to study limit cycles in two-dimensional neural systems.
8.2 Wilson-Cowan network oscillator
As a first application of these criteria to neural oscillations, let us consider a localized (i.e.
non-spatial) version of the Wilson-Cowan (1972) equations. The equations presented
here are the simplest example of these equations that possesses a limit cycle. Consider a
four-neuron network consisting of three mutually excitatory E neurons which in turn
stimulate one inhibitory I neuron that provides negative feedback onto the three E cells as
depicted on the left in Fig. 8.5. Neural circuits like this are typical of the cortex, where
inhibitory GABA neurons comprise about 25% of the total population of cortical cells
with the rest being mainly excitatory glutamate neurons (Jones, 1995). Thus, the network
in Fig. 8.5 may be thought of as approximating a local cortical circuit module.
Let us simplify the Wilson-Cowan network by assuming that all E neurons receive
identical stimuli and have identical synaptic strengths. Under these conditions we can
invoke symmetry and set E1 = E2 = E3, thereby reducing the number of neurons in the
network, a procedure sometimes termed subsampling (Wallen et al., 1992). This results in
the mathematically equivalent two-neuron network shown on the right in Fig. 8.5. In fact,
we can generalize this argument to any number of mutually excitatory and inhibitory
neurons with identical interconnections, so the key concept is that of recurrent excitation
coupled with recurrent inhibition. Note that by reducing the network to two neurons
(or two neural populations), the recurrent excitation is transformed into equivalent
Fig. 8.5 Neural circuit of a network oscillator (Wilson and Cowan, 1972). Excitatory connections are shown
by arrows and inhibitory by solid circles. The simplified network on the right is mathematically identical to
that on the left by symmetry if all E → E connections have the same strength, etc.
self-excitation by the E neuron. The equations for the spike rates are:
dE/dt = (1/5)(-E + S(1.6E - I + K))

dI/dt = (1/10)(-I + S(1.5E))        (8.2)
The function S in (8.2) is the Naka-Rushton function from (2.11) with N = 2, M = 100, and σ = 30. These equations indicate that the E neuron receives recurrent excitation with synaptic weight 1.6 and also receives subtractive inhibition from the I neuron. The external input that drives the network is K, which is assumed constant. The I neuron receives excitatory input from the E neuron with weight 1.5, and the time constants for E and I are 5 ms and 10 ms respectively. When K = 0, it is easy to verify that E = 0, I = 0 is the only equilibrium point and is asymptotically stable. In an intermediate range of K values the dynamics change, however, and limit cycle oscillations result.
Let us examine the state space of (8.2) in order to prove the existence of limit cycles for
K = 20. The isocline equations are:
E = S(1.6E - I + K)

I = S(1.5E)        (8.3)
The second of these equations is easily plotted in its current form and is shown by the
dashed line in Fig. 8.6. To plot the first isocline, however, we must employ the inverse of
S(x), which is obtained as follows:
S(x) = Mx²/(σ² + x²)

so y = S(x) has the inverse:

x = σ√(y/(M - y))  for 0 ≤ y < M        (8.4)
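As a quick sanity check on (8.4), one can verify numerically that the formula really inverts S. A minimal Python sketch with the parameter values used in this section (M = 100, σ = 30):

```python
import math

M, sigma = 100.0, 30.0

def S(x):
    """Naka-Rushton function for x > 0."""
    return M * x**2 / (sigma**2 + x**2) if x > 0 else 0.0

def S_inv(y):
    """Inverse of S from (8.4), valid for 0 <= y < M."""
    return sigma * math.sqrt(y / (M - y))

# Round-trip check: S(S_inv(y)) should recover y on the valid range.
for y in (5.0, 28.96, 75.0, 99.0):
    assert abs(S(S_inv(y)) - y) < 1e-9
print("inverse verified")
```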
Fig. 8.6 Limit cycle of the Wilson-Cowan (1972) equations (8.2). Results are plotted in the phase plane (above) along with the two isoclines from (8.3) and (8.5). The lower panel plots E(t) (solid line) and I(t) (dashed line) as functions of time.
Therefore, the first isocline in (8.3) becomes:

I = 1.6E + K - σ√(E/(M - E))        (8.5)
As M = 100 and σ = 30, the resulting isoclines for K = 20 are plotted in the E-I state space in Fig. 8.6. Note that there is a unique equilibrium point, which is the solution of (8.3) with the first equation transformed into form (8.5). To solve for the equilibrium, we simply substitute the second equation in (8.3) into (8.5) to get:

M(1.5E)²/(σ² + (1.5E)²) - 1.6E - K + σ√(E/(M - E)) = 0        (8.6)
MatLab provides an easy method for solving (8.6): write a function script for the left-hand side of (8.6) (called WCequilib.m on the disk) and use the command fzero('WCequilib', guess), where guess is a first approximation to the answer. This is all implemented in the MatLab script EquilibriumWC.m, which finds that E = 12.77 at equilibrium, so I = 28.96 from (8.3).
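Without MatLab's fzero, the same root can be found with a simple bracketing search. The Python sketch below (an illustrative stand-in, not the book's WCequilib.m) bisects the left-hand side of (8.6), which is negative at E = 1 and positive at E = 30:

```python
import math

M, sigma, K = 100.0, 30.0, 20.0

def f(E):
    """Left-hand side of (8.6)."""
    return (M * (1.5 * E)**2 / (sigma**2 + (1.5 * E)**2)
            - 1.6 * E - K + sigma * math.sqrt(E / (M - E)))

# Bisection on a bracket containing the root: f(1) < 0 < f(30).
lo, hi = 1.0, 30.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

E = 0.5 * (lo + hi)
I = M * (1.5 * E)**2 / (sigma**2 + (1.5 * E)**2)  # I = S(1.5E) from (8.3)
print(round(E, 2))  # -> 12.77; I then comes out close to 28.96
```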
Given the values of E and I at equilibrium, we can now calculate the Jacobian of (8.2). Using the formula for dS/dx in (6.10):

    [ 0.42   -0.39 ]
    [ 0.32   -0.10 ]  ;    λ = 0.16 ± 0.24i        (8.7)
Thus, the only equilibrium point of the system is an unstable spiral point. We can now use the Poincaré-Bendixson theorem to prove that (8.2) must have at least one asymptotically stable limit cycle. Given the fact that the neural response function satisfies 0 ≤ S < 100, it follows that trajectories can never leave the state space box bounded by 0 and 100. This can be proven by considering the values of both derivatives in (8.2) on the boundaries of this box: dE/dt > 0 when E = 0; dE/dt < 0 when E = 100; dI/dt > 0 when I = 0; and dI/dt < 0 when I = 100. This represents an enormous simplification when dealing with nonlinear dynamics of neurons: spike rates are always bounded by zero below and a maximum value determined by the absolute refractory period. Thus, all trajectories that enter the box 0 ≤ E ≤ 100, 0 ≤ I ≤ 100 must stay within it, and all trajectories must also leave some small neighborhood of the unstable equilibrium point. Therefore, we have created an annulus containing no interior steady states, so by the Poincaré-Bendixson Theorem 10, an asymptotically stable limit cycle must exist. If you run the MatLab simulation WCoscillator.m, you will see that an asymptotically stable limit cycle does indeed exist, and it is plotted in Fig. 8.6 for K = 20. Experimentation with a wide range of initial conditions shows that all trajectories do indeed approach the limit cycle asymptotically. In addition, it is interesting to experiment with other values of K in (8.2) to determine the stimulus range producing limit cycles.
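For readers without the accompanying disk, the qualitative behavior of WCoscillator.m can be reproduced with a short forward-Euler sketch in Python (step size and duration are arbitrary choices, not values from the book):

```python
def S(x):
    """Naka-Rushton function with M = 100, sigma = 30, N = 2."""
    return 100.0 * x**2 / (30.0**2 + x**2) if x > 0 else 0.0

def wilson_cowan(K=20.0, dt=0.01, t_end=400.0):
    """Forward-Euler integration of (8.2); returns the E(t) trace."""
    E = I = 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        dE = (-E + S(1.6 * E - I + K)) / 5.0   # tau_E = 5 ms
        dI = (-I + S(1.5 * E)) / 10.0          # tau_I = 10 ms
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = wilson_cowan()
late = trace[-10000:]  # last 100 ms, well after the initial transient
spread = max(late) - min(late)
print(spread)  # a large persistent swing indicates the limit cycle
```

Varying K in this sketch gives a rough picture of the stimulus range producing limit cycles; for K = 0 the trace instead decays to the stable rest state.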
8.3 FitzHugh-Nagumo equations
The simplest equations that have been proposed for spike generation are the FitzHugh-Nagumo equations. Like the Hodgkin-Huxley equations (see Chapter 9), these equations
have a threshold for generating limit cycles and thus provide a qualitative approximation
to spike generation thresholds. FitzHugh was well aware that his equations did not
provide a detailed model for action potentials but emphasized: 'For some purposes it is useful to have a model of an excitable membrane that is mathematically as simple as possible, even if experimental results are reproduced less accurately' (FitzHugh, 1969).
This simplicity will aid in studying limit cycle oscillations.
The FitzHugh (1961) and Nagumo et al. (1962) equations describe the interaction
between the voltage V across the axon membrane, which is driven by an input current I_input, and a recovery variable R. R may be thought of as mainly reflecting the outward K+ current that results in hyperpolarization of the axon after each spike. As these variables