Network: Computation in Neural Systems
March 2006; 17: 31–41
Excitability changes that complement Hebbian learning
MAIA K. JANOWITZ & MARK C. W. VAN ROSSUM
Institute for Adaptive and Neural Computation, School of Informatics, 5 Forrest Hill, Edinburgh, EH1
2QL, UK
(Received 10 March 2005; revised 23 June 2005; accepted 4 August 2005)
Abstract
Experiments have shown that the intrinsic excitability of neurons is not constant, but varies with
physiological stimulation and during various learning paradigms. We study a model of Hebbian synaptic
plasticity which is supplemented with intrinsic excitability changes. The excitability changes transcend
time delays and provide a memory trace. Periods of selective enhanced excitability can thus assist in
forming associations between temporally separated events, such as occur in trace conditioning. We
demonstrate that simple bidirectional networks with excitability changes can learn trace conditioning
paradigms.
Keywords: Hippocampus, excitability, trace conditioning, Hebbian learning
Introduction
Current theories of learning and memory almost invariably relate to Hebb’s ideas, which
postulate that memory is stored in the synaptic weights and learning is the process that
changes those weights. The advantage of using the synaptic weight rather than the whole
neuron to store information is that there are many more synapses than there are neurons,
and hence the storage capacity of synaptic memory is much higher and storage is connection
specific. However, here we argue that cell-wide changes in excitability can complement synaptic learning.
The intrinsic excitability of a cell determines how much activity results from a given amount of input. The excitability of neurons is not fixed but is regulated (for reviews see
Zhang & Linden 2003; Daoudal & Debanne 2003). During various phases of learning
excitability can change dramatically in both cortical and hippocampal networks (Brons &
Woody 1980; Moyer et al. 1996; Giese et al. 2001). In vitro studies have revealed changes
in both pre-synaptic (Ganguly et al. 2000; Li et al. 2004) and postsynaptic (Daoudal et al.
2002; Xu et al. 2005) excitability after LTP protocols. A recent study, however, showed
that an increase in the excitability can be induced solely by short periods of high activity,
independently of synaptic activation (Cudmore & Turrigiano 2004).
In this study, we integrate these findings and show that transient changes in the excitability
of neurons can assist in Hebbian learning. Although the excitability changes are cell-wide
and thus not synapse specific, subsequent Hebbian learning can be synapse specific. We find
Correspondence: Mark van Rossum, Institute for Adaptive and Neural Computation, School of Informatics, 5
Forrest Hill, Edinburgh, EH1 2QL, UK. Tel: 44 131 6511211. E-mail: [email protected].
© 2006 Taylor & Francis
ISSN: 0954-898X print / ISSN 1361-6536 online DOI: 10.1080/09548980500286797
that as a result, associations can be learned even when the stimuli do not overlap in time, or
when the stimulated nodes are not directly connected. Thus, excitability changes enrich the
repertoire of learning rules.
We apply this learning scheme to trace conditioning. In trace conditioning, a tone is
sounded; after the tone stops, there is a delay of about a second, which is followed by an
air-puff in the eye. With training, animals learn to associate the tone with the unpleasant
air-puff, and will close their eyes before the air-puff arrives. Importantly, the tone and air-puff are not simultaneous, so that a memory trace is essential in order for the association
to develop. Various approaches have been proposed to allow for learning the association of
tone and air-puff, including persistent activity during the delay, and learning rules that span
across time (Rodriguez & Levy 2001). Theoretically, the need for a trace during learning
has been mainly developed in reward learning where action and reward can be separated in
time (Sutton & Barto 1998). Learning associations between temporally related events has
also been suggested as a means to learn invariances in perception (Földiák 1991; Wallis &
Rolls 1996).
However, the biological implementation of the trace is not known. It has been suggested
to stem from the NMDA time-constant, synaptic facilitation or persistent activity (see Wallis
(1998) for a review). The novelty of this study is that we use excitability to provide the memory
trace and bridge temporal gaps. The purpose of this study is to explore how excitability
changes might assist association in simple network models.
Methods
Network and connections
The network consisted of either two or three layers. Each layer typically contained four
nodes; this small number is chosen for convenience and is not a restriction of the network.
The nodes can represent single neurons or small groups thereof. The layer was connected
with excitatory synapses to the next layer in an all-to-all fashion, that is, each node was
connected to each neuron in the neighbouring layers. The connections were bidirectional
and plastic in both directions.
Within each layer, all nodes received mutual (lateral) inhibition; these inhibitory connections cause competition between the nodes within a layer. The weights of the inhibitory
connections were fixed at $w_{\mathrm{inh}} = 0.8$ (self-inhibition was excluded).
Nodes
The activity of each node was given by its firing rate and ranged between 0 and 1. The input
$h_i$ to node $i$ was given by
$$h_i = \sum_j w_{ij}\, r_j \;-\; w_{\mathrm{inh}} \sum_{k \neq i} r_k \;+\; \mathrm{ext}_i \;+\; E_i \qquad (1)$$
The sum over the inputs j represents the input from the adjacent layers and implements the
all-to-all connectivity; the sum over the inhibitory inputs k was over all the nodes within the
layer and implements the lateral inhibition. The input $\mathrm{ext}_i$ represents an external stimulus
to the nodes; it takes the value 1 when the stimulus is turned on, and is zero otherwise. In
the first variant of the model, the firing rate is modelled as a threshold-linear function.
$$\tau \frac{dr_i}{dt} = -r_i + g(h_i - T) \qquad (2)$$
where $g(x<0)=0$, $g(0<x<1)=x$, $g(x>1)=1$. In a second variant of the model, we used a logistic activation function, $\tau\, dr_i/dt = -r_i + 1/[1+\exp(T-h_i)]$.
Time was measured in arbitrary units. The time constant τ determines the dynamics of
the nodes; its value was set to 2 simulation time-steps. In biology, the time constant is on the
order of 10 ms. The threshold T determines a general offset which prevents firing when no
input is given; its value was fixed at T = 0.1.
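To make the discretization concrete, the node dynamics of Equations 1 and 2 can be sketched in a few lines. This is our reading of the Methods, not the authors' code; the function and variable names are ours, and the parameter values (τ = 2 time-steps, T = 0.1, w_inh = 0.8) follow the text:

```python
import numpy as np

TAU = 2.0      # time constant, in simulation time-steps
T = 0.1        # firing threshold
W_INH = 0.8    # fixed lateral inhibition weight

def g(x):
    """Threshold-linear activation: 0 below 0, linear up to 1, then saturating."""
    return np.clip(x, 0.0, 1.0)

def step(r, r_other, W, ext, E):
    """One Euler time-step of Equation 2 for all nodes in one layer.

    r       -- current rates of this layer's nodes
    r_other -- rates of the adjacent layer's nodes
    W       -- W[i, j]: weight from node j in the other layer to node i
    ext     -- external stimulus (1 while on, 0 otherwise)
    E       -- intrinsic excitability (0 or 0.05 per node)
    """
    # Equation 1: excitation from the other layer, lateral inhibition within
    # the layer (self-inhibition excluded), external stimulus, excitability.
    h = W @ r_other - W_INH * (r.sum() - r) + ext + E
    return r + (-r + g(h - T)) / TAU
```

Starting from rest with an external stimulus on one node and no recurrent input, successive steps move that node's rate toward g(1 − T) = 0.9.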
Excitability changes
The novelty in Equation 1 is the excitability Ei of the node. The excitability could take two
values. The default (low) value was zero. When the firing rate increased in one time-step more
than 0.2 units, the excitability switched to its high value, 0.05. The input–output relation for
these two values of the excitability is shown in Figure 1. Note that the excitability change is
rather small, in agreement with the data (Cudmore & Turrigiano 2004). Furthermore, when
the excitability becomes too high, the node might be active even in the absence of input.
With these parameters, this is prevented for the threshold-linear neuron; the neuron remains
silent when the excitability is high and no input is present.
The excitability stayed high for a fixed amount of time, long enough to associate with the
second stimulus, which meant 40 time-steps. Other schemes, such as exponentially decaying
excitability, could also be adopted. Unfortunately, biological data on the decay of excitability
is lacking.
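A minimal sketch of this two-state excitability rule (the names are ours; the 0.2-unit jump criterion, the high value 0.05, and the 40-step duration are taken from the text):

```python
import numpy as np

E_LOW, E_HIGH = 0.0, 0.05   # the two excitability values
RATE_JUMP = 0.2             # rate increase per time-step that triggers the switch
HIGH_STEPS = 40             # how long the excitability stays high

def update_excitability(r_new, r_old, timer):
    """Per-node excitability update.

    timer counts the remaining time-steps of high excitability; a rate jump
    of more than RATE_JUMP in one step (re)starts it at HIGH_STEPS.
    """
    timer = np.where(r_new - r_old > RATE_JUMP,
                     HIGH_STEPS, np.maximum(timer - 1, 0))
    E = np.where(timer > 0, E_HIGH, E_LOW)
    return E, timer
```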
Learning rules
We use a Hebbian learning rule to change the excitatory connections between the nodes. At
each time-step the weights are updated according to
$$\Delta w_{ij} = \eta\, r_i r_j \qquad (3)$$
where $r_i$ and $r_j$ are the post- and pre-synaptic activities, and $\eta = 0.1$ is the learning rate. Its
value is not crucial; it should be fast enough to get sufficient learning, but slow enough
Figure 1. Implementation of excitability changes. The input–output relation (Equations 1 and 2) is plotted for the
two possible values of the excitability. Note that the change in the excitability is quite small.
to prevent stability problems. With this value, the equilibrium is reached after some 100
stimulus repetitions. At the start of the simulation all weights are set to the same value. A
small Gaussian noise perturbation (σ = 0.01) was added to the initial weights to prevent the
network from remaining stuck in marginal states.
It is well known that this type of Hebbian learning, when unconstrained, leads to uncontrolled weight growth. To prevent this, we limited the minimal and maximal synaptic weight and fixed the sum of the excitatory weights onto each node. We used a subtractive normalization rule, which quite generally leads to synaptic competition (Miller & MacKay 1994).
To set the maximal weight, one can use the following argument: it is not difficult to show
that as soon as the weights are larger than 1, due to the recurrent connections the activity
can become unstable and firing rates explode. Even if, in order to prevent this instability, the
activity is capped at 1, attractor states appear which have high activity even in the absence of
input and lead to a rapid rise in the synaptic weights, as was confirmed in the simulations.
These attractor states appear when the maximal value of the weight is 1 or higher. Therefore,
the maximal synaptic weight was set to 1 (the minimal synaptic weight was zero). The sum of the synaptic weights was fixed at 1.2. The reason for this choice is that with these settings
at least two weights will be non-zero.
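Putting the pieces together, the weight update of Equation 3 with the bounds and the subtractive normalization might look as follows. The exact order of clipping and normalizing is our assumption; the paper does not specify it:

```python
import numpy as np

ETA = 0.1     # learning rate
W_MAX = 1.0   # maximal synaptic weight (the minimum is zero)
W_SUM = 1.2   # fixed sum of excitatory weights onto each node

def hebbian_update(W, r_post, r_pre):
    """Hebbian step (Equation 3) with bounds and subtractive normalization.

    W[i, j] is the weight from presynaptic node j onto postsynaptic node i;
    the sum of each row (all weights onto one node) is held at W_SUM.
    """
    W = W + ETA * np.outer(r_post, r_pre)   # Delta w_ij = eta * r_i * r_j
    W = np.clip(W, 0.0, W_MAX)
    # Subtract the same amount from every weight onto a node so that the
    # row sum returns to W_SUM, then re-clip to the allowed range.
    W = W - (W.sum(axis=1, keepdims=True) - W_SUM) / W.shape[1]
    return np.clip(W, 0.0, W_MAX)
```

Because the same amount is subtracted from every weight onto a node, strongly driven synapses grow at the expense of the others; this is the source of the synaptic competition.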
Already this relatively simple model has quite a few parameters. To examine the valid parameter range, we employed a search over parameters such as the amount of lateral inhibition,
the value of the sum of the weights in the various layers and the gain of the input–output relation. As is common in this sort of simulation, we found a rather large region of parameter
combinations for which the simulations work as described.
Results
Model architecture
We explored the role of excitability modulation in a network model. The model is a layered
network with either two or three layers (Figures 2a and 2b). The goal is to develop an
association between a given node in one layer and a given node in the other layer. In the case
of trace conditioning, in which auditory and sensory signals need to be associated, the layers
can be labeled ‘auditory’ and ‘sensory’, but in general we refer to them as top and bottom
layer. Associations can develop in both directions because the network is fully symmetric.
The activity of each node is modeled by its firing rate and has a threshold-linear activation function (see Methods). The excitatory connections between the layers are bidirectional
(recurrent) and all-to-all. Their synaptic weights were subject to standard Hebbian learning with subtractive normalization (see Methods). Biologically, recurrent connections on a
neuron-to-neuron basis are rare, but the model is reasonable when the nodes are thought of
as populations of neurons. Furthermore, within a given layer nodes inhibit each other.
Excitability changes
Recent data suggest that high activity of a neuron can by itself cause a long-lasting increase in its excitability (Cudmore & Turrigiano 2004). This effect does not rely on synaptic
activation, but is purely a result of post-synaptic activity. To model this, the excitability was reflected in a leftward shift of the F/I curve (Figure 1a) and was chosen to roughly match the electrophysiology. Note that the shift in the curve is quite small.
Experimentally, it is as yet unclear how the excitability decreases again after an increase.
One possible option is that subsequent induction of LTP decreases excitability (Fricker &
Figure 2. Network architecture and the proposed model of the role of excitability changes and the interaction with
learning. (a) and (b) Layout of the network, two or three layers of neurons were connected all-to-all. The labels
‘auditory’ and ‘sensory’ apply to the case of trace conditioning, pairing a sound with an air-puff. For clarity the
all-to-all inhibitory connections within the layers are not shown, and the excitatory connections from and to only
one node are shown. (c) The association mechanism in the two-layer model. (i) Stimulus A is presented, exciting a node, (ii) but also enhancing its intrinsic excitability (indicated by the star pattern). (iii) Next, a stimulus A′ is presented to the other layer. This will mainly excite node A in the other layer, leading to LTP in the connection between the A and A′ nodes (thicker arrow). (iv) Subsequent activation with stimulus A will now also excite the node that received A′.
Johnston, SFN abstract 2001). Another option is that the excitability simply stays high and
then decays back to its baseline. For simplicity, we choose the second option, although the
first option is also compatible with the proposed model. In addition, homeostatic mechanisms
are thought to adjust excitability eventually when activity remains too high or too low for
long periods (Desai et al. 1999).
Two layer model
A schematic of the stimulus protocol and network activity is shown in Figure 2c. We trained
the network on the following task: first a stimulus A was presented in the bottom layer
(Figure 2ci). The high activity in the node increases its excitability (Figure 2cii). After a delay, stimulus A′ is presented to the top layer. The delay is short enough that the excitability of node A is still high when A′ is presented. The connection between A and A′ is potentiated (Figure 2ciii), causing a higher response in the A′ node on subsequent stimulation of A (Figure 2civ).
In addition, a second stimulus, called B, was presented in the bottom layer to a different
node. This was done after a longer delay, in which the excitability returned to its base
level. This was followed by stimulus B′ in the top layer. One could well imagine the simpler protocol in which just an association between A and A′ needs to be learned. The network can also learn this task, but the task used here is more challenging. It requires the separation of the A–A′ and B–B′ pairings, as would be required for associations learned under natural conditions.
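The training cycle can be written down as a stimulus schedule. The sketch below is illustrative (the time markers are our choice), arranged so that the A-to-A′ gap is shorter than the 40-step high-excitability window while the pause before the B–B′ pair is longer:

```python
import numpy as np

N = 4          # nodes per layer; stimuli A/B drive bottom nodes 0/1,
               # A'/B' drive the corresponding top nodes
CYCLE = 200    # length of one (A-A')-(B-B') training cycle

def stimulus(t):
    """External input (ext) to the bottom and top layer at time-step t.

    Within each cycle: A at t=0..9, A' at t=30..39 (gap < 40 steps, so the
    A node is still highly excitable), then B at t=100..109 and B' at
    t=130..139, long after the excitability caused by A has decayed.
    """
    t = t % CYCLE
    bottom, top = np.zeros(N), np.zeros(N)
    if 0 <= t < 10:
        bottom[0] = 1.0     # stimulus A
    elif 30 <= t < 40:
        top[0] = 1.0        # stimulus A'
    elif 100 <= t < 110:
        bottom[1] = 1.0     # stimulus B
    elif 130 <= t < 140:
        top[1] = 1.0        # stimulus B'
    return bottom, top
```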
The (A–A′)–(B–B′) protocol was repeated until the weights stabilized. In Figure 3, the
activity of the nodes is plotted at various phases of the learning process. For clarity only the
Figure 3. Excitability assisted learning of associations in the two layer network. The bottom left diagram shows
the stimulation protocol, in which stimulus A in the bottom layer is followed after a delay by stimulus A′ in the top layer. The lower (upper) graph shows the activity in the bottom (top) layer. For clarity only the activity of the two nodes associated with the stimuli is shown. The solid (dashed) line indicates the firing rate of the nodes A and A′ (B and B′). The first column is the naive situation before learning takes place. Presentation of either stimulus A or B weakly activates all top neurons, but subsequent stimulus A′ leads to higher activity of the A node, as it is more excitable (open arrow). The middle column represents the situation after learning. After learning, stimulus A selectively activates the A′ neuron (solid arrow), but not the B′ neuron (dashed line). The situation is the same for the B–B′ association. The rightmost column shows the activity after learning has converged but with the excitability of all nodes reset to its low, default value. The association remains intact, showing that the associations do not rely on the excitability change once learned.
activity of the nodes that are stimulated is shown (two in each layer); the activity of the other
nodes is not selective. The solid lines show the rates of the nodes that receive the A and A′ stimuli; the dashed lines the activity of the B and B′ nodes. Figure 3 (left) shows the activities
before learning. In this phase, stimulus A in the bottom layer weakly activates all top neurons.
In the top left panel, this is reflected by the small activity bump of both A and B nodes in the
top layer (overlapping solid and dashed lines). The connections between the layers are at this
stage homogeneous and not selective; the activation of all neurons in the top layer is similar.
Next, stimulus A′ is presented to the top layer (high rate in top, left panel). Note that the other node (dashed line) is inhibited through lateral inhibition. The nodes in the bottom
layer are weakly activated, but the node that previously received stimulus A has a higher
excitability and it will have a higher firing rate (open arrow) than the other node. As a result,
the connection between A and A′ will be potentiated. At the same time, the competitive
learning rules weaken the connection of A to other nodes. This is the mechanism behind
the excitability assisted learning.
It is interesting to note that during the learning phase, in the time between stimulus A and A′, the activity of all nodes is zero. This shows that the learning does not depend on
trace activity induced by stimulus A, but relies on the enhanced excitability trace of the
A node. The excitability acts like a hidden variable: it is only visible when the neuron is
activated. As we discuss below, this is relevant for experimental observations made during
trace conditioning experiments.
The same stimulus pattern was repeated 100 times. The middle column of Figure 3 represents the situation when the weights have reached a steady state and the learning has stopped.
Now, stimulus A strongly activates the A′ node (solid arrow), while the activity of the other node in the top layer (dashed line) remains zero; hence the activation is selective. The same holds for the B–B′ association. In other words, the task is learned. In the current implementation, high activity drives the excitability changes; therefore the excitability keeps switching to a high level even after the association has been learned. One might suspect that this distorts the results. However, when the excitability is kept fixed at its low level after learning, the
association remains correctly intact, Figure 3(right). This shows that the learning has been
transferred to the synaptic weights and does not rely on periods of high excitability anymore.
As a control, we tested whether the model could learn the associations when the excitability
changes were turned off. The excitability was either fixed at its high or its low (default) level.
In both cases nodes in the top layer became responsive to the stimulus in the bottom layer,
but stimulus A did not lead to selective activation of the A′ node, and random associations
developed.
Excitability changes promote persistent activity
The threshold-linear neuron model used above has no activity when input is absent; this demonstrates that the formation of the correct association relies on the intrinsic excitability.
However, we also implemented a network in which the nodes have a logistic activation
function. This commonly used activation function can be interpreted as the average activity
of many noisy binary neurons.
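For completeness, the logistic variant only replaces the activation function; a sketch (names ours, with h as in Equation 1):

```python
import numpy as np

TAU, T = 2.0, 0.1  # time constant (in time-steps) and threshold, as in the text

def step_logistic(r, h):
    """One Euler step of the logistic-variant dynamics:
    tau dr/dt = -r + 1 / (1 + exp(T - h))."""
    return r + (-r + 1.0 / (1.0 + np.exp(T - h))) / TAU
```

Note that the logistic function is nonzero for any input, so the nodes settle at a nonzero resting rate; this is the origin of the baseline activity discussed below.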
When we repeat the stimulus protocol, we find that the association is also learned in this
network (Figure 4). It demonstrates that the principle of excitability assisted learning is not
dependent on implementation details. However, there is a noteworthy difference in the activity: during the delay period, the nodes have a higher activity even when the stimulus is absent.
Unlike Figure 3, the activity remains above the resting activity level between stimulus A and A′ (arrow). The period of higher activity terminates when the excitability returns to its baseline level (around t = 30 in the bottom layer, and t = 50 in the top layer). This effect is simply
due to the lack of a sharp threshold in the activation function of the nodes below which the
activity is zero. In contrast to the result with threshold-linear nodes where the high excitability
was hidden, the high excitability here is directly reflected in the firing rate. In other words,
the high excitability promotes higher levels of activity. Recently activated neurons thus have a higher activity level, and the high excitability acts as a simple working memory.
Three layer model
We wondered whether associations can also be made in more complicated networks. In particular, we tested a network in which there was a hidden layer and no direct connection between the nodes that receive the A and the A′ stimuli (Figure 2b). The motivation is that also
Figure 4. Persistent activity changes and learning associations. The network and figure are identical to Figure 3,
but the nodes have a logistic activation function. This network also learned the association, but in contrast to the
threshold-linear network, the activity stays high between the two to-be-paired stimuli (arrow).
in the nervous system there will not always be a direct connection between the nodes to be
associated. It is known that such associations can develop in networks equipped with the
trace rule (Wallis & Rolls 1996); here, we demonstrate that networks with excitability changes
can also learn this task.
The association could be learned successfully with either the sigmoidal or threshold-linear
activation function of the nodes. We show the result for the threshold-linear activation function. In Figure 5, the activity of the stimulated nodes in the bottom and top layer is shown,
as well as the activity of all the nodes in the hidden layer. Initially, the middle layer is hardly
active (Figure 5 left). Under the influence of the competitive Hebbian learning, the nodes in
the middle layer develop associations with nodes in either the bottom or the top layer, but
not with both (Figure 5 middle). The association between bottom and top layer in this early
phase of learning is absent, as a stimulus in the bottom layer does not result in activity in the
top layer. However, as learning continues the middle nodes become more active as a result
of the competitive learning and the correct associations develop (Figure 5 right).
Once the associations are learned, the network displays persistent activity, as in the network
in the previous section, although the behaviour is less robust than in the two layer case. We found that when the parameters were such that this persistent activity did not develop, this network architecture could not form the correct associations. Although we cannot exclude the possibility that the network has a parameter regime in which the task is learned without
Figure 5. Excitability assisted learning in a three layer model with threshold-linear nodes. The different rows represent the activity of the nodes receiving stimuli in the bottom and top layers, whereas the middle row shows the activity of all four nodes in the middle layer (solid, dashed, gray solid, and gray dashed). The columns show the activity before learning, during early learning (30 iterations), and in the stable state after learning has converged (after 100 iterations).
relying on persistent activity, it seemed easier to get this network to operate correctly with
persistent activity.
Discussion
This study explores the interaction between excitability changes and networks with Hebbian
plasticity. The excitability acts as a ‘label’, which identifies which cell was recently active.
If this activity is later followed by another stimulus, the excitable node will have higher
activity. The network can pick out the labeled cell, and Hebbian learning can take place.
Thus this mechanism bridges gaps between temporally separated stimuli. Similar ideas have
been proposed in temporal difference learning under the name eligibility trace (Schultz 1998;
Sutton & Barto 1998).
As our study with a hidden layer shows, association is also possible when the stimulated
neurons are not directly coupled, but connected only indirectly through other neurons. Experimentally, this putative role of the excitability change in the discussed forms of learning
should be testable pharmacologically. Namely, drugs that block the excitability change should
block learning the associations.
We based our model on the observation that high activity of a neuron can enhance its
excitability (Cudmore & Turrigiano 2004). It is important that this form of excitability
changes does not require synaptic activation. This contrasts with studies where excitability
changes occur alongside synaptic plasticity, often sharing the same biochemical pathways
(Daoudal & Debanne 2003; Li et al. 2004; Xu et al. 2005). Such excitability changes are
harder to unite with the scheme proposed here, as they seem to require Hebbian learning
to take place simultaneously with excitability changes, whereas in the proposed scheme one
follows the other.
Role of excitability changes in trace conditioning
The proposed model can also help to explain data of trace conditioning. In trace conditioning
a tone is presented, followed by silence, followed by an air-puff in the eye. After many
presentations animals learn the association and close the eye before the air-puff arrives. The
task is hippocampal dependent (Kim et al. 1995). (In contrast, delay conditioning, in which
there is no delay between the end of the tone and the air-puff, only requires the cerebellum.)
In rabbits, learning of the task is accompanied by strong increases in the excitability of hippocampal pyramidal cells (Moyer et al. 1996, 2000). The amount of excitability change
strongly correlates with whether the task is learned or not, and drugs that increase excitability
can improve the learning of the task (Weiss et al. 2000). Interestingly, no clear correlation between excitability and activity levels was observed (McEchron & Disterhoft 1997). This observation is consistent with our model with the threshold-linear nodes, because there, too, no enhanced activity occurs between the two stimuli.
However, given the model proposed here, it is unclear why pseudo-conditioning (just
presenting the conditioned or the unconditioned stimulus) does not lead to enhanced excitability, as was observed in the data (Moyer et al. 1996). An alternative hypothesis is that
enhanced excitability is only required for consolidation of the memory after its acquisition,
as suggested by Moyer et al. (1996). Experiments that examine short-term memory retention could decide between these two possibilities. Another explanation of the data is that
the excitability is a necessary condition for learning (LTP) and is regulated from outside the
hippocampus. However, this would still require another source for the trace.
Application to the trace rule
A related application of the excitability mechanism is the trace rule. The trace rule is a
proposed learning rule that associates instantaneous input with temporally filtered activation
(Földiák 1991). One application of the trace rule is in learning invariances in continuous
visual input (Wallis & Rolls 1996). The trace rule was also the basis for a trace conditioning
model (Rodriguez & Levy 2001). Here, in a similar fashion, the high excitability stores the
history of the activity of the cell. This way, temporally related stimuli can be associated.
Other potential roles for excitability changes
While not explored in detail here, we would like to mention that the high excitability provides a simple model for priming, in which priming is nothing but the enhanced excitability of a neuron. Suppose a certain stimulus, e.g., a word, activates a neuron and enhances its excitability. Upon subsequent activation, the primed neuron will have a higher firing rate, which
presumably leads to a shorter reaction time. Alternatively, when a population of neurons is
interrogated, e.g., in a task in which words within a certain category need to be generated, the
primed neuron will be more active and likely win the competition with other neurons in the
pool. Also, when top-down input arrives, such as by attentional feedback, the neuron with
the high excitability will be picked out automatically. This model of priming nicely integrates
with the proposed learning model.
Acknowledgements
We thank Bob Cudmore and Kit Longden for enlightening discussions.
References
Brons JF, Woody CD. 1980. Long-term changes in excitability of cortical neurons after Pavlovian conditioning.
J Neurophysiol 44:605–615.
Cudmore RH, Turrigiano GG. 2004. Long-term potentiation of intrinsic excitability in layer V visual cortical
neurons. J Neurophysiol 91:341–348.
Daoudal G, Debanne D. 2003. Long-term plasticity of intrinsic excitability: learning rules and mechanisms. Learning and Memory 10:456–465.
Daoudal G, Hanada Y, Debanne D. 2002. Bidirectional plasticity of excitatory postsynaptic potential (EPSP)-spike
coupling in CA1 hippocampal pyramidal neurons. Proc Natl Acad Sci 99:14512–14517.
Desai NS, Rutherford LC, Turrigiano GG. 1999. Plasticity in the intrinsic electrical properties of cortical pyramidal
neurons. Nat Neurosci 2:515–520.
Földiák P. 1991. Learning invariance from transformation sequences. Neural Comp 3:194–200.
Ganguly K, Kiss L, Poo M.-m. 2000. Enhancement of presynaptic neuronal excitability by correlated presynaptic
and postsynaptic spiking. Nat Neurosci 3:1018–1026.
Giese KP, Peters M, Vernon J. 2001. Modulation of excitability as a learning and memory mechanism: A molecular
genetic perspective. Physiology and Behavior 73:803–810.
Kim JJ, Clark RE, Thompson RF. 1995. Hippocampectomy impairs the memory of recently, but not remotely,
acquired trace eyeblink conditioned responses. Behav Neurosci 109:195–203.
Li C-y, Lu J, Wu C, Duan S, Poo M.-m. 2004. Bidirectional modification of presynaptic neuronal excitability
accompanying spike timing-dependent synaptic plasticity. Neuron 41:257–268.
McEchron MD, Disterhoft JF. 1997. Sequence of single neuron changes in CA1 hippocampus of rabbits during
acquisition of trace eyeblink conditioned responses. J Neurophysiol 78:1030–1044.
Miller KD, MacKay DJC. 1994. The role of constraints in Hebbian learning. Neural Comp 6:100–126.
Moyer JR, Power JM, Thompson LT, Disterhoft JF. 2000. Increased excitability of aged rabbit CA1 neurons after
trace eyeblink conditioning. J Neurosci 20:5476–5482.
Moyer JR, Thompson LT, Disterhoft JF. 1996. Trace eyeblink conditioning increases CA1 excitability in a transient
and learning-specific manner. J Neurosci 16:5536–5546.
Rodriguez P, Levy WB. 2001. A model of hippocampal activity in trace conditioning: Where is the trace? Behavioral
Neuroscience 115:1224–1238.
Schultz W. 1998. Predictive reward signal of dopamine neurons. J Neurophysiol 80:1–27.
Sutton RS, Barto AG. 1998. Reinforcement Learning: An Introduction. Cambridge: MIT Press.
Wallis G. 1998. Spatio-temporal influences at the neural level of object recognition. Network 9:265–278.
Wallis G, Rolls ET. 1996. A model of invariant object recognition in the visual system. Prog Neurobiol 51:167–194.
Weiss C, Preston AR, Oh MM, Schwarz RD, Welty D, Disterhoft JF. 2000. The M1 muscarinic agonist CI-1017
facilitates trace eyeblink conditioning in aging rabbits and increases the excitability of CA1 pyramidal neurons.
J Neurosci 20:783–790.
Xu J, Kang N, Jiang L, Nedergaard M, Kang J. 2005. Activity-dependent long-term potentiation of intrinsic
excitability in hippocampal CA1 pyramidal neurons. J Neurosci 25:1750–1760.
Zhang W, Linden DJ. 2003. The other side of the engram: Experience-driven changes in neuronal intrinsic excitability. Nat Rev Neurosci 4:885–900.