J. theor. Biol. (1998) 195, 87–95
Article No. jt980782

Firing Frequency of Leaky Integrate-and-fire Neurons with Synaptic Current Dynamics

N. Brunel*‡ and S. Sergi†

* LPS, Ecole Normale Supérieure, 24 rue Lhomond, 75231 Paris Cedex 05, France
and † Dipartimento di Fisica, Università di Firenze, Largo E. Fermi 3, 50100 Firenze, Italy

‡ Author to whom correspondence should be addressed.

(Received on 2 October 1997, Accepted in revised form 8 July 1998)
We consider a model of an integrate-and-fire neuron with synaptic current dynamics, in which the synaptic time constant τ' is much smaller than the membrane time constant τ. We calculate analytically the firing frequency of such a neuron for inputs described by a random Gaussian process. We find that the first order correction to the frequency due to τ' is proportional to the square root of the ratio between these time constants, √(τ'/τ). This implies that the correction is important even when the synaptic time constant is small compared with that of the potential. The frequency of a neuron with τ' > 0 can be reduced to that of the basic IF neuron (corresponding to τ' = 0) using an "effective" threshold which has a linear dependence on √(τ'/τ). Numerical simulations show a very good agreement with the analytical result, and permit an extrapolation of the "effective" threshold to higher orders in √(τ'/τ). The obtained frequency agrees with simulation data for a wide range of parameters.

© 1998 Academic Press
1. Introduction
A basic problem in modeling the stochastic
spiking activity of a neuron is, given the statistics
of the input, to determine the statistics of its
output. This problem has proven difficult even
for simple input statistics and simple models such
as the leaky integrate-and-fire (IF) neuron
(Knight, 1972). The IF neuron is essentially a
capacitor charged by the spikes arriving on the
dendrite and has no intrinsic spiking dynamics.
The distribution of emitted interspike intervals
(ISI) has often been studied in the framework of
the diffusion approximation, which consists in
approximating the input of the neuron by a
Gaussian stochastic process (see e.g. Tuckwell, 1988). A relatively simple expression can be
obtained for the mean frequency of such a
neuron as a function of the statistics (mean and
variance) of its input (Ricciardi, 1977; Amit &
Tsodyks, 1991). It can in turn be used in models
of networks of IF neurons to obtain a
self-consistent theory which gives the firing rates
in the stable stationary states of such networks
(Amit & Brunel, 1997a,b). The distribution of
ISIs for the simple IF neuron with a periodic
input has also recently been considered by
Bulsara et al. (1996) who studied stochastic
resonance phenomena in such a model.
Once a simple model like the basic leaky IF
neuron is well understood, it is also important to
determine how incorporating more realistic
features modifies its properties. For example, the
basic IF neuron has no synaptic current
dynamics; this results in an excitatory postsynaptic potential (EPSP) rising instantaneously
to its peak value when a spike is received at the
synapse, in contrast with real synapses, whose
EPSPs have a finite rise time. Such EPSPs can be
modelled using a first order differential equation
for the synaptic current (see e.g. Frolov & Medvedev, 1986). Such current dynamics introduces temporal correlations in the synaptic
current which are not present in the simple leaky
IF neuron. In this approach, the stochastic
dynamics of the neuron is described by two
coupled equations for the membrane potential
and the synaptic current, respectively.
The goal of this paper is to understand how the
introduction of such synaptic current dynamics
modifies the firing properties of the neuron, when
the diffusion input is first filtered by the synapses
on an independent time scale. In this problem the
relevant parameter is the square root k of the
ratio between the time constants of the synaptic
current and the potential. The average ISI is
calculated analytically to first-order in k. This
calculation suggests that a neuron with a finite
synaptic time constant can be reduced to an
effective IF neuron without synaptic current
dynamics (k = 0), but with an ‘‘effective’’
threshold which has a linear dependence on k.
We have performed extensive simulations in a
wide range of parameters, for which we have
confirmed that the diffusion approximation is
valid (large number of arriving spikes per
integration time constant, EPSPs small compared with threshold). Simulation results can be
fitted by a quadratic polynomial in k for the
effective threshold when the mean synaptic input
drives the neuron below threshold. The linear
term in the extrapolation is in good agreement
with the analytical calculation. The numerical
extrapolation of the effective threshold given by
this fitting procedure yields a resulting average
ISI consistent with simulation data in a wide
range of k, for 0 < k < 1.
2. The Model

2.1. The Membrane Potential

We study a leaky integrate-and-fire (IF) neuron (see e.g. Knight, 1972; Ricciardi, 1977; Tuckwell, 1988), characterized by its depolarization at the soma V(t), which obeys the integrator equation

τ dV(t)/dt = −V(t) + τ I(t),    (1)

where τ is the integration time of the membrane depolarization at the spike emitting part of the soma and I(t) is the synaptic current charging that part of the membrane. When the depolarization reaches the threshold θ the neuron emits a spike, and the potential is reset to a value H, after an absolute refractory period τ₀.
    
2.2. The Synaptic Current

The effective afferent current I(t), due to the temporal variation of the synaptic conductances provoked by afferent spikes charging the spike-emitting integrator in eqn (1), obeys the equation:

τ' dI(t)/dt = −I(t) + Σ_{i=1}^{N} J_i Σ_k δ(t_{ik} − t).    (2)

The sum over i is over the synaptic sites on the dendrites, while the sum over k is over all spikes arriving at a given site, and t_{ik} is the time of arrival of spike number k at synapse i. τ' is the time constant of the conductance changes at the synaptic sites (see e.g. Frolov & Medvedev, 1986; Amit & Tsodyks, 1991). J_i is the efficacy of synapse i, and N is the number of afferent synapses. In the following, for simplicity, we set J_i = J for all i.
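As a concrete illustration of eqns (1)-(2), the coupled potential and current dynamics can be integrated with a simple Euler scheme. This is a minimal sketch under assumed parameters (total input rate, time step, τ' = 2 ms), not the authors' simulation code, and the absolute refractory period is omitted for brevity:

```python
import numpy as np

def simulate_lif(T_sim=10.0, dt=1e-4, tau=0.010, tau_s=0.002,
                 J=0.01, rate=10_000.0, theta=1.0, H=0.0, seed=0):
    """Euler integration of eqns (1)-(2):
        tau  dV/dt = -V + tau * I
        tau' dI/dt = -I + J * sum_k delta(t_k - t)
    Each afferent spike (Poisson, total rate N*nu = `rate`) makes the
    current I jump by J/tau', so the EPSP has a finite rise time tau'."""
    rng = np.random.default_rng(seed)
    V, I, spikes = 0.0, 0.0, []
    for step in range(int(T_sim / dt)):
        n_in = rng.poisson(rate * dt)           # afferent spikes in [t, t+dt)
        I += (-I / tau_s) * dt + n_in * J / tau_s
        V += (-V + tau * I) * dt / tau
        if V >= theta:                           # threshold crossing:
            spikes.append(step * dt)             # emit a spike and reset
            V = H
    return np.array(spikes)
```

With rate × τ = 100 the mean input sits exactly at threshold, so firing is driven by fluctuations, as in the simulations of Section 4.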
A single spike arriving at a resting neuron at time t = 0 (V(t = 0) = 0) provokes an EPSP of the form

V(t) = J [τ/(τ − τ')] [exp(−t/τ) − exp(−t/τ')] U(t)

if τ ≠ τ', or

V(t) = J (t/τ) exp(−t/τ) U(t)

if τ = τ'. In both equations U is the Heaviside function, U(x) = 1 for x > 0 and 0 otherwise. Some EPSPs are shown in Fig. 1 for J = 0.1 mV, τ = 10 ms, and three different values of τ' = 0, 5,
 -- 
in which j is a Gaussian white noise with zero
mean and unit variance, Qj(t)j(t ') q =
d(t − t '). We perform a change of variables
0.12
0.10
V(t) (mV)
89
0.08
I = m̃ +
0.06
s̃
z2t '
z
and
V = tm̃ +
X
t
s̃x,
2
to obtain dimensionless variables x and z. With
these new variables the system of eqns (1,2)
becomes
0.04
0.02
x
z
xt = − +
t ztt'
0.00
0
10
20
30
40
t (ms)
F. 1. Single EPSP for J = 0.1 mV, t = 10 ms, and three
different values of t' = 0, (—), 5 ms (– – –) and 10 ms (- - -).
10 ms. The EPSP amplitude (the value of the
potential at its maximum) decreases when t'
increases, but the integrate EPSP (the area under
the curve) is independent of t'.
 
We suppose that the spike train received by the
neuron is a Poisson process with rate Nn, where
n is the rate of activation of an individual synapse
and N is the number of synapses. If n is low, but
in an interval t the number of arriving spikes is
high due to the large number of input channels
N (i.e. Nn1), and if the individual synaptic
efficacies are small compared with threshold
(i.e. Ju), we can use a diffusion approximation
(e.g. Tuckwell, 1988), which consists in approximating the Poisson process corresponding to the
input spike train,
2.3.
z
zt = − +
t'
The threshold u and the reset potential H, in
these new variables, are
u
=
m = m̃t = NJnt,
by a diffusion process with the same mean and
variance as the Poisson process S. The two first
moments of S are
QS(t) q = JnN 0 m̃
QS(t)S(t ') q c = J 2nNd(t − t ') 0 s̃2d(t − t ')
Thus we rewrite S as
S(t) = m̃ + s̃j(t)
H
=
H−m
z2
s
s = s̃zt = JzNnt
In this way we have extracted the dependencies
on t' and t from the stationary distribution of z,
which is now a Gaussian distribution with zero
mean and unit variance.
The characteristic time-scales of the system are
t' and t. Our purpose is to study the behaviour
of the system for small values of t'/t. We rescale
time t : tt, and define k = zt'/t. Equations
(1,2) become
N
k
u−m
z2,
s
where
S(t) = s Ji s d(tik − t),
i=1
X
2
j(t)
t'
z
xt = −x + .
k
(3)
z
z2
zt = − 2 +
j(t).
k
k
(4)
The correlation function of the current is:
0 1
=t=
k2
lim Q z(t)z(t + t) q = exp −
t:a
The variable z has a correlation time k 2. On
time-scales much longer than k2 the current may
be considered as Gaussian white noise. To
understand the effect of the current dynamics, we
consider two stochastic processes corresponding
to two neurons, one with k = 0 and the other
.   . 
90
with k > 0. The potential of the first neuron, x₀, is described by

dx₀/dt = −x₀ + √2 ξ(t).

The second, x_k, follows eqns (3, 4), in which we use the same white noise ξ(t) as for x₀. To investigate the effect of k > 0 we consider the square root of the mean quadratic difference between these two processes,

D(t) = √⟨(x_k(t) − x₀(t))²⟩,

and find

D(t) = k √[ z(0)² (e^{−2t} − e^{−2t/k²}) + 1 − e^{−2t/k²} ] + O(k²).

This means that the same stochastic white noise input will typically yield a difference in the potentials of these two neurons of order k. Thus we anticipate the effect of synaptic current dynamics to be of order k.

3. Analytical Results: First-order Correction to the First Passage Time

Our aim is to study the average time needed for a neuron whose potential is initially at Ĥ, due to the post-spike reset, to emit its first spike, i.e. the average time for the potential to escape the interval (−∞, θ̂]. We define p(x, z, t) as the probability density that at time t the neuron has depolarization x and current z, and has not yet crossed the spiking threshold by time t. This density obeys the Fokker–Planck equation associated to the system of stochastic eqns (3, 4),

k² ∂p/∂t = ∂²p/∂z² + ∂(zp)/∂z − kz ∂p/∂x + k² ∂(xp)/∂x.    (5)

At t = 0, the potential of the neuron starts again to integrate its input after the absolute refractory period. Thus the potential is at the reset potential x = Ĥ. The distribution of the current when a spike is emitted is conditioned by the fact that the current has driven the potential above threshold. If τ' is smaller than the refractory period, however, the distribution of the current will come back to its stationary distribution at t = 0, i.e. a Gaussian distribution with mean zero and unit variance. Thus

p(x, z, 0) = δ(x − Ĥ) (1/√(2π)) e^{−z²/2}.    (6)

The Fokker–Planck equation, eqn (5), may be written in the form of the continuity equation (see e.g. Risken, 1984)

k² ∂p/∂t + ∂J_z/∂z + ∂J_x/∂x = 0,

where the probability currents J_x and J_z are defined by

J_x = kzp − k²xp,

J_z = −(zp + ∂p/∂z).

The dynamics of the neuron is restricted to the half plane x ≤ θ̂. The neuron can escape from the half plane only on the line x = θ̂, and by definition it is only possible to cross this line from below. This means that the probability current should vanish at z = ±∞ and x = −∞, while it should be positive at x = θ̂. The conditions on J at z = ±∞ and x = −∞ correspond to the following conditions on p:

lim_{x→−∞} xp(x, z, t) = 0,

lim_{z→±∞} zp(x, z, t) = 0,

and

lim_{z→±∞} ∂p(x, z, t)/∂z = 0.

The last condition on J is specific to this problem:

J_x(x = θ̂, z, t) = k(z − kθ̂) p(x = θ̂, z, t) ≥ 0.

This condition requests that p(θ̂, z, t) = 0 (an absorbing boundary condition) for z ≤ kθ̂. It means that we have no chance of finding the neuron with a current z smaller than kθ̂ at the boundary x = θ̂, because that would mean the potential of the neuron was above threshold immediately before. The fact that we need to solve a problem in which the boundary condition is assigned only on a half line complicates the solution of the Fokker–Planck equation. This boundary condition is represented in the x–z plane in Fig. 2.
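The order-k closeness of the k = 0 and k > 0 processes invoked at the end of Section 2 can be illustrated numerically by driving eqns (3)-(4) and their white-noise limit with the same realization of ξ. This is an illustrative Euler–Maruyama sketch with assumed step size and trial count, not part of the paper's procedure:

```python
import numpy as np

def rms_difference(k, T_sim=2.0, dt=1e-4, n_trials=200, seed=1):
    """Integrate dx0/dt = -x0 + sqrt(2)*xi  (the k = 0 neuron) together with
    dx/dt = -x + z/k,  dz/dt = -z/k**2 + (sqrt(2)/k)*xi   (eqns 3-4),
    using the SAME white noise xi for both, and return the
    root-mean-square difference D = sqrt(<(x_k - x0)^2>) at the end."""
    rng = np.random.default_rng(seed)
    x0 = np.zeros(n_trials)
    x = np.zeros(n_trials)
    z = rng.standard_normal(n_trials)          # stationary initial current
    sq2 = np.sqrt(2.0)
    for _ in range(int(T_sim / dt)):
        xi = rng.standard_normal(n_trials) / np.sqrt(dt)   # white noise
        x0 += (-x0 + sq2 * xi) * dt
        x += (-x + z / k) * dt
        z += (-z / k ** 2 + (sq2 / k) * xi) * dt
    return np.sqrt(np.mean((x - x0) ** 2))
```

For small k the long-time difference D is itself of order k, as the expression for D(t) above states.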
 -- 
91
Fig. 2. The half-plane boundary condition: p(x, z, t) has to be zero on the bold line, x = θ̂, z < kθ̂.

We turn now to the mean first passage time T. Since the potential of the neuron can escape from the interval (−∞, θ̂], but it is not possible to return to this interval, the probability that the potential has not reached threshold in the whole interval [0, t] is equal to the probability that it is below threshold at time t. Thus the probability that the first passage time T is larger than t is

Pr(T ≥ t) ≡ P(t) = ∫_{−∞}^{θ̂} dx ∫_{−∞}^{+∞} dz p(x, z, t).    (7)

P(t) obeys the time boundary conditions

P(0) = 1   and   lim_{t→+∞} P(t) = 0.

We have been able to calculate the mean first passage time using an expansion in k. The idea and a few details of the calculation can be found in the Appendix. For more details, see (Brunel & Sergi, in prep.). We obtain, to the first order in k = √(τ'/τ),

T = τ√π ∫_{(H−μ)/σ}^{(θ−μ)/σ} c(w) dw − τ ζ(1/2) √(πτ'/(2τ)) c((θ − μ)/σ),    (8)

where

c(w) = exp(w²) [erf(w) + 1]    (9)

and ζ is the Riemann zeta function, ζ(1/2) ≈ −1.46 (see e.g. Abramovitz & Stegun, 1964). Note that this equation is identical to the first order development in √(τ'/τ) of the function

T = τ√π ∫_{(H−μ)/σ}^{(θ_eff−μ)/σ} c(w) dw,    (10)

in which θ_eff is an "effective" threshold,

θ_eff = θ − ζ(1/2) √(τ'/(2τ)) σ ≈ θ + 1.03 √(τ'/τ) σ.

Integrating G (defined in the Appendix) with respect to z gives the probability density n(x) of the potential. Developing n(x) near x = θ̂ gives:

n(x) = g₀(x) − ζ(1/2) k e^{(θ̂² − x²)/2} ≈ (θ̂ − x) + 1.46k   near x = θ̂.

This means that the density does not vanish at x = θ̂, but rather at x = θ̂ + 1.46k. Thus a neuron with a small k > 0 has the same behaviour as a neuron with k = 0 with a renormalized, or "effective", threshold θ_eff. Note that this corrective term of order k is in agreement with the qualitative discussion at the end of Section 2.
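Eqn (10) is straightforward to evaluate numerically. The sketch below (hypothetical helper name; scipy assumed for erf and quadrature) computes the mean first passage time with and without the first-order effective-threshold correction:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def first_passage_time(mu, sigma, theta=1.0, H=0.0, tau=0.010, tau_s=0.0):
    """Eqn (10): T = tau*sqrt(pi) * int_{(H-mu)/sigma}^{(theta_eff-mu)/sigma}
    c(w) dw, with c(w) = exp(w^2)*(erf(w) + 1) and, to first order in
    sqrt(tau_s/tau), theta_eff = theta + 1.03*sqrt(tau_s/tau)*sigma."""
    theta_eff = theta + 1.03 * np.sqrt(tau_s / tau) * sigma
    c = lambda w: np.exp(w ** 2) * (erf(w) + 1.0)
    val, _ = quad(c, (H - mu) / sigma, (theta_eff - mu) / sigma)
    return tau * np.sqrt(np.pi) * val
```

A finite synaptic time constant raises the effective threshold and therefore lengthens the mean interspike interval.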
This suggests that a practical solution to
investigating higher order terms in k is to find the
effective threshold taking into account higher
order terms in k that gives the right mean first
passage time. This is done in the next section,
using numerical simulations both to check the
first order term and to obtain an extrapolation to
higher orders.
4. Simulations
Numerical simulations were performed to check the validity of eqns (8, 10). First, the validity of the diffusion approximation was checked with τ' = 0. A Poisson spike train with frequency ν_in was generated as an input to the IF neuron. The threshold was set to θ = 1, the reset potential to H = 0, and the synaptic efficacy to J = 0.01. The output frequency was estimated by dividing the total number of emitted spikes by the duration of the simulation.

The simulation was composed of 10 different runs of duration 1000 s each, with τ = 10 ms. Thus during each run the neuron received about 10⁸ spikes. In this way we obtained, for each ν_in, an average frequency ν_out and a corresponding standard error. The resulting output frequency was compared with eqn (8), for τ' = 0, μ = Jν_in τ and σ = J√(ν_in τ). The result shown in Fig. 3 indicates that for this value of J the diffusion approximation is a very good approximation to
.   . 
92
to fit the data in the whole interval 0 Q k Q 1 by
nout = 1/T(k) where
0.35
0.30
0.25
T(k) = tzp
0.20
g
ueff − m/s
c(w) dw + c
(11)
out
H − m/s
0.15
in which the ‘‘effective’’ threshold is
0.10
ueff = u + s(ak + bk2 )
0.05
with fitting parameters a, b and c, a and b signal
linear and quadratic deviations from the
standard IF transfer function due to synaptic
current dynamics. In some cases, the finite value
of J causes a significant deviation from the
diffusion approximation. This deviation was
signalled by a significant non-zero value of c.
When such a value was met, the corresponding
parameter set (a, b) was rejected for further
consideration (this was the case for low input
and output frequencies, i.e. nin t = 85). The
insertion of a cubic term in the limits of the
interval did not improve significantly the quality
of the fit, and the value of the coefficient of the
cubic term was not significantly different from
zero. We thus limited ourselves to the quadratic
expression.
The results of the fitting procedure are shown
in Table 1. In all cases the value of a obtained
numerically is consistent with the analytical
estimate a 0 1.03. In the other hand the value of
b depends on nin . The values of b are a function
of u
= (u − m)/s, i.e.
0.00
70
75
80
85
90
95
100
in
F. 3. Testing the theory for the diffusion approximation, eqn (8), for t' = 0 ms. (—) eqn (8); (r): simulation
results. Both input and output frequencies have been
multiplied by t so that they represent average numbers of
either received or emitted spikes per integration time
constant. For t = 10 ms, the corresponding numbers have
to be multiplied by 100 to obtain frequencies in Hz. Thus
the output frequency range is 0–35 Hz. Note that the
errorbars are smaller than the diamonds.
a Poisson input. The advantage of using a
Poisson input in the simulation is that it avoids
the difficulties of simulating a continuous
Gaussian process.
We then simulated an IF neuron with finite
synaptic time constant t'. The simulation was
performed essentially in the same manner,
integrating the equations for both potential and
current between two successive input spikes. The
output frequency was again estimated dividing
the total number of emitted spikes by the
duration of the simulation.
We made four series of simulations, for
nin t = 85, 90, 95, 100. All series correspond to an
average synaptic input depolarizing the neuron
below threshold. In the case nin t = 100 the
average input is exactly equal to the threshold.
We indeed expect changes due to a finite t' to be
the most important in this region, where firing is
purely driven by the fluctuations in the input.
For each value of nin , we performed simulations
for different values of k = zt'/t, 0 Q k Q 1. The
simulations were again composed of 10 different
runs of duration 1000s each, with t = 10ms. In
this way we obtained, for each nin , an average
frequency nout (k) and a corresponding standard
error. Then, using eqn (10) as a guideline we tried
(12)
b = b0 + b1 u
with b0 0 − 0.35 and b1 0 0.27.
Figure 4 compares the results of numerical
simulations for different values of nin as well as
the theoretical results obtained using eqns
(11,12), with the parameters obtained with the
fit, i.e. a = 1.03, b0 = −0.35, b1 = 0.27. This
figure shows that using an effective threshold
T 1
Table of fitted coefficients for each value of nin used
in the simulation
nin
u
a
b
90
95
100
1.05
0.51
0.00
1.00 2 0.01
1.04 2 0.01
1.03 2 0.01
−0.04 2 0.01
−0.19 2 0.01
−0.35 2 0.01
 -- 
0.35
0.30
in
= 100
in
= 95
in
= 90
0.25
out
0.20
0.15
0.10
0.05
in
= 85
0.00
0.0
0.2
0.4
0.6
0.8
1.0
'/
F. 4. Comparison of simulation results (symbols), with
eqn (11) (– – –) in which the effective threshold is given by
eqn (12), with parameters a = 1.03, b = −0.34 + 0.29u
and
c = 0, in the regime below threshold. Input rates are
indicated next to the corresponding curves. Frequency units
as in Fig. 3. Standard errors are smaller than the size of
symbols.
which is quadratic in k in the usual IF transfer
function yields a good agreement with simulation data up to k = 1. On the other hand,
trying a fit of the type
T(k) = tzp
g
u − m/s
c(w) dw + ak + bk2 + g
H − m/s
gives very poor results in the same interval. Thus
eqn (10) seems to take into account higher order
terms in the development in k in such a way as
to give a good approximation to the numerical
results.
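The full transfer function of eqns (11)-(12) can be sketched directly from the fitted coefficients quoted in the text (a = 1.03, b₀ = −0.35, b₁ = 0.27, c set to 0); the helper name below is hypothetical:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def nu_out(k, nu_in_tau, tau=0.010, J=0.01, theta=1.0, H=0.0,
           a=1.03, b0=-0.35, b1=0.27):
    """Output rate 1/T(k) from eqns (11)-(12): theta_eff = theta +
    sigma*(a*k + b*k^2), with b = b0 + b1*u_hat, u_hat = (theta - mu)/sigma."""
    mu = J * nu_in_tau                  # mean input, mu = J * nu_in * tau
    sigma = J * np.sqrt(nu_in_tau)     # noise amplitude, sigma = J * sqrt(nu_in * tau)
    u_hat = (theta - mu) / sigma
    theta_eff = theta + sigma * (a * k + (b0 + b1 * u_hat) * k ** 2)
    c = lambda w: np.exp(w ** 2) * (erf(w) + 1.0)
    val, _ = quad(c, (H - mu) / sigma, (theta_eff - mu) / sigma)
    return 1.0 / (tau * np.sqrt(np.pi) * val)
```

Sweeping k from 0 to 1 for ν_in τ = 85, 90, 95, 100 reproduces the qualitative shape of the curves in Fig. 4: the output rate falls as k grows, because the effective threshold rises.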
These simulations show that the use of the IF
transfer function with an effective threshold
which is polynomial in k succeeds in reproducing
simulation results up to k = 1. Note that if we
limit ourselves to the (exact) first order
correction to the first passage time, the
agreement holds in a much smaller region, for τ' smaller than about 0.1τ.
5. Discussion
Using a combination of analytical and
numerical techniques we have found an expression for the frequency of an IF neuron with
finite synaptic current time constants, for inputs described by a random Gaussian process. The expression is exact to first order in √(τ'/τ), and is in very good agreement with simulations in the whole range 0 < τ' < τ. We restricted τ' < τ since the time constant of conductance changes is typically shorter than the membrane time constant (see e.g. Douglas et al., 1991; Mason et al., 1991; Tsodyks & Markram, 1997).
The dependence of the firing frequency on the
synaptic time constant t' could be in principle
obtained in single neuron recordings in in vitro
slice preparations. Whittington et al. have
manipulated pharmacologically the decay time
constant of inhibitory post-synaptic currents
(IPSCs) in a slice preparation of the rat
hippocampus using either pentobarbital (Whittington et al., 1995) or thiopental (Traub
et al., 1996). The firing frequency at different
values of IPSC time constants could then be
obtained by intracellular recording of single
neurons in slice preparations, while an arbitrary
spike train is injected into the cell. This is in
principle feasible using current techniques (see
e.g. Tsodyks & Markram, 1997).
The transfer function determined in the
present paper could be used in a self-consistent
theory of a recurrent neural network (Amit &
Brunel, 1997a,b). It would be interesting in
particular to investigate how the synaptic
properties modify the collective properties of
these networks (stability conditions of a low
activity spontaneous state, firing frequencies in
"memory states", etc.). Furthermore, the
time constant of conductance changes is thought
to play a fundamental role in dynamical
phenomena underlying collective oscillations
and synchronization in such networks. We hope
the present study could be a first step to
understanding the influence of these time
constants on such properties. Last, we suggest
such a mixture of analytical and numerical
techniques could be useful to study more
complex neuronal models, including, for
example, nonlinearities in the subthreshold
dynamics of the membrane potential.
We thank Daniel Amit for his encouragement and
many comments on the manuscript, Fabio Martinelli
for useful discussions, and Sid Wiener for a careful
reading of the manuscript.
.   . 
94
REFERENCES

Abramovitz, M. & Stegun, I. (1964). Handbook of Mathematical Functions. Washington, DC: National Bureau of Standards.

Amit, D. J. & Brunel, N. (1997a). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex 7, 237.

Amit, D. J. & Brunel, N. (1997b). Dynamics of a recurrent network of spiking neurons before and following learning. Network 8, 373.

Amit, D. J. & Tsodyks, M. V. (1991). Quantitative study of attractor neural networks retrieving at low spike rates. Network 3, 121.

Beals, R. & Protopopescu, V. (1983). Half-range completeness for the Fokker–Planck equation. J. Stat. Phys. 32, 565.

Bulsara, A. R., Elston, T. C., Doering, C. R., Lowen, S. B. & Lindenberg, K. (1996). Cooperative behaviour in periodically driven noisy integrate-and-fire models of neuronal dynamics. Phys. Rev. E 53, 3958–3969.

Douglas, R. J., Martin, K. A. C. & Whitteridge, D. (1991). An intracellular analysis of the visual responses in cat visual cortex. J. Physiol. (London) 440, 659.

Frolov, A. A. & Medvedev, A. V. (1986). Substantiation of the "point approximation" for describing the total activity of the brain with the use of a simulation model. Biofizika 31, 304 (Engl. transl. Biophysics 31, 332).

Hagan, P. S., Doering, C. R. & Levermore, C. D. (1989). Mean exit times for particles driven by weakly coloured noise. SIAM J. Appl. Math. 49, 1480.

Knight, B. W. (1972). Dynamics of encoding in a population of neurons. J. Gen. Physiol. 59, 734.

Mason, A., Nicoll, A. & Stratford, K. (1991). Synaptic transmission between individual pyramidal neurons of the rat visual cortex in vitro. J. Neurosci. 11, 72.

Ricciardi, L. M. (1977). Diffusion Processes and Related Topics in Biology. Berlin: Springer-Verlag.

Risken, H. (1984). The Fokker–Planck Equation. Berlin: Springer-Verlag.

Traub, R. D., Whittington, M. A., Colling, S. B., Buzsáki, G. & Jefferys, J. G. R. (1996). Analysis of gamma rhythms in the rat hippocampus in vitro and in vivo. J. Physiol. 493, 471.

Tsodyks, M. V. & Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. U.S.A. 94, 719.

Tuckwell, H. C. (1988). Introduction to Theoretical Neurobiology. Cambridge: Cambridge University Press.

Whittington, M. A., Traub, R. D. & Jefferys, J. G. R. (1995). Synchronized oscillations in interneuron networks driven by metabotropic glutamate receptor activation. Nature 373, 612.
APPENDIX

From the relation Pr(t < T < t + dt) = −(∂P/∂t) dt, we can express the mean first passage time ⟨T⟩ as a function of p writing

⟨T⟩ = −∫₀^{+∞} t (∂P(t)/∂t) dt = ∫₀^{+∞} P(t) dt
    = ∫_{−∞}^{θ̂} dx ∫_{−∞}^{+∞} dz ∫₀^{+∞} dt p(x, z, t).

We may recast the mean exit time problem as a stationary problem by defining

G(x, z) = ∫₀^{+∞} p(x, z, t) dt.

Integrating eqn (5) with respect to time between 0 and +∞, and using the time boundary condition eqn (6), yields

−k² δ(x − Ĥ) (1/√(2π)) e^{−z²/2} = ∂²G/∂z² + ∂(zG)/∂z − kz ∂G/∂x + k² ∂(xG)/∂x.    (A.1)

The boundary conditions on G are identical to those on p since G is simply the integral of p over time. Once G is known, the mean first passage time is simply given by the integral of G over the whole half-plane x < θ̂.

Our strategy is to develop the solution of eqn (A.1) in powers of k = √(τ'/τ). We look for a solution in the form G = G^l + G^b, where G^l is the solution of the problem without the boundary condition at x = θ̂, and G^b is the solution different from zero in a narrow interval close to x = θ̂. We then choose the sum of these two solutions that matches the prescribed boundary condition in x = θ̂. In the following, G^l will be called the "free solution" while G^b will be called the "boundary solution".

In the following only the main steps of the calculation of both solutions are presented. The details can be found in (Brunel & Sergi, in prep.).

Free Solution

We look for a solution as a power series in k,

G^l = (1/√(2π)) Σ_{r=0}^{∞} k^r f_r(x, z);    (A.2)
 -- 
we find:
−z2
f0 (x, z) = g0 (x)e
;
2
0
f1 (x, z) = g1 (x) − z
1
1g0 (x) −z
e2.
1x
2
The functions g0 , g1 satisfying the boundary
condition at x = −a are
g0 (x) = exp
+ c0 exp
−x 2
2
−x 2
2
g
u
U(y − H
)exp
x
and
y2
dy
2
g1 (x) = c1 exp
−x 2
2
(A.3)
where c0 , c1 will be determined by the half line
boundary condition.
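At k = 0 the free solution alone already reproduces the known IF result: integrating g₀ (with c₀ = 0) over the half plane must equal the √π ∫ c(w) dw of eqn (8) at τ' = 0. The consistency check below is an illustrative sketch with arbitrary assumed values of the dimensionless threshold and reset:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

theta_hat, H_hat = 1.5, -0.7   # assumed dimensionless threshold and reset

def g0(x):
    """g0(x) = exp(-x^2/2) * int_x^theta_hat U(y - H_hat) exp(y^2/2) dy."""
    val, _ = quad(lambda y: np.exp(y ** 2 / 2.0), max(x, H_hat), theta_hat)
    return np.exp(-x ** 2 / 2.0) * val

# <T> at order k^0: integral of g0 over the half plane x < theta_hat ...
T_free, _ = quad(g0, -np.inf, theta_hat)

# ... which equals the closed form of eqn (8) at tau' = 0; the factor
# sqrt(2) converts the hatted variables back to (H - mu)/sigma and
# (theta - mu)/sigma.
c = lambda w: np.exp(w ** 2) * (erf(w) + 1.0)
val, _ = quad(c, H_hat / np.sqrt(2.0), theta_hat / np.sqrt(2.0))
T_closed = np.sqrt(np.pi) * val
```

The equality follows from exchanging the order of the x and y integrations in the double integral defining T_free.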
Boundary Solution

We look now for a solution which is non-zero in an area whose width is of order k near threshold. The natural change of variable is

r = (x − θ̂)/k,    s = z − kθ̂.

This corresponds to concentrating in a stripe of width k close to the line x = θ̂, and to translating the point (θ̂, kθ̂) to the origin of the new coordinate system. We choose to concentrate on a region of width of order k because, as we have discussed at the end of Section 2, the effect of the current dynamics on the potential is of order k. We are looking for a solution which is equal to zero for r → −∞, and such that the sum of the free and the boundary solutions locally satisfies the boundary condition. In this coordinate system the reset potential Ĥ → −∞ as k → 0, since θ̂ − Ĥ ≫ k; this allows us to discard the l.h.s. of eqn (A.1). The calculation of the first orders in k of this solution is more involved and is presented elsewhere (Brunel & Sergi, in prep.). It uses recently developed half-range expansion techniques (Beals & Protopopescu, 1983; Hagan et al., 1989).

Once the boundary solution is obtained, the sum of the free and the boundary solutions is required to vanish on the half line x = θ̂, z < kθ̂, in order to satisfy the boundary condition at orders 0 and 1 in k. The result is
G(x, s) = G₀(x, s) + k G₁(x, s) + O(k²),

G₀(x, s) = (1/√(2π)) e^{−s²/2} e^{−x²/2} ∫_x^{θ̂} U(y − Ĥ) e^{y²/2} dy,

G₁(x, s) = (1/√(2π)) [ −ζ(1/2) e^{(θ̂² − x²)/2} − s ∂g₀(x)/∂x ] e^{−s²/2},

in which ζ is the Riemann zeta function (see e.g. Abramovitz & Stegun, 1964). The mean first passage time is the integral of G(x, s) over the half plane,

⟨T⟩ = ∫_{−∞}^{θ̂} dx ∫_{−∞}^{+∞} ds G(x, s),

and after some straightforward algebra we obtain, to the first order in k = √(τ'/τ), eqn (8).
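As a numerical check of that last step: integrating the order-k piece of G over s kills the s-odd term, and integrating the remaining −ζ(1/2) e^{(θ̂² − x²)/2} term over x < θ̂ reproduces exactly the first-order correction of eqn (8) (in units τ = 1, with k factored out). A sketch with an assumed value of the dimensionless threshold:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

ZETA_HALF = -1.4603545   # Riemann zeta(1/2)
theta_hat = 1.2          # assumed dimensionless threshold

# First-order term of eqn (8) with tau = 1 and k factored out:
# -zeta(1/2) * sqrt(pi/2) * c(theta_hat/sqrt(2)).
w = theta_hat / np.sqrt(2.0)
term_eqn8 = -ZETA_HALF * np.sqrt(np.pi / 2.0) * np.exp(w ** 2) * (erf(w) + 1.0)

# Integrating the order-k piece of G over s leaves
# -zeta(1/2) * exp((theta_hat^2 - x^2)/2); integrating over x < theta_hat
# gives the same number.
val, _ = quad(lambda x: np.exp((theta_hat ** 2 - x ** 2) / 2.0),
              -np.inf, theta_hat)
term_G1 = -ZETA_HALF * val
```

The two expressions agree because (1 + erf(θ̂/√2)) = 2Φ(θ̂) and √(π/2)·2 = √(2π), where Φ is the standard normal cumulative distribution.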