Theoretical neuroscience:
From single neuron to network dynamics
Nicolas Brunel
Outline
• Single neuron stochastic dynamics
• Network dynamics
• Learning and memory
Single neurons in vivo seem highly stochastic
Single neuron stochastic dynamics: the LIF model
• LIF neuron with deterministic + white-noise inputs (simulated in the sketch below),
$$\tau_m \frac{dV}{dt} = -V + \mu(t) + \sigma(t)\sqrt{\tau_m}\,\eta(t)$$
Spikes are emitted when V = Vt; the neuron is then reset to Vr.
• P(V, t) is described by the Fokker-Planck equation
$$\tau_m \frac{\partial P(V,t)}{\partial t} = \frac{\sigma^2(t)}{2}\frac{\partial^2 P(V,t)}{\partial V^2} + \frac{\partial}{\partial V}\left[(V - \mu(t))\,P(V,t)\right]$$
• Boundary conditions:
– At threshold Vt: absorbing b.c.; the probability flux at Vt equals the firing probability ν(t):
$$P(V_t, t) = 0, \qquad \frac{\partial P}{\partial V}(V_t, t) = -\frac{2\nu(t)\tau_m}{\sigma^2(t)}$$
– At the reset potential Vr: what comes out at Vt must come back in at Vr:
$$P(V_r^-, t) = P(V_r^+, t), \qquad \frac{\partial P}{\partial V}(V_r^-, t) - \frac{\partial P}{\partial V}(V_r^+, t) = -\frac{2\nu(t)\tau_m}{\sigma^2(t)}$$
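As a concrete illustration, here is a minimal Euler-Maruyama simulation of the noisy LIF model above; all parameter values are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Minimal Euler-Maruyama integration of
#   tau_m dV/dt = -V + mu + sigma*sqrt(tau_m)*eta(t)
# with threshold/reset. All parameter values are illustrative assumptions.
tau_m = 20.0                 # membrane time constant (ms)
V_t, V_r = 20.0, 10.0        # threshold and reset (mV)
mu, sigma = 25.0, 5.0        # input mean and noise amplitude (mV)
dt, T = 0.1, 1000.0          # time step and duration (ms)

rng = np.random.default_rng(0)
V, spikes = V_r, []
for step in range(int(T / dt)):
    # discretized SDE: dV = (mu - V) dt/tau_m + (sigma/sqrt(tau_m)) dW
    V += (mu - V) * dt / tau_m + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    if V >= V_t:
        spikes.append(step * dt)
        V = V_r   # reset after spike emission
print(f"simulated rate: {1000.0 * len(spikes) / T:.1f} Hz")
```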
LIF model: stationary inputs µ(t) = µ0, σ(t) = σ0
$$P_0(V) = \frac{2\nu_0\tau_m}{\sigma_0}\,\exp\!\left(-\frac{(V-\mu_0)^2}{\sigma_0^2}\right)\int_{\frac{V-\mu_0}{\sigma_0}}^{\frac{V_t-\mu_0}{\sigma_0}} \exp(u^2)\,\Theta\!\left(u - \frac{V_r-\mu_0}{\sigma_0}\right)du$$
$$\frac{1}{\nu_0} = \tau_m\sqrt{\pi}\int_{\frac{V_r-\mu_0}{\sigma_0}}^{\frac{V_t-\mu_0}{\sigma_0}} \exp(u^2)\left[1+\mathrm{erf}(u)\right]du$$
$$CV^2 = 2\pi\nu_0^2\tau_m^2\int_{\frac{V_r-\mu_0}{\sigma_0}}^{\frac{V_t-\mu_0}{\sigma_0}} e^{x^2}\,dx\int_{-\infty}^{x} e^{y^2}\left[1+\mathrm{erf}(y)\right]^2 dy$$
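The firing-rate integral can be evaluated numerically. A sketch using scipy, with the identity exp(u²)[1 + erf(u)] = erfcx(−u) to keep the integrand finite; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

# Evaluate the stationary firing-rate integral numerically. Uses the identity
# exp(u^2)*(1 + erf(u)) = erfcx(-u) to avoid overflow in the integrand.
# Parameter values are illustrative assumptions.
tau_m = 20e-3                 # membrane time constant (s)
V_t, V_r = 20.0, 10.0         # threshold and reset (mV)

def stationary_rate(mu0, sigma0):
    lower = (V_r - mu0) / sigma0
    upper = (V_t - mu0) / sigma0
    integral, _ = quad(lambda u: erfcx(-u), lower, upper)
    return 1.0 / (tau_m * np.sqrt(np.pi) * integral)

print(f"nu0 = {stationary_rate(15.0, 5.0):.2f} Hz")   # e.g. mu0 = 15 mV, sigma0 = 5 mV
```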
Time-dependent inputs
• Given an arbitrary time-dependent input (µ(t), σ(t)), what is the instantaneous firing rate ν(t)?
Computing the linear firing rate response
• Strategy:
– start with small time-dependent perturbations around the means,
$$\mu(t) = \mu_0 + \mu_1(t), \qquad \sigma(t) = \sigma_0 + \sigma_1(t)$$
– linearize the FP equation and obtain the linear response of P = P0 + P1(t) and ν = ν0 + ν1(t) (solution of an inhomogeneous 2nd-order ODE):
$$\nu_1(t) = \int^t \left[R_\mu(t-t')\,\mu_1(t') + R_\sigma(t-t')\,\sigma_1(t')\right]dt'$$
or, in the frequency domain,
$$\tilde{\nu}_1(\omega) = R_\mu(\omega)\tilde{\mu}_1(\omega) + R_\sigma(\omega)\tilde{\sigma}_1(\omega)$$
– Rµ and Rσ can be computed explicitly in terms of confluent hypergeometric functions.
– go to higher orders in ...
LIF model: linear rate response Rµ(ω) (changes in µ)
• High-frequency behavior: $R_\mu(\omega) \sim \nu_0/(\sigma_0\sqrt{2i\omega\tau_m})$ (checked numerically below)
• This translates into a $\sqrt{t}$ initial response to step currents.
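A small numerical check of this asymptote (with assumed values for ν0, σ0, and τm) makes the 1/√ω decay of the gain and the constant 45° phase lag of the √(2iωτm) factor explicit:

```python
import numpy as np

# High-frequency asymptote R_mu(omega) ~ nu0 / (sigma0 * sqrt(2i*omega*tau_m)):
# the gain decays as 1/sqrt(omega) with a constant -45 degree phase.
# nu0, sigma0, tau_m values are illustrative assumptions.
tau_m, nu0, sigma0 = 20e-3, 10.0, 5.0
for f in (10.0, 100.0, 1000.0):          # input modulation frequency (Hz)
    omega = 2 * np.pi * f
    R = nu0 / (sigma0 * np.sqrt(2j * omega * tau_m))
    print(f"f = {f:6.0f} Hz  |R| = {abs(R):.3f}  phase = {np.degrees(np.angle(R)):.1f} deg")
```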
More realistic models
High-ω behavior
• Colored-noise inputs:
$$\tau_m \frac{dV}{dt} = -V + \mu(t) + \sigma(t)\,W, \qquad \tau_s \frac{dW}{dt} = -W + \sqrt{\tau_m}\,\eta(t)$$
$$R_\mu(\omega) \sim \sqrt{\frac{\tau_s}{\tau_m}}$$
• More realistic spike generation (see the simulation sketch after this list):
$$\tau_m \frac{dV}{dt} = -V + F(V) + \mu(t) + \sigma(t)\sqrt{\tau_m}\,\eta(t)$$
A spike is emitted when V → ∞; the neuron is then reset at Vr.
– EIF: $F(V) = \Delta_T \exp((V - V_T)/\Delta_T)$, $\quad R_\mu(\omega) \sim 1/\omega$
– QIF: $F(V) \sim V^2$, $\quad R_\mu(\omega) \sim 1/\omega^2$
– PIF: $F(V) \sim V^\alpha$, $\quad R_\mu(\omega) \sim 1/\omega^{\alpha/(\alpha-1)}$
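A sketch of simulating the EIF variant; since V diverges in finite time, the numerical integration uses a finite cutoff V_cut as a proxy for V → ∞. All parameter values are illustrative assumptions.

```python
import numpy as np

# Euler-Maruyama sketch of the EIF model; the divergence to infinity is
# replaced by a finite cutoff V_cut. Parameters are illustrative assumptions.
tau_m, V_r = 20.0, 10.0                 # ms, mV
V_T, Delta_T, V_cut = 20.0, 2.0, 40.0   # spike-initiation parameters (mV)
mu, sigma = 18.0, 5.0                   # input mean and noise (mV)
dt, T = 0.05, 1000.0                    # time step and duration (ms)
rng = np.random.default_rng(1)

V, spikes = V_r, []
for step in range(int(T / dt)):
    F = Delta_T * np.exp((V - V_T) / Delta_T)   # exponential spike-generating current
    V += (-V + F + mu) * dt / tau_m + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    if V >= V_cut:                              # proxy for V -> infinity
        spikes.append(step * dt)
        V = V_r
print(f"EIF rate: {1000.0 * len(spikes) / T:.1f} Hz")
```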
Conclusions
• In simple spiking neuron models, the response of the instantaneous firing rate can be much faster than the response of the membrane;
• EIF model: fits pyramidal-cell data well, and makes it possible to understand quantitatively the factors controlling the speed of the firing rate response;
• The cut-off frequency of real neurons is very high (∼ 200 Hz or higher) ⇒ allows very fast population responses to time-dependent inputs
• The EIF can be mapped to both LNP and Wilson-Cowan-type firing rate models, with a time constant that depends on intrinsic parameters of the cell and on the instantaneous rate itself
Local networks in cerebral cortex
• Size ∼ 1 cubic millimeter
• Total number of cells ∼ 100,000
• Types of cells:
– pyramidal cells - excitatory (80%)
– interneurons - inhibitory (20%)
• Connection probability ∼ 10%
• Synapses/cell: ∼ 10,000 (total ∼ 10⁹ synapses/mm³)
• Each synapse has a small effect: depolarization/hyperpolarization ∼ 1-10% of threshold.
Randomly connected network of LIFs
• N neurons. Each neuron receives K < N randomly chosen connections from other neurons. Couplings between neurons J < 0 (J is the total coupling strength).
• Neurons = leaky integrate-and-fire:
$$\tau_m \frac{dV_i(t)}{dt} = -V_i + I_i$$
Threshold Vt, reset Vr
• Total input of a neuron i at time t:
$$I_i(t) = \mu_{ext} + J\sum_j c_{ij}\sum_k S(t - t_j^k) + \sigma_{ext}\sqrt{\tau_m}\,\eta_i(t)$$
where S(t) describes the time course of the PSCs, t_j^k is the time of the k-th spike of neuron j, and the c_{ij} are chosen randomly such that Σ_j c_{ij} = K for all i (see the simulation sketch below).
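A minimal simulation sketch of this network, assuming delta-pulse synapses (S(t) = δ(t)) and no transmission delay for brevity; all parameter values are illustrative assumptions.

```python
import numpy as np

# Sparse inhibitory LIF network sketch with delta-pulse synapses:
# each presynaptic spike instantaneously deflects V by J (mV).
# All parameter values are illustrative assumptions.
N, K, J = 1000, 100, -0.5            # neurons, in-degree, coupling (mV, J < 0)
tau_m, V_t, V_r = 20.0, 20.0, 10.0   # ms, mV
mu_ext, sigma_ext = 25.0, 1.0        # external drive (mV)
dt, T = 0.1, 500.0                   # ms
rng = np.random.default_rng(2)

# fixed in-degree: each neuron receives exactly K random presynaptic partners
pre = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
V = rng.uniform(V_r, V_t, size=N)    # random initial conditions
rates = []
for step in range(int(T / dt)):
    noise = sigma_ext * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    V += (mu_ext - V) * dt / tau_m + noise
    spiking = np.flatnonzero(V >= V_t)
    V[spiking] = V_r
    if spiking.size:
        # number of spiking presynaptic partners of each neuron this step
        counts = np.isin(pre, spiking).sum(axis=1)
        V += J * counts
    rates.append(spiking.size / (N * dt * 1e-3))   # population rate (Hz)
print(f"mean population rate: {np.mean(rates):.1f} Hz")
```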
Analytical description of the irregular state
• If neurons fire approximately as Poisson processes, and the connection probability is small (K/N ≪ 1), then the recurrent inputs to a neuron can be approximated as
$$I_i(t) = \mu_{ext} + JK\tau\,\nu(t-D) + \sqrt{\sigma_{ext}^2 + J^2 K \nu(t-D)\,\tau}\;\sqrt{\tau}\,\eta_i(t)$$
where D is the transmission delay and the ηi(t) are uncorrelated white noise processes.
• We can again use the Fokker-Planck formalism,
$$\tau \frac{\partial P}{\partial t} = \frac{\sigma^2(t)}{2}\frac{\partial^2 P}{\partial V^2} + \frac{\partial}{\partial V}\left[(V - \mu(t))\,P\right],$$
where
– µ(t) = average input (external − recurrent inhibition):
$$\mu(t) = \mu_{ext} + JK\tau\,\nu(t-D)$$
– σ(t) = 'intrinsic' noise due to recurrent interactions:
$$\sigma^2(t) = \sigma_{ext}^2 + J^2 K \nu(t-D)\,\tau$$
Asynchronous state, linear stability analysis
1. Asynchronous state (constant instantaneous firing rate):
$$\frac{1}{\nu_0} = \tau_m\sqrt{\pi}\int_{\frac{V_r-\mu_0}{\sigma_0}}^{\frac{V_t-\mu_0}{\sigma_0}} \exp(u^2)\left[1+\mathrm{erf}(u)\right]du$$
with
$$\mu_0 = \mu_{ext} + KJ\nu_0\tau_m, \qquad \sigma_0^2 = \sigma_{ext}^2 + KJ^2\nu_0\tau_m$$
(a self-consistent fixed-point problem for ν0; see the sketch after this list).
2. Linear stability analysis:
$$P(V,t) = P_0(V) + P_1(V,\lambda)\exp(\lambda t), \qquad \nu(t) = \nu_0 + \nu_1(\lambda)\exp(\lambda t), \;\ldots$$
⇒ obtain the eigenvalues λ
3. Instabilities of the asynchronous state occur when Re(λ) = 0;
4. Weakly non-linear analysis: behavior beyond the bifurcation point
5. Finite-size effects
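Step 1 is a fixed-point problem, since µ0 and σ0 themselves depend on ν0. A sketch of the self-consistent solution with a standard root finder; the parameter values and the bracketing interval are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import erfcx

# Solve nu0 = rate(mu0(nu0), sigma0(nu0)) for the asynchronous state.
# Uses erfcx(-u) = exp(u^2)*(1 + erf(u)) to keep the integrand finite.
# Parameter values are illustrative assumptions.
tau_m = 20e-3                  # membrane time constant (s)
V_t, V_r = 20.0, 10.0          # threshold and reset (mV)
K, J = 100, -0.3               # in-degree and (inhibitory) coupling (mV)
mu_ext, sigma_ext = 24.0, 1.0  # external drive (mV)

def rate(mu0, sigma0):
    lower, upper = (V_r - mu0) / sigma0, (V_t - mu0) / sigma0
    integral, _ = quad(lambda u: erfcx(-u), lower, upper)
    return 1.0 / (tau_m * np.sqrt(np.pi) * integral)

def self_consistency(nu):
    mu0 = mu_ext + K * J * nu * tau_m
    sigma0 = np.sqrt(sigma_ext**2 + K * J**2 * nu * tau_m)
    return rate(mu0, sigma0) - nu

# bracket assumed to contain the fixed point for these parameters
nu0 = brentq(self_consistency, 1e-3, 100.0)
print(f"self-consistent rate: {nu0:.2f} Hz")
```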
Randomly connected E-I networks
[Schematic of the network architecture]
Conclusions - network dynamics
• Network dynamics can be studied analytically using the Fokker-Planck formalism;
• Inhibition-dominated networks settle into highly irregular states that can be either asynchronous or synchronous;
• Such irregular states reproduce some of the main experimentally observed features of spontaneous activity in cortex in vivo:
– Highly irregular firing of single cells at low rates;
– Broad distribution of firing rates (close to lognormal);
– Weak correlations between cells
• Synchronous irregular oscillations are similar to the fast oscillations observed in cerebellum, hippocampus, and cerebral cortex
• LFP spectra from all these structures can be fitted quantitatively by the model
• Irregularity persists in randomly connected networks in the absence of noise
• Irregular dynamics can be truly chaotic (positive Lyapunov exponents) or 'stably chaotic' (negative Lyapunov exponents)
Synaptic plasticity, learning and memory
Synaptic plasticity and network dynamics: future challenges
• So far, most studies of learning and memory in networks have focused on networks with fixed connectivity (typically Hebbian, assumed to be the result of learning)
• With Hebbian connectivity matrices, networks become multistable, with one background state and a multiplicity of 'selective' attractors representing stored memories.
• Challenges:
– Devise 'learning rules' (i.e. dynamical equations for synapses) consistent with known data
– Insert such rules in networks, and study how inputs with prescribed statistics shape the network attractor landscape
– Study the maximal storage capacity of the network, with different types of attractors
– Which learning rules are able to reach maximal capacity?