Spikes, Decisions, Actions
The dynamical foundations of neuroscience
Valance WANG, Computational Biology and Bioinformatics, ETH Zurich

The last meeting
• Higher-dimensional linear dynamical systems
  • General solution
  • Asymptotic stability
  • Oscillation
  • Delayed feedback
• Approximation and simulation

Outline
• Chapter 6. Nonlinear dynamics and bifurcations
  • Two-neuron networks
    • Negative feedback: a divisive gain control
    • Positive feedback: a short-term memory circuit
    • Mutual inhibition: a winner-take-all network
  • Stability of steady states
  • Hysteresis and bifurcation
• Chapter 7. Computation by excitatory and inhibitory networks
  • Visual search by a winner-take-all network
  • Short-term memory by Wilson-Cowan cortical dynamics

Chapter 6. Two-neuron networks
(Diagram: three two-neuron circuits, each driven by an input — negative feedback, positive feedback, and mutual inhibition.)

Two-neuron networks
• General form (in the absence of stimulus input):
  • dx1/dt = F(x1, x2)
  • dx2/dt = G(x1, x2)
• The current state (x1, x2) is read as input to the update functions F(x1, x2) and G(x1, x2).
• Steady states:
  • F(x1, x2) = 0
  • G(x1, x2) = 0

Negative feedback: a divisive gain control
• In the retina, information is relayed along the chain light -> photoreceptors -> bipolar cells -> ganglion cells -> optic nerve.
• To stabilize the representation of information, the bipolar cells receive negative feedback from an amacrine cell.
• Equations (B: bipolar cell response, A: amacrine cell response, L: light input):
  • dB/dt = (1/τ_B)(−B + L/(1 + A))
  • dA/dt = (1/τ_A)(−A + 2B)
• Nullclines:
  • −B + L/(1 + A) = 0
  • −A + 2B = 0
• Equilibrium point:
  • B_eq = (−1 + √(1 + 8L))/4
  • A_eq = 2B_eq
(Figure: phase-plane analysis for L = 10, showing the nullclines dB/dt = 0 and dA/dt = 0; axes: B — bipolar cell response, A — amacrine cell response.)

Linear stability of steady states
• Introduction to the Jacobian: given
  d/dt (x1, …, xn)ᵀ = (F1(x1, …, xn), …, Fn(x1, …, xn))ᵀ
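As a numerical check on the gain-control analysis above, the following sketch (not from the lecture; forward Euler with an assumed dt = 0.01, and τ_B = τ_A = 10, L = 10 as in the worked example) integrates the system to its fixed point and computes the eigenvalues of the Jacobian there:

```python
import math
import numpy as np

# Retinal divisive gain control:
#   dB/dt = (1/tau_B) * (-B + L / (1 + A))   # bipolar cell
#   dA/dt = (1/tau_A) * (-A + 2B)            # amacrine cell
TAU_B = TAU_A = 10.0   # time constants (values used in the worked example)
L = 10.0               # light input
DT = 0.01              # assumed Euler step

B = A = 0.0
for _ in range(200_000):   # 2000 time units, ample time to converge
    dB = (-B + L / (1.0 + A)) / TAU_B
    dA = (-A + 2.0 * B) / TAU_A
    B, A = B + DT * dB, A + DT * dA

# Closed-form equilibrium from the nullclines.
B_eq = (-1.0 + math.sqrt(1.0 + 8.0 * L)) / 4.0   # = 2 for L = 10
A_eq = 2.0 * B_eq                                # = 4

# Jacobian at the equilibrium and its eigenvalues.
J = np.array([[-1.0 / TAU_B, -(1.0 / TAU_B) * L / (1.0 + A_eq) ** 2],
              [2.0 / TAU_A, -1.0 / TAU_A]])
eig = np.linalg.eigvals(J)
print((B, A), (B_eq, A_eq))
print(eig)   # complex pair with negative real part -> stable spiral
```

The eigenvalues come out as λ = −0.1 ± 0.089i, matching the stability analysis below, and the trajectory spirals into the unique fixed point (2, 4).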
• the Jacobian is defined as
  J ≡ ∂(F1, …, Fn)/∂(x1, …, xn) =
  [ ∂F1/∂x1 … ∂F1/∂xn ]
  [    ⋮          ⋮    ]
  [ ∂Fn/∂x1 … ∂Fn/∂xn ]
• Example: for our update functions
  • F(B, A) = (1/τ_B)(−B + L/(1 + A))
  • G(B, A) = (1/τ_A)(−A + 2B)
  the Jacobian is
  J = [ ∂F/∂B  ∂F/∂A ] = [ −1/τ_B   −(1/τ_B)·L/(1 + A)² ]
      [ ∂G/∂B  ∂G/∂A ]   [  2/τ_A   −1/τ_A              ]

Linear stability of steady states
• Proof (linearization). Our equations are
  • dB/dt = F(B, A)
  • dA/dt = G(B, A)
• Apply a small perturbation u, v << 1 to the steady state, and take the perturbed point as the initial condition:
  • B(0) := B_eq + u(0)
  • A(0) := A_eq + v(0)
  where u(0) = u, v(0) = v, and u(t), v(t) represent the deviations from the steady state.
• Plug in and expand to first order:
  dB/dt = d(B_eq + u(t))/dt = du/dt = F(B_eq + u, A_eq + v)
        ≈ F(B_eq, A_eq) + u·(∂F/∂B)(B_eq, A_eq) + v·(∂F/∂A)(B_eq, A_eq) + …
        = u·(∂F/∂B)(B_eq, A_eq) + v·(∂F/∂A)(B_eq, A_eq)   (since F(B_eq, A_eq) = 0)
  dA/dt = d(A_eq + v(t))/dt = dv/dt = G(B_eq + u, A_eq + v)
        ≈ u·(∂G/∂B)(B_eq, A_eq) + v·(∂G/∂A)(B_eq, A_eq)
• Finally,
  d/dt (u, v)ᵀ = J(B_eq, A_eq)·(u, v)ᵀ
• Then use the eigenvalues of J(B_eq, A_eq) to determine the asymptotic behavior.

Negative feedback: a divisive gain control
• Equations (τ_B = τ_A = 10, L = 10):
  • dB/dt = (1/10)(−B + 10/(1 + A))
  • dA/dt = (1/10)(−A + 2B)
• Fixed point: (B_eq, A_eq) = (2, 4)
• Stability analysis:
  Jacobian at (2, 4) = [ −1/10   −1/25 ]
                       [  1/5    −1/10 ]
(Figure: phase-plane analysis for L = 10, with the nullclines dB/dt = 0 and dA/dt = 0.)
• Eigenvalues λ = −0.1 ± 0.089i ⇒ asymptotically stable
• Unique stable fixed point ⇒ our fixed point is a «global attractor»

Two-neuron networks
(Divider slide; next circuit: positive feedback.)

A short-term memory circuit by positive feedback
• Observed in monkeys' prefrontal cortex.
• First, let's analyze the behavior of the system in the absence of an external stimulus: two excitatory neurons E1, E2 exciting each other.
• Equations:
  • dE1/dt = (1/τ)(−E1 + S(3E2))
  • dE2/dt = (1/τ)(−E2 + S(3E1))
• A sigmoidal activation function:
  S(P) = 100P²/(120² + P²) for P ≥ 0, and S(P) = 0 for P < 0
  • P: stimulus strength
  • S: firing rate
• Nullclines:
  • E1 = S(3E2) = 100(3E2)²/(120² + (3E2)²)
  • E2 = S(3E1) = 100(3E1)²/(120² + (3E1)²)
• Equilibrium points: substituting one nullcline into the other gives
  • 9E1³ − 900E1² + 120²·E1 = 0
  • ⇒ E1_eq = 0, 20, 80
  • E2_eq can be obtained similarly.
(Figure: phase-plane analysis with the nullclines dE1/dt = 0 and dE2/dt = 0; axes E1, E2 from 0 to 100.)
• Equilibrium points: (E1_eq, E2_eq) = (0, 0), (20, 20), (80, 80)
• Stability analysis:
  • (0, 0): Jacobian = [ −0.05  0; 0  −0.05 ], λ = −0.05, −0.05 ⇒ stable
  • (20, 20): Jacobian = [ −0.05  0.08; 0.08  −0.05 ], λ = +0.03, −0.13 ⇒ unstable
  • (80, 80): Jacobian = [ −0.05  0.02; 0.02  −0.05 ], λ = −0.07, −0.03 ⇒ stable

Hysteresis and Bifurcation
• The term "hysteresis" is derived from Greek, meaning "to lag behind".
• In the present context, this means that the present state of our neural network is determined not just by the present input, but also by the states and inputs in its history ("path-dependent").
• Suppose we apply a brief stimulus K to the neural network. The steady states of E1 become
  E1 = 100(3E1 + K)²/(120² + (3E1 + K)²)
• (Demo)
• Due to a change in the parameter value K, a pair of equilibrium points may appear or disappear. This phenomenon is known as bifurcation.

Two-neuron networks
(Divider slide; next circuit: mutual inhibition.)

Mutual inhibition: a winner-take-all neural network for decision making
• Two neurons E1, E2 with inputs K1, K2, each inhibiting the other:
  • dE1/dt = (1/τ)(−E1 + S(K1 − 3E2))
  • dE2/dt = (1/τ)(−E2 + S(K2 − 3E1))
• (Demo)

Chapter 7.
Multiple-neuron networks
• Visual search by a winner-take-all network
• Wilson-Cowan cortical dynamics

Visual search by a winner-take-all network
• Visual search: find the target among distractors.
• An (N+1)-neuron network in which each neuron receives a perceptive input: E_T to the target neuron T, E_D to each of the N distractor neurons D.
  • τ dT/dt = −T + S(E_T − 3ND)
  • τ dD/dt = −D + S(E_D − 3(N − 1)D − 3T)
• Stimulus to the target neuron: 80; to the distractor neurons: 79.8.
(Figure: response of the winner neuron vs. time, 0–1000.)
• Stimulus to the target neuron: 80; to the distractor neurons: 79.
(Figure: response of the winner neuron vs. time, 0–1000.)
• Further, this model can be extrapolated to higher-level cognitive decisions. It is common experience that decisions are more difficult to make and take longer when the number of appealing alternatives increases.
• Once a decision is definitely made, however, humans are reluctant to change it. (Hysteresis in a cognitive process!)

Wilson-Cowan model (1973)
• Cortical neurons may be divided into two classes:
  • excitatory (E), usually pyramidal neurons
  • inhibitory (I), usually interneurons
• All forms of interaction occur between these classes: E -> E, E -> I, I -> E, I -> I.
• Recurrent excitatory connections are local, while inhibitory connections are long-range.
• A one-dimensional spatio-temporal model:
  • τ ∂E(x, t)/∂t = −E(x) + S_E(Σ_x′ w_EE(x − x′)E(x′) − Σ_x′ w_IE(x − x′)I(x′) + P(x))
  • τ ∂I(x, t)/∂t = −I(x) + S_I(Σ_x′ w_EI(x − x′)E(x′) − Σ_x′ w_II(x − x′)I(x′) + Q(x))
  • E(x, t), I(x, t) := mean firing rates of the neurons at position x
  • P, Q := external inputs
  • w_EE, w_IE, w_EI, w_II := weights of the interactions
• With refractory factors k_E, k_I:
  • τ ∂E(x, t)/∂t = −E + (1 − k_E·E)·S_E(Σ w_EE·E − Σ w_IE·I + P)
  • τ ∂I(x, t)/∂t = −I + (1 − k_I·I)·S_I(Σ w_EI·E − Σ w_II·I + Q)
• The spatial exponential decay of the weights is given by, e.g.,
  w_EE(x − x′) = b_EE·exp(−|x − x′|/σ_EE)
  • x := position of the input
  • x′ := a position away from the input
• Sigmoidal activation function:
  • S(P) = 100P²/(θ² + P²)
  • P := stimulus input
  • S is a sigmoidal curve with respect to P

Wilson-Cowan model: examples
• Short-term memory in the prefrontal cortex.
• A brief stimulus: 10 ms, 100 µm.
(Figure: E (red) and I (blue) responses vs. distance in microns, 0–2000.)
• A brief stimulus: 10 ms, 1000 µm.
(Figure: E (red) and I (blue) responses vs. distance in microns, 0–2000.)
• Short-term memory, constant stimulus.
(Figure: E (red) and I (blue) responses vs. distance in microns, 0–4000.)

Summary of Chapter 7
• Winner-take-all network
  • Visual search can be disturbed by the number of irrelevant but similar objects.
• Wilson-Cowan model
  • A one-dimensional spatio-temporal dynamical system
  • Applications: short-term memory in the prefrontal cortex
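The winner-take-all search network summarized above can be sketched in a few lines. In this sketch (not from the lecture) τ = 10, N = 5 distractors, the Euler step dt = 0.1, and the simulation length are all assumptions; the sigmoid reuses θ = 120 from the Chapter 6 examples.

```python
# Winner-take-all visual-search network:
#   tau * dT/dt = -T + S(E_T - 3*N*D)
#   tau * dD/dt = -D + S(E_D - 3*(N-1)*D - 3*T)
# T: target neuron, D: one of N identical distractor neurons.
THETA = 120.0   # sigmoid semi-saturation constant (from Chapter 6)

def S(P):
    """Sigmoidal activation: 0 for P < 0, saturating at 100."""
    return 100.0 * P * P / (THETA**2 + P * P) if P > 0 else 0.0

def simulate(E_T, E_D, N=5, tau=10.0, dt=0.1, t_end=2000.0):
    """Integrate the (N+1)-neuron network with forward Euler from rest."""
    T = D = 0.0
    for _ in range(int(t_end / dt)):
        dT = (-T + S(E_T - 3.0 * N * D)) / tau
        dD = (-D + S(E_D - 3.0 * (N - 1) * D - 3.0 * T)) / tau
        T, D = T + dt * dT, D + dt * dD
    return T, D

# Target input 80 vs. distractor input 79: the target neuron should win
# the competition, suppressing the distractors toward zero.
T, D = simulate(E_T=80.0, E_D=79.0)
print(T, D)
```

Because the target's input always exceeds the distractors' by a fixed margin, T ≥ D holds throughout the trajectory; shrinking that margin (e.g. E_D = 79.8) lengthens the competition, which is the model's account of search slowing with similar distractors.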