What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?
STDP Finds the Start of Repeating Patterns in Continuous Spike Trains

Spiking neurons are flexible computational modules: with their adjustable synaptic parameters they can implement an enormous variety of different transformations from input spike trains to output spike trains.

The perceptron convergence theorem asserts the convergence of a supervised learning algorithm. In contrast, there is no convergence guarantee for STDP with teacher forcing that holds for arbitrary input spike patterns; such a guarantee does hold, however, for STDP in the case of Poisson input spike trains. The resulting necessary and sufficient condition can be formulated in terms of linear separability:
◦ in the case of perceptrons (McCulloch-Pitts neurons): threshold gates with static synapses and static batch inputs and outputs;
◦ in the case of STDP: time-varying input and output streams.

The theoretically predicted convergence of STDP with teacher forcing also holds for more realistic neuron models, dynamic synapses, and more general input distributions. These positive learning results hold for different interpretations of STDP, in which STDP:
◦ changes the weights of synapses, or
◦ modulates the initial release probability of dynamic synapses.
STDP is also related to various other important learning rules and learning mechanisms.

Question: to what extent might STDP support a more universal type of learning, in which a neuron learns to implement an "arbitrarily given" map?

There exist many maps from input spike trains to output spike trains that cannot be realized by a neuron for any setting of its adjustable parameters.
◦ For example, no values of the weights could enable a generic neuron to produce a high-rate output spike train in the absence of any input spikes.

A neuron can learn to implement a transformation in a stable manner only with a parameter setting that is an equilibrium point of the learning rule under consideration (STDP). STDP always produces a bimodal distribution of weights, with each weight driven to its minimal or maximal possible value.
◦ Such conditions need to be taken into account.

Which of the many parameters that influence the input-output behavior should be viewed as adjustable for a specific protocol for inducing synaptic plasticity (i.e., "learning")? STDP adjusts the following parameters:
◦ the scaling factors w of the amplitudes;
◦ the initial release probabilities U.
Note the different effects of these parameters:
◦ an increase of the parameter U increases the amplitude of the EPSP for the first spike, but (through stronger synaptic depression) tends to decrease the amplitudes of shortly following EPSPs;
◦ an increase of the scaling factor w scales all EPSP amplitudes proportionally.

Assumption: during learning, the neuron is taught to fire at particular points in time via extra input currents,
◦ which could represent synaptic inputs from other cortical or subcortical areas.

SNCC (spiking neuron convergence conjecture): STDP enables a neuron, under this protocol,
◦ starting from arbitrary initial values,
◦ to learn any input-output transformation that the neuron could implement
◦ in a stable manner for some values of its adjustable parameters.

A standard leaky integrate-and-fire neuron model is used (a code sketch follows below):

$$\tau_m \frac{dV_m}{dt} = -(V_m - V_{\text{resting}}) + R_m \left( I_{\text{syn}}(t) + I_{\text{background}} + I_{\text{inject}}(t) \right)$$

◦ $V_m$ = membrane potential
◦ $\tau_m$ = membrane time constant
◦ $R_m$ = membrane resistance
◦ $I_{\text{syn}}(t)$ = the current supplied by the synapses
◦ $I_{\text{background}}$ = a constant background current
◦ $I_{\text{inject}}(t)$ = currents induced by a "teacher"

If $V_m$ exceeds the threshold voltage, it is reset and held at the reset value for the length of the absolute refractory period.
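As an illustration, here is a minimal Python sketch of this leaky integrate-and-fire model with Euler integration. The function name and all numerical values (time step, 30 ms time constant, threshold, refractory period) are illustrative assumptions, not parameters taken from the original study:

```python
import numpy as np

def simulate_lif(i_syn, i_inject, i_background=0.0, dt=1e-4,
                 tau_m=0.03, r_m=1.0, v_resting=0.0,
                 v_thresh=0.015, t_refract=0.005):
    """Euler integration of
    tau_m * dVm/dt = -(Vm - V_resting) + R_m*(I_syn(t) + I_background + I_inject(t)).
    i_syn and i_inject are arrays of currents sampled every dt seconds."""
    v_m = v_resting
    refract_left = 0           # remaining refractory steps
    spike_times = []
    for step in range(len(i_syn)):
        if refract_left > 0:   # Vm is held at the reset value during
            refract_left -= 1  # the absolute refractory period
            continue
        i_total = i_syn[step] + i_background + i_inject[step]
        v_m += dt / tau_m * (-(v_m - v_resting) + r_m * i_total)
        if v_m >= v_thresh:    # threshold crossing: spike, then reset
            spike_times.append(step * dt)
            v_m = v_resting
            refract_left = int(round(t_refract / dt))
    return np.array(spike_times)
```

In this sketch, the teacher-forcing protocol described above corresponds to choosing an i_inject array that pushes $V_m$ over threshold at the desired firing times, while i_syn carries the (e.g., Poisson) input spike trains converted to currents.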
The model proposed in Markram, Wang and Tsodyks (1998) predicts the amplitude $A_k$ of the excitatory postsynaptic current (EPSC) for the $k$th spike in a spike train with interspike intervals $\Delta_1, \Delta_2, \ldots, \Delta_{k-1}$ through the equations (translated into code in the first sketch below):

$$A_k = w \cdot u_k \cdot R_k$$
$$u_k = U + u_{k-1} (1 - U) \, e^{-\Delta_{k-1}/F}$$
$$R_k = 1 + (R_{k-1} - u_{k-1} R_{k-1} - 1) \, e^{-\Delta_{k-1}/D}$$

The variables $u \in [0,1]$ and $R \in [0,1]$ are dynamic; their values for the first spike are $u_1 = U$, $R_1 = 1$.

The parameters U, D, and F were randomly chosen from Gaussian distributions that were based on empirically found data for such connections:
◦ If the input was excitatory (E), the mean values of these three parameters (with D and F expressed in seconds) were chosen to be 0.5, 1.1, and 0.05.
◦ If the input was inhibitory (I), they were 0.25, 0.7, and 0.02.
◦ The SD of each parameter was chosen to be 10% of its mean.

The effect of STDP is commonly tested by measuring, in the postsynaptic neuron, the amplitude $A_1$ of the EPSP for a single spike from the presynaptic neuron. Any change $\Delta A_1$ in the amplitude $A_1 = w \cdot U \cdot R_1$ can be caused by:
◦ a proportional change $\Delta w$ of the parameter w;
◦ a proportional change $\Delta U$ of the initial release probability $u_1 = U$;
◦ a change of both w and U.

According to Abbott & Nelson (2000), the change $\Delta A_1$ in the amplitude $A_1$ of EPSPs (for the first spike in a test spike train) that results from the pairing of:
◦ a firing of the presynaptic neuron at some time $t_{\text{pre}}$, and
◦ a firing of the postsynaptic neuron at time $t_{\text{post}} = t_{\text{pre}} + \Delta t$,
can be approximated for many cortical synapses by terms of the form:

$$\Delta A(\Delta t) = \begin{cases} W_+ \, e^{-\Delta t / \tau_+}, & \text{if } \Delta t > 0 \\ -W_- \, e^{\Delta t / \tau_-}, & \text{if } \Delta t \le 0 \end{cases}$$

with constants $W_+, W_-, \tau_+, \tau_- > 0$,
◦ with an extra clause that prevents the amplitude $A_1$ from growing beyond some maximal value $A_{\max}$ or shrinking below 0 (see the second sketch below).
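Two short sketches follow. First, the Markram-Wang-Tsodyks recursion above, translated directly into Python; the function name and the example 25 Hz spike train are my own, while the E-type parameter means are the ones quoted in the text:

```python
import math

def epsc_amplitudes(isis, w, U, D, F):
    """Amplitudes A_k = w * u_k * R_k for a spike train with
    interspike intervals isis = [Delta_1, ..., Delta_{k-1}] (seconds):
      u_k = U + u_{k-1} * (1 - U) * exp(-Delta_{k-1} / F)
      R_k = 1 + (R_{k-1} - u_{k-1} * R_{k-1} - 1) * exp(-Delta_{k-1} / D)
    with initial values u_1 = U and R_1 = 1."""
    u, R = U, 1.0
    amplitudes = [w * u * R]   # A_1 = w * U * 1
    for delta in isis:
        u_next = U + u * (1.0 - U) * math.exp(-delta / F)
        R_next = 1.0 + (R - u * R - 1.0) * math.exp(-delta / D)
        u, R = u_next, R_next
        amplitudes.append(w * u * R)
    return amplitudes

# Mean E-type parameters from the text: U = 0.5, D = 1.1 s, F = 0.05 s.
# A regular 25 Hz train (ISI = 40 ms) shows the depression the model predicts.
print(epsc_amplitudes([0.04] * 9, w=1.0, U=0.5, D=1.1, F=0.05))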
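Second, the Abbott & Nelson learning window together with the clipping clause that keeps $A_1$ within $[0, A_{\max}]$; the specific constants (the W values and the 20 ms time constants) are placeholder assumptions:

```python
import math

def stdp_delta(delta_t, w_plus=0.01, w_minus=0.012,
               tau_plus=0.02, tau_minus=0.02):
    """Change in EPSP amplitude A_1 for one pre/post pairing with
    delta_t = t_post - t_pre (seconds): potentiation for delta_t > 0,
    depression for delta_t <= 0."""
    if delta_t > 0:
        return w_plus * math.exp(-delta_t / tau_plus)
    return -w_minus * math.exp(delta_t / tau_minus)

def apply_stdp(a1, delta_t, a_max=1.0, **kwargs):
    """Apply one pairing and clip A_1 to the interval [0, a_max]."""
    return min(a_max, max(0.0, a1 + stdp_delta(delta_t, **kwargs)))
```

Whether the resulting change is attributed to w, to U, or to both is exactly the choice between the different interpretations of STDP discussed above.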