Michael Arbib: CS564 - Brain Theory and Artificial Intelligence
Lecture 3 The Brain as a Network of Neurons
Reading Assignments:*
TMB2:
Section 2.3
HBTNN:
Single Cell Models (Softky and Koch)
Axonal Modeling (Koch and Bernander)
Perspective on Neuron Model Complexity (Rall)
* Unless indicated otherwise, the TMB2 material is the required reading, and
the other readings supplementary.
The "basic" biological neuron
[Figure: dendrites, soma, and axon with branches and synaptic terminals]
The soma and dendrites act as the input surface; the axon carries the outputs. The tips of the branches of the axon form synapses upon other neurons or upon effectors (though synapses may occur along the branches of an axon as well as at its ends). The arrows in the figure indicate the direction of "typical" information flow from inputs to outputs.
From Passive to Active Propagation
For "short" cells, passive propagation suffices to signal a potential change from one end to the other.
If the axon is long, this is inadequate, since changes at one end would decay away almost completely before reaching the other end.
If the change in potential difference is large enough, then in a cylindrical configuration such as the axon, a pulse can actively propagate at full amplitude. This active propagation is described by the Hodgkin-Huxley Equations (1952).
Excitatory and Inhibitory Synapses
Dale's law states that each neuron releases a single
transmitter substance. (A “first approximation”)
This does not mean that the synapses made by a single neuron are
either all excitatory or all inhibitory.
Modern understanding: channels which "open" and "close" provide the mechanisms for the Hodgkin-Huxley equations, and this notion of channels extends to synaptic transmission.
The action of a synapse depends on both transmitter released
presynaptically, and specialized receptors in the postsynaptic
membrane.
Moreover, neurons may secrete transmitters which act as
neuromodulators of the function of a circuit on some quite extended
time scale (cf. TMB2 Sections 6.1 and 8.1).
A Synapse
Down the Levels of Detail to Neuron and Synapse
[Figure: a small piece of cerebellum; a Purkinje cell; just a few of the synapses of parallel fibres on a branch of the dendritic tree]
The Neurochemistry of Synaptic Plasticity:
The Grounding of Learning and Memory
Eric Kandel’s view of how molecular changes in a synapse may
produce "short term memory" and "long term memory" in
Aplysia.
[cf. TMB2 Section 8.1]
The Nobel Prize in Physiology or Medicine for 2000 was awarded jointly to Arvid Carlsson, Paul Greengard and Eric Kandel.
Approximations are crucial
10^11 neurons with 10^4 or more synapses, each of which is a complex chemical machine.
A general challenge:
• Using computer and mathematical analysis to validate simplifications which project the complexity of units at one level of analysis into components for large-scale models at the next level of analysis.
[Figure: the ladder of levels of analysis: Behavior/Organism; Brain Regions/Schemas; Neural Circuitry; Neurons; Synapses; Neurochemistry]
Multi-Level Modeling and Data Analysis
[Figure: the same ladder of levels (Behavior/Organism; Brain Regions/Schemas; Neural Circuitry; Neurons; Synapses; Neurochemistry), annotated with two modeling traditions: McCulloch-Pitts neural networks (logic and computability theory) and the Hodgkin-Huxley model of the action potential, i.e. spike propagation (nonlinear differential equations)]
From Mind to Neural Networks
Warren McCulloch
“What is a man that he may know a number,
and a number that a man may know it?”
A philosophy-driven approach: inspired by Kant and Leibniz, seeking to map logic onto neural function.
Mathematics: mathematical logic (propositional logic; Turing’s theory of computability)
Warren McCulloch and Walter Pitts (1943)
A McCulloch-Pitts neuron operates on a discrete
time-scale, t = 0,1,2,3, ... with time tick equal to
one refractory period
[Figure: a McCulloch-Pitts neuron: inputs x1(t), x2(t), ..., xn(t) with weights w1, w2, ..., wn feed an axon carrying output y(t+1)]
At each time step, an input or output is
on or off — 1 or 0, respectively.
Each connection or synapse from the output of one neuron to the input of another has an attached weight.
Excitatory and Inhibitory Synapses
We call a synapse
excitatory if wi > 0, and
inhibitory if wi < 0.
We also associate a threshold θ with each neuron.
A neuron fires (i.e., has value 1 on its output line) at time t+1 if the weighted sum of inputs at t reaches or passes θ:
y(t+1) = 1 if and only if Σi wi xi(t) ≥ θ.
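As a minimal sketch in Python (my own illustration, not part of the lecture; the function name mp_step is an assumption), the firing rule is a single comparison:

def mp_step(weights, inputs, theta):
    # McCulloch-Pitts update: output 1 iff the weighted sum of inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0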
From Logical Neurons to Finite Automata
[Figure (from Brains, Machines, and Mathematics, 2nd Edition, 1987): logic gates as McCulloch-Pitts neurons: AND with weights 1, 1 and threshold 1.5; OR with weights 1, 1 and threshold 0.5; NOT with weight -1 and threshold 0; and a Boolean net of such neurons with inputs X, Y and state Q acting as a finite automaton]
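To make the figure concrete, here is a short check (again my own sketch, reusing mp_step from above) that these weights and thresholds realize AND, OR, and NOT:

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
# AND: weights (1, 1), threshold 1.5: fires only when both inputs are 1.
assert [mp_step((1, 1), p, 1.5) for p in pairs] == [0, 0, 0, 1]
# OR: weights (1, 1), threshold 0.5: fires when at least one input is 1.
assert [mp_step((1, 1), p, 0.5) for p in pairs] == [0, 1, 1, 1]
# NOT: weight -1, threshold 0: fires only when its input is 0.
assert [mp_step((-1,), (x,), 0) for x in (0, 1)] == [1, 0]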
Philosophically and Technologically Important
• A major attack on dualism: the “brain” of a Turing machine
• A good global view of the input-output computational capacity of neural networks
• An important basis for the technology of artificial neural networks, with the addition of learning rules
• But not a neuron-by-neuron account of the brain’s functions: logic is a culturally late activity of large neural populations, not a direct expression of neural function.
Increasing the Realism of Neuron Models
The McCulloch-Pitts neuron of 1943 is important as a basis for:
• logical analysis of the neurally computable, and
• current design of some neural devices (especially when augmented by learning rules to adjust synaptic weights).
However, it is no longer considered a useful model for making contact
with neurophysiological data concerning real neurons.
Alan Hodgkin and Andrew Huxley
“What are the dynamics of the action potential (spike propagation along the axon)?”
Mathematics: ordinary differential equations
A data-driven approach:
the giant axon of the squid → massive data → curve fitting → differential equations that elegantly describe these curves
Then later mathematics and computer analysis explore the far-ranging implications of the Hodgkin-Huxley equations.
[Photos of Hodgkin and Huxley from http://www.nobel.se/medicine/laureates/1963/press.html]
From Curve Fitting to Nonlinear Analysis
The Hodgkin-Huxley equations: 4 coupled differential equations

C dV/dt = -gL(V - VL) + Ia - gNa m^3 h (V - VNa) - gK n^4 (V - VK)

where L denotes leakage, Na sodium, and K potassium, and

dm/dt = am(V) (1 - m) - bm(V) m
dn/dt = an(V) (1 - n) - bn(V) n
dh/dt = ah(V) (1 - h) - bh(V) h

with the a's and b's found by curve-fitting. The forms m^3 h and n^4 add a deep intuitive understanding that anticipates the structure of channels.
Many properties of these 4-variable Hodgkin-Huxley equations can be explored analytically or qualitatively by two-dimensional differential equations:
• spontaneous spiking
• spiking which follows the input stimulus
• bursting
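As an illustrative sketch (not part of the lecture), the four equations can be integrated numerically by forward Euler. The rate functions and constants below are the classic published 1952 fits for the squid giant axon, stated with the resting potential near -65 mV; the step size and stimulus current are arbitrary choices for the demonstration.

import math

# Classic Hodgkin-Huxley parameters (squid giant axon; V in mV, resting V near -65)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2 and mS/cm^2
VNa, VK, VL = 50.0, -77.0, -54.4                # reversal potentials, mV

# Empirical rate functions a(V), b(V) from curve fitting
# (removable singularities at V = -40 and V = -55 are ignored in this sketch)
def a_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * math.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * math.exp(-(V + 65) / 80)

def simulate(Ia=10.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the 4 coupled ODEs; returns the voltage trace."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32         # approximate resting state
    trace = []
    for _ in range(steps):
        dV = (-gL * (V - VL) + Ia
              - gNa * m**3 * h * (V - VNa)
              - gK * n**4 * (V - VK)) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        V += dt * dV
        trace.append(V)
    return trace

# A sustained current of 10 uA/cm^2 drives a regular spike train;
# the peak rises well above 0 mV, the signature of the action potential.
print(max(simulate()))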
From Hodgkin-Huxley to the Neuron
Compartmental Modeling
• Hundreds of compartments per neuron, representing dendrites, the soma (cell body), and synapses
versus
• A very few compartments that capture neural essentials:
Traub and Miles on neurons in the hippocampus
Mainen and Sejnowski on pyramidal cells of cortex
Leaky Integrator Neuron
The simplest "realistic" neuron model is a single-compartment, continuous-time model based on the firing rate (e.g., the number of spikes traversing the axon in the most recent 20 msec) as a continuously varying measure of the cell's activity.
The state of the neuron is described by a single variable, the membrane potential.
The firing rate is approximated by a sigmoid function of the membrane potential.
Leaky Integrator Model
τ dm(t)/dt = -m(t) + h
has solution m(t) = e^(-t/τ) m(0) + (1 - e^(-t/τ)) h → h for time constant τ > 0.
We now add synaptic inputs to get the Leaky Integrator Model:
τ dm(t)/dt = -m(t) + Σi wi Xi(t) + h
where Xi(t) is the firing rate at the ith input.
Excitatory input (wi > 0) will increase m(t); inhibitory input (wi < 0) will have the opposite effect.
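A minimal sketch (my own, not from the lecture) of this model under forward-Euler integration, with an assumed sigmoid for converting membrane potential to firing rate:

import math

def sigmoid(m, gain=1.0):
    # Firing rate as a sigmoid function of membrane potential (assumed form).
    return 1.0 / (1.0 + math.exp(-gain * m))

def leaky_integrator(inputs, weights, tau=10.0, h=0.0, dt=0.1):
    """tau dm/dt = -m + sum_i w_i X_i(t) + h, integrated by forward Euler.
    `inputs` is a sequence over time of tuples of input firing rates X_i(t)."""
    m, trace = 0.0, []
    for X in inputs:
        drive = sum(w * x for w, x in zip(weights, X)) + h
        m += dt * (-m + drive) / tau
        trace.append(m)
    return trace

# Constant drive: m(t) relaxes toward the weighted input sum, here 0.8*1.0 - 0.3*0.5 = 0.65.
trace = leaky_integrator([(1.0, 0.5)] * 1000, weights=(0.8, -0.3))
print(trace[-1], sigmoid(trace[-1]))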
Rall’s Motion Detector Model
[Figure: Rall’s motion detector model]
Alternative Models
Even at this simple level, there are alternative models.
There are inhibitory synapses which seem better described by shunting inhibition, which, applied at a given point on a dendrite, serves to divide, rather than subtract from, the potential change passively propagating from more distal synapses.
The "lumped frequency" model cannot model the subtle relative-timing effects crucial to our motion detector example; these might be approximated by introducing appropriate delay terms:
τ dm(t)/dt = -m(t) + Σi wi xi(t - τi) + h.
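A small sketch (again my own illustration) of one way to realize these delay terms, reading each input line i with its own lag τi (here given in time steps):

def delayed_leaky_integrator(inputs, weights, delays, tau=10.0, h=0.0, dt=0.1):
    """tau dm/dt = -m + sum_i w_i x_i(t - tau_i) + h; delays are in time steps."""
    m, trace = 0.0, []
    for t in range(len(inputs)):
        drive = h
        for i, (w, d) in enumerate(zip(weights, delays)):
            x = inputs[t - d][i] if t >= d else 0.0   # x_i(t - tau_i); zero before onset
            drive += w * x
        m += dt * (-m + drive) / tau
        trace.append(m)
    return trace

# Two lines carrying the same signal with different delays, as a relative-timing toy.
T = [(1.0, 1.0)] * 500
print(delayed_leaky_integrator(T, weights=(0.5, 0.5), delays=(0, 20))[-1])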
No modeling approach is automatically appropriate
Rather, we seek the simplest model adequate to address the complexity of a given range of problems.
The Neural Simulation Language NSLJ provides tools for modeling complex neural systems, especially (but not only) when the neurons are modeled as leaky integrator neurons.